Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes
2014-08-01
The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesized a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing the 100 ml of water by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and a lower average volume per swallow. There was a significant, moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.
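The arithmetic behind the screening measure is simple enough to sketch. The function below is illustrative only; the 8-swallow example value is an assumption, not data from the study:

```python
def average_volume_per_swallow(total_volume_ml, num_swallows):
    """Average volume per swallow for a fixed test volume (100 ml in the study)."""
    if num_swallows <= 0:
        raise ValueError("number of swallows must be positive")
    return total_volume_ml / num_swallows

# Hypothetical subject who drinks 100 ml of water in 8 swallows:
print(average_volume_per_swallow(100, 8))  # 12.5 (ml per swallow)
```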
A Derivation of the Nonlocal Volume-Averaged Equations for Two-Phase Flow Transport
Directory of Open Access Journals (Sweden)
Gilberto Espinosa-Paredes
2012-01-01
In this paper a detailed derivation of the general transport equations for two-phase systems, using a method based on nonlocal volume averaging, is presented. The local volume averaging equations are commonly applied to nuclear reactor systems for optimal design and safe operation. Unfortunately, these equations are subject to length-scale restrictions and, according to the theory of the volume averaging method, they fail at transitions between flow patterns and at boundaries between two-phase flow and solid, which produce rapid changes in the physical properties and void fraction. The nonlocal volume averaging equations derived in this work contain new terms related to nonlocal transport effects due to accumulation, convection, diffusion and transport properties for two-phase flow; for instance, they can be applied at the boundary between a two-phase flow and a solid phase, or at the boundary of the transition region of two-phase flows where the local volume averaging equations fail.
Derivation of a volume-averaged neutron diffusion equation; Atomos para el desarrollo de Mexico
Energy Technology Data Exchange (ETDEWEB)
Vazquez R, R.; Espinosa P, G. [UAM-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Mexico D.F. 09340 (Mexico); Morales S, Jaime B. [UNAM, Laboratorio de Analisis en Ingenieria de Reactores Nucleares, Paseo Cuauhnahuac 8532, Jiutepec, Morelos 62550 (Mexico)]. e-mail: rvr@xanum.uam.mx
2008-07-01
This paper presents a general theoretical analysis of the problem of neutron motion in a nuclear reactor, where large variations in neutron cross sections normally preclude the use of the classical neutron diffusion equation. A volume-averaged neutron diffusion equation is derived which includes correction terms for diffusion and nuclear reaction effects. A method is presented to determine closure relationships for the volume-averaged neutron diffusion equation (e.g., effective neutron diffusivity). In order to describe the distribution of neutrons in a highly heterogeneous configuration, it was necessary to extend the classical neutron diffusion equation. Thus, the volume-averaged diffusion equation includes two correction factors: the first is related to the neutron absorption process and the second is a contribution to neutron diffusion; both parameters are related to neutron effects at the interface of a heterogeneous configuration. (Author)
Volume Averaging Theory (VAT) based modeling and closure evaluation for fin-and-tube heat exchangers
Zhou, Feng; Catton, Ivan
2012-10-01
A fin-and-tube heat exchanger was modeled based on Volume Averaging Theory (VAT) in such a way that the details of the original structure were replaced by their averaged counterparts, so that the VAT-based governing equations can be efficiently solved for a wide range of parameters. To complete the VAT-based model, proper closure is needed, which is related to a local friction factor and a heat transfer coefficient of a Representative Elementary Volume (REV). The terms in the closure expressions are complex, and relating experimental data to the closure terms is sometimes difficult. In this work we use CFD to evaluate the rigorously derived closure terms over one of the selected REVs. The objective is to show how heat exchangers can be modeled as porous media and how CFD can be used in place of a detailed, often formidable, experimental effort to obtain closure for the model.
Average crossing number of Gaussian and equilateral chains with and without excluded volume
Diesinger, P. M.; Heermann, D. W.
2008-03-01
We study the influence of excluded volume interactions on the behaviour of the mean average crossing number (mACN) for random off-lattice walks. We investigated Gaussian and equilateral off-lattice random walks with and without ellipsoidal excluded volume up to chain lengths of N = 1500, and equilateral random walks on a cubic lattice up to N = 20000. We find that the excluded volume interactions have a strong influence on the behaviour of the local crossing number at very short distances but only a weak one at large distances. This behaviour is the basis of the proof in [Y. Diao et al., J. Phys. A: Math. Gen. 36, 11561 (2003); Y. Diao and C. Ernst, Physical and Numerical Models in Knot Theory Including Applications to the Life Sciences] for the dependence of the mean average crossing number on the chain length N. We show that the data are compatible with an N ln(N) behaviour for the mACN, even in the case with excluded volume.
Studies concerning average volume flow and waterpacking anomalies in thermal-hydraulics codes
International Nuclear Information System (INIS)
Lyczkowski, R.W.; Ching, J.T.; Mecham, D.C.
1977-01-01
One-dimensional hydrodynamic codes have been observed to exhibit anomalous behavior in the form of non-physical pressure oscillations and spikes. It is our experience that this anomalous behavior can sometimes result in mass depletion, steam table failure and, in severe cases, problem abortion. In addition, these non-physical pressure spikes can result in long running times when small time steps are needed in an attempt to cope with the anomalous solution behavior. The source of these pressure spikes has been conjectured to be nonuniform enthalpy distribution, wave reflection off the closed end of a pipe, or abrupt changes in pressure history when the fluid changes from subcooled to two-phase conditions. It is demonstrated in this paper that many of the faults can be attributed to inadequate modeling of the average volume flow and of the sharp fluid density front crossing a junction. General corrective models are difficult to devise since the causes of the problems touch on the very theoretical bases of the differential field equations and the associated solution scheme. For example, the fluid homogeneity assumption and the numerical extrapolation scheme have placed severe restrictions on the capability of a code to adequately model certain physical phenomena involving fluid discontinuities. The need for accurate junction and local properties to describe phenomena internal to a control volume often points to additional lengthy computations that are difficult to justify in terms of computational efficiency. Corrective models that are economical to implement and use are developed. When incorporated into the one-dimensional, homogeneous transient thermal-hydraulic analysis computer code RELAP4, they help mitigate many of the code's difficulties related to average volume flow and water-packing anomalies. An average volume flow model and a critical density model are presented. Computational improvements due to these models are also demonstrated.
Energy Technology Data Exchange (ETDEWEB)
Reimold, M.; Mueller-Schauenburg, W.; Dohmen, B.M.; Bares, R. [Department of Nuclear Medicine, University of Tuebingen, Otfried-Mueller-Strasse 14, 72076, Tuebingen (Germany); Becker, G.A. [Nuclear Medicine, University of Leipzig, Leipzig (Germany); Reischl, G. [Radiopharmacy, University of Tuebingen, Tuebingen (Germany)
2004-04-01
Due to the stochastic nature of radioactive decay, any measurement of radioactivity concentration requires spatial averaging. In pharmacokinetic analysis of time-activity curves (TAC), such averaging over heterogeneous tissues may introduce a systematic error (heterogeneity error) but may also improve the accuracy and precision of parameter estimation. In addition to spatial averaging (inevitable due to limited scanner resolution and intended in ROI analysis), interindividual averaging may theoretically be beneficial, too. The aim of this study was to investigate the effect of such averaging on the binding potential (BP) calculated with Logan's non-invasive graphical analysis and the "simplified reference tissue method" (SRTM) proposed by Lammertsma and Hume, on the basis of simulated and measured positron emission tomography data from [11C]d-threo-methylphenidate (dMP) and [11C]raclopride (RAC) PET. dMP was not quantified with SRTM since the low k2 (washout rate constant from the first tissue compartment) introduced a high noise sensitivity. Even for considerably different shapes of TAC (dMP PET in parkinsonian patients and healthy controls, [11C]raclopride in patients with and without haloperidol medication) and a high variance in the rate constants (e.g. simulated standard deviation of K1 = 25%), the BP obtained from the average TAC was close to the mean BP (<5%). However, unfavourably distributed parameters, especially a correlated large variance in two or more parameters, may lead to larger errors. In Monte Carlo simulations, interindividual averaging before quantification reduced the variance from the SRTM (beyond a critical signal-to-noise ratio) and the bias in Logan's method. Interindividual averaging may further increase accuracy when there is an error term in the reference tissue assumption E = DV2 - DV' (DV2 = distribution volume of the first tissue compartment, DV'
Volume averaging: Local and nonlocal closures using a Green’s function approach
Wood, Brian D.; Valdés-Parada, Francisco J.
2013-01-01
Modeling transport phenomena in discretely hierarchical systems can be carried out using any number of upscaling techniques. In this paper, we revisit the method of volume averaging as a technique to pass from a microscopic level of description to a macroscopic one. Our focus is primarily on developing a more consistent and rigorous foundation for the relation between the microscale and averaged levels of description. We have put a particular focus on (1) carefully establishing statistical representations of the length scales used in volume averaging, (2) developing a time-space nonlocal closure scheme with as few assumptions and constraints as possible, and (3) carefully identifying a sequence of simplifications (in terms of scaling postulates) that explain the conditions for which various upscaled models are valid. Although the approach is general for linear differential equations, we upscale the problem of linear convective diffusion as an example to help keep the discussion from becoming overly abstract. In our efforts, we have also revisited the concept of a closure variable, and explain how closure variables can be based on an integral formulation in terms of Green's functions. In such a framework, a closure variable then represents the integration (in time and space) of the associated Green's functions that describe the influence of the average sources over the spatial deviations. The approach using Green's functions has utility not only in formalizing the method of volume averaging, but also in clearly identifying how the method can be extended to transient and time or space nonlocal formulations. In addition to formalizing the upscaling process using Green's functions, we also discuss the upscaling process itself in some detail to help foster improved understanding of how the process works. Discussion about the role of scaling postulates in the upscaling process is provided, and posed, whenever possible, in terms of measurable properties of (1) the
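In schematic form, the closure-variable idea described above can be written as a Green's function superposition. The notation here is generic and assumed for illustration (it is not quoted from the paper), using the convective diffusion example: the spatial deviation field is the time-space integral of a Green's function acting on the average source.

```latex
% Spatial deviation concentration driven by the average source
% (velocity deviation dotted with the average concentration gradient),
% integrated against a Green's function in space and time:
\tilde{c}(\mathbf{x},t)
  = -\int_{0}^{t}\!\!\int_{V}
      G(\mathbf{x},\mathbf{y},t-\tau)\,
      \tilde{\mathbf{v}}(\mathbf{y})\cdot
      \nabla\langle c\rangle(\mathbf{y},\tau)
      \,dV_{y}\,d\tau
```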
Davit, Yohan
2013-12-01
A wide variety of techniques have been developed to homogenize transport equations in multiscale and multiphase systems. This has yielded a rich and diverse field, but has also resulted in the emergence of isolated scientific communities and disconnected bodies of literature. Here, our goal is to bridge the gap between formal multiscale asymptotics and the volume averaging theory. We illustrate the methodologies via a simple example application describing a parabolic transport problem and, in so doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points with great care, and reference many fundamental works. © 2013 Elsevier Ltd.
2010-07-01
... Extraction for Vegetable Oil Production Compliance Requirements § 63.2854 How do I determine the weighted... procedures you must use to determine the weighted average volume fraction of HAP in extraction solvent received for use in your vegetable oil production process. By the end of each calendar month following an...
International Nuclear Information System (INIS)
Espinosa-Paredes, Gilberto
2010-01-01
The aim of this paper is to propose a framework for obtaining a new formulation of the multiphase flow conservation equations without length-scale restrictions, based on the non-local form of the volume-averaged conservation equations. The simplification of the local averaging volume of the conservation equations to obtain practical equations is subject to the following length-scale restrictions: d << l << L, where d is the characteristic length of the dispersed phases, l is the characteristic length of the averaging volume, and L is the characteristic length of the physical system. If the foregoing inequality does not hold, or if the scale of the problem of interest is of the order of l, the averaging technique, and therefore the macroscopic theories of multiphase flow, should be modified to include appropriate considerations and terms in the corresponding equations. In these cases the local form of the volume-averaged conservation equations is not appropriate to describe the multiphase system. As an example of conservation equations without length-scale restrictions, a natural circulation boiling water reactor was considered in order to study the non-local effects on thermal-hydraulic core performance during steady-state and transient behavior, and the results were compared with those of the classic local volume-averaged conservation equations.
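The length-scale inequality d << l << L can be illustrated with a small helper. The factor-of-ten reading of "much less than" and the numerical values below are assumptions for illustration only:

```python
def scales_separated(d, l, L, factor=10.0):
    """Heuristic check of the separation d << l << L assumed by the
    local (length-scale-restricted) volume averaging equations."""
    return l >= factor * d and L >= factor * l

# Hypothetical bubbly flow: 1 mm bubbles, 2 cm averaging volume, 4 m loop.
print(scales_separated(1e-3, 2e-2, 4.0))   # local averaging applicable
# When the dispersed-phase scale d approaches the averaging scale l
# (e.g. near a solid boundary), the check fails and nonlocal terms are needed.
print(scales_separated(1e-2, 2e-2, 4.0))
```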
Utility-Optimal Dynamic Rate Allocation under Average End-to-End Delay Requirements
Hajiesmaili, Mohammad H.; Talebi, Mohammad Sadegh; Khonsari, Ahmad
2015-01-01
QoS-aware networking applications such as real-time streaming and video surveillance systems require a nearly fixed average end-to-end delay over long periods to communicate efficiently, although they may tolerate some delay variation over short periods. This variability exhibits complex dynamics that make rate control of such applications a formidable task. This paper addresses rate allocation for heterogeneous QoS-aware applications that preserves the long-term end-to-end delay constraint while, s...
Fatigue strength of Al7075 notched plates based on the local SED averaged over a control volume
Berto, Filippo; Lazzarin, Paolo
2014-01-01
When pointed V-notches weaken structural components, local stresses are singular and their intensities are expressed in terms of the notch stress intensity factors (NSIFs). These parameters have been widely used for fatigue assessment of welded structures under high-cycle fatigue and of sharp notches in plates made of brittle materials subjected to static loading. Fine meshes are required to capture the asymptotic stress distributions ahead of the notch tip and evaluate the relevant NSIFs. On the other hand, when the aim is to determine the local Strain Energy Density (SED) averaged in a control volume embracing the point of stress singularity, refined meshes are not at all necessary. The SED can be evaluated from nodal displacements, and regular coarse meshes provide accurate values for the averaged local SED. In the present contribution, the link between the SED and the NSIFs is discussed by considering some typical welded joints and sharp V-notches. The procedure based on the SED has also been proven useful for determining theoretical stress concentration factors of blunt notches and holes. In the second part of this work an application of the strain energy density to the fatigue assessment of Al7075 notched plates is presented. The experimental data are taken from the recent literature and refer to notched specimens subjected to different shot peening treatments aimed at increasing the notch fatigue strength with respect to the parent material.
Water Volume Required to Carve the Martian Valley Networks: Updated Sediment Volume
Rosenberg, E. N.; Head, J. W.; Cassanelli, J.; Palumbo, A.; Weiss, D.
2017-10-01
In order to gain insights into the climate of early Mars, we estimate the volume of water that was required to erode the valley networks (VNs). We update previous results with a new VN cavity volume measurement.
Directory of Open Access Journals (Sweden)
Chieh-Fan Chen
2011-01-01
This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma visits, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
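The modeling workflow can be sketched with a minimal stand-in: fitting an AR(1) model (a special case of ARIMA(1,0,0)) to synthetic monthly counts and scoring the one-step forecasts with the mean absolute percentage error (MAPE). All numbers below are synthetic assumptions; the study itself used full ARIMA models on hospital data.

```python
import numpy as np

# Generate a synthetic monthly visitor series with AR(1) structure.
rng = np.random.default_rng(0)
n, phi, mu = 60, 0.7, 1000.0
y = np.empty(n)
y[0] = mu
for t in range(1, n):
    y[t] = mu + phi * (y[t - 1] - mu) + rng.normal(0.0, 20.0)

# Least-squares estimate of the AR(1) coefficient from lagged deviations.
mu_hat = y.mean()
x, z = y[:-1] - mu_hat, y[1:] - mu_hat
phi_hat = float(x @ z / (x @ x))

# In-sample one-step-ahead forecasts and their MAPE.
forecasts = mu_hat + phi_hat * x
mape = float(np.mean(np.abs((y[1:] - forecasts) / y[1:]))) * 100.0
print(round(phi_hat, 3), round(mape, 2))
```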
Wood, Brian; He, Xiaoliang; Apte, Sourabh
2017-11-01
Turbulent flows through porous media are encountered in a number of natural and engineered systems. Many attempts to close the Navier-Stokes equations for this type of flow have been made, for example using RANS models and double averaging. On the other hand, Whitaker (1996) applied the volume averaging theorem to close the macroscopic N-S equations for low-Re flow. In this work, volume averaging theory is extended into the turbulent flow regime to posit a relationship between the macroscale velocities and the spatial velocity statistics in terms of the spatially averaged velocity only. Rather than developing a Reynolds stress model, we propose a simple algebraic closure, consistent with generalized effective viscosity models (Pope 1975), to represent the spatial fluctuating velocity and pressure, respectively. The coefficients of the linear functions (one 1st-order, two 2nd-order and one 3rd-order tensor) depend on the averaged velocity and its gradient. With the data set from DNS, performed for inertial and turbulent flows (pore Re of 300, 500 and 1000) through a periodic face-centered cubic (FCC) unit cell, all the unknown coefficients can be computed and the closure is complete. The macroscopic quantities calculated from the averaging are then compared with DNS data to verify the upscaling. NSF Project Numbers 1336983, 1133363.
MARS code manual volume II: input requirements
International Nuclear Information System (INIS)
Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu
2010-02-01
The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art, realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation of State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This input manual provides a complete list of the input required to run MARS. The manual is divided largely into two parts, namely, the one-dimensional part and the multi-dimensional part. The inputs for auxiliary parts such as minor edit requests and graph formatting inputs are shared by the two parts, and as such mixed input is possible. The overall structure of the input is modeled on the structure of RELAP5, and as such the layout of the manual is very similar to that of RELAP. This similarity to RELAP5 input is intentional, as this input scheme allows minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
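The core idea, that a chamber measurement is the convolution of the real profile with a detector response function, can be sketched numerically. The logistic profile and the 6 mm uniform response below are illustrative assumptions, not the authors' beam model or the CC13 response:

```python
import numpy as np

# Idealized "real" beam profile: flat top with a logistic penumbra.
x = np.arange(-30.0, 30.0, 0.5)                       # off-axis position, mm
real = 1.0 / (1.0 + np.exp((np.abs(x) - 20.0) / 1.5))

# Uniform detector response over an assumed 6 mm chamber (13 x 0.5 mm samples).
kernel = np.ones(13) / 13.0
measured = np.convolve(real, kernel, mode="same")     # volume-averaged profile

def penumbra_width_mm(profile):
    """Width of the left-side region where the profile lies between 20% and 80%."""
    left = profile[x < 0]
    return 0.5 * np.count_nonzero((left > 0.2) & (left < 0.8))

# Volume averaging broadens the penumbra; matching convolved calculations to
# convolved measurements is what lets the reoptimized model recover it.
print(penumbra_width_mm(real), penumbra_width_mm(measured))
```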
Directory of Open Access Journals (Sweden)
Okuda Miyuki
2012-09-01
obstructive pulmonary disease was complicated by obstructive sleep apnea syndrome. Conclusion In cases such as this, in which patients with severe acute respiratory failure requiring full-time noninvasive positive pressure ventilation therapy also show sleep-disordered breathing, different ventilator settings must be used for waking and sleeping. On such occasions, the Respironics V60 Ventilator, which is equipped with an average volume-assured pressure support mode, may be useful in improving gas exchange and may achieve good patient compliance, because that mode allows ventilation to be maintained by automatically adjusting the inspiratory force to within an acceptable range whenever ventilation falls below target levels.
Directory of Open Access Journals (Sweden)
Björn eNitzsche
2015-06-01
Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals, including non-human primates, dogs and rodents. However, they are lacking for other species relevant in experimental neuroscience, including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries; 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis, including examinations with respect to body weight, age and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray matter (GM) and white matter (WM) as well as cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation of GM volume and body weight explained about 15% of the variance of GM, while a positive correlation between WM and age was found. Absolute tissue volume differences were not detected; indeed, ewes showed significantly more GM per body weight as compared to neutered rams. The created framework, including the spatial brain template and TPM, represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species.
International Nuclear Information System (INIS)
Hyatt, J.E.
1997-01-01
The Hanford Analytical Services Quality Assurance Requirements Document (HASQARD) is issued by the Analytical Services Program of the Waste Management Division, US Department of Energy (US DOE), Richland Operations Office (DOE-RL). The HASQARD establishes quality requirements in response to DOE Order 5700.6C (DOE 1991b). The HASQARD is designed to meet the needs of DOE-RL for maintaining a consistent level of quality for sampling and field and laboratory analytical services provided by contractor and commercial field and laboratory analytical operations. The HASQARD serves as the quality basis for all sampling and field/laboratory analytical services provided to DOE-RL through the Analytical Services Program of the Waste Management Division in support of Hanford Site environmental cleanup efforts. This includes work performed by contractor and commercial laboratories and covers radiological and nonradiological analyses. The HASQARD applies to field sampling, field analysis, and research and development activities that support work conducted under the Hanford Federal Facility Agreement and Consent Order (Tri-Party Agreement) and regulatory permit applications and applicable permit requirements described in subsections of this volume. The HASQARD applies to work done to support process chemistry analysis (e.g., ongoing site waste treatment and characterization operations) and research and development projects related to Hanford Site environmental cleanup activities. This ensures a uniform quality umbrella for analytical site activities predicated on the concepts contained in the HASQARD. Using the HASQARD will ensure data of known quality and technical defensibility of the methods used to obtain that data. The HASQARD is made up of four volumes: Volume 1, Administrative Requirements; Volume 2, Sampling Technical Requirements; Volume 3, Field Analytical Technical Requirements; and Volume 4, Laboratory Technical Requirements. Volume 1 describes the administrative requirements
International Nuclear Information System (INIS)
Hirata, Akimasa; Takano, Yukinori; Fujiwara, Osamu; Kamimura, Yoshitsugu
2010-01-01
The present study quantified the volume-averaged in situ electric field in the nerve tissues of anatomically based numerical Japanese male and female models for exposure to extremely low-frequency electric and magnetic fields. A quasi-static finite-difference time-domain method was applied to analyze this problem. The motivation for our investigation is that the dependence of the electric field induced in nerve tissue on the averaging volume/distance is not clear, while a cubical volume of 5 × 5 × 5 mm³ or a straight-line segment of 5 mm is suggested in some documents. The influence of non-nerve tissue surrounding nerve tissue is also discussed by considering three algorithms for calculating the averaged in situ electric field in nerve tissue. The computational results obtained herein reveal that the volume-averaged electric field in the nerve tissue decreases with increasing averaging volume. In addition, the 99th-percentile value of the volume-averaged in situ electric field in nerve tissue is more stable than the maximal value across different averaging volumes. When non-nerve tissue surrounding nerve tissue is included in the averaging volume, the resultant in situ electric fields are not as dependent on the averaging volume as in the case excluding non-nerve tissue. In situ electric fields averaged over a distance of 5 mm were comparable to or larger than those for a 5 × 5 × 5 mm³ cube, depending on the algorithm, the nerve tissue considered and the exposure scenario. (note)
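The averaging operation itself is straightforward to sketch on a voxel grid. The synthetic field, 1 mm resolution, and seed below are assumptions for illustration, not the anatomical models or dosimetry of the study:

```python
import numpy as np

# Synthetic in situ electric field on a 1 mm voxel grid (values are made up).
rng = np.random.default_rng(1)
field = 10.0 + rng.normal(0.0, 3.0, size=(40, 40, 40))   # arbitrary units

def cube_average(f, center, edge):
    """Mean field over a cube of `edge` voxels centered on `center`
    (edge = 5 corresponds to the 5 x 5 x 5 mm3 cube discussed above)."""
    h = edge // 2
    i, j, k = center
    return float(f[i - h:i + h + 1, j - h:j + h + 1, k - h:k + h + 1].mean())

# Growing the averaging cube smooths out local extremes of the field.
c = (20, 20, 20)
for edge in (1, 3, 5, 7):
    print(edge, round(cube_average(field, c, edge), 2))
```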
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2015-04-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They account for different stages of crop growth through empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty among reference ET estimates is far more important than the model parametric uncertainty introduced by crop coefficients. These crop coefficients are used to estimate the irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit of 400 mm due to water rights, would be exceeded less frequently with the REA ensemble average (45%) than with the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
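A reliability-weighted combination in the spirit of REA can be sketched with simple inverse-error weights. Both the weighting scheme and the numbers are simplified assumptions; the full REA criteria of Giorgi and Mearns also involve bias and convergence terms:

```python
def weighted_average(predictions, errors):
    """Combine model predictions with inverse-error weights, so less
    reliable models contribute less to the ensemble estimate."""
    weights = [1.0 / e for e in errors]
    return sum(w * p for w, p in zip(weights, predictions)) / sum(weights)

# Five hypothetical irrigation water requirement estimates (mm) and a
# reliability proxy for each model (larger = less reliable):
preds = [380.0, 410.0, 455.0, 395.0, 520.0]
errs = [10.0, 12.0, 30.0, 11.0, 60.0]

equal_weight = sum(preds) / len(preds)       # plain ensemble mean, 432.0 mm
rea_style = weighted_average(preds, errs)    # pulled toward reliable models
print(equal_weight, round(rea_style, 1))
```

Down-weighting the outlying, less reliable models lowers the ensemble estimate here, which mirrors the abstract's observation that the REA average exceeds a fixed threshold less often than the equally weighted average.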
Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla
2016-03-01
The present study aimed to analyze the clinical usefulness of the thinnest cross section of the nerve fibers in the optic nerve head averaged over the circumference of the optic nerve head. 3D volumes of the optic nerve head of the same eye were captured at two different visits spaced 1-4 weeks apart, in 13 subjects diagnosed with early to moderate glaucoma. At each visit 3 volumes containing the optic nerve head were captured independently with a Topcon OCT-2000 system. In each volume, the average shortest distance between the inner surface of the retina and the central limit of the pigment epithelium around the optic nerve head circumference, PIMD-Average [0;2π], was determined semi-automatically. The measurements were analyzed with an analysis of variance for estimation of the variance components for subjects, visits, volumes and semi-automatic measurements of PIMD-Average [0;2π]. It was found that the variance for subjects was on the order of five times the variance for visits, and the variance for visits was on the order of five times the variance for volumes. The variance for semi-automatic measurements of PIMD-Average [0;2π] was 3 orders of magnitude lower than the variance for volumes. A 95% confidence interval for mean PIMD-Average [0;2π] was estimated at 1.00 ± 0.13 mm (d.f. = 12). The variance estimates indicate that PIMD-Average [0;2π] is not suitable for comparison between a one-time estimate in a subject and a population reference interval. Cross-sectional independent group comparisons of PIMD-Average [0;2π] averaged over subjects will require inconveniently large sample sizes. However, cross-sectional independent group comparison of averages of within-subject differences between baseline and follow-up can be made with reasonable sample sizes. Assuming a loss rate of 0.1 PIMD-Average [0;2π] per year and 4 visits per year, it was found that approximately 18 months of follow-up is required before a significant change of PIMD-Average [0;2π] can
Factors Impacting Habitable Volume Requirements: Results from the 2011 Habitable Volume Workshop
Simon, M.; Whitmire, A.; Otto, C.; Neubek, D. (Editor)
2011-01-01
This report documents the results of the Habitable Volume Workshop held April 18-21, 2011 in Houston, TX at the Center for Advanced Space Studies-Universities Space Research Association. The workshop was convened by NASA to examine the factors that feed into understanding minimum habitable volume requirements for long duration space missions. While there have been confinement studies and analogs that have provided the basis for the guidance found in current habitability standards, determining the adequacy of the volume for future long duration exploration missions is a more complicated endeavor. It was determined that an improved understanding of the relationship between behavioral and psychosocial stressors, available habitable and net habitable volume, and interior layouts was needed to judge the adequacy of long duration habitat designs. The workshop brought together a multi-disciplinary group of experts from the medical and behavioral sciences, spaceflight, human habitability disciplines and design professionals. These subject matter experts identified the most salient design-related stressors anticipated for a long duration exploration mission. The selected stressors were based on scientific evidence, as well as personal experiences from spaceflight and analogs. They were organized into eight major categories: allocation of space; workspace; general and individual control of environment; sensory deprivation; social monotony; crew composition; physical and medical issues; and contingency readiness. Mitigation strategies for the identified stressors and their subsequent impact to habitat design were identified. Recommendations for future research to address the stressors and mitigating design impacts are presented.
Derivation of the Multi-fluid Model using the Time-Volume Averaging Method in Porous Body
International Nuclear Information System (INIS)
Lee, Sang Yong; Park, Chan Eok; Kim, Eun Kee
2010-01-01
The local instantaneous balance equation reads

$$\frac{\partial}{\partial t}(\rho_k \psi_k) + \nabla\cdot(\rho_k \psi_k u_k) = -\nabla\cdot J_k + \rho_k \phi_k \qquad (1)$$

where $\rho_k$, $\psi_k$, $u_k$, $J_k$ and $\phi_k$ are the density, the transported property of extensive characteristics, the velocity, the flux and the source of phase $k$, respectively. Table 1 shows the field variables: $e_k$, $q_k$, $p_k$, $\Gamma_k$, $g_k$, $q'''_k$, $M_k$ and $E_k$ are the internal energy, heat flux, pressure, vaporization rate, gravity, internal heat rate, and the interfacial momentum and energy sources, respectively. $T_k$ is the stress tensor, which is decomposed into pressure and shear parts. The time-averaged balance equation for any property $\psi_k$ of phase $k$ can then be written

$$\frac{\partial}{\partial t}\overline{(\rho_k \psi_k)} + \nabla\cdot\overline{(\rho_k \psi_k u_k)} = -\nabla\cdot\overline{J_k} + \overline{\rho_k \phi_k} + I_k, \qquad I_k \equiv -\frac{1}{\Delta t}\sum_j \frac{1}{|u_{ni}|}\Big[(\rho_k \psi_k)\, n_k\cdot(u_k - u_i) - (n_k\cdot J_k)\Big] \qquad (2)$$

where the overbar indicates the time-averaging operation and $n_k$ is the surface normal vector. Accounting for the time-fluctuating terms, the time-averaged balance equation can be represented with weighted mean variables as

$$\frac{\partial}{\partial t}(\overline{\rho}_k \widehat{\psi}_k) + \nabla\cdot(\overline{\rho}_k \widehat{\psi}_k \widehat{u}_k) = -\nabla\cdot(\overline{J}_k + J_k^{T}) + \overline{\rho}_k \widehat{\phi}_k + I_k$$

where $J_k^{T}$ represents turbulent effects.
Rodrigues, Jonathan; Minhas, Kishore; Pieles, Guido; McAlindon, Elisa; Occleshaw, Christopher; Manghat, Nathan; Hamilton, Mark
2016-10-01
The aim of this study was to quantify the degree of the effect of in-plane partial volume averaging on recorded peak velocity in phase contrast magnetic resonance angiography (PCMRA). Using cardiac-optimized 1.5 Tesla MRI scanners (Siemens Symphony and Avanto), 145 flow measurements (14 anatomical locations: ventricular outlets, aortic valve (AorV), aorta (5 sites), pulmonary arteries (3 sites), pulmonary veins, superior and inferior vena cava) in 37 subjects (consisting of healthy volunteers and congenital and acquired heart disease patients) were analyzed by the Siemens Argus default voxel averaging technique (where peak velocity = mean of the highest-velocity voxel and its four neighbouring voxels) and by a single voxel technique (1.3×1.3×5 or 1.7×1.7×5.5 mm³) (where peak velocity = highest-velocity voxel only). The effects of scan protocol (breath hold versus free breathing) and scanner type (Siemens Symphony versus Siemens Avanto) were also assessed. Statistical significance was defined as P<0.05. There was a significant mean increase in peak velocity of 7.1% when the single voxel technique was used compared to voxel averaging (P<0.0001). Significant increases in peak velocity were observed with the single voxel technique compared to voxel averaging regardless of subject type, anatomical flow location, scanner type and breathing command. Disabling voxel averaging did not affect the volume of flow recorded. Reducing spatial resolution by the use of voxel averaging produces a significant underestimation of peak velocity. While this is of itself not surprising, this is the first report to quantify the size of the effect. When PCMRA is used to assess peak velocity, voxel averaging should be disabled.
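The two peak-velocity definitions compared in the study are simple to state in code. The sketch below is a toy stand-in for the Argus techniques: the velocity map is invented and, for simplicity, the peak voxel is assumed to lie in the interior of the map:

```python
def peak_velocity(vmap, averaged=True):
    """Peak velocity from a 2D velocity map (cm/s).

    averaged=True mimics the default 5-voxel technique: the mean of the
    highest-velocity voxel and its four edge neighbours.
    averaged=False returns the single highest-velocity voxel only."""
    rows, cols = len(vmap), len(vmap[0])
    # locate the highest-velocity voxel (interior position assumed)
    r, c = max(((i, j) for i in range(rows) for j in range(cols)),
               key=lambda ij: vmap[ij[0]][ij[1]])
    if not averaged:
        return vmap[r][c]
    neigh = [vmap[r][c], vmap[r-1][c], vmap[r+1][c], vmap[r][c-1], vmap[r][c+1]]
    return sum(neigh) / len(neigh)

vmap = [[90, 100, 95],
        [105, 150, 110],
        [95, 108, 92]]
single = peak_velocity(vmap, averaged=False)  # the 150 cm/s voxel alone
avg5 = peak_velocity(vmap, averaged=True)     # mean of 150 and its 4 neighbours
print(single, avg5)
```

On this toy map the single-voxel value exceeds the 5-voxel mean, the same direction of effect the study quantifies.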
Hanley, G.
1978-01-01
Volume 6 of the SPS Concept Definition Study is presented and also incorporates results of NASA/MSFC in-house effort. This volume includes a supporting research and technology summary. Other volumes of the final report that provide additional detail are as follows: (1) Executive Summary; (2) SPS System Requirements; (3) SPS Concept Evolution; (4) SPS Point Design Definition; (5) Transportation and Operations Analysis; and Volume 7, SPS Program Plan and Economic Analysis.
Subsurface contamination focus area technical requirements. Volume II
International Nuclear Information System (INIS)
Nickelson, D.; Nonte, J.; Richardson, J.
1996-10-01
This is our vision, a vision that replaces the ad hoc or "delphic" method, which is to get a group of "experts" together and make decisions based upon opinion. To fulfill our vision for the Subsurface Contaminants Focus Area (SCFA), it is necessary to generate technical requirements or performance measures which are quantitative or measurable. Decisions can be supported if they are based upon requirements or performance measures which can be traced to their origin (documented) and are verifiable, i.e., it can be proven that requirements are satisfied by inspection (show me), demonstration, analysis, monitoring, or test. The data from which these requirements are derived must also reflect the characteristics of individual landfills or plumes so that technologies that meet these requirements will necessarily work at specific sites. Other subjective factors, such as stakeholder concerns, do influence decisions. Using the requirements as a basic approach, the SCFA can depend upon objective criteria to help inform the areas of subjectivity, such as stakeholder concerns. In the past, traceable requirements were not generated, probably because it seemed too difficult to do so. There is a risk that the requirements approach will not be accepted because it is new and represents a departure from the historical paradigm
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Tank Farms Technical Safety Requirements. Volume 1 and 2
International Nuclear Information System (INIS)
CASH, R.J.
2000-01-01
The Technical Safety Requirements (TSRs) define the acceptable conditions, safe boundaries, basis thereof, and controls to ensure safe operation during authorized activities, for facilities within the scope of the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR)
Directory of Open Access Journals (Sweden)
Yao-Ching Wang
Full Text Available Respiratory motion causes uncertainties in tumor edges on either computed tomography (CT) or positron emission tomography (PET) images and causes misalignment when registering PET and CT images. This phenomenon may cause radiation oncologists to delineate tumor volume inaccurately in radiotherapy treatment planning. The purpose of this study was to analyze radiology applications using interpolated average CT (IACT) as attenuation correction (AC) to diminish the occurrence of this scenario. Thirteen non-small cell lung cancer patients were recruited for the present comparison study. Each patient had full-inspiration and full-expiration CT images and free-breathing PET images from an integrated PET/CT scan. IACT for AC in PET(IACT) was used to reduce the PET/CT misalignment. The standardized uptake value (SUV) correction with a low radiation dose was applied, and its tumor volume delineation was compared to those from HCT/PET(HCT). The misalignment between PET(IACT) and IACT was reduced compared to the difference between PET(HCT) and HCT. The range of tumor motion was from 4 to 17 mm in the patient cohort. For HCT and PET(HCT), correction was from 72% to 91%, while for IACT and PET(IACT), correction was from 73% to 93% (p<0.0001). The maximum and minimum differences in SUVmax were 0.18% and 27.27% for PET(HCT) and PET(IACT), respectively. The largest percentage differences in the tumor volumes between HCT/PET and IACT/PET were observed in tumors located in the lowest lobe of the lung. Internal tumor volume defined by functional information using IACT/PET(IACT) fusion images for lung cancer would reduce the inaccuracy of tumor delineation in radiation therapy planning.
40 CFR 80.1129 - Requirements for separating RINs from volumes of renewable fuel.
2010-07-01
... volumes of renewable fuel. 80.1129 Section 80.1129 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Renewable Fuel Standard § 80.1129 Requirements for separating RINs from volumes of renewable fuel. (a)(1) Separation of a...
Daugherty, Ronald D.; And Others
This final volume of a four-volume study considers the need for personnel for traffic control, police traffic services, pedestrian safety, school bus safety, and debris hazard control and cleanup. Training requirements to meet national objectives are discussed, in terms of curriculum, staffing, student recruitment, facilities, equipment and…
2010-04-01
... otherwise required to compute tax in accordance with § 5c.1256-3. 5c.1305-1 Section 5c.1305-1 Internal... INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging rules for taxpayers otherwise required to compute tax in accordance with § 5c.1256-3. (a) In general. If...
Poitout-Belissent, Florence; Aulbach, Adam; Tripathi, Niraj; Ramaiah, Lila
2016-12-01
In preclinical safety assessment, blood volume requirements for various endpoints pose a major challenge. The goal of this working group was to review current practices for clinical pathology (CP) testing in preclinical toxicologic studies, and to discuss advantages and disadvantages of methods for reducing blood volume requirements. An industry-wide survey was conducted to gather information on CP instrumentation and blood collection practices for hematology, clinical biochemistry, and coagulation evaluation in laboratory animals involved in preclinical studies. Based on the survey results and collective experience of the authors, the working group proposes the following "points to consider" for CP testing: (1) For most commercial analyzers, 0.5 mL and 0.8 mL of whole blood are sufficient for hematology and biochemistry evaluation, respectively. (2) Small analyzers with low volume requirements and low throughput have limited utility in preclinical studies. (3) Sample pooling or dilution is inappropriate for many CP methods. (4) Appropriate collection sites should be determined based on blood volume requirements and technical expertise. (5) Microsampling does not provide sufficient volume given current analyzer and quality assurance requirements. (6) Study design considerations include: the use of older/larger animals (rodents), collection of CP samples before toxicokinetic samples, use of separate subsets of mice for hematology and clinical biochemistry testing, use of a priority list for clinical biochemistry, and when possible, eliminating coagulation testing. © 2016 American Society for Veterinary Clinical Pathology.
International Nuclear Information System (INIS)
Barraclough, B; Li, J; Liu, C; Yan, G
2014-01-01
Purpose: Fourier-based deconvolution approaches used to eliminate the ion chamber volume averaging effect (VAE) suffer from measurement noise. This work aims to investigate a novel method to account for ion chamber VAE through convolution in a commercial treatment planning system (TPS). Methods: Beam profiles of various field sizes and depths of an Elekta Synergy were collected with a finite-size ion chamber (CC13) to derive a clinically acceptable beam model for a commercial TPS (Pinnacle³), following the vendor-recommended modeling process. The TPS-calculated profiles were then externally convolved with a Gaussian function representing the chamber (σ = chamber radius). The agreement between the convolved profiles and measured profiles was evaluated with a one-dimensional Gamma analysis (1%/1 mm) as an objective function for optimization. TPS beam model parameters for the focal and extra-focal sources were optimized and loaded back into the TPS for a new calculation. This process was repeated until the objective function converged, using a Simplex optimization method. Planar doses of 30 IMRT beams were calculated with both the clinical and the re-optimized beam models and compared with MapCHECK™ measurements to evaluate the new beam model. Results: After re-optimization, the two orthogonal source sizes for the focal source were reduced from 0.20/0.16 cm to 0.01/0.01 cm, the minimal allowed values in Pinnacle. No significant change in the parameters for the extra-focal source was observed. With the re-optimized beam model, the average Gamma passing rate for the 30 IMRT beams increased from 92.1% to 99.5% with a 3%/3 mm criterion and from 82.6% to 97.2% with a 2%/2 mm criterion. Conclusion: We proposed a novel method to account for ion chamber VAE in a commercial TPS through convolution. The re-optimized beam model, with VAE accounted for through a reliable and easy-to-implement convolution and optimization approach, outperforms the original beam model in standard IMRT QA
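The central convolution step, smearing a calculated profile with a Gaussian whose σ equals the chamber radius, can be sketched as follows. The step profile, sampling grid and edge handling are illustrative assumptions, not the paper's data:

```python
import math

def gaussian_kernel(sigma_mm, step_mm, half_width=3.0):
    """Normalized discrete Gaussian sampled out to half_width * sigma."""
    n = int(half_width * sigma_mm / step_mm)
    k = [math.exp(-0.5 * ((i * step_mm) / sigma_mm) ** 2) for i in range(-n, n + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve_profile(profile, sigma_mm, step_mm):
    """Convolve a TPS-calculated dose profile with a Gaussian of sigma equal
    to the chamber radius, mimicking the volume averaging the chamber imposes
    on the measurement. Edge values are held constant (replicate padding)."""
    k = gaussian_kernel(sigma_mm, step_mm)
    n = len(k) // 2
    padded = [profile[0]] * n + list(profile) + [profile[-1]] * n
    return [sum(kv * padded[i + j] for j, kv in enumerate(k))
            for i in range(len(profile))]

# Toy penumbra: a sharp dose step blurred by a CC13-like chamber (~3 mm radius)
profile = [100.0] * 20 + [0.0] * 20          # dose at 1 mm spacing
blurred = convolve_profile(profile, sigma_mm=3.0, step_mm=1.0)
```

The blurred profile keeps the flat regions intact but broadens the penumbra, which is exactly the effect the optimization loop compensates for in the beam model.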
Iizaka, Shinji; Kaitani, Toshiko; Nakagami, Gojiro; Sugama, Junko; Sanada, Hiromi
2015-11-01
Adequate nutritional intake is essential for pressure ulcer healing. Recently, the estimated energy requirement (30 kcal/kg) and the average protein requirement (0.95 g/kg) necessary to maintain metabolic balance have been reported. The purpose was to evaluate the clinical validity of these requirements in older hospitalized patients with pressure ulcers by assessing nutritional status and wound healing. This multicenter prospective study, carried out as a secondary analysis of a clinical trial, included 194 patients with pressure ulcers aged ≥65 years from 29 institutions. Nutritional status, including anthropometry and biochemical tests, and wound status, assessed by a structured severity tool, were evaluated over 3 weeks. Energy and protein intake were determined from medical records on a typical day and dichotomized by whether they met the estimated average requirement. Longitudinal data were analyzed with a multivariate mixed-effects model. Meeting the energy requirement was associated with changes in weight and with wound healing for deep ulcers (P = 0.013 for both), improving exudates and necrotic tissue, but not for superficial ulcers. The estimated energy requirement and average protein requirement were clinically validated for prevention of nutritional decline and of impaired healing of deep pressure ulcers. © 2014 Japan Geriatrics Society.
Bhalla, Amneet Pal Singh; Johansen, Hans; Graves, Dan; Martin, Dan; Colella, Phillip; Applied Numerical Algorithms Group Team
2017-11-01
We present a consistent cell-averaged discretization for the incompressible Navier-Stokes equations on complex domains using embedded boundaries. The embedded boundary is allowed to freely cut the locally refined background Cartesian grid. An implicit-function representation is used for the embedded boundary, which allows us to convert the required geometric moments in the Taylor series expansion (up to arbitrary order) of polynomials into an algebraic problem in lower dimensions. The computed geometric moments are then used to construct stencils for various operators like the Laplacian, divergence, gradient, etc., by solving a least-squares system locally. We also construct the inter-level data-transfer operators like prolongation and restriction for multigrid solvers using the same least-squares system approach. This allows us to retain high order of accuracy near coarse-fine interfaces and near embedded boundaries. Canonical problems like Taylor-Green vortex flow and flow past bluff bodies will be presented to demonstrate the proposed method. U.S. Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231).
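The local moment-matching solve used to build operator stencils can be illustrated in 1D: requiring the stencil weights to reproduce the second derivative of low-order polynomials recovers the classic Laplacian stencil. Here the system is square and solved exactly by Gaussian elimination, whereas the paper solves a least-squares system over a larger cut-cell neighbourhood; the setup is our simplification:

```python
def laplacian_stencil_1d(offsets, h=1.0):
    """Solve for weights w_j such that sum_j w_j * p(x_j) = p''(0) for all
    monomials p(x) = x^p up to the order the point set supports -- the same
    moment-matching idea used to build operators near embedded boundaries."""
    n = len(offsets)
    # moment matrix: row p holds x_j^p; rhs is d^2/dx^2 of x^p at x = 0
    A = [[(o * h) ** p for o in offsets] for p in range(n)]
    b = [2.0 if p == 2 else 0.0 for p in range(n)]
    # naive Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

print(laplacian_stencil_1d([-1, 0, 1]))  # recovers the classic [1, -2, 1] stencil
```

The same machinery generalizes: shifting or truncating the point set (as a cut cell does) changes the moment matrix but not the procedure.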
Requirement for an intact cytoskeleton for volume regulation in boar spermatozoa.
Petrunkina, A M; Hebel, M; Waberski, D; Weitze, K F; Töpfer-Petersen, E
2004-01-01
Osmotically induced cell swelling triggers a chain of events leading to a net loss of major cell ions and water, resulting in cell volume recovery, a process known as regulatory volume decrease (RVD). In many cell types, there is evidence that the cytoskeleton may play a role in the initial sensing and transduction of the signal of volume change. In this study, we tested the hypothesis that an intact microfilament and microtubule network is required for the volume response and RVD in boar sperm before and after capacitation treatment, and whether addition of cytochalasin D and colchicine to the capacitation medium would affect volumetric behaviour. Capacitation is a series of cellular and molecular alterations that enable the spermatozoon to fertilize an oocyte. Cell volume measurements of washed sperm suspensions were performed electronically in Hepes-buffered saline solutions of 300 and 180 mosmol/kg. After exposure to hypoosmotic conditions, boar sperm showed initial swelling (up to 150% of initial volume within 5 min), which was subsequently partially reversed (to about 120-130% after 20 min). Treatment with cytochalasin D led to reduced initial swelling (1 μmol/l) and loss of RVD in washed sperm (1-10 μmol/l) and at the beginning of incubation under capacitating conditions (5 μmol/l). Short treatment with 500 μmol/l colchicine affected the volume regulatory ability of sperm under capacitating conditions but not of washed sperm. No significant differences in cell volume response were observed after subsequent addition of cytochalasin D and colchicine to suspensions of sperm incubated for 3 h under capacitating conditions. However, incubation under capacitating conditions in the presence of cytochalasin D led to improved volume regulation at the end of the incubation period (23%). The microfilament network appears to be important for volume regulation in washed boar spermatozoa, while intact microtubules do not seem to be necessary for
Smith, J. H.
1980-01-01
Average hourly and daily total insolation estimates for 235 United States locations are presented. Values are presented for a selected number of array tilt angles on a monthly basis. All units are in kilowatt hours per square meter.
Energy Technology Data Exchange (ETDEWEB)
1994-04-01
The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 6) outlines the standards and requirements for the sections on: Environmental Restoration and Waste Management, Research and Development and Experimental Activities, and Nuclear Safety.
Energy Technology Data Exchange (ETDEWEB)
1994-04-01
The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Document (S/RID) is contained in multiple volumes. This document (Volume 2) presents the standards and requirements for the following sections: Quality Assurance, Training and Qualification, Emergency Planning and Preparedness, and Construction.
International Nuclear Information System (INIS)
1994-04-01
The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 4) presents the standards and requirements for the following sections: Radiation Protection and Operations
International Nuclear Information System (INIS)
1994-04-01
The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 5) outlines the standards and requirements for the Fire Protection and Packaging and Transportation sections
2013-05-13
... Revocation of TSCA Section 4 Testing Requirements for One High Production Volume Chemical Substance AGENCY... Production Volume (HPV) chemical substance, benzenesulfonic acid, [[4-[[4-(phenylamino)phenyl][4-(phenylimino)-2,5- cyclohexadien-1-ylidene]methyl]phenyl]amino]- (CAS No. 1324-76-1), also known as C.I. Pigment...
Tidal Volume Requirement in Mechanically Ventilated Infants with Meconium Aspiration Syndrome.
Sharma, Saumya; Clark, Shane; Abubakar, Kabir; Keszler, Martin
2015-08-01
The aim of the study was to test the hypothesis that the increased physiologic dead space and functional residual capacity seen in meconium aspiration syndrome (MAS) result in a higher tidal volume (VT) requirement to achieve adequate ventilation. Retrospective review of infants with MAS admitted to our hospital from 2000 to 2010 managed with conventional ventilation. Demographics, ventilator settings, VT, respiratory rate (RR), and blood gas values were recorded. Minute ventilation (MV) was calculated as RR × VT. Only VT values with a corresponding partial pressure of carbon dioxide (PaCO2) between 35 and 60 mm Hg were included. Mean VT/kg and MV/kg were calculated for each patient. Forty infants ventilated for lung disease other than MAS or pulmonary hypoplasia served as controls. Birth weights of the 28 MAS patients and 40 control infants were similar (3,330 ± 500 g and 3,300 ± 640 g). Two patients in each group required extracorporeal membrane oxygenation. Infants with MAS required 26% higher VT and 42% higher MV compared with controls to maintain equal PaCO2. Infants with MAS require larger VT and higher total MV to achieve similar alveolar ventilation, consistent with the pathophysiology of MAS. Our findings provide the first reference data to guide selection of VT in infants with MAS.
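The minute-ventilation bookkeeping used here (MV = RR × VT, normalized per kilogram) is a one-liner; the numbers below are illustrative only, not the study's data:

```python
def minute_ventilation_per_kg(vt_ml, rr_per_min, weight_kg):
    """Minute ventilation in mL/kg/min from tidal volume (mL),
    respiratory rate (breaths/min) and body weight (kg)."""
    return vt_ml * rr_per_min / weight_kg

# Illustrative only: a control infant versus an MAS infant of the same
# weight and rate needing ~26% more tidal volume, per the study's finding.
control = minute_ventilation_per_kg(vt_ml=15.0, rr_per_min=40, weight_kg=3.3)
mas = minute_ventilation_per_kg(vt_ml=15.0 * 1.26, rr_per_min=40, weight_kg=3.3)
print(control, mas)
```

At equal rate, the MV/kg scales directly with the VT increase, which is why the study reports both VT and MV differences.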
Development of an Inline Dry Power Inhaler That Requires Low Air Volume.
Farkas, Dale; Hindle, Michael; Longest, P Worth
2017-12-20
Inline dry powder inhalers (DPIs) are actuated by an external air source and have distinct advantages for delivering aerosols to infants and children, and to individuals with compromised lung function or who require ventilator support. However, current inline DPIs either perform poorly, are difficult to operate, and/or require large volumes (∼1 L) of air. The objective of this study was to develop and characterize a new inline DPI for aerosolizing spray-dried formulations with powder masses of 10 mg and higher using a dispersion air volume of 10 mL per actuation that is easy to load (capsule-based) and operate. Primary features of the new low air volume (LV) DPIs are fixed hollow capillaries that both pierce the capsule and provide a continuous flow path for air and aerosol passing through the device. Two different configurations were evaluated: a straight-through (ST) device, with the inlet and outlet capillaries on opposite ends of the capsule, and a single-sided (SS) device, with both the inlet and outlet capillaries on the same side of the capsule. The devices were operated with five actuations of a 10 mL air syringe using an albuterol sulfate (AS) excipient-enhanced growth (EEG) formulation. Device emptying and aerosol characteristics were evaluated for multiple device outlet configurations. Each device had specific advantages. The best-case ST device produced the smallest aerosol [mean mass median aerodynamic diameter (MMAD) = 1.57 μm; fine particle fraction below 5 μm (FPF<5μm) = 95.2%], but the mean emitted dose (ED) was 61.9%. The best-case SS device improved ED (84.8%) but produced a larger aerosol (MMAD = 2.13 μm; FPF<5μm = 89.3%) that was marginally higher than the initial deaggregation target. The new LV-DPIs produced an acceptable high-quality aerosol with only 10 mL of dispersion air per actuation and were easy to load and operate. This performance should enable application in high and low flow
Puckett, L.J.
1991-01-01
Ion concentrations were generally less variable within storms compared with net ion input data. Concentrations and net inputs of some ions were consistently less variable than others; for example, Ca2+, NO3-, and SO42- were less variable than NH4+ and K+. These patterns of variability were consistent in comparisons both within and among storms. The relatively low variability of NO3- and SO42- is probably due to dry deposition of these ions as anthropogenic pollutants, while the low variability of Ca2+ is the result of deposition in windblown soil particles. The high variability of NH4+ and K+ is probably the result of biological processes. Ammonium is strongly retained by the canopy, and K+ is readily leached from it. Retention by, and leaching from, the canopy can induce spatial variability as a result of spatial heterogeneity in the biota. Throughfall volume also displayed low variability within and among events, requiring an average of 11 collectors to estimate the mean within 10% at the 95% confidence level. -from Author
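The collector count quoted above follows from the standard normal-approximation sample-size formula n = (z·CV/E)². The sketch below reproduces a figure of about 11 collectors under an assumed coefficient of variation; the CV value is our assumption, not taken from the paper:

```python
import math

def collectors_needed(cv_percent, rel_error_percent, z=1.96):
    """Collectors required to estimate mean throughfall volume within a
    given relative error at ~95% confidence (normal approximation):
    n = (z * CV / E)^2, rounded up to a whole collector."""
    return math.ceil((z * cv_percent / rel_error_percent) ** 2)

# An assumed spatial CV of ~17% yields the reported 11 collectors
# for a 10% relative error at the 95% confidence level.
print(collectors_needed(cv_percent=16.9, rel_error_percent=10))
```

Halving the tolerated error roughly quadruples the required number of collectors, which is why tighter throughfall estimates get expensive quickly.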
Study of space shuttle EVA/IVA support requirements. Volume 1: Technical summary report
Copeland, R. J.; Wood, P. W., Jr.; Cox, R. L.
1973-01-01
Results are summarized which were obtained for equipment requirements for the space shuttle EVA/IVA pressure suit, life support system, mobility aids, vehicle support provisions, and energy 4 support. An initial study of tasks, guidelines, and constraints and a special task on the impact of a 10 psia orbiter cabin atmosphere are included. Supporting studies not related exclusively to any one group of equipment requirements are also summarized. Representative EVA/IVA task scenarios were defined based on an evaluation of missions and payloads. Analysis of the scenarios resulted in a total of 788 EVA/IVA's in the 1979-1990 time frame, for an average of 1.3 per shuttle flight. Duration was estimated to be under 4 hours on 98% of the EVA/IVA's, and distance from the airlock was determined to be 70 feet or less 96% of the time. Payload water vapor sensitivity was estimated to be significant on 9%-17% of the flights. Further analysis of the scenarios was carried out to determine specific equipment characteristics, such as suit cycle and mobility requirements.
2012-05-14
... Withdrawal of Revocation of TSCA Section 4 Testing Requirements for One High Production Volume Chemical...]amino]- (CAS No. 1324-76-1), also known as C.I. Pigment Blue 61. EPA received an adverse comment regarding C.I. Pigment Blue 61. This document withdraws the revocation of testing requirements for C.I...
2012-05-14
... Revocation of TSCA Section 4 Testing Requirements for One High Production Volume Chemical Substance AGENCY...]- (CAS No. 1324-76-1), also known as C.I. Pigment Blue 61. EPA is basing its decision to take this action... is proposing to amend the TSCA section 4(a) chemical testing requirements for one high production...
Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Charlock, Thomas P.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis; Coakley, J. A.; Randall, David R.
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 4 details the advanced CERES techniques for computing surface and atmospheric radiative fluxes (using the coincident CERES cloud property and top-of-the-atmosphere (TOA) flux products) and for averaging the cloud properties and TOA, atmospheric, and surface radiative fluxes over various temporal and spatial scales. CERES attempts to match the observed TOA fluxes with radiative transfer calculations that use as input the CERES cloud products and NOAA National Meteorological Center analyses of temperature and humidity. Slight adjustments in the cloud products are made to obtain agreement of the calculated and observed TOA fluxes. The computed products include shortwave and longwave fluxes from the surface to the TOA. The CERES instantaneous products are averaged on a 1.25-deg latitude-longitude grid, then interpolated to produce global, synoptic maps of TOA fluxes and cloud properties by using 3-hourly, normalized radiances from geostationary meteorological satellites. Surface and atmospheric fluxes are computed by using these interpolated quantities. Clear-sky and total fluxes and cloud properties are then averaged over various scales.
Directory of Open Access Journals (Sweden)
Patricia Bouyer
2015-09-01
Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
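As an illustration of the average-energy objective, here is a minimal sketch (an assumed formalization, not the paper's notation): since memoryless strategies suffice, a play is ultimately periodic, and when the cycle's total weight is zero the long-run average of the accumulated energy equals the mean energy level over one traversal of the cycle.

```python
def average_energy(prefix, cycle):
    """Long-run average of accumulated energy along a lasso-shaped play.

    `prefix` and `cycle` are lists of integer action weights; the average
    is finite only when the cycle's total weight is zero (otherwise the
    accumulated energy drifts without bound).
    """
    assert sum(cycle) == 0, "average energy diverges for nonzero-weight cycles"
    energy = sum(prefix)          # accumulated energy on entering the cycle
    levels = []
    for w in cycle:               # energy level after each step of the cycle
        energy += w
        levels.append(energy)
    return sum(levels) / len(levels)
```

For example, a cycle that gains 2 then spends 2, entered with empty prefix, alternates between energy levels 2 and 0, giving an average energy of 1.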
Caltrans Average Annual Daily Traffic Volumes (2004)
California Environmental Health Tracking Program — [ from http://www.ehib.org/cma/topic.jsp?topic_key=79 ] Traffic exhaust pollutants include compounds such as carbon monoxide, nitrogen oxides, particulates (fine...
Demitri, Nevine; Zoubir, Abdelhak M
2017-01-01
Glucometers present an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability features. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples, while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load, while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, thereby decreasing the measurement time and, thus, improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently state of the art with sufficient accuracy according to the most recent ISO standards and reduce measurement time significantly compared to state-of-the-art methods.
Weighted south-wide average pulpwood prices
James E. Granskog; Kevin D. Growther
1991-01-01
Weighted average prices provide a more accurate representation of regional pulpwood price trends when production volumes vary widely by state. Unweighted South-wide average delivered prices for pulpwood, as reported by Timber Mart-South, were compared to average annual prices weighted by each state's pulpwood production from 1977 to 1986. Weighted average prices...
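The weighting scheme described above is a production-weighted mean; a minimal sketch (the prices and volumes below are illustrative numbers, not Timber Mart-South data):

```python
def weighted_average_price(prices, volumes):
    """Average price with each state's price weighted by its production volume."""
    total = sum(volumes)
    return sum(p * v for p, v in zip(prices, volumes)) / total

# two hypothetical states, one producing three times as much pulpwood:
# the weighted average sits closer to the high-volume state's price
weighted = weighted_average_price([10.0, 20.0], [3.0, 1.0])   # 12.5
unweighted = sum([10.0, 20.0]) / 2                            # 15.0
```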
Lagrangian averaging with geodesic mean
Oliver, Marcel
2017-11-01
This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
Fusion Engineering Device. Volume V. Technology R and D requirements for construction
International Nuclear Information System (INIS)
1981-10-01
This volume covers the following areas: (1) nuclear systems, (2) auxiliary heating, (3) magnet systems, (4) remote maintenance, (5) fueling, (6) diagnostics, instrumentation, information and control, and (7) safety and environment
40 CFR 80.1429 - Requirements for separating RINs from volumes of renewable fuel.
2010-07-01
... or biogas for which RINs have been generated in accordance with § 80.1426(f) must separate any RINs that have been assigned to that volume of renewable electricity or biogas if: (i) The party designates the electricity or biogas as transportation fuel; and (ii) The electricity or biogas is used as...
Fusion Engineering Device. Volume IV. Physics basis and physics R and D requirements
International Nuclear Information System (INIS)
1981-10-01
This volume covers the following issues: (1) confinement scaling, (2) cross section shaping, limits on B and q, (3) ion cyclotron heating, (4) neutral beam heating, (5) mechanical pump limiter, (6) poloidal divertor, and (7) non-divertor active impurity control
Energy Technology Data Exchange (ETDEWEB)
Chapman, G.A.; Buevens, W.R.
1982-06-01
The requirements of infrastructure and community services necessary to accommodate the development of geothermal energy on the Island of Hawaii for electricity production are identified. The following aspects are covered: Puna District-1981, labor resources, geothermal development scenarios, geothermal land use, the impact of geothermal development on Puna, labor resource requirements, and the requirements for government activity.
Averaging in cosmological models
Coley, Alan
2010-01-01
The averaging problem in cosmology is of considerable importance for the correct interpretation of cosmological data. We review cosmological observations and discuss some of the issues regarding averaging. We present a precise definition of a cosmological model and a rigorous mathematical definition of averaging, based entirely in terms of scalar invariants.
Irsik, Debra L; Blazer-Yost, Bonnie L; Staruschenko, Alexander; Brands, Michael W
2017-06-01
Despite the effects of insulinopenia in type 1 diabetes and evidence that insulin stimulates multiple renal sodium transporters, it is not known whether normal variation in plasma insulin regulates sodium homeostasis physiologically. This study tested whether the normal postprandial increase in plasma insulin significantly attenuates renal sodium and volume losses. Rats were instrumented with chronic artery and vein catheters, housed in metabolic cages, and connected to hydraulic swivels. Measurements of urine volume and sodium excretion (UNaV) over 24 h and the 4-h postprandial period were made in control (C) rats and insulin-clamped (IC) rats in which the postprandial increase in insulin was prevented. Twenty-four-hour urine volume (36 ± 3 vs. 15 ± 2 ml/day) and UNaV (3.0 ± 0.2 vs. 2.5 ± 0.2 mmol/day) were greater in the IC compared with C rats, respectively. Four hours after rats were given a gel meal, blood glucose and urine volume were greater in IC rats, but UNaV decreased. To simulate a meal while controlling blood glucose, C and IC rats received a glucose bolus that yielded peak increases in blood glucose that were not different between groups. Urine volume (9.7 ± 0.7 vs. 6.0 ± 0.8 ml/4 h) and UNaV (0.50 ± 0.08 vs. 0.20 ± 0.06 mmol/4 h) were greater in the IC vs. C rats, respectively, over the 4-h test. These data demonstrate that the normal increase in circulating insulin in response to hyperglycemia may be required to prevent excessive renal sodium and volume losses and suggest that insulin may be a physiological regulator of sodium balance. Copyright © 2017 the American Physiological Society.
Energy Technology Data Exchange (ETDEWEB)
Burt, D.L.
1994-04-01
The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 7) presents the standards and requirements for the following sections: Occupational Safety and Health, and Environmental Protection.
International Nuclear Information System (INIS)
Burt, D.L.
1994-04-01
The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 7) presents the standards and requirements for the following sections: Occupational Safety and Health, and Environmental Protection
Transaction-based building controls framework, Volume 2: Platform descriptive model and requirements
Energy Technology Data Exchange (ETDEWEB)
Akyol, Bora A. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Haack, Jereme N. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Carpenter, Brandon J. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Katipamula, Srinivas [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Lutes, Robert G. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Hernandez, George [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
2015-07-31
Transaction-based Building Controls (TBC) offer a control systems platform that provides an agent execution environment that meets the growing requirements for security, resource utilization, and reliability. This report outlines the requirements for a platform to meet these needs and describes an illustrative/exemplary implementation.
Space station needs, attributes and architectural options study. Volume 3: Mission requirements
1983-04-01
User missions that are enabled or enhanced by a manned space station are identified. The mission capability requirements imposed on the space station by these users are delineated. The accommodation facilities, equipment, and functional requirements necessary to achieve these capabilities are identified, and the economic, performance, and social benefits which accrue from the space station are defined.
RELAP5/MOD3 code manual: User's guide and input requirements. Volume 2
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-08-01
The RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents, and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. Volume II contains detailed instructions for code application and input data preparation.
Southwest Project: resource/institutional requirements analysis. Volume I. Executive summary
Energy Technology Data Exchange (ETDEWEB)
1979-12-01
This project provides information which could be used by DOE in formulating their plans for commercialization and market penetration of central station solar electric generating plants in the southwestern region of the United States. The area of interest includes Arizona, California, Colorado, Nevada, New Mexico, Utah, and sections of Oklahoma and Texas. The project evaluated the potential integration of central station solar electric generating facilities into the existing electric grids of the region through the year 2000 by making use of system planning methodology which is commonly used throughout the electric utility industry. The technologies included: wind energy conversion, solar thermal electric, solar photovoltaic conversion, and hybrid (solar thermal repowering) solar electric systems. The participants in this project included 12 electric utility companies and a state power authority in the southwestern United States as well as a major consulting engineering firm. A broad synopsis of information found in Volumes II, III, and IV is presented. (MCW)
1984-01-01
The electroepitaxial process and the Very Large Scale Integration (VLSI) circuits (chips) facilities were chosen because each requires a very high degree of automation, and therefore involved extensive use of teleoperators, robotics, process mechanization, and artificial intelligence. Both cover a raw materials process and a sophisticated multi-step process and are therefore highly representative of the kinds of difficult operation, maintenance, and repair challenges which can be expected for any type of space manufacturing facility. Generic areas were identified which will require significant further study. The initial design will be based on terrestrial state-of-the-art hard automation. One hundred candidate missions were evaluated on the basis of automation potential and availability of meaningful knowledge. The design requirements and unconstrained design concepts developed for the two missions are presented.
Strategic Requirements for the Army to the Year 2000. Volume V. Middle East and Southwest Asia.
1982-11-01
development terms, it seems unavoidable for political stability. Egypt's economic burdens seem overwhelming. Population growth rates remain high, the... [Front matter: Strategic Requirements for the Army to the Year 2000, Middle East and Southwest Asia; prepared for the Department of the Army, Office of the Deputy Chief of Staff for Operations, by the Center for Strategic and International Studies, Georgetown University.]
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
McKinley, Todd O; McCarroll, Tyler; Gaski, Greg E; Frantz, Travis L; Zarzaur, Ben L; Terry, Colin; Steenburg, Scott D
2016-02-01
Multiply injured patients (MIPs) in hemorrhagic shock develop oxygen debt which causes organ dysfunction and can lead to death. We developed a noninvasive patient-specific index, Shock Volume (SV), to quantify the magnitude of hypoperfusion. SV integrates the magnitude and duration that incremental shock index values are elevated above known thresholds of hypoperfusion using serial individual vital sign data. SV can be monitored in real time to assess ongoing hypoperfusion. The goal of this study was to determine how SV corresponded to transfusion requirements and organ dysfunction in a retrospective cohort of 74 MIPs. We measured SV in 6-h increments for 48 h after injury in multiply injured adults (18-65; Injury Severity Score ≥18). Patients who had accumulated 40 units of SV within 6 h of injury and 100 units of SV within 12 h of injury were at high risk for requiring massive transfusion or multiple critical administration transfusions. SV measurements were equally sensitive and specific as compared with base deficit values in predicting transfusions. SV measurements at 6 h after injury stratified patients at risk for multiple organ failure determined by Denver scores. In addition, SV values corresponded to the magnitude of organ failure determined by Sequential Organ Failure Assessment scores. SV is a patient-specific index that can be quantified in real time in critically injured patients. It is a surrogate for cumulative hypoperfusion and it predicts high-volume transfusions and organ dysfunction.
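The construction of SV can be sketched as the time integral of shock-index excess above a hypoperfusion threshold. This is an assumed simplification: the shock index here is taken as HR/SBP, and the 0.9 cutoff is an illustrative threshold, not necessarily the study's exact incremental thresholds.

```python
def shock_volume(times_min, heart_rates, systolic_bps, threshold=0.9):
    """Accumulate shock-index excess (magnitude x duration) over serial vitals.

    `times_min` are measurement times in minutes; `threshold` is an assumed
    illustrative hypoperfusion cutoff for the shock index (HR / SBP).
    """
    sv = 0.0
    for i in range(1, len(times_min)):
        shock_index = heart_rates[i] / systolic_bps[i]
        dt = times_min[i] - times_min[i - 1]
        sv += max(shock_index - threshold, 0.0) * dt
    return sv
```

An hour at a shock index of 1.1 against a 0.9 threshold accumulates 0.2 x 60 = 12 units, while vitals that stay below the threshold contribute nothing.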
Space transfer vehicle concepts and requirements. Volume 4: Summary of special studies
1993-09-01
Our final report for Phase 1 addressed the future space transportation needs and requirements based on the current assets at the time, and their evolution through technology/advanced development using a path and schedule that supported the world leadership role of the United States in a responsible and realistic financial forecast. First and foremost, the recommendations placed high value on the safety and success of missions, both manned and unmanned, through a total quality management philosophy at Martin Marietta. The second phase of the STV contract involved the use of Technical Directives (TD) to provide short-term support for specialized tasks as required by the COTR. Three of these tasks were performed in parallel with Phase 1. These tasks were the Liquid Acquisition Experiment (LACE), Liquid Reorientation Experiment (LIRE), and Expert System for Design, Operation, and Technology Studies (ESDOTS). The results of these TDs were reported in conjunction with the Phase 1 Final Report. Cost analysis of existing launch systems has demonstrated a need for a new upper stage that will increase America's competitiveness in the global launch services market. To provide a growth path to future exploration-class STVs, near-term low-cost upper stages featuring modularity, portability, scalability, and evolvability must be developed. These recommendations define a program that leverages ongoing activities to establish a new development environment, develops technologies that benefit the entire life cycle of a system, and results in a scalable hardware platform that provides a growth path to future upper stages.
Corporate Data Network (CDN) data requirements task. Enterprise Model. Volume 1
International Nuclear Information System (INIS)
1985-11-01
The NRC has initiated a multi-year program to centralize its information processing in a Corporate Data Network (CDN). The new information processing environment will include shared databases, telecommunications, office automation tools, and state-of-the-art software. Touche Ross and Company was contracted with to perform a general data requirements analysis for shared databases and to develop a preliminary plan for implementation of the CDN concept. The Enterprise Model (Vol. 1) provided the NRC with agency-wide information requirements in the form of data entities and organizational demand patterns as the basis for clustering the entities into logical groups. The Data Dictionary (Vol. 2) provided the NRC with definitions and example attributes and properties for each entity. The Data Model (Vol. 3) defined logical databases and entity relationships within and between databases. The Preliminary Strategic Data Plan (Vol. 4) prioritized the development of databases and included a workplan and approach for implementation of the shared database component of the Corporate Data Network
Corporate Data Network (CDN). Data Requirements Task. Preliminary Strategic Data Plan. Volume 4
International Nuclear Information System (INIS)
1985-11-01
The NRC has initiated a multi-year program to centralize its information processing in a Corporate Data Network (CDN). The new information processing environment will include shared databases, telecommunications, office automation tools, and state-of-the-art software. Touche Ross and Company was contracted with to perform a general data requirements analysis for shared databases and to develop a preliminary plan for implementation of the CDN concept. The Enterprise Model (Vol. 1) provided the NRC with agency-wide information requirements in the form of data entities and organizational demand patterns as the basis for clustering the entities into logical groups. The Data Dictionary (Vol. 2) provided the NRC with definitions and example attributes and properties for each entity. The Data Model (Vol. 3) defined logical databases and entity relationships within and between databases. The Preliminary Strategic Data Plan (Vol. 4) prioritized the development of databases and included a workplan and approach for implementation of the shared database component of the Corporate Data Network
Energy Technology Data Exchange (ETDEWEB)
None
1978-06-01
Studies leading to the development of two 400 MW Offshore Thermal Energy Conversion Commercial Plants are presented. This volume includes a summary of three tasks: task IIA--systems evaluation and requirements; task IIB--evaluation plan; task III--technology review; and task IV--systems integration evaluation. Task IIA includes the definition of top level requirements and an assessment of factors critical to the selection of hull configuration and size, quantification of payload requirements and characteristics, and sensitivity of system characteristics to site selection. Task IIB includes development of a methodology for systematically evaluating the candidate hullforms, based on interrelationships and priorities developed during task IIA. Task III includes the assessment of current technology and identification of deficiencies in relation to OTEC requirements and the development of plans to correct such deficiencies. Task IV involves the formal evaluation of the six candidate hullforms in relation to site and plant capacity to quantify cost/size/capability relationships, leading to selection of an optimum commercial plant. (WHK)
Investigation of EUV tapeout flow issues, requirements, and options for volume manufacturing
Cobb, Jonathan; Jang, Sunghoon; Ser, Junghoon; Kim, Insung; Yeap, Johnny; Lucas, Kevin; Do, Munhoe; Kim, Young-Chang
2011-04-01
Although technical issues remain to be resolved, EUV lithography is now a serious contender for critical layer patterning of upcoming 2X node memory and 14nm Logic technologies in manufacturing. If improvements continue in defectivity, throughput and resolution, then EUV lithography appears likely to be the most extendable and cost-effective manufacturing lithography solution for sub-78nm pitch complex patterns. EUV lithography will be able to provide a significant relaxation in lithographic K1 factor (and a corresponding simplification of process complexity) vs. existing 193nm lithography. The increased K1 factor will result in some complexity reduction for mask synthesis flow elements (including illumination source shape optimization, design pre-processing, RET, OPC and OPC verification). However, EUV does add well-known additional complexities and issues to mask synthesis flows, such as across-lens shadowing variation, across-reticle flare variation, new proximity effects to be modeled, and a significant increase in pre-OPC and fracture file size. In this paper, we investigate the expected EUV-specific issues and new requirements for a production tapeout mask synthesis flow. The production EUV issues and new requirements fall into the categories of additional physical effects to be corrected for; additional automation or flow steps needed; and increases in file size at different parts of the flow. For example, OASIS file sizes after OPC of 250 gigabytes (GB) and file sizes after mask data prep of greater than three terabytes (TB) are expected to be common. These huge file sizes will place significant stress on post-processing methods, OPC verification, mask data fracture, file read-in/read-out, data transfer between sites (e.g., to the maskshop), etc. With current methods and procedures, it is clear that the hours or days needed to complete EUV mask synthesis and mask data flows would increase significantly if steps are not taken to make efficiency improvements.
Southwest Project: resource/institutional requirements analysis. Volume II. Technical studies
Energy Technology Data Exchange (ETDEWEB)
Ormsby, L. S.; Sawyer, T. G.; Brown, Dr., M. L.; Daviet, II, L. L.; Weber, E. R.; Brown, J. E.; Arlidge, J. W.; Novak, H. R.; Sanesi, Norman; Klaiman, H. C.; Spangenberg, Jr., D. T.; Groves, D. J.; Maddox, J. D.; Hayslip, R. M.; Ijams, G.; Lacy, R. G.; Montgomery, J.; Carito, J. A.; Ballance, J. W.; Bluemle, C. F.; Smith, D. N.; Wehrey, M. C.; Ladd, K. L.; Evans, Dr., S. K.; Guild, D. H.; Brodfeld, B.; Cleveland, J. A.; Hicks, K. L.; Noga, M. W.; Ross, A. M.
1979-12-01
The project provides information which could be used to accelerate the commercialization and market penetration of solar electric generation plants in the southwestern region of the United States. The area of concern includes Arizona, California, Colorado, Nevada, New Mexico, Utah, and sections of Oklahoma and Texas. The project evaluated the potential integration of solar electric generating facilities into the existing electric grids of the region through the year 2000. The technologies included wind energy conversion, solar thermal electric, solar photovoltaic conversion, and hybrid solar electric systems. Each of the technologies considered, except hybrid solar electric, was paired with a compatible energy storage system to improve plant performance and enhance applicability to a utility grid system. The hybrid concept utilizes a conventionally-fueled steam generator as well as a solar steam generator so it is not as dependent upon the availability of solar energy as are the other concepts. Operation of solar electric generating plants in conjunction with existing hydroelectric power facilities was also studied. The participants included 12 electric utility companies and a state power authority in the southwestern US, as well as a major consulting engineering firm. An assessment of the state-of-the-art of solar electric generating plants from an electric utility standpoint; identification of the electric utility industry's technical requirements and considerations for solar electric generating plants; estimation of the capital investment, operation, and maintenance costs for solar electric generating plants; and determination of the capital investment of conventional fossil and nuclear electric generating plants are presented. (MCW)
Averaging operations on matrices
Indian Academy of Sciences (India)
2014-07-03
Jul 3, 2014 ... The arithmetic mean of objects in a space need not lie in the space [Fréchet, 1948]. Example: finding the mean of right-angled triangles, S = {(x, y, z) ∈ ℝ₊³ : x² + y² = z²} = { [[z, x − iy], [x + iy, z]] : x, y, z > 0, z² = x² + y² }. On this surface of right triangles, the arithmetic mean does not lie on S. Tanvi Jain. Averaging operations on matrices ...
Averaging operations on matrices
Indian Academy of Sciences (India)
2014-07-03
Jul 3, 2014 ... flow at each voxel of a brain scan. • Elasticity: 6 × 6 positive-definite matrices model stress tensors. • Machine learning: n × n positive-definite matrices occur as kernel matrices. ... then the expected extension of the geometric mean, A^(1/2)B^(1/2), is not even self-adjoint, let alone positive definite. Tanvi Jain. Averaging operations on matrices ...
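The failure of A^(1/2)B^(1/2) noted above motivates the standard geometric mean of two positive-definite matrices, A # B = A^(1/2)(A^(-1/2)BA^(-1/2))^(1/2)A^(1/2), which is symmetric and positive definite. A sketch in Python with NumPy (`sqrtm_pd` is a local helper defined here, not a library routine):

```python
import numpy as np

def sqrtm_pd(A):
    """Square root of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """Geometric mean A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)."""
    As = sqrtm_pd(A)
    As_inv = np.linalg.inv(As)
    return As @ sqrtm_pd(As_inv @ B @ As_inv) @ As
```

For commuting matrices this reduces to the scalar geometric mean: the mean of 4I and 9I is 6I; and unlike A^(1/2)B^(1/2), the result is symmetric for any positive-definite pair.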
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve which comprises of a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over" . In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super large values; and (iv) "Average is Over" indeed.
Americans' Average Radiation Exposure
International Nuclear Information System (INIS)
2000-01-01
We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong...... to a non-linear manifold and re-normalization or orthogonalization must be applied to obtain proper rotations. These latter steps have been viewed as ad hoc corrections for the errors introduced by assuming a vector space. The article shows that the two approximative methods can be derived from natural...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation....
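The barycenter-and-renormalize estimate discussed in this and the preceding record can be sketched in a few lines (a minimal illustration; the input quaternions are assumed unit-norm and chosen in a common hemisphere, since q and -q represent the same rotation):

```python
import math

def average_quaternion(quats):
    """Barycenter of unit quaternions projected back onto the unit sphere.

    This is the approximative method: average in the ambient vector space,
    then renormalize to recover a proper (unit) rotation quaternion.
    """
    n = len(quats)
    mean = [sum(q[i] for q in quats) / n for i in range(4)]
    norm = math.sqrt(sum(c * c for c in mean))
    return [c / norm for c in mean]
```

Averaging the identity with a 90-degree rotation about z yields the 45-degree rotation; for two rotations the renormalized barycenter coincides with the geodesic midpoint.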
Average action for models with fermions
International Nuclear Information System (INIS)
Bornholdt, S.; Wetterich, C.
1993-01-01
The average action is a new tool for investigating spontaneous symmetry breaking in elementary particle theory and statistical mechanics beyond the validity of standard perturbation theory. The aim of this work is to provide techniques for an investigation of models with fermions and scalars by means of the average potential. In the phase with spontaneous symmetry breaking, the inner region of the average potential becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations in this region necessitate a calculation of the fermion determinant in a spin wave background. We also compute the fermionic contribution to the wave function renormalization in the scalar kinetic term. (orig.)
Directory of Open Access Journals (Sweden)
Mônica Deolindo Santiago
2008-04-01
Full Text Available PURPOSE: to obtain an equation capable of estimating the volume of red blood cell concentrate to be infused to correct anemia in fetuses of pregnant women with Rh-factor isoimmunization, based on parameters obtained at the cordocentesis preceding the intrauterine transfusion. METHODS: a cross-sectional study analyzing 89 intrauterine transfusions to correct anemia in 48 fetuses followed at the Centro de Medicina Fetal do Hospital das Clínicas da Universidade Federal de Minas Gerais. The median gestational age at cordocentesis was 29 weeks, and the mean number of procedures per fetus was 2.1. Fetal hemoglobin was assayed before and after cordocentesis, and the volume of red blood cell concentrate transfused was recorded. To derive a formula estimating the blood volume needed to correct fetal anemia, a multiple regression analysis was based on the volume needed to raise fetal hemoglobin by 1 g% (the difference between the final and initial hemoglobin concentrations divided by the transfused volume) and on the volume that would be needed to reach 14 g%. RESULTS: the pre-transfusion hemoglobin concentration ranged from 2.3 to 15.7 g%. The prevalence of fetal anemia (Hb
Little (Arthur D.), Inc., San Francisco, CA.
The economy, population, and manpower requirements of the Kansas City metropolitan area are examined in this volume of a report for the planning and development of Metropolitan Junior College (MJC). Part I looks at the Kansas City economy, first from a historical perspective and then in terms of recent trends in economic growth; the comparative…
Grassmann Averages for Scalable Robust PCA
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
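The element-wise trimmed average at the core of TGA can be sketched in a few lines; this is a generic illustration (the `trim_frac` value and toy data are assumptions, not the paper's implementation):

```python
import numpy as np

def trimmed_average(stack, trim_frac=0.2):
    """Element-wise trimmed mean over a stack of images (axis 0).

    For each pixel the smallest and largest `trim_frac` fraction of
    values are discarded before averaging, which makes the estimate
    robust to gross pixel outliers.
    """
    stack = np.sort(np.asarray(stack, dtype=float), axis=0)
    n = stack.shape[0]
    k = int(n * trim_frac)            # values trimmed at each end
    return stack[k:n - k].mean(axis=0)

# Five "images" of a constant scene; one pixel is grossly corrupted.
imgs = np.ones((5, 2, 2))
imgs[0, 0, 0] = 1000.0
print(trimmed_average(imgs)[0, 0])    # 1.0: the outlier is rejected
print(imgs.mean(axis=0)[0, 0])        # 200.8: a plain mean is ruined
```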
Averaging Einstein's equations : The linearized case
Stoeger, William R.; Helmi, Amina; Torres, Diego F.
We introduce a simple and straightforward averaging procedure, which is a generalization of one which is commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW
Multiple-level defect species evaluation from average carrier decay
Debuf, Didier
2003-10-01
An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average or volume-integrated decay. Implicit in the above is the requirement for good surface passivation such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of an assessment of the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground excited state multiple-level defect system. Also, minority carrier trapping is investigated.
Malone, Christopher D; Banerjee, Arjun; Alley, Marcus T; Vasanawala, Shreyas S; Roberts, Anne C; Hsiao, Albert
2018-01-01
We report here an initial experience using 4D flow MRI in pelvic imaging-specifically, in imaging uterine fibroids. We hypothesized that blood flow might correlate with fibroid volume and that quantifying blood flow might help to predict the amount of embolic required to achieve stasis at subsequent uterine fibroid embolization (UFE). Thirty-three patients with uterine fibroids and seven control subjects underwent pelvic MRI with 4D flow imaging. Of the patients with fibroids, 10 underwent 4D flow imaging before UFE and seven after UFE; in the remaining 16 patients with fibroids, UFE had yet to be performed. Four-dimensional flow measurements were performed using Arterys CV Flow. The flow fraction of the internal iliac artery was expressed as the ratio of internal iliac artery flow to external iliac artery flow and was compared between groups. The flow ratios between the internal iliac arteries on each side were calculated. Fibroid volume versus internal iliac flow fraction, embolic volume versus internal iliac flow fraction, and embolic volume ratio between sides versus the ratio of internal iliac artery flows between sides were compared. The mean internal iliac flow fraction was significantly higher in the 26 patients who underwent imaging before UFE (mean ± standard error, 0.78 ± 0.06) than in the seven patients who underwent imaging after UFE (0.48 ± 0.07, p flow fraction correlated well with fibroid volumes before UFE (r = 0.7754, p flow (r = 0.6776, p = 0.03). Internal iliac flow measured by 4D flow MRI correlates with fibroid volume and is predictive of the ratio of embolic required to achieve stasis on each side at subsequent UFE and may be useful for preprocedural evaluation of patients with uterine fibroids.
Correctional Facility Average Daily Population
Montgomery County of Maryland — This dataset contains Accumulated monthly with details from Pre-Trial Average daily caseload * Detention Services, Average daily population for MCCF, MCDC, PRRS and...
The difference between alternative averages
Directory of Open Access Journals (Sweden)
James Vaupel
2012-09-01
Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
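The stated identity is easy to verify numerically. A minimal check with arbitrary synthetic data (the weighting functions below are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
x  = rng.normal(size=100)         # the variable (e.g., an age-specific rate)
w1 = rng.uniform(1, 2, size=100)  # weighting function 1 (one age structure)
w2 = rng.uniform(1, 2, size=100)  # weighting function 2 (another structure)

avg1 = np.sum(w1 * x) / np.sum(w1)   # average of x under weights w1
avg2 = np.sum(w2 * x) / np.sum(w2)   # average of x under weights w2

# Identity from the abstract: avg1 - avg2 equals the w2-weighted
# covariance between x and the weight ratio r = w1/w2, divided by
# the w2-weighted average of r.
r = w1 / w2
E = lambda f: np.sum(w2 * f) / np.sum(w2)   # w2-weighted expectation
cov = E(x * r) - E(x) * E(r)

print(np.isclose(avg1 - avg2, cov / E(r)))  # True
```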
The flattening of the average potential in models with fermions
International Nuclear Information System (INIS)
Bornholdt, S.
1993-01-01
The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)
Computation of the bounce-average code
International Nuclear Information System (INIS)
Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.
1977-01-01
The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
Hall, Edward; Isaacs, James; Henriksen, Steve; Zelkin, Natalie
2011-01-01
This report is provided as part of ITT's NASA Glenn Research Center Aerospace Communication Systems Technical Support (ACSTS) contract NNC05CA85C, Task 7: New ATM Requirements-Future Communications, C-Band and L-Band Communications Standard Development and was based on direction provided by FAA project-level agreements for New ATM Requirements-Future Communications. Task 7 included two subtasks. Subtask 7-1 addressed C-band (5091- to 5150-MHz) airport surface data communications standards development, systems engineering, test bed and prototype development, and tests and demonstrations to establish operational capability for the Aeronautical Mobile Airport Communications System (AeroMACS). Subtask 7-2 focused on systems engineering and development support of the L-band digital aeronautical communications system (L-DACS). Subtask 7-1 consisted of two phases. Phase I included development of AeroMACS concepts of use, requirements, architecture, and initial high-level safety risk assessment. Phase II builds on Phase I results and is presented in two volumes. Volume I (this document) is devoted to concepts of use, system requirements, and architecture, including AeroMACS design considerations. Volume II describes an AeroMACS prototype evaluation and presents final AeroMACS recommendations. This report also describes airport categorization and channelization methodologies. The purposes of the airport categorization task were (1) to facilitate initial AeroMACS architecture designs and enable budgetary projections by creating a set of airport categories based on common airport characteristics and design objectives, and (2) to offer high-level guidance to potential AeroMACS technology and policy development sponsors and service providers. A channelization plan methodology was developed because a common global methodology is needed to assure seamless interoperability among diverse AeroMACS services potentially supplied by multiple service providers.
Directory of Open Access Journals (Sweden)
G. H. de Rooij
2009-07-01
Full Text Available Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression of the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.
Hatterick, G. R.
1972-01-01
The data sheets presented contain the results of the task analysis portion of the study to identify skill requirements of space shuttle crew personnel. A comprehensive data base is provided of crew functions, operating environments, task dependencies, and task-skills applicable to a representative cross section of earth orbital research experiments.
Delehanty, Kathleen
1983-01-01
Recent historical trends (1977-1978 through 1983-1984) in tuition and required fee charges in New Jersey colleges and universities are presented. Differences among New Jersey collegiate sectors and among different types of students (full- and part-time, undergraduate and graduate, resident and nonresident) are analyzed in terms of dollar and…
Averaging theorems in finite deformation plasticity
Nemat-Nasser, S C
1999-01-01
The transition from micro- to macro-variables of a representative volume element (RVE) of a finitely deformed aggregate (e.g., a composite or a polycrystal) is explored. A number of exact fundamental results on averaging techniques, valid at finite deformations and rotations of any arbitrary heterogeneous continuum, are obtained. These results depend on the choice of suitable kinematic and dynamic variables. For finite deformations, the deformation gradient and its rate, and the nominal stress and its rate, are optimally suited for the averaging purposes. A set of exact identities is presented in terms of these variables. An exact method for homogenization of an ellipsoidal inclusion in an unbounded finitely deformed homogeneous solid is presented, generalizing Eshelby's method for application to finite deformation problems. In terms of the nominal stress rate and the rate of change of the deformation gradient, measured relative to any arbitrary state, a general phase-transformation problem is con...
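For reference, the best-known averaging identities of this kind express the volume averages of the deformation gradient and the nominal stress purely in terms of boundary data. The following is a standard textbook form of these identities (the notation here is illustrative, not copied from the paper):

```latex
% Average deformation gradient and nominal stress over an RVE of
% volume V, boundary \partial V, reference coordinates X, current
% coordinates x, reference outward normal N, and nominal traction t:
\bar{F}_{iJ} = \frac{1}{V}\int_{V} F_{iJ}\,dV
             = \frac{1}{V}\int_{\partial V} x_i N_J \,dA ,
\qquad
\bar{S}_{Ji} = \frac{1}{V}\int_{V} S_{Ji}\,dV
             = \frac{1}{V}\int_{\partial V} X_J t_i \,dA .
```

Both reductions to surface integrals follow from the divergence theorem, which is why boundary measurements suffice to determine the averages.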
Convergence of multiple ergodic averages
Host, Bernard
2006-01-01
These notes are based on a course for a general audience given at the Centro de Modeliamento Matem\\'atico of the University of Chile, in December 2004. We study the mean convergence of multiple ergodic averages, that is, averages of a product of functions taken at different times. We also describe the relations between this area of ergodic theory and some classical and some recent results in additive number theory.
Johnson, Kenneth L.; White, K, Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.
Cryo-Electron Tomography and Subtomogram Averaging.
Wan, W; Briggs, J A G
2016-01-01
Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. © 2016 Elsevier Inc. All rights reserved.
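The align-and-average loop at the heart of subtomogram averaging can be sketched as follows. This toy version handles translational alignment only, via FFT cross-correlation; real pipelines also search rotations, apply CTF weighting, and use masks, and all names here are illustrative:

```python
import numpy as np

def align_translation(vol, ref):
    """Circularly shift `vol` to best match `ref`, using the peak of
    the FFT-based cross-correlation to find the integer shift."""
    cc = np.fft.ifftn(np.fft.fftn(ref) * np.conj(np.fft.fftn(vol))).real
    shift = np.unravel_index(np.argmax(cc), cc.shape)
    return np.roll(vol, shift, axis=tuple(range(vol.ndim)))

def subtomogram_average(subvolumes, n_iter=3):
    """Toy align-and-average loop: each subvolume is aligned to the
    current average, then a refined (higher-SNR) average is formed."""
    avg = np.mean(subvolumes, axis=0)            # initial reference
    for _ in range(n_iter):
        aligned = [align_translation(v, avg) for v in subvolumes]
        avg = np.mean(aligned, axis=0)           # refined average
    return avg

# A circularly shifted copy of a random volume is recovered exactly.
rng = np.random.default_rng(0)
ref = rng.normal(size=(8, 8, 8))
shifted = np.roll(ref, (2, 5, 1), axis=(0, 1, 2))
print(np.allclose(align_translation(shifted, ref), ref))  # True
```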
2010-07-01
... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
Ergodic averages via dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain....
Averaging of multivalued differential equations
Directory of Open Access Journals (Sweden)
G. Grammel
2003-04-01
Full Text Available Nonlinear multivalued differential equations with slow and fast subsystems are considered. Under transitivity conditions on the fast subsystem, the slow subsystem can be approximated by an averaged multivalued differential equation. The approximation in the Hausdorff sense is of order O(ε^(1/3)) as ε→0.
Fuzzy Weighted Average: Analytical Solution
van den Broek, P.M.; Noppen, J.A.R.
2009-01-01
An algorithm is presented for the computation of analytical expressions for the extremal values of the α-cuts of the fuzzy weighted average, for triangular or trapeizoidal weights and attributes. Also, an algorithm for the computation of the inverses of these expressions is given, providing exact
High average power supercontinuum sources
Indian Academy of Sciences (India)
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.
Polyhedral Painting with Group Averaging
Farris, Frank A.; Tsao, Ryan
2016-01-01
The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…
Directory of Open Access Journals (Sweden)
Sengupta Papiya
2004-11-01
Full Text Available Abstract Background Cuff pressure in endotracheal (ET tubes should be in the range of 20–30 cm H2O. We tested the hypothesis that the tube cuff is inadequately inflated when manometers are not used. Methods With IRB approval, we studied 93 patients under general anesthesia with an ET tube in place in one teaching and two private hospitals. Anesthetists were blinded to study purpose. Cuff pressure in tube sizes 7.0 to 8.5 mm was evaluated 60 min after induction of general anesthesia using a manometer connected to the cuff pilot balloon. Nitrous oxide was disallowed. After deflating the cuff, we reinflated it in 0.5-ml increments until pressure was 20 cmH2O. Results Neither patient morphometrics, institution, experience of anesthesia provider, nor tube size influenced measured cuff pressure (35.3 ± 21.6 cmH2O. Only 27% of pressures were within 20–30 cmH2O; 27% exceeded 40 cmH2O. Although it varied considerably, the amount of air required to achieve a cuff pressure of 20 cmH2O was similar with each tube size. Conclusion We recommend that ET cuff pressure be set and monitored with a manometer.
Improved averaging for non-null interferometry
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
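The rejection-and-pruning logic described above might be sketched as follows, with NaN marking invalid pixels; the thresholds and function name are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def robust_phase_average(maps, max_bad_frac=0.1, min_valid_frac=0.5):
    """Robustly average a stack of phase maps (NaN marks voids/defects).

    * Maps whose bad-pixel fraction exceeds `max_bad_frac` are rejected
      outright, since a single large-area defect can spoil the average.
    * Surviving maps are averaged per pixel, ignoring NaNs.
    * Pixels valid in fewer than `min_valid_frac` of the kept maps are
      pruned (set to NaN) as unreliable.
    Returns the average map and a per-pixel standard deviation map.
    """
    maps = np.asarray(maps, dtype=float)
    bad_frac = np.isnan(maps).mean(axis=(1, 2))
    kept = maps[bad_frac <= max_bad_frac]
    avg = np.nanmean(kept, axis=0)
    std = np.nanstd(kept, axis=0)
    unreliable = (~np.isnan(kept)).mean(axis=0) < min_valid_frac
    avg[unreliable] = np.nan
    std[unreliable] = np.nan
    return avg, std

# Three maps of a flat phase; one has a large-area void and is rejected.
maps = np.ones((3, 4, 4))
maps[0, :, :2] = np.nan    # large-area defect -> whole map rejected
maps[1, 0, 0] = np.nan     # small void -> that pixel ignored in the mean
avg, std = robust_phase_average(maps)
print(float(np.nanmax(np.abs(avg - 1.0))))  # 0.0
```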
When good = better than average
Directory of Open Access Journals (Sweden)
Don A. Moore
2007-10-01
Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.
7 CFR 51.2561 - Average moisture content.
2010-01-01
... United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...
Brown, Richard J C; Woods, Peter T
2012-01-01
A comparison of various averaging techniques to calculate the Average Exposure Indicator (AEI) specified in European Directive 2008/50/EC for particulate matter in ambient air has been performed. This was done for data from seventeen sites around the UK for which PM(10) mass concentration data is available for the years 1998-2000 and 2008-2010 inclusive. The results have shown that use of the geometric mean produces significantly lower AEI values within the required three year averaging periods and slightly lower changes in the AEI value between the three year averaging periods than the use of the arithmetic mean. The use of weighted means in the calculation, using the data capture at each site as the weighting parameter, has also been tested and this is proposed as a useful way of taking account of the confidence of each data set.
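The comparison of averaging techniques can be illustrated with a minimal sketch; the annual means and data-capture fractions below are made-up values, not the UK site data:

```python
import numpy as np

# Hypothetical annual PM10 means (ug/m3) and data-capture fractions
# for one site over a three-year averaging period.
annual_means = np.array([22.0, 25.0, 19.0])
data_capture = np.array([0.95, 0.80, 0.90])

aei_arith = annual_means.mean()                             # arithmetic mean
aei_geom  = np.exp(np.log(annual_means).mean())             # geometric mean
aei_wtd   = np.average(annual_means, weights=data_capture)  # capture-weighted

# By the AM-GM inequality the geometric mean cannot exceed the
# arithmetic mean, consistent with the lower AEI values reported above.
print(aei_arith, round(aei_geom, 3), round(aei_wtd, 3))
```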
Flexible time domain averaging technique
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to varying extents. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
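For contrast with FTDA, conventional time domain averaging can be sketched in a few lines. Note the integer truncation of the period, which is the source of the period cutting error the paper addresses; the synthetic signal is illustrative:

```python
import numpy as np

def time_domain_average(signal, period):
    """Classical TDA: slice the signal into integer-length periods and
    average them, attenuating everything not synchronous with `period`.
    The integer truncation here is what gives rise to period cutting
    error when the true period is not a whole number of samples."""
    n_periods = len(signal) // period
    segments = signal[:n_periods * period].reshape(n_periods, period)
    return segments.mean(axis=0)

rng = np.random.default_rng(1)
period = 50
t = np.arange(period * 200)
clean = np.sin(2 * np.pi * t / period)            # synchronous component
noisy = clean + rng.normal(scale=1.0, size=t.size)

tda = time_domain_average(noisy, period)
# Averaging 200 periods suppresses the unit-variance noise by ~1/sqrt(200).
print(np.abs(tda - clean[:period]).max())
```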
Subdiffusion in time-averaged, confined random walks.
Neusius, Thomas; Sokolov, Igor M; Smith, Jeremy C
2009-07-01
Certain techniques characterizing diffusive processes, such as single-particle tracking or molecular dynamics simulation, provide time averages rather than ensemble averages. Whereas the ensemble-averaged mean-squared displacement (MSD) of an unbounded continuous time random walk (CTRW) with a broad distribution of waiting times exhibits subdiffusion, the time-averaged MSD, δ², does not. We demonstrate that, in contrast to the unbounded CTRW, in which δ² is linear in the lag time Δ, the time-averaged MSD of the CTRW of a walker confined to a finite volume is sublinear in Δ, i.e., for long lag times δ² ~ Δ^(1-α). The present results permit the application of CTRW to interpret time-averaged experimental quantities.
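The time-averaged MSD in question is computed from a single trajectory as δ²(Δ) = ⟨(x(t+Δ) − x(t))²⟩, averaged over t. A minimal sketch for an ordinary (unconfined, non-CTRW) random walk, where linear growth in the lag is expected:

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time-averaged MSD of a single trajectory:
    delta^2(lag) = mean over t of (x[t + lag] - x[t])**2."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# Ordinary Brownian-like trajectory (unit-variance steps): the
# time-averaged MSD grows linearly in the lag, so MSD/lag is roughly
# constant. For the confined CTRW discussed above it grows sublinearly.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=100_000))
lags = np.array([1, 2, 4, 8, 16])
msd = time_averaged_msd(x, lags)
print(msd / lags)   # roughly constant, close to the step variance (~1)
```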
Directory of Open Access Journals (Sweden)
Naseem Cassim
2017-02-01
Full Text Available Introduction: CD4 testing in South Africa is based on an integrated tiered service delivery model that matches testing demand with capacity. The National Health Laboratory Service has predominantly implemented laboratory-based CD4 testing. Coverage gaps, over-/under-capacitation and optimal placement of point-of-care (POC testing sites need investigation. Objectives: We assessed the impact of relational algebraic capacitated location (RACL algorithm outcomes on the allocation of laboratory and POC testing sites. Methods: The RACL algorithm was developed to allocate laboratories and POC sites to ensure coverage using a set coverage approach for a defined travel time (T. The algorithm was repeated for three scenarios (A: T = 4; B: T = 3; C: T = 2 hours. Drive times for a representative sample of health facility clusters were used to approximate T. Outcomes included allocation of testing sites, Euclidian distances and test volumes. Additional analysis included platform distribution and space requirement assessment. Scenarios were reported as fusion table maps. Results: Scenario A would offer a fully-centralised approach with 15 CD4 laboratories without any POC testing. A significant increase in volumes would result in a four-fold increase at busier laboratories. CD4 laboratories would increase to 41 in scenario B and 61 in scenario C. POC testing would be offered at two sites in scenario B and 20 sites in scenario C. Conclusion: The RACL algorithm provides an objective methodology to address coverage gaps through the allocation of CD4 laboratories and POC sites for a given T. The algorithm outcomes need to be assessed in the context of local conditions.
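The set-coverage step of such an allocation can be illustrated with a textbook greedy heuristic; the RACL algorithm itself is not described here in enough detail to reproduce, and the sites and clusters below are hypothetical:

```python
# Greedy set cover: repeatedly choose the candidate testing site that
# covers the most still-uncovered facility clusters within travel time T.

def greedy_cover(coverage, clusters):
    """coverage: {candidate_site: set of clusters reachable within T}."""
    uncovered, chosen = set(clusters), []
    while uncovered:
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gain = coverage[best] & uncovered
        if not gain:
            raise ValueError("some clusters cannot be covered")
        chosen.append(best)
        uncovered -= gain
    return chosen

# Toy example: 3 candidate labs, 5 facility clusters.
coverage = {
    "lab_A": {1, 2, 3},
    "lab_B": {3, 4},
    "lab_C": {4, 5},
}
print(greedy_cover(coverage, clusters={1, 2, 3, 4, 5}))  # ['lab_A', 'lab_C']
```

The greedy heuristic does not guarantee a minimum number of sites, but it is the standard baseline for set-coverage placement problems of this kind.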
Atomic configuration average simulations for plasma spectroscopy
International Nuclear Information System (INIS)
Kilcrease, D.P.; Abdallah, J. Jr.; Keady, J.J.; Clark, R.E.H.
1993-01-01
Configuration average atomic physics based on Hartree-Fock methods and an unresolved transition array (UTA) simulation theory are combined to provide a computationally efficient approach for calculating the spectral properties of plasmas involving complex ions. The UTA theory gives an overall representation for the many lines associated with a radiative transition from one configuration to another without calculating the fine structure in full detail. All of the atomic quantities required for synthesis of the spectrum are calculated in the same approximation and used to generate the parameters required for representation of each UTA, the populations of the various atomic states, and the oscillator strengths. We use this method to simulate the transmission of x-rays through an aluminium plasma. (author)
Significance of power average of sinusoidal and non-sinusoidal ...
Indian Academy of Sciences (India)
Pramana – Journal of Physics, Volume 87, Issue 1. Significance of power average of ... Additional sinusoidal and different non-sinusoidal periodic perturbations applied to the periodically forced nonlinear oscillators determine whether chaos is maintained or inhibited. It is observed that the weak amplitude of ...
Slaughter, M R; Birmingham, J M; Patel, B; Whelan, G A; Krebs-Brown, A J; Hockings, P D; Osborne, J A
2002-10-01
Important in all experimental animal studies is the need to control stress stimuli associated with environmental change and experimental procedures. As the stress response involves alterations in levels of vasoactive hormones, ensuing changes in cardiovascular parameters may confound experimental outcomes. Accordingly, we evaluated the duration required for dogs (n = 4) to acclimatize to frequent blood sampling that involved different procedures. On each sampling occasion during a 6-week period, dogs were removed from their pen to a laboratory area and blood was collected either by venepuncture (days 2, 15, 34, 41) for plasma renin activity (PRA), epinephrine (EPI), norepinephrine, aldosterone, insulin, and atrial natriuretic peptide, or by cannulation (dogs restrained in slings; days 1, 8, 14, 22, 30, 33, 37, 40) for determination of haematocrit (HCT) alone (days 1 to 22) or HCT with plasma volume (PV; days 30 to 40). PRA was higher on days 2 and 15 compared with days 34 and 41 and had decreased by up to 48% by the end of the study (day 41 vs day 15; mean/SEM: 1.18/0.27 vs 2.88/0.79 ng ANG I/ml/h, respectively). EPI showed a time-related decrease from days 2 to 34, during which mean values had decreased by 51% (mean/SEM: 279/29 vs 134/20.9 pg/ml for days 2 and 34, respectively), but appeared stable from then on. None of the other hormones showed any significant variability throughout the course of the study. HCT was relatively variable between days 1 and 22 but stabilized from day 30, after which all mean values were approximately 6% lower than those between days 1 and 8. We conclude that an acclimatization period of at least 4 weeks is required to eliminate stress-related effects in dogs associated with periodic blood sampling.
Site Averaged Neutron Soil Moisture: 1988 (Betts)
National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the neutron probe soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...
Site Averaged Gravimetric Soil Moisture: 1989 (Betts)
National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...
Site Averaged Gravimetric Soil Moisture: 1988 (Betts)
National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...
Site Averaged Gravimetric Soil Moisture: 1987 (Betts)
National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...
Scalable Robust Principal Component Analysis Using Grassmann Averages
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi
2016-01-01
Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video...
Averaging of nonlinearity-managed pulses
International Nuclear Information System (INIS)
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-01-01
We consider the nonlinear Schrödinger equation with nonlinearity management, which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons.
Averaging in cosmological models using scalars
International Nuclear Information System (INIS)
Coley, A A
2010-01-01
The averaging problem in cosmology is of considerable importance for the correct interpretation of cosmological data. A rigorous mathematical definition of averaging in a cosmological model is necessary. In general, a spacetime is completely characterized by its scalar curvature invariants, and this suggests a particular spacetime averaging scheme based entirely on scalars. We clearly identify the problems of averaging in a cosmological model. We then present a precise definition of a cosmological model, and based upon this definition, we propose an averaging scheme in terms of scalar curvature invariants. This scheme is illustrated in a simple static spherically symmetric perfect fluid cosmological spacetime, where the averaging scales are clearly identified.
The average size of ordered binary subgraphs
van Leeuwen, J.; Hartel, Pieter H.
To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a
Bayesian Model Averaging for Propensity Score Analysis.
Kaplan, David; Chen, Jianshen
2014-01-01
This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
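The posterior-weighted combination at the heart of Bayesian model averaging can be sketched as follows. This is a minimal illustration with hypothetical treatment-effect estimates and marginal likelihoods, not the interface of the BMA package cited in the abstract:

```python
import math

def bma_estimate(estimates, log_marginal_likelihoods, prior_probs=None):
    """Bayesian model average: weight each model's estimate by its
    posterior model probability p(M|D) proportional to p(D|M) p(M)."""
    k = len(estimates)
    priors = prior_probs or [1.0 / k] * k
    # Subtract the max log-likelihood for numerical stability before exponentiating
    m = max(log_marginal_likelihoods)
    unnorm = [math.exp(l - m) * p for l, p in zip(log_marginal_likelihoods, priors)]
    z = sum(unnorm)
    weights = [w / z for w in unnorm]
    return sum(w * e for w, e in zip(weights, estimates)), weights

# Hypothetical: three propensity score models with different causal estimates
est, w = bma_estimate([0.30, 0.25, 0.40], [-100.0, -101.0, -103.0])
print(est, w)
```

With noninformative (uniform) model priors, as in the comparison the abstract describes, the weights reduce to normalized marginal likelihoods.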
Cosmological ensemble and directional averages of observables
Bonvin, Camille; Durrer, Ruth; Maartens, Roy; Umeh, Obinna
2015-01-01
We show that at second order ensemble averages of observables and directional averages do not commute due to gravitational lensing. In principle this non-commutativity is significant for a variety of quantities we often use as observables. We derive the relation between the ensemble average and the directional average of an observable, at second-order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focussing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance is increased by gravitational lensing, whereas the directional average of the distance is decreased. We show that for a generic observable, there exists a particular function of the observable that is invariant under second-order lensing perturbations.
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
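The method's central identity, dF/dξ = -⟨f_ξ⟩, means a free energy profile can be recovered by integrating the negative average force along the selected coordinate. A minimal numeric sketch (hypothetical data and helper name, trapezoidal integration):

```python
def free_energy_profile(xi, mean_force):
    """Integrate the negative average force along the coordinate:
    dF/dxi = -<f>, so F(xi_k) = -integral of <f> dxi (trapezoidal rule)."""
    F = [0.0]
    for k in range(1, len(xi)):
        dx = xi[k] - xi[k - 1]
        F.append(F[-1] - 0.5 * (mean_force[k] + mean_force[k - 1]) * dx)
    return F

# Sanity check: for F(x) = x^2 the force is f = -dF/dx = -2x,
# so integrating -<f> should recover x^2.
xi = [i * 0.01 for i in range(101)]
f = [-2 * x for x in xi]
F = free_energy_profile(xi, f)
print(abs(F[-1] - 1.0) < 1e-3)  # F(1) recovered to within tolerance
```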
Industrial Applications of High Average Power FELS
Shinn, Michelle D
2005-01-01
The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts; such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...
Efficient processing of CFRP with a picosecond laser with up to 1.4 kW average power
Onuseit, V.; Freitag, C.; Wiedenmann, M.; Weber, R.; Negel, J.-P.; Löscher, A.; Abdou Ahmed, M.; Graf, T.
2015-03-01
Laser processing of carbon fiber reinforced plastic (CFRP) is a very promising method for solving many of the challenges of large-volume production of lightweight constructions in the automotive and airplane industries. However, the laser process is currently limited by two main issues. First, the quality might be reduced due to thermal damage, and second, the high process energy needed for sublimation of the carbon fibers requires laser sources with high average power for productive processing. To keep thermal damage of the CFRP below 10 μm, intensities above 10⁸ W/cm² are needed. To reach these high intensities in the processing area, ultra-short pulse laser systems are favored. Unfortunately, the average power of commercially available laser systems is up to now in the range of several tens to a few hundred watts. To sublimate the carbon fibers, a large volume-specific enthalpy of 85 J/mm³ is necessary. This means, for example, that cutting 2 mm thick material with a kerf width of 0.2 mm at an industry-typical 100 mm/s requires several kilowatts of average power. At the IFSW, a thin-disk multipass amplifier yielding a maximum average output power of 1100 W (300 kHz, 8 ps, 3.7 mJ) allowed for the first time the processing of CFRP at this average power and pulse energy level with picosecond pulse duration. With this unique laser system, cutting of CFRP with a thickness of 2 mm at an effective average cutting speed of 150 mm/s with thermal damage below 10 μm was demonstrated.
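The power requirement quoted above follows from multiplying the volume-specific enthalpy by the material removal rate. A quick sanity check of the abstract's numbers (the helper name is ours):

```python
def required_average_power(thickness_mm, kerf_mm, speed_mm_s, enthalpy_J_mm3):
    """Average laser power needed to sublimate the removed kerf volume.

    Removal rate (mm^3/s) = thickness * kerf width * cutting speed;
    power (W) = removal rate * volume-specific enthalpy (J/mm^3).
    """
    removal_rate = thickness_mm * kerf_mm * speed_mm_s  # mm^3/s
    return removal_rate * enthalpy_J_mm3  # J/s = W

# Example from the abstract: 2 mm CFRP, 0.2 mm kerf, 100 mm/s, 85 J/mm^3
print(required_average_power(2.0, 0.2, 100.0, 85.0))  # 3400.0 W
```

This gives 3.4 kW, consistent with the abstract's "several kilowatts of average power."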
Optimal bounds and extremal trajectories for time averages in dynamical systems
Tobasco, Ian; Goluskin, David; Doering, Charles
2017-11-01
For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.
Time average vibration fringe analysis using Hilbert transformation
International Nuclear Information System (INIS)
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-01-01
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
Rotter, Robert; Schmitt, Lena; Gierer, Philip; Schmitz, Klaus-Peter; Noriega, David; Mittlmeier, Thomas; Meeder, Peter-J; Martin, Heiner
2015-08-01
Minimally invasive treatment of vertebral fractures is basically characterized by cement augmentation. Using the combination of a permanent implant plus cement, it is now conceivable that the amount of cement can be reduced and so this augmentation could be an attractive opportunity for use in traumatic fractures in young and middle-aged patients. The objective of this study was to determine the smallest volume of cement necessary to stabilize fractured vertebrae comparing the SpineJack system to the gold standard, balloon kyphoplasty. 36 fresh frozen human cadaveric vertebral bodies (T11-L3) were utilized. After creating typical compression wedge fractures (AO A1.2.1), the vertebral bodies were reduced by SpineJack (n=18) or kyphoplasty (n=18) under preload (100N). Subsequently, different amounts of bone cement (10%, 16% or 30% of the vertebral body volume) were inserted. Finally, static and dynamic biomechanical tests were performed. Following augmentation and fatigue tests, vertebrae treated with SpineJack did not show any significant loss of intraoperative height gain, in contrast to kyphoplasty. In the 10% and 16%-group the height restoration expressed as a percentage of the initial height was significantly increased with the SpineJack (>300%). Intraoperative SpineJack could preserve the maximum height gain (mean 1% height loss) better than kyphoplasty (mean 16% height loss). In traumatic wedge fractures it is possible to reduce the amount of cement to 10% of the vertebral body volume when SpineJack is used without compromising the reposition height after reduction, in contrast to kyphoplasty that needs a 30% cement volume. Copyright © 2015 Elsevier Ltd. All rights reserved.
Spacetime averaging of exotic singularity universes
International Nuclear Information System (INIS)
Dabrowski, Mariusz P.
2011-01-01
Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
NOAA Average Annual Salinity (3-Zone)
California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
High-Average, High-Peak Current Injector Design
Biedron, S G; Virgo, M
2005-01-01
There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.
International Nuclear Information System (INIS)
Vandevender, S.G.
1980-04-01
The report contained in this volume describes a program for management of the community impacts resulting from the growth of uranium mining and milling in New Mexico. The report, submitted to Sandia Laboratories by the New Mexico Department of Energy and Minerals, is reproduced without modification. The state recommends that federal funding and assistance be provided to implement a growth management program comprised of these seven components: (1) an early warning system, (2) a community planning and technical assistance capability, (3) flexible financing, (4) a growth monitoring system, (5) manpower training, (6) economic diversification planning, and (7) new technology testing
Fixed Average Spectra of Orchestral Instrument Tones
Directory of Open Access Journals (Sweden)
Joseph Plazak
2010-04-01
The fixed spectrum for an average orchestral instrument tone is presented based on spectral data from the Sandell Harmonic Archive (SHARC. This database contains non-time-variant spectral analyses for 1,338 recorded instrument tones from 23 Western instruments ranging from contrabassoon to piccolo. From these spectral analyses, a grand average was calculated, providing what might be considered an average non-time-variant harmonic spectrum. Each of these tones represents the average of all instruments in the SHARC database capable of producing that pitch. These latter tones better represent common spectral changes with respect to pitch register, and might be regarded as an “average instrument.” Although several caveats apply, an average harmonic tone or instrument may prove useful in analytic and modeling studies. In addition, for perceptual experiments in which non-time-variant stimuli are needed, an average harmonic spectrum may prove to be more ecologically appropriate than common technical waveforms, such as sine tones or pulse trains. Synthesized average tones are available via the web.
Bayesian Averaging is Well-Temperated
DEFF Research Database (Denmark)
Hansen, Lars Kai
2000-01-01
is less clear if the teacher distribution is unknown. I define a class of averaging procedures, the temperated likelihoods, including both Bayes averaging with a uniform prior and maximum likelihood estimation as special cases. I show that Bayes is generalization optimal in this family for any teacher...
Determinants of College Grade Point Averages
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…
International Nuclear Information System (INIS)
Beamesderfer, R.C.; Nigro, A.A.
1995-01-01
This is the final report for research on white sturgeon Acipenser transmontanus conducted from 1986-92 by the National Marine Fisheries Service (NMFS), Oregon Department of Fish and Wildlife (ODFW), US Fish and Wildlife Service (USFWS), and Washington Department of Fisheries (WDF). Findings are presented as a series of papers, each detailing objectives, methods, results, and conclusions for a portion of this research. This volume includes supplemental papers which provide background information needed to support results of the primary investigations addressed in Volume 1. This study addresses measure 903(e)(1) of the Northwest Power Planning Council's 1987 Fish and Wildlife Program, which calls for "research to determine the impact of development and operation of the hydropower system on sturgeon in the Columbia River Basin." Study objectives correspond to those of the "White Sturgeon Research Program Implementation Plan" developed by BPA and approved by the Northwest Power Planning Council in 1985. Work was conducted on the Columbia River from McNary Dam to the estuary.
Small scale magnetic flux-averaged magnetohydrodynamics
International Nuclear Information System (INIS)
Pfirsch, D.; Sudan, R.N.
1994-01-01
By relaxing exact magnetic flux conservation below a scale λ, a system of flux-averaged magnetohydrodynamic equations is derived from Hamilton's principle with modified constraints. An energy principle can be derived from the linearized averaged system because the total system energy is conserved. This energy principle is employed to treat the resistive tearing instability, and the exact growth rate is recovered when λ is identified with the resistive skin depth. A necessary and sufficient stability criterion for the tearing instability with line tying at the ends, relevant to solar coronal loops, is also obtained. The method is extended to both spatial and temporal averaging in Hamilton's principle. The resulting system of equations not only allows flux reconnection but introduces irreversibility for an appropriate choice of the averaging function. Except for boundary contributions, which are modified by the time-averaging process, total energy and momentum are conserved over times much longer than the averaging time τ, but not for times less than τ. These modified boundary contributions correspond to the existence of damped waves and shock waves in this theory. Time and space averaging is applied to electron magnetohydrodynamics and, in one-dimensional geometry, predicts solitons and shocks in different limits.
Averaging of diffusing contaminant concentrations in atmosphere surface layer
International Nuclear Information System (INIS)
Ivanov, E.A.; Ramzina, T.V.
1985-01-01
Calculations for averaging the concentration fields of a diffusing radioactive contaminant released from the NPP exhaust stack into the atmospheric surface layer are given. Formulae for calculating the contaminant concentration field are presented; the field depends on the average wind direction Θ over time T and on the stability of this direction (σ_tgΘ or σ_Θ). The probability of wind direction deviation from the average value over time T is satisfactorily described by the Gauss law. With increasing instability in the atmosphere, σ increases; with increasing wind velocity, σ decreases for all types of temperature gradients. The nonuniform dependence of σ on the averaging time T is emphasized, which requires careful choice of the σ_tgΘ and σ_Θ parameters in calculations.
International Nuclear Information System (INIS)
Ray A. Berry
2005-01-01
At the INL, researchers and engineers routinely encounter multiphase, multi-component, and/or multi-material flows. Some examples include: reactor coolant flows; molten corium flows; dynamic compaction of metal powders; spray forming and thermal plasma spraying; plasma quench reactors; subsurface flows, particularly in the vadose zone; internal flows within fuel cells; black liquor atomization and combustion; wheat-chaff classification in combine harvesters; and Generation IV pebble-bed, high-temperature gas reactors. The complexity of these flows dictates that they be examined in an averaged sense. Typically one would begin with known (or at least postulated) microscopic flow relations that hold on the "small" scale. These include continuum-level conservation of mass, balance of species mass and momentum, conservation of energy, and a statement of the second law of thermodynamics, often in the form of an entropy inequality (such as the Clausius-Duhem inequality). The averaged or macroscopic conservation equations and entropy inequalities are then obtained from the microscopic equations through suitable averaging procedures. At this stage a stronger form of the second law may also be postulated for the mixture of phases or materials. To render the evolutionary material flow balance system unique, constitutive equations and phase or material interaction relations are introduced from experimental observation, or by postulation, through strict enforcement of the constraints or restrictions resulting from the averaged entropy inequalities. These averaged equations form the governing equation system for the dynamic evolution of these mixture flows. Most commonly, the averaging technique utilized is either volume or time averaging or a combination of the two. The flow restrictions required for volume and time averaging to be valid can be severe, and violations of these restrictions are often found. A more general, less restrictive (and far less commonly used) type of averaging known as
Average-passage flow model development
Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark
1989-01-01
A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average-passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations have been executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average-passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low-speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.
Computation of the average energy for LXY electrons
International Nuclear Information System (INIS)
Grau Carles, A.; Grau, A.
1996-01-01
The application of an atomic rearrangement model in which we only consider the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides, requires an accurate averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, 125I and 109Cd. (Author) 4 refs
The Effect of Honors Courses on Grade Point Averages
Spisak, Art L.; Squires, Suzanne Carter
2016-01-01
High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…
Monthly snow/ice averages (ISCCP)
National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Schedule of average annual equipment ownership expense
2003-03-06
The "Schedule of Average Annual Equipment Ownership Expense" is designed for use on Force Account bills of Contractors performing work for the Illinois Department of Transportation and local government agencies who choose to adopt these rates. This s...
Should the average tax rate be marginalized?
Czech Academy of Sciences Publication Activity Database
Feldman, N. E.; Katuščák, Peter
-, no. 304 (2006), pp. 1-65. ISSN 1211-3298. Institutional research plan: CEZ:MSM0021620846. Keywords: tax; labor supply; average tax. Subject RIV: AH - Economics. http://www.cerge-ei.cz/pdf/wp/Wp304.pdf
Symmetric Euler orientation representations for orientational averaging.
Mayerhöfer, Thomas G
2005-09-01
A new kind of orientation representation called the symmetric Euler orientation representation (SEOR) is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of the SEORs concerning orientational averaging are explored and compared to those of averaging schemes that are based on conventional Euler orientation representations. To that end, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated. The calculation was carried out according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations leads to a dependence of the result on the specific Euler orientation representation that was utilized and on the initial position of the crystal. The latter problem can be partly overcome by the introduction of a weighing factor, but only for two-axes-type Euler orientation representations. In the case of a numerical evaluation of the average, a residual difference remains even if a two-axes-type Euler orientation representation is used, despite the utilization of a weighing factor. In contrast, this problem does not occur in principle if a symmetric Euler orientation representation is used, and the result of the averaging for both types of orientation representations converges with increasing number of orientations considered in the numerical evaluation. Additionally, the use of a weighing factor and/or non-equally spaced steps in the numerical evaluation of the average is not necessary. The symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures.
Aplikasi Moving Average Filter Pada Teknologi Enkripsi
Hermawi, Adrianto
2007-01-01
A method of encrypting and decrypting is introduced. The type of information experimented on is a mono wave sound file with a frequency of 44 kHz. The encryption technology uses a regular noise wave sound file (of equal frequency) and a moving average filter to decrypt and obtain the original signal. All experiments are programmed using MATLAB. The author concludes that the moving average filter can indeed be used as an alternative encryption technology.
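The smoothing role of the filter in recovery can be sketched as follows. This is a hypothetical illustration of a moving average attenuating a noise mask on a tone, not the paper's actual MATLAB scheme (which uses a known noise file); all signal parameters here are ours:

```python
import numpy as np

def moving_average(x, window):
    """FIR moving-average filter of the given window length."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 44_100)                   # 1 s of samples
signal = np.sin(2 * np.pi * 440 * t)            # 440 Hz tone
encrypted = signal + rng.normal(0, 1, t.size)   # noise-masked "ciphertext"
recovered = moving_average(encrypted, window=25)

# Smoothing cuts the noise variance by ~1/window while largely
# preserving the slow tone, so the recovered signal is closer to the original.
err_noisy = np.mean((encrypted - signal) ** 2)
err_rec = np.mean((recovered - signal) ** 2)
print(err_rec < err_noisy)
```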
Average Bandwidth Allocation Model of WFQ
Directory of Open Access Journals (Sweden)
Tomáš Balogh
2012-01-01
We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained using the NS2 simulator.
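An iterative weighted fair-share computation of the kind described can be sketched as follows. This is an illustrative max-min style allocation (our own simplification, not the paper's exact model): each flow is capped at its input rate, and any surplus is re-divided among the remaining flows in proportion to their weights.

```python
def wfq_average_bandwidth(link_capacity, weights, demands):
    """Iteratively assign bandwidth under WFQ-style weighted sharing.

    Each unsaturated flow gets capacity in proportion to its weight;
    flows capped at their demand (input rate) release the surplus,
    which is re-divided among the remaining flows.
    """
    alloc = {f: 0.0 for f in weights}
    active = set(weights)
    remaining = link_capacity
    while active and remaining > 1e-12:
        total_w = sum(weights[f] for f in active)
        share = {f: remaining * weights[f] / total_w for f in active}
        capped = {f for f in active if demands[f] - alloc[f] <= share[f]}
        if not capped:
            for f in active:
                alloc[f] += share[f]
            break
        for f in capped:
            remaining -= demands[f] - alloc[f]
            alloc[f] = demands[f]
        active -= capped
    return alloc

# Hypothetical example: three flows on a 10 Mbit/s link, weights 1:2:2
print(wfq_average_bandwidth(10.0, {"a": 1, "b": 2, "c": 2},
                            {"a": 8.0, "b": 1.0, "c": 8.0}))
```

Here flow "b" needs less than its weighted share, so its leftover capacity is split between "a" and "c" by weight.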
Interpreting Sky-Averaged 21-cm Measurements
Mirocha, Jordan
2015-01-01
Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation
Energy Technology Data Exchange (ETDEWEB)
1985-04-01
This document reports the position and recommendations of the NRC Piping Review Committee, Task Group on Seismic Design. The Task Group considered overlapping conservatisms in the various steps of seismic design, the effects of using two levels of earthquake as a design criterion, and current industry practices. Issues such as damping values, spectra modification, multiple response spectra methods, nozzle and support design, design margins, inelastic piping response, and the use of snubbers are addressed. Effects of current regulatory requirements for piping design are evaluated, and recommendations for immediate licensing action, changes in existing requirements, and research programs are presented. Additional background information and suggestions given by consultants are also presented.
40 CFR 80.825 - How is the refinery or importer annual average toxics value determined?
2010-07-01
... volume of applicable gasoline produced or imported in batch i. Ti = The toxics value of batch i. n = The number of batches of gasoline produced or imported during the averaging period. i = Individual batch of gasoline produced or imported during the averaging period. (b) The calculation specified in paragraph (a...
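The excerpt implies a volume-weighted average over batches. A minimal sketch of that calculation, with variable names (V_i, T_i) taken from the excerpt; the authoritative formula is the regulatory text itself:

```python
# Volume-weighted annual average toxics value, as implied by the excerpt:
#   T_avg = sum(V_i * T_i) / sum(V_i)  over all batches i in the averaging period.
# This is a sketch; check the exact formula against 40 CFR 80.825 itself.

def annual_average_toxics(batches):
    """batches: list of (volume, toxics_value) pairs, one per batch."""
    total_volume = sum(v for v, _ in batches)
    if total_volume == 0:
        raise ValueError("no gasoline volume in averaging period")
    return sum(v * t for v, t in batches) / total_volume

# Example: three batches with differing volumes and toxics values.
print(annual_average_toxics([(1000, 0.9), (3000, 1.1), (2000, 1.0)]))
```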
Averaging processes in granular flows driven by gravity
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among grains at a macroscopic scale are compared to the collisions among molecules [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that atoms are so small that the number of molecules in a control volume is infinite. Under this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis no longer holds in granular flows: contrary to gases, the size of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (usually the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate corresponds to local averaging (needed to describe instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
1974-01-01
The task phase concerned with the requirements, design, and planning studies for the carry-on laboratory (COL) began with a definition of biomedical research areas and candidate research equipment, and then went on to develop conceptual layouts for COL which were each evaluated in order to arrive at a final conceptual design. Each step in this design/evaluation process concerned itself with man/systems integration research and hardware, and life support and protective systems research and equipment selection. COL integration studies were also conducted and include attention to electrical power and data management requirements, operational considerations, and shuttle/Spacelab interface specifications. A COL program schedule was compiled, and a cost analysis was finalized which takes into account work breakdown, annual funding, and cost reduction guidelines.
Determination of clothing microclimate volume
Daanen, Hein; Hatcher, Kent; Havenith, George
2005-01-01
The average air layer thickness between human skin and clothing is an important factor in heat transfer. The trapped volume between skin and clothing is an estimator for average air layer thickness. Several techniques are available to determine trapped volume. This study investigates the reliability
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
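The two ingredients of the correlation above, the misery index (inflation plus unemployment) and its trailing moving average, can be sketched in a few lines. The numbers here are illustrative, not the study's data:

```python
# Sketch: economic misery index = inflation + unemployment, and the predictor
# used in the paper is a trailing moving average of that index over the
# previous decade (best fit found at an 11-year window).

def misery_index(inflation, unemployment):
    return [i + u for i, u in zip(inflation, unemployment)]

def trailing_average(series, window):
    """Average of the `window` values ending at each index (None until filled)."""
    out = []
    for k in range(len(series)):
        if k + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[k + 1 - window:k + 1]) / window)
    return out

misery = misery_index([3.0, 2.5, 4.0], [5.0, 6.0, 7.0])   # [8.0, 8.5, 11.0]
print(trailing_average(misery, 2))                         # [None, 8.25, 9.75]
```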
Backus and Wyllie Averages for Seismic Attenuation
Qadrouh, Ayman N.; Carcione, José M.; Ba, Jing; Gei, Davide; Salim, Ahmed M.
2018-01-01
Backus and Wyllie equations are used to obtain average seismic velocities at zero and infinite frequencies, respectively. Here, these equations are generalized to obtain averages of the seismic quality factor (inversely proportional to attenuation). The results indicate that the Wyllie velocity is higher than the corresponding Backus quantity, as expected, since the ray velocity is a high-frequency limit. On the other hand, the Wyllie quality factor is higher than the Backus one, following the velocity trend, i.e., the higher the velocity (the stiffer the medium), the higher the quality factor (the lower the attenuation). Since the quality factor can be related to properties such as porosity, permeability, and fluid viscosity, these averages can be useful for evaluating reservoir properties.
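The two classical velocity averages that the paper generalizes can be sketched as follows. These are the standard textbook forms, assuming isotropic layers and normal-incidence P-waves; the paper's quality-factor averages are not reproduced here:

```python
# Wyllie (time-average) velocity: infinite-frequency ray limit, the harmonic
# mean of layer velocities weighted by thickness fraction.
# Backus velocity: zero-frequency effective-medium limit (normal-incidence P,
# isotropic layers; the full Backus average is anisotropic).
import math

def wyllie_velocity(fracs, v):
    return 1.0 / sum(f / vi for f, vi in zip(fracs, v))

def backus_velocity(fracs, v, rho):
    m_eff = 1.0 / sum(f / (r * vi ** 2) for f, vi, r in zip(fracs, v, rho))
    rho_eff = sum(f * r for f, r in zip(fracs, rho))
    return math.sqrt(m_eff / rho_eff)

# Two equal-thickness layers (hypothetical properties, SI units).
fracs, v, rho = [0.5, 0.5], [2000.0, 4000.0], [2100.0, 2500.0]
vw = wyllie_velocity(fracs, v)
vb = backus_velocity(fracs, v, rho)
print(vw, vb)   # the Wyllie (ray) velocity exceeds the Backus velocity
```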
Exploiting scale dependence in cosmological averaging
International Nuclear Information System (INIS)
Mattsson, Teppo; Ronkainen, Maria
2008-01-01
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion
Aperture averaging in strong oceanic turbulence
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
Receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses the small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth and globally exponentially stable average models, require vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
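A minimal gradient-based extremum-seeking loop gives the flavor of the model-free optimization described above. This is an SPSA-style sketch of the general idea, not the book's exact algorithms; the objective function and all parameters are hypothetical:

```python
# Gradient-based extremum seeking, SPSA-style sketch: the gradient of an
# unknown map f is estimated from two evaluations perturbed along a random
# +/-1 direction, and the parameter ascends that estimate.
import random

def extremum_seek(f, theta, steps=2000, gain=0.05, probe=0.1, seed=1):
    rng = random.Random(seed)
    for _ in range(steps):
        delta = rng.choice((-1.0, 1.0))
        grad = (f(theta + probe * delta) - f(theta - probe * delta)) / (2 * probe * delta)
        theta += gain * grad          # ascend: seek the maximum
    return theta

# Unknown map with a maximum at theta = 2 (hypothetical example).
theta_star = extremum_seek(lambda th: -(th - 2.0) ** 2, theta=0.0)
print(theta_star)
```

For this quadratic map the central-difference estimate equals the true gradient, so the iterate contracts toward the maximizer at a fixed rate regardless of the random probe direction.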
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach...
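For contrast, the brute-force resampling average that the analytical method avoids looks like the following. The data and statistic are illustrative; the paper replaces this loop with a replica/TAP approximation:

```python
# Brute-force bootstrap average: redraw many resamples and recompute the
# statistic each time ("retraining" on every sample), then average.
import random

def bootstrap_average(data, statistic, n_resamples=1000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_resamples):
        resample = [rng.choice(data) for _ in data]
        total += statistic(resample)      # model retraining happens here
    return total / n_resamples

data = [1.0, 2.0, 3.0, 4.0, 5.0]
mean = lambda xs: sum(xs) / len(xs)
print(bootstrap_average(data, mean))      # close to the sample mean 3.0
```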
International Nuclear Information System (INIS)
Vandevender, S.G.
1980-04-01
This volume contains two parts: Part One is an analysis of an issue paper prepared by the office of the New Mexico State Engineer on water availability for uranium production. Part Two is the issue paper itself. The State Engineer's report raises the issue of a scarce water supply in the San Juan Structural Basin acting as a constraint on the growth of the uranium mining and milling industry in New Mexico. The water issue in the structural basin is becoming an acute policy issue because of the uranium industry's importance to and rapid growth within the structural basin. Its growth places heavy demands on the region's scarce water supply. The impact of mine dewatering on water supply is of particular concern. Much of the groundwater has been appropriated or applied for. The State Engineer is currently basing water rights decisions upon data which he believes to be inadequate to determine water quality and availability in the basin. He, along with the USGS and the State Bureau of Mines and Mineral Resources, recommends a well drilling program to acquire additional information about the groundwater characteristics of the basin. The information would be used to provide input data for a computer model, which is used as one of the bases for decisions concerning water rights and water use in the basin. The recommendation is that the appropriate DOE office enter into discussions with the New Mexico State Engineer to explore the potential mutual benefits of a well drilling program to determine the water availability in the San Juan Structural Basin
ANTINOMY OF MODERN SECONDARY PROFESSIONAL EDUCATION
Directory of Open Access Journals (Sweden)
A. A. Listvin
2017-01-01
of ways to resolve them, and options for a valid upgrade of the SPE (secondary professional education) system that meets the requirements of the economy. The inefficiency of the single-level SPE concept, and its lack of competitiveness against the background of the applied bachelor degree developing in higher education, is shown. It is proposed to differentiate basic-level programs for training skilled workers from advanced-level programs, built on the basic level, for training mid-level specialists (technicians, technologists), so as to form a single system of continuous professional training and ensure the effective functioning of regional systems of professional education. Such a system would help eliminate disproportions in the triad «worker – technician – engineer» and increase the quality of professional education. Furthermore, the need for polyprofessional education is indicated, which requires integrated educational structures differing in the degree to which split-level educational institutions are formed on the basis of network interaction, convergence and integration. According to the author, two types of SPE organizations should be developed in the regions: territorial multi-profile colleges with flexible variable programs, and organizations realizing educational programs of applied qualifications in specific industries (metallurgical, chemical, construction, etc.) according to the specifics of the economy of territorial subjects. Practical significance. The results of the research can be useful to specialists in education management, heads and pedagogical staff of SPE institutions, and also to representatives of regional administrations and employers organizing a multilevel network system for training skilled workers and mid-level specialists.
1979-01-01
Recommendations for logistics activities and logistics planning are presented based on the assumption that a system prime contractor will perform logistics functions to support all program hardware and will implement a logistics system to include the planning and provision of products and services to assure cost effective coverage of the following: maintainability; maintenance; spares and supply support; fuels; pressurants and fluids; operations and maintenance documentation training; preservation, packaging and packing; transportation and handling; storage; and logistics management information reporting. The training courses, manpower, materials, and training aids required will be identified and implemented in a training program.
Average Costs versus Net Present Value
E.A. van der Laan (Erwin); R.H. Teunter (Ruud)
2000-01-01
While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives
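The AC approach is easiest to see in the EOQ setting mentioned above. These are the textbook formulas, not the paper's NPV model; all numbers are hypothetical:

```python
# Classical EOQ: average cost per unit time for order size Q is the ordering
# cost rate plus the holding cost of the average inventory Q/2; the
# cost-minimizing order size is the EOQ.
import math

def average_cost(Q, demand, order_cost, holding_cost):
    return demand / Q * order_cost + holding_cost * Q / 2.0

def eoq(demand, order_cost, holding_cost):
    return math.sqrt(2.0 * demand * order_cost / holding_cost)

D, K, h = 1000.0, 100.0, 2.0      # demand/yr, cost per order, holding cost/unit/yr
Q_star = eoq(D, K, h)
print(Q_star, average_cost(Q_star, D, K, h))
```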
Quantum Averaging of Squeezed States of Light
DEFF Research Database (Denmark)
Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
A singularity theorem based on spatial averages
Indian Academy of Sciences (India)
Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompletness in the past. Another way of stating ...
Average beta measurement in EXTRAP T1
International Nuclear Information System (INIS)
Hedin, E.R.
1988-12-01
Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)
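The usual definition of average poloidal beta gives the flavor of the derived expression. This is a sketch; the paper's expression from MHD pressure balance may carry additional geometric terms, and the numbers below are hypothetical:

```python
# Average poloidal beta: volume-averaged plasma pressure relative to the
# poloidal magnetic pressure at the plasma edge,
#   beta_theta = 2 * mu0 * <p> / B_theta(a)**2.
import math

MU0 = 4e-7 * math.pi    # vacuum permeability [H/m]

def beta_poloidal(avg_pressure, b_theta_edge):
    return 2.0 * MU0 * avg_pressure / b_theta_edge ** 2

# Hypothetical numbers: <p> = 200 Pa, edge poloidal field 0.05 T.
print(beta_poloidal(200.0, 0.05))
```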
Average Transverse Momentum Quantities Approaching the Lightfront
Boer, Daniel
In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the p_T broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large
Averages of operators in finite Fermion systems
International Nuclear Information System (INIS)
Ginocchio, J.N.
1980-01-01
The important ingredients in the spectral analysis of Fermion systems are the averages of operators. In this paper we derive expressions for averages of operators in truncated Fermion spaces in terms of the minimal information needed about the operator. If we take the operator to be powers of the Hamiltonian, we can then study the conditions on a Hamiltonian for the eigenvalues of the Hamiltonian in the truncated space to be Gaussian distributed. The theory of scalar traces is reviewed, and the dependence on nucleon number and single-particle states is discussed. These results are used to show that a dilute non-interacting system will have Gaussian distributed eigenvalues, i.e., its cumulants will tend to zero, for a large number of Fermions. The dominant terms in the cumulants of a dilute interacting Fermion system are derived. In this case the cumulants depend crucially on the interaction, even for a large number of Fermions. Configuration averaging is briefly discussed. Finally, comments are made on averaging for a fixed number of Fermions and angular momentum
Full averaging of fuzzy impulsive differential inclusions
Directory of Open Access Journals (Sweden)
Natalia V. Skripnik
2010-09-01
Full Text Available In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend similar results for impulsive differential inclusions with Hukuhara derivative (Skripnik, 2007), for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009), and for fuzzy differential inclusions (Skripnik, 2009).
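The averaging principle being substantiated can be illustrated numerically in its simplest classical setting (ordinary ODEs, no fuzziness or impulses; the equation and parameters are our own example):

```python
# Classical averaging illustration: for the slow dynamics
#   dx/dt = eps * x * sin(t)**2,
# the averaged equation dx/dt = eps * x / 2 (since sin**2 averages to 1/2)
# tracks the exact solution to O(eps) over times of order 1/eps.
import math

def euler(f, x0, t_end, dt=1e-3):
    x, t = x0, 0.0
    while t < t_end:
        x += dt * f(t, x)
        t += dt
    return x

eps = 0.01
x_full = euler(lambda t, x: eps * x * math.sin(t) ** 2, 1.0, 100.0)
x_avg  = euler(lambda t, x: eps * x / 2.0,              1.0, 100.0)
print(x_full, x_avg)    # both near exp(eps * 50)
```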
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
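A minimal example of the kind of MA rule these studies consider, as a common practitioner sketch rather than the paper's agent-based dynamics; the price series is invented:

```python
# Moving-average trading rule: go long (1) when the price is above its
# n-period moving average, otherwise stay out (0).

def moving_average(prices, n):
    return [sum(prices[k + 1 - n:k + 1]) / n if k + 1 >= n else None
            for k in range(len(prices))]

def ma_signal(prices, n):
    ma = moving_average(prices, n)
    return [None if m is None else (1 if p > m else 0)
            for p, m in zip(prices, ma)]

prices = [10, 11, 12, 11, 10, 9, 10, 12]
print(ma_signal(prices, 3))     # [None, None, 1, 0, 0, 0, 1, 1]
```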
Average beta-beating from random errors
Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department
2018-01-01
The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.
High Average Power Optical FEL Amplifiers
Ben-Zvi, I; Litvinenko, V
2005-01-01
Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce unprecedented average power FELs with some advantages. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Li...
Reliability Estimates for Undergraduate Grade Point Average
Westrick, Paul A.
2017-01-01
Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…
The average orbit system upgrade for the Brookhaven AGS
International Nuclear Information System (INIS)
Ciardullo, D.J.; Brennan, J.M.
1995-01-01
The flexibility of the AGS to accelerate protons, polarized protons and heavy ions requires average orbit instrumentation capable of performing over a wide range of beam intensity (10⁹ to 6×10¹³ charges) and accelerating frequency (1.7 MHz to 4.5 MHz). In addition, the system must be tolerant of dramatic changes in bunch shape, such as those occurring near transition. Reliability and maintenance issues preclude the use of active electronics within the high-radiation environment of the AGS tunnel, prompting the use of remote bunch signal processing. The upgrade for the AGS Average Orbit system is divided into three areas: (1) a new Pick Up Electrode (PUE) signal delivery system; (2) new average orbit processing electronics; and (3) centralized peripheral and data acquisition hardware. A distributed processing architecture was chosen to minimize the PUE signal cable lengths, the group of four from each detector location being phase matched to within ±5 degrees
Shubert, W. C.
1973-01-01
Transportation requirements are considered during the engine design layout reviews and maintenance engineering analyses. Where designs cannot be influenced to avoid transportation problems, the transportation representative is advised of the problems permitting remedies early in the program. The transportation representative will monitor and be involved in the shipment of development engine and GSE hardware between FRDC and vehicle manufacturing plant and thereby will be provided an early evaluation of the transportation plans, methods and procedures to be used in the space tug support program. Unanticipated problems discovered in the shipment of development hardware will be known early enough to permit changes in packaging designs and transportation plans before the start of production hardware and engine shipments. All conventional transport media can be used for the movement of space tug engines. However, truck transport is recommended for ready availability, variety of routes, short transit time, and low cost.
International Nuclear Information System (INIS)
Moore, James; Hays, David; Quinn, John; Johnson, Robert; Durham, Lisa
2013-01-01
As part of the ongoing remediation process at the Maywood Formerly Utilized Sites Remedial Action Program (FUSRAP) properties, Argonne National Laboratory (Argonne) assisted the U.S. Army Corps of Engineers (USACE) New York District by providing contaminated soil volume estimates for the main site area, much of which is fully or partially remediated. As part of the volume estimation process, an initial conceptual site model (ICSM) was prepared for the entire site that captured existing information (with the exception of soil sampling results) pertinent to the possible location of surface and subsurface contamination above cleanup requirements. This ICSM was based on historical anecdotal information, aerial photographs, and the logs from several hundred soil cores that identified the depth of fill material and the depth to bedrock under the site. Specialized geostatistical software developed by Argonne was used to update the ICSM with historical sampling results and down-hole gamma survey information for hundreds of soil core locations. The updating process yielded both a best guess estimate of contamination volumes and a conservative upper bound on the volume estimate that reflected the estimate's uncertainty. Comparison of model results to actual removed soil volumes was conducted on a parcel-by-parcel basis. Where sampling data density was adequate, the actual volume matched the model's average or best guess results. Where contamination was un-characterized and unknown to the model, the actual volume exceeded the model's conservative estimate. Factors affecting volume estimation were identified to assist in planning further excavations. (authors)
Effect of land area on average annual suburban water demand ...
African Journals Online (AJOL)
... values in the range between 4.4 kℓ·d⁻¹·ha⁻¹ and 8.7 kℓ·d⁻¹·ha⁻¹. The average demand was 10.4 kℓ·d⁻¹·ha⁻¹ for calculation based on the residential area. The results are useful when crude estimates of AADD are required for planning new land developments. Keywords: urban water demand, suburb area, residential ...
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Directory of Open Access Journals (Sweden)
Carmen BOGHEAN
2013-12-01
Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average work productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of the average work productivity over the factors affecting it is carried out by means of the u-substitution method.
Statistics on exponential averaging of periodograms
International Nuclear Information System (INIS)
Peeters, T.T.J.M.; Ciftcioglu, Oe.
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
Average Annual Rainfall over the Globe
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.
Energy Technology Data Exchange (ETDEWEB)
BEN-ZVI, ILAN, DAYRAN, D.; LITVINENKO, V.
2005-08-21
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Technological progress and average job matching quality
Centeno, Mário; Corrêa, Márcio V.
2009-01-01
Our objective is to study, in a labor market characterized by search frictions, the effect of technological progress on the average quality of job matches. For that, we use an extension of Mortensen and Pissarides (1998) and find that the effects of technological progress on the labor market depend upon the initial conditions of the economy. If the economy is totally characterized by the presence of low-quality job matches, an increase in technological progress is accompanied by ...
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
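The TAMSD analyzed above is straightforward to compute from a single trajectory. Here it is in discrete time for a 1D random walk; the trajectory is simulated, not experimental data:

```python
# Time-averaged mean-square displacement of a single trajectory x[0..N]:
#   TAMSD(lag) = mean over t of (x[t + lag] - x[t])**2.
# For Brownian motion it grows linearly with the lag.
import random

def tamsd(x, lag):
    n = len(x) - lag
    return sum((x[t + lag] - x[t]) ** 2 for t in range(n)) / n

# A long Brownian (Gaussian random-walk) trajectory with unit-variance steps.
rng = random.Random(0)
x = [0.0]
for _ in range(100000):
    x.append(x[-1] + rng.gauss(0.0, 1.0))
print(tamsd(x, 1), tamsd(x, 10))    # roughly 1 and 10
```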
Weighted estimates for the averaging integral operator
Czech Academy of Sciences Publication Activity Database
Opic, Bohumír; Rákosník, Jiří
2010-01-01
Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231
Cumulative and Averaging Fission of Beliefs
Josang, Audun
2007-01-01
Belief fusion is the principle of combining separate beliefs or bodies of evidence originating from different sources. Depending on the situation to be modelled, different belief fusion methods can be applied. Cumulative and averaging belief fusion is defined for fusing opinions in subjective logic, and for fusing belief functions in general. The principle of fission is the opposite of fusion, namely to eliminate the contribution of a specific belief from an already fused belief, with the pur...
Energy Technology Data Exchange (ETDEWEB)
Beesing, M. E.; Buchholz, R. L.; Evans, R. A.; Jaminski, R. W.; Mathur, A. K.; Rausch, R. A.; Scarborough, S.; Smith, G. A.; Waldhauer, D. J.
1980-01-01
An investigation of the optical performance of a variety of concentrating solar collectors is reported. The study addresses two important issues: the accuracy of reflective or refractive surfaces required to achieve specified performance goals, and the effect of environmental exposure on the performance of concentrators. To assess the importance of surface accuracy on optical performance, 11 tracking and nontracking concentrator designs were selected for detailed evaluation. Mathematical models were developed for each design and incorporated into a Monte Carlo ray-trace computer program to carry out detailed calculations. Results for the 11 concentrators are presented in graphic form. The models and computer program are provided along with a user's manual. A survey database was established on the effect of environmental exposure on the optical degradation of mirrors and lenses. Information on environmental and maintenance effects was found to be insufficient to permit specific recommendations for operating and maintenance procedures, but the available information is compiled and reported and does contain procedures that other workers have found useful.
Unscrambling The "Average User" Of Habbo Hotel
Directory of Open Access Journals (Sweden)
Mikael Johnson
2007-01-01
Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding of categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.
Chesapeake Bay Hypoxic Volume Forecasts and Results
Evans, Mary Anne; Scavia, Donald
2013-01-01
Given the average Jan-May 2013 total nitrogen load of 162,028 kg/day, this summer's hypoxia volume forecast is 6.1 km³, slightly smaller than the average for the period of record and almost the same as 2012. The late July 2013 measured volume was 6.92 km³.
Average properties of bidisperse bubbly flows
Serrano-García, J. C.; Mendez-Díaz, S.; Zenit, R.
2018-03-01
Experiments were performed in a vertical channel to study the properties of a bubbly flow composed of two distinct bubble size species. Bubbles were produced using a capillary bank with tubes of two distinct inner diameters; the flow through each capillary size was controlled such that the amount of large or small bubbles could be controlled. Using water and water-glycerin mixtures, a wide range of Reynolds and Weber numbers was investigated. The gas volume fraction ranged between 0.5% and 6%. The measurements of the mean bubble velocity of each species and the liquid velocity variance were obtained and contrasted with monodisperse flows of equivalent gas volume fraction. We found that bidispersity can induce a reduction of the mean bubble velocity of the large species; for the small species, the bubble velocity can be increased, decreased, or remain unaffected depending on the flow conditions. The liquid velocity variance of the bidisperse flows is, in general, bounded by the values of the small and large monodisperse cases; interestingly, in some cases, the liquid velocity fluctuations can be larger than in either monodisperse case. A simple model for the liquid agitation in bidisperse flows is proposed, with good agreement with the experimental measurements.
High average power diode pumped solid state lasers for CALIOPE
International Nuclear Information System (INIS)
Comaskey, B.; Halpin, J.; Moran, B.
1994-07-01
Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable nonlinear sources, or, if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long-range remote sensing missions require laser performance in the several watts to kilowatts range. At these power levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must then address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and degradation of laser cavity optical component performance with average power loading. To highlight the design trade-offs involved in addressing these issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water-impingement-cooled diode packages: a two-times-diffraction-limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase-conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first-generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz with 20 ns pulses of 300 mJ each, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with the directions their group is pursuing to advance average power lasers, including average power electro-optics, low-heat-load lasing media, and heat capacity lasers.
Amir, Sahar Z.
2018-01-02
The heterogeneous nature of rock fabrics, due to the existence of multi-scale fractures and geological formations, leads to deviations from unity in the fractional exponents of the flux equations. In this paper, the resulting non-Newtonian non-Darcy fractional-derivative flux equations are solved using physics-preserving averaging schemes that incorporate both the original and the shifted Grunwald-Letnikov (GL) approximation formulas, preserving the physics by reducing the shifting effects while maintaining the stability of the system by keeping one shifted expansion. The proposed use of the GL expansions also generates symmetric coefficient matrices, which significantly reduces the discretization complexity that arises with the fully shifted cases in the literature and helps considerably in 2D and 3D systems. The derivation and discretization of the system equations are discussed. Then, the physics-preserving averaging scheme is explained and illustrated. Finally, results are presented and reviewed. Edge-based original GL expansions are unstable, as also shown in the literature. Shifted GL expansions are stable but add many additional weights to both sides of the discretization, affecting the physical accuracy. In comparison, the physics-preserving averaging scheme balances the physical accuracy and stability requirements, leading to a more physically conservative scheme that is more stable than the original GL approximation but may be slightly less stable than the shifted GL approximations. It is a locally conservative single-continuum averaging scheme that applies a finite-volume viewpoint.
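As a minimal numerical sketch of the (unshifted) Grunwald-Letnikov approximation the abstract builds on, the GL weights can be generated by the standard recurrence and applied as a one-sided convolution; function names and the grid are illustrative, not taken from the paper.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), computed with
    the standard recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative_at(f_vals, alpha, h):
    """Unshifted GL approximation of the fractional derivative D^alpha f
    at the rightmost grid point: h^-alpha * sum_k w_k * f[n - k]."""
    n = len(f_vals) - 1
    w = gl_weights(alpha, n)
    return h ** (-alpha) * float(np.dot(w, f_vals[::-1]))

# For alpha = 1 the weights collapse to [1, -1, 0, ...] and the formula
# reduces to the ordinary backward difference, so D^1 of f(x) = x is 1.
x = np.linspace(0.0, 1.0, 11)
d = gl_derivative_at(x, 1.0, 0.1)
```

The shifted variants the paper averages over differ only in which grid point each weight multiplies; the weight recurrence itself is unchanged.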
Multistage parallel-serial time averaging filters
International Nuclear Information System (INIS)
Theodosiou, G.E.
1980-01-01
Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit that can be repeated an arbitrary number of times in series, providing a parallel-serial filter. The main advantages of such a filter over a serial one are much lower electronic gate jitter and time delay for the same amount of total time-uncertainty reduction. (orig.)
Bootstrapping Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust…
Fluctuations of wavefunctions about their classical average
International Nuclear Information System (INIS)
Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H
2003-01-01
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states is a well-known fact. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.
Average local ionization energy: A review.
Politzer, Peter; Murray, Jane S; Bulat, Felipe A
2010-11-01
The average local ionization energy I(r) is the energy necessary to remove an electron from the point r in the space of a system. Its lowest values reveal the locations of the least tightly-held electrons, and thus the favored sites for reaction with electrophiles or radicals. In this paper, we review the definition of I(r) and some of its key properties. Apart from its relevance to reactive behavior, I(r) has an important role in several fundamental areas, including atomic shell structure, electronegativity and local polarizability and hardness. All of these aspects of I(r) are discussed.
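For reference, the standard definition reviewed in this literature (sketched here from memory, not quoted from the paper) expresses I(r) as an orbital-density-weighted average of the magnitudes of the orbital energies:

```latex
\bar{I}(\mathbf{r}) \;=\; \frac{\sum_i \rho_i(\mathbf{r})\,\lvert \varepsilon_i \rvert}{\rho(\mathbf{r})},
\qquad \rho(\mathbf{r}) \;=\; \sum_i \rho_i(\mathbf{r}),
```

where ρ_i(r) is the density of the occupied orbital with energy ε_i and ρ(r) is the total electronic density, so low values of I(r) mark regions where, on average, electrons are least tightly bound.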
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis S.
2012-07-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
Average Nuclear properties based on statistical model
International Nuclear Information System (INIS)
El-Jaick, L.J.
1974-01-01
The gross properties of nuclei were investigated with a statistical model, in systems with equal and with different numbers of protons and neutrons, treated separately, considering the Coulomb energy in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exerted by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.)
Independence, Odd Girth, and Average Degree
DEFF Research Database (Denmark)
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.
The Effects of Cooperative Learning and Learner Control on High- and Average-Ability Students.
Hooper, Simon; And Others
1993-01-01
Describes a study that examined the effects of cooperative versus individual computer-based instruction on the performance of high- and average-ability fourth-grade students. Effects of learner and program control are investigated; student attitudes toward instructional content, learning in groups, and partners are discussed; and further research…
Asymptotic Time Averages and Frequency Distributions
Directory of Open Access Journals (Sweden)
Muhammad El-Taha
2016-01-01
Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t≥0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S=(−∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
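On any finite sample path the identity between the time average and the expectation under the path's frequency distribution holds exactly; the paper's contribution is the conditions under which it survives the long-run limit. A minimal illustration of the finite-path version, with names of our choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 3, size=10_000)   # a sample path over states {0, 1, 2}

def g(s):                             # any measurable function of the state
    return s ** 2

# time average of g along the path
time_avg = float(np.mean([g(s) for s in x]))

# frequency distribution of the same path, and the expectation of g under it
states, counts = np.unique(x, return_counts=True)
freq = counts / counts.sum()
freq_expect = float(np.dot([g(s) for s in states], freq))
```

The two quantities agree to rounding error by construction; the interesting question, addressed in the paper, is when this equality persists as the path length grows without bound.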
Group averaging for de Sitter free fields
Energy Technology Data Exchange (ETDEWEB)
Marolf, Donald; Morrison, Ian A, E-mail: marolf@physics.ucsb.ed, E-mail: ian_morrison@physics.ucsb.ed [Department of Physics, University of California, Santa Barbara, CA 93106 (United States)
2009-12-07
Perturbative gravity about global de Sitter space is subject to linearization-stability constraints. Such constraints imply that quantum states of matter fields couple consistently to gravity only if the matter state has vanishing de Sitter charges, i.e. only if the state is invariant under the symmetries of de Sitter space. As noted by Higuchi, the usual Fock spaces for matter fields contain no de Sitter-invariant states except the vacuum, though a new Hilbert space of de Sitter-invariant states can be constructed via so-called group-averaging techniques. We study this construction for free scalar fields of arbitrary positive mass in any dimension, and for linear vector and tensor gauge fields in any dimension. Our main result is to show in each case that group averaging converges for states containing a sufficient number of particles. We consider general N-particle states with smooth wavefunctions, though we obtain somewhat stronger results when the wavefunctions are finite linear combinations of de Sitter harmonics. Along the way we obtain explicit expressions for general boost matrix elements in a familiar basis.
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
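Trajectory averaging in its simplest form (Polyak-Ruppert averaging) can be sketched on a toy Robbins-Monro problem; the setup below is our own illustrative example, not the SAMCMC algorithm of the paper:

```python
import numpy as np

# Robbins-Monro iteration for the root of h(theta) = theta - mu, observed
# through noisy samples x_n ~ N(mu, 1), followed by trajectory
# (Polyak-Ruppert) averaging of the iterates.
rng = np.random.default_rng(1)
mu = 2.0                                  # the unknown target
theta = 0.0
iterates = []
for n in range(1, 20_001):
    x = mu + rng.standard_normal()        # noisy observation of the target
    theta += (x - theta) / n ** 0.7       # slowly decaying step size n^-0.7
    iterates.append(theta)

# averaging the later trajectory damps the noise left by the large steps
theta_avg = float(np.mean(iterates[10_000:]))
```

The slowly decaying step size keeps the raw iterate noisy, while the average over the trajectory converges at the efficient rate; this is the effect the paper establishes for stochastic approximation MCMC.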
Global atmospheric circulation statistics: Four year averages
Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.
1987-01-01
Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
Bounding quantum gate error rate based on reported average fidelity
International Nuclear Information System (INIS)
Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C
2016-01-01
Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
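As a sanity check on how such average-fidelity figures are computed, Nielsen's closed-form expression for the average gate fidelity of a Kraus channel relative to the identity can be evaluated directly; the depolarizing example below is our own choice, not the paper's, and recovers the textbook qubit value F = 1 − p/2:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def average_gate_fidelity(kraus, d=2):
    """Nielsen's formula for the average fidelity of a channel with Kraus
    operators {K_i} relative to the identity gate:
    F = (sum_i |Tr K_i|^2 + d) / (d^2 + d)."""
    s = sum(abs(np.trace(K)) ** 2 for K in kraus)
    return float((s + d) / (d ** 2 + d))

p = 0.01  # depolarizing probability
kraus = [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]
F = average_gate_fidelity(kraus)  # analytically F = 1 - p/2 for this channel
```

The paper's point is precisely that this average fidelity is not the worst-case error rate relevant to fault tolerance; the two can differ substantially for non-Pauli noise.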
High Average Power, High Energy Short Pulse Fiber Laser System
Energy Technology Data Exchange (ETDEWEB)
Messerly, M J
2007-11-13
Recently, continuous-wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High-energy, ultrafast, chirped-pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of high-efficiency, compact, robust systems that are turnkey. Applications such as cutting, drilling, and materials processing, front-end systems for high-energy pulsed lasers (such as petawatts), and laser-based sources of high-spatial-coherence, high-flux x-rays all require high-energy short pulses, and two of these three applications also require high average power. The challenge in creating a high-energy chirped-pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high-energy, high-average-power fiber laser system. This work included exploring designs of large-mode-area optical fiber amplifiers for high-energy systems as well as understanding the issues associated with chirped-pulse amplification in optical fiber amplifier systems.
Hourdakis, C J
2011-04-07
The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū(P), the average, Ū, the effective, U(eff), or the maximum peak, U(P), tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū(P), voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak k(PPV,kVp) and the average k(PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated (according to the proposed method) PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis to determine the PPV with kV-meters from Ū(P) and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
A singularity theorem based on spatial averages
Indian Academy of Sciences (India)
severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions ... The next section contains a more or less detailed historical review of the antecedents, birth, and hazardous life of the conjecture. ... half the history of the model. It satisfies the stronger energy requirements (the.
Changing mortality and average cohort life expectancy
DEFF Research Database (Denmark)
Schoen, Robert; Canudas-Romo, Vladimir
2005-01-01
measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time....
Angle-averaged Compton cross sections
Energy Technology Data Exchange (ETDEWEB)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
Reynolds averaged simulation of unsteady separated flow
International Nuclear Information System (INIS)
Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.
2003-01-01
The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around a square cylinder and over a wall-mounted cube is simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation.
FEL system with homogeneous average output
Energy Technology Data Exchange (ETDEWEB)
Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph
2018-01-16
A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest, then accelerating the particles to full energy to produce distinct and independently controlled (by the choice of phase offset) phase-energy correlations, or chirps, on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M₅₆, which are selected to compress all three bunch trains at the FEL, with higher order terms managed.
Average Gait Differential Image Based Human Recognition
Directory of Open Access Journals (Sweden)
Jinyan Chen
2014-01-01
Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
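The accumulation step described above can be sketched in a few lines; this follows the idea in the abstract (average of absolute silhouette differences between adjacent frames), though the paper's exact normalization may differ:

```python
import numpy as np

def average_gait_differential_image(frames):
    """AGDI sketch: average of the absolute silhouette differences
    between adjacent frames of a walking sequence."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))   # |frame_{t+1} - frame_t|
    return diffs.mean(axis=0)

# three tiny 2x2 binary "silhouettes" standing in for segmented frames
seq = [[[0, 1], [0, 0]],
       [[1, 1], [0, 0]],
       [[1, 0], [0, 0]]]
agdi = average_gait_differential_image(seq)   # -> [[0.5, 0.5], [0.0, 0.0]]
```

Pixels that change often between frames (the moving limbs) get high AGDI values, while static pixels stay near zero, which is how the feature image keeps both kinetic and static information.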
Angle-averaged Compton cross sections
International Nuclear Information System (INIS)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach by devising a ratings scheme that we apply to the data from the Netflix prize, finding a significant improvement using our method over a baseline.
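The building block of such a measure, the weighted harmonic average, is worth making concrete; the standalone function below is only an illustration of the averaging rule, not the paper's full recursive GEN construction:

```python
def weighted_harmonic_mean(values, weights):
    """Weighted harmonic average: sum(w) / sum(w / v). Dominated by the
    smallest values, so one strong (low-"distance") connection pulls the
    average down far more than one weak connection pushes it up."""
    return sum(weights) / sum(w / v for v, w in zip(values, weights))

# two neighbors at "distances" 1 and 2, equally weighted
h = weighted_harmonic_mean([1.0, 2.0], [1.0, 1.0])  # -> 4/3, below the
                                                    # arithmetic mean 1.5
```

Because each node normalizes by its own weights, the closeness node A assigns to node B need not equal the closeness B assigns to A, which is the asymmetry the abstract refers to.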
A collisional-radiative average atom model for hot plasmas
International Nuclear Information System (INIS)
Rozsnyai, B.F.
1996-01-01
A collisional-radiative 'average atom' (AA) model is presented for the calculation of opacities of hot plasmas not in local thermodynamic equilibrium (LTE). The electron impact and radiative rate constants are calculated using the dipole oscillator strengths of the average atom. A key element of the model is the photon escape probability, which at present is calculated for a semi-infinite slab. The Fermi statistics render the rate equations for the AA level occupancies nonlinear, which requires iterating until the steady-state AA level occupancies are found. Detailed electronic configurations are built into the model after the self-consistent non-LTE AA state is found. The model shows a continuous transition from the non-LTE to the LTE state depending on the optical thickness of the plasma. 22 refs., 13 figs., 1 tab
Microchannel heatsinks for high-average-power laser diode arrays
Benett, William J.; Freitas, Barry L.; Beach, Raymond J.; Ciarlo, Dino R.; Sperry, Verry; Comaskey, Brian J.; Emanuel, Mark A.; Solarz, Richard W.; Mundinger, David C.
1992-06-01
Detailed performance results and fabrication techniques for an efficient and low thermal impedance laser diode array heatsink are presented. High duty factor or even CW operation of fully filled laser diode arrays is enabled at high average power. Low thermal impedance is achieved using a liquid coolant and laminar flow through microchannels. The microchannels are fabricated in silicon using a photolithographic pattern definition procedure followed by anisotropic chemical etching. A modular rack-and-stack architecture is adopted for the heatsink design allowing arbitrarily large two-dimensional arrays to be fabricated and easily maintained. The excellent thermal control of the microchannel cooled heatsinks is ideally suited to pump array requirements for high average power crystalline lasers because of the stringent temperature demands that result from coupling the diode light to several nanometers wide absorption features characteristic of lasing ions in crystals.
Aircraft Simulator Data Requirements Study. Volume II
1977-01-01
the adjustments to correct current problems may mask serious simulator problems that will emerge in later use of the simulator. It was universally... Operable equipment; environment; flight envelope and maneuvers; scanning area; instructor stations; emergencies; mission phases (T.O., cruise
Technological progress and average job matching quality
Directory of Open Access Journals (Sweden)
Mário Centeno
2009-12-01
Full Text Available Our objective is to study, in a labor market characterized by search frictions, the effect of technological progress on the average quality of job matches. To that end, we use an extension of Mortensen and Pissarides (1998) and find that the effects of technological progress on the labor market depend on the initial conditions of the economy. If the economy is characterized entirely by low-quality job matches, an increase in the rate of technological progress is accompanied by an increase in the quality of jobs. In turn, if the economy is characterized entirely by high-quality job matches, an increase in the rate of technological progress has the reverse effect. Finally, if the economy is characterized entirely by very high-quality jobs, an increase in the rate of technological progress implies an increase in the average quality of job matches.
On spectral averages in nuclear spectroscopy
International Nuclear Information System (INIS)
Verbaarschot, J.J.M.
1982-01-01
In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed angular-momentum-projection traces into fixed angular-momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)
Modelling lidar volume-averaging and its significance to wind turbine wake measurements
DEFF Research Database (Denmark)
Meyer Forsting, Alexander Raul; Troldborg, Niels; Borraccino, Antoine
2017-01-01
gradients, like the rotor wake, can it be detrimental. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continous-wave lidar weighting functions. The flow-field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination...
Using four-phase Eulerian volume averaging approach to model macrosegregation and shrinkage cavity
Wu, M.; Kharicha, A.; Ludwig, A.
2015-06-01
This work extends a previous three-phase mixed columnar-equiaxed solidification model to treat the formation of shrinkage cavity by including an additional phase. In the previous model, mixed columnar and equiaxed solidification with consideration of multiphase transport phenomena (mass, momentum, species and enthalpy) was proposed to calculate the as-cast structure, including the columnar-to-equiaxed transition (CET) and the formation of macrosegregation. In order to incorporate the formation of shrinkage cavity, an additional phase, i.e. a gas phase or a covering liquid slag phase, must be considered in addition to the previously introduced three phases (parent melt, solidifying columnar dendrite trunks and equiaxed grains). No mass or species transfer between the new phase and the other three phases is necessary, but the treatment of the momentum and energy exchanges between them is crucially important for the formation of the free surface and the shrinkage cavity, which in turn influence the flow field and the formation of segregation. A steel ingot is calculated as a preliminary test of the functionality of the model.
Actuator disk model of wind farms based on the rotor average wind speed
DEFF Research Database (Denmark)
Han, Xing Xing; Xu, Chang; Liu, De You
2016-01-01
Due to the difficulty of estimating the reference wind speed for wake modeling in a wind farm, this paper proposes a new method to calculate the momentum source based on the rotor average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition...
McMahon, Troy
2015-05-01
© 2015 IEEE. Reachable volumes are a new technique that allows one to efficiently restrict sampling to feasible/reachable regions of the planning space even for high degree of freedom and highly constrained problems. However, they have so far only been applied to graph-based sampling-based planners. In this paper we develop the methodology to apply reachable volumes to tree-based planners such as Rapidly-Exploring Random Trees (RRTs). In particular, we propose a reachable volume RRT called RVRRT that can solve high degree of freedom problems and problems with constraints. To do so, we develop a reachable volume stepping function, a reachable volume expand function, and a distance metric based on these operations. We also present a reachable volume local planner to ensure that local paths satisfy constraints for methods such as PRMs. We show experimentally that RVRRTs can solve constrained problems with as many as 64 degrees of freedom and unconstrained problems with as many as 134 degrees of freedom. RVRRTs can solve problems more efficiently than existing methods, requiring fewer nodes and collision detection calls. We also show that it is capable of solving difficult problems that existing methods cannot.
Spinal cord imaging using averaged magnetization inversion recovery acquisitions.
Weigel, Matthias; Bieri, Oliver
2018-04-01
To establish a novel approach for fast high-resolution spinal cord (SC) imaging using averaged magnetization inversion recovery acquisitions (AMIRA). The AMIRA concept is based on an inversion recovery (IR) prepared, segmented, and time-limited cine balanced steady-state free precession sequence. Typically, for the fastest SC imaging without any signal averaging, eight consecutive images in time with an in-plane resolution of 0.67 × 0.67 mm² and 6 mm to 8 mm slice thickness are acquired in 51 s. AMIRA does not require parallel acquisition techniques. AMIRA measures eight images of remarkable tissue contrast variation between spinal cord gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). Following the AMIRA concept, averaging the first IR contrast images not only improves the signal-to-noise ratio but also offers a surprising enhancement of the contrast-to-noise ratio between GM and WM, whereas averaging the last images considerably improves the contrast-to-noise ratio between WM and CSF. These observations are supported by quantitative data. The AMIRA concept provides 2D spinal cord imaging with multiple tissue contrasts and enhanced contrast-to-noise ratios with a typical 0.67 × 0.67 mm² in-plane resolution and a slice thickness between 4 mm and 8 mm, acquired in only 1 to 2 min per slice. Magn Reson Med 79:1870-1881, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Dose calculation with respiration-averaged CT processed from cine CT without a respiratory surrogate
International Nuclear Information System (INIS)
Riegel, Adam C.; Ahmad, Moiz; Sun Xiaojun; Pan Tinsu
2008-01-01
Dose calculation for thoracic radiotherapy is commonly performed on a free-breathing helical CT despite artifacts caused by respiratory motion. Four-dimensional computed tomography (4D-CT) is one method to incorporate motion information into the treatment planning process. Some centers now use the respiration-averaged CT (RACT), the pixel-by-pixel average of the ten phases of 4D-CT, for dose calculation. This method, while sparing the tedious task of 4D dose calculation, still requires 4D-CT technology. The authors have recently developed a means to reconstruct RACT directly from unsorted cine CT data from which 4D-CT is formed, bypassing the need for a respiratory surrogate. Using RACT from cine CT for dose calculation may be a means to incorporate motion information into dose calculation without performing 4D-CT. The purpose of this study was to determine if RACT from cine CT can be substituted for RACT from 4D-CT for the purposes of dose calculation, and if increasing the cine duration can decrease differences between the dose distributions. Cine CT data and corresponding 4D-CT simulations for 23 patients with at least two breathing cycles per cine duration were retrieved. RACT was generated four ways: First from ten phases of 4D-CT, second, from 1 breathing cycle of images, third, from 1.5 breathing cycles of images, and fourth, from 2 breathing cycles of images. The clinical treatment plan was transferred to each RACT and dose was recalculated. Dose planes were exported at orthogonal planes through the isocenter (coronal, sagittal, and transverse orientations). The resulting dose distributions were compared using the gamma (γ) index within the planning target volume (PTV). Failure criteria were set to 2%/1 mm. A follow-up study with 50 additional lung cancer patients was performed to increase sample size. The same dose recalculation and analysis was performed. In the primary patient group, 22 of 23 patients had 100% of points within the PTV pass γ criteria
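The respiration-averaged CT described above is, at its core, a pixel-by-pixel mean over a stack of CT images. A minimal sketch of that averaging step (array names, shapes, and values are illustrative assumptions, not from the study):

```python
import numpy as np

def respiration_averaged_ct(phase_images):
    """Pixel-by-pixel average of CT images (e.g. the ten phases of a 4D-CT,
    or an unsorted stack of cine images spanning >= 1 breathing cycle)."""
    phases = np.asarray(phase_images, dtype=float)
    return phases.mean(axis=0)  # average over the phase/time axis

# Ten synthetic 'phase' images of a 4x4 slice (Hounsfield-unit-like values)
rng = np.random.default_rng(0)
phases = rng.normal(0.0, 50.0, size=(10, 4, 4))
ract = respiration_averaged_ct(phases)
```

The same routine covers both variants compared in the study: the 4D-CT approach feeds in the sorted phase images, while the cine approach feeds in all unsorted images within a chosen cine duration.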
Aarthi, G.; Ramachandra Reddy, G.
2018-03-01
In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with on-off keying (OOK), polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we incorporate aperture-averaging effects along with the above adaptive schemes. The results indicate that the ORA scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture-averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.
Design of respiration averaged CT for attenuation correction of the PET data from PET/CT
International Nuclear Information System (INIS)
Chi, Pai-Chun Melinda; Mawlawi, Osama; Nehmeh, Sadek A.; Erdi, Yusuf E.; Balter, Peter A.; Luo, Dershan; Mohan, Radhe; Pan Tinsu
2007-01-01
Our previous patient studies have shown that the use of respiration-averaged computed tomography (ACT) for attenuation correction of the positron emission tomography (PET) data from PET/CT reduces the potential misalignment in the thorax region by matching the temporal resolution of the CT to that of the PET. In the present work, we investigated other approaches to acquiring ACT in order to reduce the CT dose and to improve the ease of clinical implementation. Four-dimensional CT (4DCT) data sets for ten patients (17 lung/esophageal tumors) were acquired in the thoracic region immediately after the routine PET/CT scan. For each patient, multiple sets of ACTs were generated based on both phase image averaging (phase approach) and fixed cine duration image averaging (cine approach). In the phase approach, the ACTs were calculated from CT images corresponding to the significant phases of the respiratory cycle: ACT_0,50phs from end-inspiration (0%) and end-expiration (50%), ACT_20,70phs from mid-inspiration (20%) and mid-expiration (70%), ACT_4phs from 0%, 20%, 50% and 70%, and ACT_10phs from all ten phases, which was the original approach. In the cine approach, which does not require 4DCT, the ACTs were calculated based on the cine images from cine durations of 1 to 6 s at 1 s increments. PET emission data for each patient were attenuation corrected with each of the above-mentioned ACTs, and the tumor maximum standardized uptake value (SUV_max), average SUV (SUV_avg), and tumor volume measurements were compared. Percent differences were calculated between PET data corrected with the various ACTs and that corrected with ACT_10phs. In the phase approach, ACT_10phs can be approximated by ACT_4phs to within a mean percent difference of 2% in SUV and tumor volume measurements. In the cine approach, ACT_10phs can be approximated to within a mean percent difference of 3% by ACTs computed from cine durations ≥3 s. Acquiring CT images only at the four significant phases for the
Gover, A. Rod; Waldron, Andrew
2017-09-01
We develop a universal distributional calculus for regulated volumes of metrics that are suitably singular along hypersurfaces. When the hypersurface is a conformal infinity we give simple integrated distribution expressions for the divergences and anomaly of the regulated volume functional valid for any choice of regulator. For closed hypersurfaces or conformally compact geometries, methods from a previously developed boundary calculus for conformally compact manifolds can be applied to give explicit holographic formulæ for the divergences and anomaly expressed as hypersurface integrals over local quantities (the method also extends to non-closed hypersurfaces). The resulting anomaly does not depend on any particular choice of regulator, while the regulator dependence of the divergences is precisely captured by these formulæ. Conformal hypersurface invariants can be studied by demanding that the singular metric obey, smoothly and formally to a suitable order, a Yamabe type problem with boundary data along the conformal infinity. We prove that the volume anomaly for these singular Yamabe solutions is a conformally invariant integral of a local Q-curvature that generalizes the Branson Q-curvature by including data of the embedding. In each dimension this canonically defines a higher dimensional generalization of the Willmore energy/rigid string action. Recently, Graham proved that the first variation of the volume anomaly recovers the density obstructing smooth solutions to this singular Yamabe problem; we give a new proof of this result employing our boundary calculus. Physical applications of our results include studies of quantum corrections to entanglement entropies.
Waste Management System Requirement document
International Nuclear Information System (INIS)
1990-04-01
This volume defines the top level technical requirements for the Monitored Retrievable Storage (MRS) facility. It is designed to be used in conjunction with Volume 1, General System Requirements. Volume 3 provides a functional description expanding the requirements allocated to the MRS facility in Volume 1 and, when appropriate, elaborates on requirements by providing associated performance criteria. Volumes 1 and 3 together convey a minimum set of requirements that must be satisfied by the final MRS facility design without unduly constraining individual design efforts. The requirements are derived from the Nuclear Waste Policy Act of 1982 (NWPA), the Nuclear Waste Policy Amendments Act of 1987 (NWPAA), the Environmental Protection Agency's (EPA) Environmental Standards for the Management and Disposal of Spent Nuclear Fuel (40 CFR 191), NRC Licensing Requirements for the Independent Storage of Spent Nuclear and High-Level Radioactive Waste (10 CFR 72), and other federal statutory and regulatory requirements, and major program policy decisions. This document sets forth specific requirements that will be fulfilled. Each subsequent level of the technical document hierarchy will be significantly more detailed and provide further guidance and definition as to how each of these requirements will be implemented in the design. Requirements appearing in Volume 3 are traceable into the MRS Design Requirements Document. Section 2 of this volume provides a functional breakdown for the MRS facility. 1 tab
Jongschaap, R.J.J.
1987-01-01
The so-called generalized Kramers-Kirkwood expression for the average stress tensor of a system of interacting point particles, derived by Bird and Curtiss using a phase-space kinetic formalism, has been reconsidered from different points of view. First, a derivation based upon volume averaging is
Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry
de Kat, Roeland
2015-11-01
Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.
Site Averaged Neutron Soil Moisture: 1987-1989 (Betts)
National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the neutron probe soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...
Site Averaged Gravimetric Soil Moisture: 1987-1989 (Betts)
National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...
Low Average Sidelobe Slot Array Antennas for Radiometer Applications
Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.
2012-01-01
In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherently narrow-band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed, starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full-wave method-of-moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E
Human perceptions of colour rendition vary with average fidelity, average gamut, and gamut shape
Energy Technology Data Exchange (ETDEWEB)
Royer, MP [Pacific Northwest National Laboratory, Portland, OR, USA; Wilkerson, A. [Pacific Northwest National Laboratory, Portland, OR, USA; Wei, M. [The Hong Kong Polytechnic University, Hong Kong, China; Houser, K. [The Pennsylvania State University, University Park, PA, USA; Davis, R. [Pacific Northwest National Laboratory, Portland, OR, USA
2016-08-10
An experiment was conducted to evaluate how subjective impressions of color quality vary with changes in average fidelity, average gamut, and gamut shape (which considers the specific hues that are saturated or desaturated). Twenty-eight participants each evaluated 26 lighting conditions—created using four seven-channel tunable LED luminaires—in a 3.1 m by 3.7 m room filled with objects selected to cover a range of hue, saturation, and lightness. IES TM-30 fidelity index (Rf) values ranged from 64 to 93, IES TM-30 gamut index (Rg) values from 79 to 117, and IES TM-30 Rcs,h1 values (a proxy for gamut shape) from -19% to 26%. All lighting conditions delivered the same nominal illuminance and chromaticity. Participants were asked to rate each condition on eight-point semantic differential scales for saturated-dull, normal-shifted, and like-dislike. They were also asked one multiple-choice question, classifying the condition as saturated, dull, normal, or shifted. The findings suggest that gamut shape is more important than average gamut for human preference, with reds playing a more important role than other hues. Additionally, average fidelity alone is a poor predictor of human perceptions, although Rf was somewhat better than CIE Ra. The most preferred source had a CIE Ra value of 68, and 9 of the top 12 rated products had a CIE Ra value of 73 or less, which indicates that the commonly used criterion of CIE Ra ≥ 80 may exclude a majority of preferred light sources.
Potential of high-average-power solid state lasers
International Nuclear Information System (INIS)
Emmett, J.L.; Krupke, W.F.; Sooy, W.R.
1984-01-01
We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels
The visual system discounts emotional deviants when extracting average expression.
Haberman, Jason; Whitney, David
2010-10-01
There has been a recent surge in the study of ensemble coding, the idea that the visual system represents a set of similar items using summary statistics (Alvarez & Oliva, 2008; Ariely, 2001; Chong & Treisman, 2003; Parkes, Lund, Angelucci, Solomon, & Morgan, 2001). We previously demonstrated that this ability extends to faces and thus requires a high level of object processing (Haberman & Whitney, 2007, 2009). Recent debate has centered on the nature of the summary representation of size (e.g., Myczek & Simons, 2008) and whether the perceived average simply reflects the sampling of a very small subset of the items in a set. In the present study, we explored this further in the context of faces, asking observers to judge the average expressions of sets of faces containing emotional outliers. Our results suggest that the visual system implicitly and unintentionally discounts the emotional outliers, thereby computing a summary representation that encompasses the vast majority of the information present. Additional computational modeling and behavioral results reveal that an intentional cognitive sampling strategy does not accurately capture observer performance. Observers derive precise ensemble information given a 250-msec exposure, suggesting a rapid and flexible system not bound by the limits of serial attention.
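The outlier discounting described above can be caricatured with a trimmed mean. This is only an illustrative stand-in (the study does not claim this exact computation), with made-up "expression intensity" values:

```python
import numpy as np

def robust_ensemble_mean(values, trim_frac=0.2):
    """Symmetric trimmed mean: drop the most extreme items at both ends
    before averaging, so a lone emotional outlier is discounted."""
    v = np.sort(np.asarray(values, dtype=float))
    k = int(len(v) * trim_frac / 2)  # items to drop at each end
    return float(v[k:len(v) - k].mean()) if k > 0 else float(v.mean())

# Nine near-neutral faces plus one emotional outlier (arbitrary units)
faces = [0.10, 0.12, 0.09, 0.11, 0.10, 0.08, 0.13, 0.11, 0.10, 0.95]
plain = float(np.mean(faces))          # pulled upward by the outlier
robust = robust_ensemble_mean(faces)   # outlier largely discounted
```

The contrast between `plain` and `robust` mirrors the paper's finding: a summary statistic that downweights deviants tracks the bulk of the set far better than a raw mean does.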
Global robust image rotation from combined weighted averaging
Reich, Martin; Yang, Michael Ying; Heipke, Christian
2017-05-01
In this paper we present a novel rotation averaging scheme as part of our global image orientation model. This model is based on homologous points in overlapping images and is robust against outliers. It is applicable to various kinds of image data and provides accurate initializations for a subsequent bundle adjustment. The computation of global rotations is a combined optimization scheme: first, rotations are estimated in a convex relaxed semidefinite program. Rotations are required to be in the convex hull of the rotation group SO(3), which in most cases leads to correct rotations. Second, the estimation is improved in an iterative least squares optimization in the Lie algebra of SO(3). In order to deal with outliers in the relative rotations, we developed a sequential graph optimization algorithm that is able to detect and eliminate incorrect rotations. From the beginning, we propagate covariance information, which allows for a weighting in the least squares estimation. We evaluate our approach using both synthetic and real image datasets. Compared to recent state-of-the-art rotation averaging and global image orientation algorithms, our proposed scheme reaches a high degree of robustness and accuracy. Moreover, it is also applicable to large Internet datasets, which shows its efficiency.
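A standard building block of rotation-averaging pipelines like the one above is the chordal L2 mean: average the rotation matrices elementwise, then project back onto SO(3) with an SVD. The sketch below is a generic illustration of that building block, not the paper's full combined semidefinite/Lie-algebra scheme:

```python
import numpy as np

def chordal_mean_rotation(rotations):
    """Chordal L2 mean: elementwise average of rotation matrices, projected
    back onto SO(3) via SVD (nearest rotation in the Frobenius norm)."""
    M = np.mean(np.asarray(rotations, dtype=float), axis=0)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # enforce det = +1 (a proper rotation)
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

def rot_z(a):
    """Rotation by angle a about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Two symmetric rotations about z average to (approximately) the identity
mean_R = chordal_mean_rotation([rot_z(0.1), rot_z(-0.1)])
```

The SVD projection guarantees the result is a valid rotation even though the raw elementwise mean of rotation matrices is not.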
Orbit-averaged Darwin quasi-neutral hybrid code
International Nuclear Information System (INIS)
Zachary, A.L.; Cohen, B.I.
1986-01-01
We have developed an orbit-averaged Darwin quasi-neutral hybrid code to study the in situ acceleration of cosmic rays by supernova-remnant shock waves. The orbit-averaged algorithm is well suited to following the slow growth of Alfvén waves driven by resonances with rapidly gyrating cosmic rays. We present a complete description of our algorithm, along with stability and noise analyses. The code is numerically unstable, but a single e-folding may require as many as 10^5 time-steps! It can therefore be used to study instabilities for which Γ_physical > Γ_numerical, provided that Γ_numerical τ_final < O(1). We also analyze a physical instability which provides a successful test of our algorithm.
Dynamic logistic regression and dynamic model averaging for binary classification.
McCormick, Tyler H; Raftery, Adrian E; Madigan, David; Burd, Randall S
2012-03-01
We propose an online binary classification procedure for cases when there is uncertainty about the model to use and parameters within a model change over time. We account for model uncertainty through dynamic model averaging, a dynamic extension of Bayesian model averaging in which posterior model probabilities may also change with time. We apply a state-space model to the parameters of each model and we allow the data-generating model to change over time according to a Markov chain. Calibrating a "forgetting" factor accommodates different levels of change in the data-generating mechanism. We propose an algorithm that adjusts the level of forgetting in an online fashion using the posterior predictive distribution, and so accommodates various levels of change at different times. We apply our method to data from children with appendicitis who receive either a traditional (open) appendectomy or a laparoscopic procedure. Factors associated with which children receive a particular type of procedure changed substantially over the 7 years of data collection, a feature that is not captured using standard regression modeling. Because our procedure can be implemented completely online, future data collection for similar studies would require storing sensitive patient information only temporarily, reducing the risk of a breach of confidentiality. © 2011, The International Biometric Society.
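The forgetting-factor recursion at the heart of dynamic model averaging can be sketched in a few lines. This is a schematic of the model-probability update only (the variable names and the two-model example are invented, and the per-model state-space updates are omitted):

```python
import numpy as np

def dma_model_probs(post_prev, pred_lik, alpha=0.99):
    """One DMA step for the posterior model probabilities.

    post_prev: posterior model probabilities at time t-1 (sums to 1)
    pred_lik:  predictive likelihood of the new observation under each model
    alpha:     forgetting factor in (0, 1]; alpha = 1 recovers static BMA
    """
    post_prev = np.asarray(post_prev, dtype=float)
    pred_lik = np.asarray(pred_lik, dtype=float)
    prior = post_prev ** alpha        # flatten ("forget") the old posterior
    prior /= prior.sum()
    post = prior * pred_lik           # reweight by fit to the new data point
    return post / post.sum()

# Two candidate models; the new observation strongly favors model 0
probs = dma_model_probs([0.5, 0.5], [0.9, 0.1], alpha=0.95)
```

Smaller `alpha` pushes the implied prior toward uniform at each step, so the procedure adapts faster when the data-generating model changes over time.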
20 CFR 226.62 - Computing average monthly compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is...
Ultra-low noise miniaturized neural amplifier with hardware averaging.
Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M
2015-08-01
Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small, requiring a total recording noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms; therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise, with at least two parallel operational transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that minimizes both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and the high channel count in electrode arrays.
Ultra-low noise miniaturized neural amplifier with hardware averaging
Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.
2015-08-01
Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small, requiring a total recording noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms; therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise, with at least two parallel operational transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that minimizes both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and
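The 1/√N noise reduction claimed for hardware averaging is easy to reproduce in an idealized simulation. This sketch assumes each parallel amplifier contributes independent input noise; in the actual design the gain is smaller (46.1% at N = 8) because noise from the shared source resistance is common to all amplifiers and does not average out:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
signal = np.sin(np.linspace(0, 20 * np.pi, n_samples))  # stand-in for a nerve signal

def averaged_noise_std(n_amps):
    # Each parallel amplifier records signal + its own independent noise;
    # averaging the N copies keeps the signal and shrinks the noise.
    recordings = signal + rng.normal(0.0, 1.0, size=(n_amps, n_samples))
    return (recordings.mean(axis=0) - signal).std()

ratio = averaged_noise_std(1) / averaged_noise_std(8)  # approaches sqrt(8) ~ 2.83
```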
2010-07-01
.... (i) An estimate of the average daily volume (in gallons) of gasoline produced at each refinery. This... requirements for the gasoline benzene program? 80.1352 Section 80.1352 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline...
Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing
2017-09-01
The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell the traders timing to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which the fuzzy logic rule is used to determine the strength of trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experiment data. Each experiment is repeated 20 times. The results show that firstly the fuzzy moving average strategy can obtain a more stable rate of return than the moving average strategies. Secondly, the holding-amount series is highly sensitive to the price series. Thirdly, simple moving average methods are more efficient. Lastly, the fuzzy extents of extremely low, high, and very high are more popular. These results are helpful in investment decisions.
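A minimal sketch of the idea: a moving-average crossover fixes the direction of the trade, while a fuzzy membership on the size of the crossover gap sets the trading volume. The triangular membership, the 2% saturation gap, and the price series are hypothetical choices for illustration, not the paper's GA-optimized rule set:

```python
import numpy as np

def sma(prices, n):
    """Trailing simple moving average; the last element covers the final n prices."""
    return np.convolve(prices, np.ones(n) / n, mode="valid")

def fuzzy_trade_volume(short_ma, long_ma, max_units=100):
    # The crossover sign gives the trade direction; the relative gap is
    # fuzzified by a triangular membership that saturates at a 2% gap
    # (both the shape and the 2% cutoff are illustrative assumptions).
    gap = (short_ma - long_ma) / long_ma
    strength = min(abs(gap) / 0.02, 1.0)
    side = 1 if gap > 0 else -1
    return int(round(side * strength * max_units))

prices = np.array([50, 51, 53, 54, 57, 60, 63, 66, 70, 75], dtype=float)
units = fuzzy_trade_volume(sma(prices, 3)[-1], sma(prices, 7)[-1])
```

A plain crossover would always trade the full position; here a weak crossover trades proportionally fewer units.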
Directory of Open Access Journals (Sweden)
Samer Salamekh
Full Text Available Analyze inter-fraction volumetric changes of lung tumors treated with stereotactic body radiation therapy (SBRT) and determine if the volume changes during treatment can be predicted and thus considered in treatment planning. Kilo-voltage cone-beam CT (kV-CBCT) images obtained immediately prior to each fraction were used to monitor inter-fraction volumetric changes of 15 consecutive patients (18 lung nodules) treated with lung SBRT at our institution (45-54 Gy in 3-5 fractions) in 2011-2012. Spearman's (ρ) correlation and Spearman's partial correlation analysis was performed with respect to patient/tumor and treatment characteristics. Multiple hypothesis correction was performed using the False Discovery Rate (FDR), and q-values were reported. All tumors studied experienced volume change during treatment. Tumors increased in volume by an average of 15% and regressed by an average of 11%. The overall volume increase during treatment is contained within the planning target volume (PTV) for all tumors. Larger tumors increased in volume more than smaller tumors during treatment (q = 0.0029). The volume increase on CBCT was correlated to the treatment planning gross target volume (GTV) as well as the internal target volume (ITV) (q = 0.0085 and q = 0.0039, respectively) and could be predicted for tumors with a GTV less than 22 mL. The volume increase was correlated to the integral dose (ID) in the ITV at every fraction (q = 0.0049). The peak inter-fraction volume occurred at an earlier fraction in younger patients (q = 0.0122). We introduced a new analysis method to follow inter-fraction tumor volume changes and determined that the observed changes during lung SBRT treatment are correlated to the initial tumor volume, integral dose (ID), and patient age. Furthermore, the volume increase during treatment of tumors less than 22 mL can be predicted during treatment planning. The volume increase remained significantly less than the overall PTV expansion, and radiation
On the evaluation of Hardy's thermomechanical quantities using ensemble and time averaging
International Nuclear Information System (INIS)
Fu, Yao; To, Albert C
2013-01-01
An ensemble averaging approach was investigated for its accuracy and convergence against time averaging in computing continuum quantities such as stress, heat flux and temperature from atomistic scale quantities. For this purpose, ensemble averaging and time averaging were applied to evaluate Hardy's thermomechanical expressions (Hardy 1982 J. Chem. Phys. 76 622–8) in equilibrium conditions at two different temperatures as well as a nonequilibrium process due to shock impact on a Ni crystal modeled using molecular dynamics simulations. It was found that under equilibrium conditions, time averaging requires selection of a time interval larger than the critical time interval to obtain convergence, where the critical time interval can be estimated using the elastic properties of the material. This is because of the significant correlations among the computed thermomechanical quantities at different time instants employed in computing their time average. On the other hand, the computed thermomechanical quantities from different realizations in ensemble averaging are statistically independent, and thus convergence is always guaranteed. The computed stress, heat flux and temperature show noticeable differences in their convergence behavior, while their confidence intervals increase with temperature. Contrary to equilibrium settings, time averaging is not equivalent to ensemble averaging in the case of shock wave propagation. Time averaging was shown to have poor performance in computing various thermomechanical fields by either oversmoothing the fields or failing to remove noise. (paper)
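The key point, that time averages of correlated samples converge more slowly than ensemble averages of independent realizations, can be illustrated with an AR(1) surrogate (an assumption for illustration; the paper's quantities come from molecular dynamics):

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_path(n, phi=0.95):
    # Stationary AR(1): successive samples are strongly correlated,
    # standing in for thermomechanical quantities at nearby time instants.
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + np.sqrt(1 - phi**2) * rng.normal()
    return x

n, reps = 400, 300
# Spread of the time average of one correlated trajectory ...
time_avg_std = np.std([ar1_path(n).mean() for _ in range(reps)])
# ... versus the spread of an average of n independent samples.
ens_avg_std = np.std([rng.normal(size=n).mean() for _ in range(reps)])
```

Both estimators use n samples of unit marginal variance, yet the correlated time average scatters several times more widely, mirroring the critical-time-interval requirement described above.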
Czech Academy of Sciences Publication Activity Database
Raftery, A. E.; Kárný, Miroslav; Ettler, P.
Volume 52, Number 1 (2010), s. 52-66 ISSN 0040-1706 R&D Projects: GA MŠk 1M0572; GA MŠk(CZ) 7D09008 Institutional research plan: CEZ:AV0Z10750506 Keywords : prediction * rolling mills * Bayesian Dynamic Averaging Subject RIV: BC - Control Systems Theory Impact factor: 1.560, year: 2010 http://library.utia.cas.cz/separaty/2010/AS/karny-0342595.pdf
Averaging in SU(2) open quantum random walk
Clement, Ampadu
2014-03-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.
Stochastic Simulation of Hourly Average Wind Speed in Umudike ...
African Journals Online (AJOL)
Ten years of hourly average wind speed data were used to build a seasonal autoregressive integrated moving average (SARIMA) model. The model was used to simulate hourly average wind speed and recommend possible uses at Umudike, South eastern Nigeria. Results showed that the simulated wind behaviour was ...
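As a hedged illustration of the modeling idea (not the fitted SARIMA model from the study), hourly wind speed can be simulated as an autoregressive deviation around a diurnal seasonal profile:

```python
import numpy as np

rng = np.random.default_rng(5)

hours = np.arange(24 * 365)
seasonal = 4.0 + 1.5 * np.sin(2 * np.pi * hours / 24)  # assumed diurnal profile, m/s

speed = np.empty(hours.size)
dev = 0.0
for h in hours:
    # AR(1) deviation around the seasonal mean; clipped at zero since
    # wind speed cannot be negative.
    dev = 0.8 * dev + rng.normal(0.0, 0.5)
    speed[h] = max(seasonal[h] + dev, 0.0)
```

All coefficients here are invented for the sketch; a real application would estimate the seasonal and autoregressive terms from the ten years of observed data.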
Average Weekly Alcohol Consumption: Drinking Percentiles for American College Students.
Meilman, Philip W.; And Others
1997-01-01
Reports a study that examined the average number of alcoholic drinks that college students (N=44,433) consumed per week. Surveys indicated that most students drank little or no alcohol on an average weekly basis. Only about 10% of the students reported consuming an average of 15 drinks or more per week. (SM)
Averaging in SU(2) open quantum random walk
International Nuclear Information System (INIS)
Ampadu Clement
2014-01-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT
Mosna, Ricardo A.; Saa, Alberto
2005-11-01
We reexamine here the issue of consistency of minimal action formulation with the minimal coupling procedure (MCP) in spaces with torsion. In Riemann-Cartan spaces, it is known that a proper use of the MCP requires that the trace of the torsion tensor be a gradient, T_μ = ∂_μθ, and that the modified volume element τ_θ = e^θ √g dx^1∧⋯∧dx^n be used in the action formulation of a physical model. We rederive this result here under considerably weaker assumptions, reinforcing some recent results about the inadequacy of propagating torsion theories of gravity to explain the available observational data. The results presented here also open the door to possible applications of the modified volume element in the geometric theory of crystalline defects.
Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems
Tobasco, Ian; Goluskin, David; Doering, Charles R.
2018-02-01
For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
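A one-dimensional toy version of the bounding idea (the system and auxiliary function are chosen for illustration): for ẋ = x − x³ and Φ = x², the time derivative of any differentiable V averages to zero along bounded trajectories, so avg Φ ≤ max_x [Φ(x) + V′(x)(x − x³)]; with V = x²/2 the bound is max_x (2x² − x⁴) = 1, which the trajectory attains:

```python
import numpy as np

def time_average_phi(x0=0.5, dt=1e-3, steps=200_000):
    # Forward-Euler integration of dx/dt = x - x**3, accumulating
    # the running time average of Phi(x) = x**2.
    x, total = x0, 0.0
    for _ in range(steps):
        x += dt * (x - x**3)
        total += x * x * dt
    return total / (steps * dt)

avg = time_average_phi()  # trajectory settles at x = 1, so the average -> 1
# A-priori bound from the auxiliary function V = x**2/2:
bound = max(2 * g**2 - g**4 for g in np.linspace(-2.0, 2.0, 4001))
```

Here a hand-picked quadratic V already gives a sharp bound; for polynomial systems like Lorenz, the paper's semidefinite-programming construction automates this search.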
Self-collimation and focusing effects in zero-average index metamaterials.
Pollès, Rémi; Centeno, Emmanuel; Arlandis, Julien; Moreau, Antoine
2011-03-28
One dimensional photonic crystals combining positive and negative index layers have been shown to present a photonic band gap insensitive to the period scaling when the volume average index vanishes. Defect modes lying in this zero-n gap can in addition be obtained without locally breaking the symmetry of the crystal lattice. In this work, index dispersion is shown to broaden the resonant frequencies, thereby creating a conduction band lying inside the zero-n gap. Self-collimation and focusing effects are in addition demonstrated in zero-average index metamaterials supporting defect modes. This beam shaping is explained in the framework of a beam propagation model by introducing a harmonic average index parameter.
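The zero-average-index condition itself is a simple thickness-weighted mean; a short sketch (with made-up layer values) shows why the gap survives period scaling:

```python
# Volume-averaged refractive index of a periodic stack of positive- and
# negative-index layers; the zero-n gap opens when this average vanishes.
def average_index(layers):
    """layers: list of (refractive_index, thickness) pairs."""
    total = sum(d for _, d in layers)
    return sum(n * d for n, d in layers) / total

# A negative-index layer (n = -2) paired with a dielectric (n = +1)
# twice as thick gives a vanishing volume-average index.
stack = [(-2.0, 1.0), (1.0, 2.0)]
avg_n = average_index(stack)
# Scaling every thickness by the same factor leaves the average at zero,
# which is why the gap is insensitive to the period.
scaled = [(n, 3 * d) for n, d in stack]
```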
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Energy Technology Data Exchange (ETDEWEB)
Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
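The EM iteration for BMA weights can be sketched for Gaussian member kernels with a common variance (a minimal reading of the Raftery et al. scheme; the toy forecasts and variable names are illustrative assumptions):

```python
import numpy as np

def bma_em(forecasts, obs, iters=200):
    """EM for BMA weights and variance with Gaussian member kernels.

    forecasts: (K, T) array of K model forecasts over T times;
    obs: (T,) verifying observations. Returns weights w (summing to 1)
    and a shared kernel variance s2.
    """
    K, T = forecasts.shape
    w = np.full(K, 1.0 / K)
    s2 = np.var(obs - forecasts.mean(axis=0))
    for _ in range(iters):
        # E-step: responsibility of model k for observation t.
        dens = np.exp(-0.5 * (obs - forecasts) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
        z = w[:, None] * dens
        z /= z.sum(axis=0, keepdims=True)
        # M-step: reweight models and refit the shared variance.
        w = z.mean(axis=1)
        s2 = np.sum(z * (obs - forecasts) ** 2) / T
    return w, s2

rng = np.random.default_rng(2)
truth = rng.normal(size=300)
forecasts = np.stack([truth + rng.normal(0, 0.2, 300),   # accurate member
                      truth + rng.normal(0, 2.0, 300)])  # poor member
w, s2 = bma_em(forecasts, truth)
```

As expected, EM concentrates nearly all the weight on the accurate ensemble member; the DREAM comparison in the paper targets exactly these weights and variances by MCMC instead.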
Energy Technology Data Exchange (ETDEWEB)
Page, G.B.
1980-04-01
The report contained in this volume considers the availability of electric power to supply uranium mines and mills. The report, submitted to Sandia Laboratories by the New Mexico Department of Energy and Minerals (EMD), is reproduced without modification. The state concludes that the supply of power, including natural gas-fueled production, will not constrain uranium production.
International Nuclear Information System (INIS)
Page, G.B.
1980-04-01
The report contained in this volume considers the availability of electric power to supply uranium mines and mills. The report, submitted to Sandia Laboratories by the New Mexico Department of Energy and Minerals (EMD), is reproduced without modification. The state concludes that the supply of power, including natural gas-fueled production, will not constrain uranium production
Gray, William G; Miller, Cass T
2010-12-01
This work is the eighth in a series that develops the fundamental aspects of the thermodynamically constrained averaging theory (TCAT) that allows for a systematic increase in the scale at which multiphase transport phenomena is modeled in porous medium systems. In these systems, the explicit locations of interfaces between phases and common curves, where three or more interfaces meet, are not considered at scales above the microscale. Rather, the densities of these quantities arise as areas per volume or length per volume. Modeling of the dynamics of these measures is an important challenge for robust models of flow and transport phenomena in porous medium systems, as the extent of these regions can have important implications for mass, momentum, and energy transport between and among phases, and formulation of a capillary pressure relation with minimal hysteresis. These densities do not exist at the microscale, where the interfaces and common curves correspond to particular locations. Therefore, it is necessary for a well-developed macroscale theory to provide evolution equations that describe the dynamics of interface and common curve densities. Here we point out the challenges and pitfalls in producing such evolution equations, develop a set of such equations based on averaging theorems, and identify the terms that require particular attention in experimental and computational efforts to parameterize the equations. We use the evolution equations developed to specify a closed two-fluid-phase flow model.
Averaging and sampling for magnetic-observatory hourly data
Directory of Open Access Journals (Sweden)
J. J. Love
2010-11-01
Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
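The contrast between spot and boxcar hourly values is easy to demonstrate on synthetic 1-min data (an illustrative signal, not observatory data): a sub-hourly oscillation passes straight into spot samples, where it aliases, while a 1-h boxcar average suppresses it:

```python
import numpy as np

t = np.arange(24 * 60)                      # one day of 1-min samples
slow = 50 * np.sin(2 * np.pi * t / (24 * 60))
fast = 5 * np.sin(2 * np.pi * t / 7)        # 7-min oscillation: unresolvable hourly
by_hour = (slow + fast).reshape(24, 60)

spot = by_hour[:, 0]                        # instantaneous "spot" value on the hour
boxcar = by_hour.mean(axis=1)               # simple 1-h boxcar average

# Residual fast signal leaking into each hourly product:
spot_leak = np.abs(spot - slow.reshape(24, 60)[:, 0]).max()
box_leak = np.abs(boxcar - slow.reshape(24, 60).mean(axis=1)).max()
```

The boxcar is not free either: it slightly distorts the amplitude of the resolved slow variation, which is the trade-off the analysis above quantifies.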
RADTRAN 4: User guide. Volume 3
Energy Technology Data Exchange (ETDEWEB)
Neuhauser, K S [Sandia National Labs., Albuquerque, NM (United States); Kanipe, F L [GRAM, Inc., Albuquerque, NM (United States)
1992-01-01
RADTRAN 4 is used to evaluate radiological consequences of incident-free transportation, as well as the radiological risks from vehicular accidents occurring during transportation. This User Guide is Volume 3 in the four-volume documentation of the RADTRAN 4 computer code for transportation risk analysis. The other three volumes are Volume 1, the Executive Summary; Volume 2, the Technical Manual; and Volume 4, the Programmer's Manual. The theoretical and calculational basis for the operations performed by RADTRAN 4 is discussed in Volume 2. Throughout this User Guide the reader will be referred to Volume 2 for detailed discussions of certain RADTRAN features. This User Guide supersedes the document "RADTRAN III" by Madsen et al. (1983). This RADTRAN 4 User Guide specifies and describes the required data, control inputs, input sequences, user options, program limitations, and other activities necessary for execution of the RADTRAN 4 computer code.
Mahmud, A. A.; Hixson, M.; Zhao, Z.; Chen, S.; Kleeman, M. J.
2009-12-01
Climate change will transform meteorological patterns with unknown consequences for air quality in California. California’s extreme topography requires higher spatial resolution for climate-air quality studies compared to other regions of the United States. At the same time, the 7-year ENSO cycle requires long analysis periods in order to quantify climate impacts. The combination of these challenges results in a computationally intensive modeling problem that limits our ability to fully analyze climate impacts on California air quality. One possible approach to reduce this computational burden is to average several years of meteorological fields and then use these average inputs in a single set of air quality runs. The interactions between meteorology and air quality are non-linear, and so the averaging approach may introduce biases that need to be quantified. The objective of this research is to evaluate how upstream averaging of meteorological fields over several years influences air quality predictions in California. Hourly meteorological fields will be averaged over 7-years in the present-day (2000-2006) and the future (2047-2053). The meteorology for each period was down-scaled using the Weather Research and Forecasting (WRF) model from the business-as-usual output generated by the Parallel Climate Model (PCM). Emissions of biogenic and mobile-source volatile organic compounds (VOC) will be processed using meteorological fields from individual years, and using the averaged meteorological data. The UCD source-oriented photochemical air quality model will be employed to study the global climate change effects on the annual average concentrations of fine particulate matter (PM2.5) throughout the entire state of California. The model predicts the size and composition distribution of airborne particulate matter in 15 size bins spanning the diameter range from 10 nm to 10 µm. The modeled concentrations from individual years will be averaged and compared with the concentrations
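Why averaging meteorological fields before running a nonlinear model can bias the result is Jensen's inequality in action; a toy convex "emission response" (purely illustrative, not the UCD model) makes the sign and size of the effect concrete:

```python
import numpy as np

rng = np.random.default_rng(6)

def response(temp):
    # Convex stand-in for a temperature-driven emission/chemistry response.
    return np.exp(0.1 * temp)

# Seven "years" of hourly temperatures with year-to-year variability.
years = 20.0 + rng.normal(0.0, 4.0, size=(7, 8760))

per_year = response(years).mean()              # run each year, then average
avg_met = response(years.mean(axis=0)).mean()  # average the met fields first

bias = avg_met - per_year  # negative: the averaged-input shortcut underpredicts
```

Averaging the inputs damps the year-to-year extremes that a convex response amplifies, which is exactly the kind of bias the study sets out to quantify.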
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering
Sicat, Ronell Barrera
2014-12-31
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
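The core consistency argument can be shown in miniature (with a made-up block and a binary transfer function): applying the transfer function to the block's intensity pdf reproduces the average of the full-resolution rendering, whereas applying it to a down-sampled mean intensity does not:

```python
import numpy as np

rng = np.random.default_rng(3)
block = rng.choice([0.1, 0.9], size=(4, 4, 4))   # one bimodal 4x4x4 voxel block

def tf(v):
    # Binary transfer function: map bright voxels to opacity 1.
    return (np.asarray(v) > 0.5).astype(float)

naive = tf(block.mean())                    # down-sample first, then apply tf
vals, counts = np.unique(block, return_counts=True)
pdf_based = float((tf(vals) * counts).sum() / counts.sum())
reference = float(tf(block).mean())         # full-resolution answer
```

The paper's contribution is storing such per-neighborhood pdfs sparsely in 4D so this consistency holds across all resolution levels at practical memory cost.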
Dictionary Based Segmentation in Volumes
DEFF Research Database (Denmark)
Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley
2015-01-01
We present a method for supervised volumetric segmentation based on a dictionary of small cubes composed of pairs of intensity and label cubes. Intensity cubes are small image volumes where each voxel contains an image intensity. Label cubes are volumes with voxelwise probabilities for a given label. The segmentation process is done by matching a cube from the volume, of the same size as the dictionary intensity cubes, to the most similar intensity dictionary cube, and from the associated label cube we get voxel-wise label probabilities. Probabilities from overlapping cubes are averaged, and hereby we obtain a robust label probability encoding. The dictionary is computed from labeled volumetric image data based on weighted clustering. We experimentally demonstrate our method using two data sets from material science: a phantom data set of a solid oxide fuel cell simulation for detecting...
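A minimal numpy version of the matching-and-averaging step (the dictionary here is two hand-made cubes, not the clustered dictionary of the paper):

```python
import numpy as np

def segment_voxel_probs(volume, intensity_cubes, label_cubes, cube=3):
    """Match each cube-sized patch to its nearest dictionary intensity
    cube, accumulate the paired label probabilities, and average the
    contributions of overlapping patches per voxel."""
    probs = np.zeros(volume.shape)
    counts = np.zeros(volume.shape)
    z, y, x = volume.shape
    for i in range(z - cube + 1):
        for j in range(y - cube + 1):
            for k in range(x - cube + 1):
                patch = volume[i:i+cube, j:j+cube, k:k+cube]
                # nearest dictionary entry by squared intensity distance
                d = ((intensity_cubes - patch) ** 2).sum(axis=(1, 2, 3))
                best = d.argmin()
                probs[i:i+cube, j:j+cube, k:k+cube] += label_cubes[best]
                counts[i:i+cube, j:j+cube, k:k+cube] += 1
    return probs / counts

# Tiny two-entry dictionary: a dark "background" cube and a bright
# "foreground" cube with matching label probabilities.
dict_int = np.stack([np.zeros((3, 3, 3)), np.ones((3, 3, 3))])
dict_lab = np.stack([np.zeros((3, 3, 3)), np.ones((3, 3, 3))])
vol = np.zeros((6, 6, 6)); vol[:, :, 3:] = 1.0   # right half bright
p = segment_voxel_probs(vol, dict_int, dict_lab)
```

Voxels deep inside each half get probability 0 or 1, while voxels near the boundary get intermediate values from the overlapping-patch average, which is the robustness effect described above.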
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
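The sequential-optimization idea scales far beyond what brute force over trees could reach, but its core recursion fits in a few lines. This sketch recovers the classical value 8/3 for n = 3 (the n = 8 value 620160/8! needs the paper's more elaborate machinery):

```python
from fractions import Fraction
from functools import lru_cache
from itertools import permutations

@lru_cache(maxsize=None)
def min_avg_depth(perms):
    """Minimum average depth of a comparison (decision) tree that
    distinguishes the given consistent orderings, by dynamic
    programming over sets of still-possible permutations, assuming
    all input orderings are equally likely."""
    if len(perms) == 1:
        return Fraction(0)
    n, best = len(perms[0]), None
    for i in range(n):
        for j in range(i + 1, n):
            # split permutations by the outcome of comparing items i and j
            lo = tuple(p for p in perms if p.index(i) < p.index(j))
            hi = tuple(p for p in perms if p.index(i) > p.index(j))
            if not lo or not hi:
                continue  # this comparison yields no information here
            cost = 1 + (len(lo) * min_avg_depth(lo)
                        + len(hi) * min_avg_depth(hi)) / len(perms)
            if best is None or cost < best:
                best = cost
    return best

avg3 = min_avg_depth(tuple(sorted(permutations(range(3)))))
```

Each memoized state is a set of orderings still consistent with the comparisons made so far, exactly the kind of extension of dynamic programming the abstract refers to.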
A novel approach for the averaging of magnetocardiographically recorded heart beats
Energy Technology Data Exchange (ETDEWEB)
DiPietroPaolo, D [Advanced Technologies Biomagnetics, Pescara (Italy); Mueller, H-P [Division for Biosignals and Imaging Technologies, Central Institute for Biomedical Engineering, Ulm University, D-89069 Ulm (Germany); Erne, S N [Division for Biosignals and Imaging Technologies, Central Institute for Biomedical Engineering, Ulm University, D-89069 Ulm (Germany)
2005-05-21
Performing signal averaging in an efficient and correct way is indispensable since it is a prerequisite for a broad variety of magnetocardiographic (MCG) analysis methods. Signal averaging to increase the signal-to-noise ratio (SNR) in magnetocardiography, as in electrocardiography (ECG), is most commonly performed by means of spatial or temporal techniques. In this paper, an improvement of the temporal averaging method is presented. To obtain accurate signal detection, temporal alignment methods and objective classification criteria are developed. A processing technique based on hierarchical clustering is introduced to take into account the non-stationarity of the noise and, to some extent, the biological variability of the signals, reaching the optimum SNR. The method implemented is especially designed to run fast and does not require any interaction from the operator. The averaging procedure described in this work is applied to the averaging of MCG data as an example, but with its intrinsic properties it can also be applied to the averaging of ECG recordings, body-surface-potential mapping (BSPM) and magnetoencephalographic (MEG) or electroencephalographic (EEG) signals.
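The payoff of temporal alignment before averaging can be seen with synthetic beats (a Gaussian pulse standing in for a heart beat; the alignment here is a crude peak match, not the paper's clustering pipeline):

```python
import numpy as np

rng = np.random.default_rng(4)
template = np.exp(-0.5 * ((np.arange(200) - 100) / 8.0) ** 2)  # pulse "beat"

def record_beat(jitter):
    # A triggered beat: the pulse arrives early or late and rides on noise.
    return np.roll(template, jitter) + rng.normal(0.0, 0.2, 200)

def align(beat):
    # Crude temporal alignment: shift the peak sample back to center.
    return np.roll(beat, 100 - int(beat.argmax()))

beats = [record_beat(int(j)) for j in rng.integers(-10, 11, 64)]
avg_naive = np.mean(beats, axis=0)                  # trigger jitter smears the peak
avg_aligned = np.mean([align(b) for b in beats], axis=0)
```

Averaging without alignment broadens and flattens the pulse; aligning first preserves its amplitude while still suppressing the noise, which is the motivation for the alignment and classification steps above.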
Waste management system requirements document
International Nuclear Information System (INIS)
1991-02-01
This volume defines the top level requirements for the Mined Geologic Disposal System (MGDS). It is designed to be used in conjunction with Volume 1 of the WMSR, General System Requirements. It provides a functional description expanding the requirements allocated to the MGDS in Volume 1 and elaborates on each requirement by providing associated performance criteria as appropriate. Volumes 1 and 4 of the WMSR provide a minimum set of requirements that must be satisfied by the final MGDS design. This document sets forth specific requirements that must be fulfilled. It is not the intent or purpose of this top level document to describe how each requirement is to be satisfied in the final MGDS design. Each subsequent level of the technical document hierarchy must provide further guidance and definition as to how each of these requirements is to be implemented in the design. It is expected that each subsequent level of requirements will be significantly more detailed. Section 2 of this volume provides a functional description of the MGDS. Each function is addressed in terms of requirements and performance criteria. Section 3 provides a list of controlling documents. Each document cited in a requirement of Section 2 is included in this list and is incorporated into this document as a requirement on the final system. The WMSR addresses only federal requirements (i.e., laws, regulations and DOE orders). State and local requirements are not addressed. However, it will be specifically noted at the potentially affected WMSR requirements that additional or more stringent requirements could be imposed by state or local regulations or an administering agency beyond the cited federal requirements
Development of Automatic Visceral Fat Volume Calculation Software for CT Volume Data
Directory of Open Access Journals (Sweden)
Mitsutaka Nemoto
2014-01-01
Full Text Available Objective. To develop automatic visceral fat volume calculation software for computed tomography (CT) volume data and to evaluate its feasibility. Methods. A total of 24 sets of whole-body CT volume data and anthropometric measurements were obtained, with three sets for each of four BMI categories (under 20, 20 to 25, 25 to 30, and over 30) in both sexes. True visceral fat volumes were defined on the basis of manual segmentation of the whole-body CT volume data by an experienced radiologist. Software to automatically calculate visceral fat volumes was developed using a region segmentation technique based on morphological analysis with CT value threshold. Automatically calculated visceral fat volumes were evaluated in terms of the correlation coefficient with the true volumes and the error relative to the true volume. Results. Automatic visceral fat volume calculation results of all 24 data sets were obtained successfully and the average calculation time was 252.7 seconds/case. The correlation coefficients between the true visceral fat volume and the automatically calculated visceral fat volume were over 0.999. Conclusions. The newly developed software is feasible for calculating visceral fat volumes in a reasonable time and was proved to have high accuracy.
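The thresholding core of such software is compact; the Hounsfield-unit fat window used here ([-190, -30]) and the pre-computed visceral mask are common conventions assumed for illustration, not details taken from this paper:

```python
import numpy as np

def visceral_fat_volume_ml(ct, visceral_mask, voxel_mm3, lo=-190, hi=-30):
    # Count voxels whose CT value falls in the assumed fat window and
    # that lie inside the visceral region, then convert to milliliters.
    fat = (ct >= lo) & (ct <= hi) & visceral_mask
    return fat.sum() * voxel_mm3 / 1000.0   # mm^3 -> mL

# Synthetic 20x20x20 volume: a fat block of 1000 voxels at -100 HU,
# everything else soft tissue at +40 HU, with a trivial all-true mask.
ct = np.full((20, 20, 20), 40.0)
ct[:10, :10, :10] = -100.0
mask = np.ones_like(ct, dtype=bool)
vol_ml = visceral_fat_volume_ml(ct, mask, voxel_mm3=1.0)
```

In the actual software the visceral mask itself comes from the morphological analysis step, which is the hard part the paper automates.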
Ayers, Gregory D; McKinley, Eliot T; Zhao, Ping; Fritz, Jordan M; Metry, Rebecca E; Deal, Brenton C; Adlerz, Katrina M; Coffey, Robert J; Manning, H Charles
2010-06-01
The volume of subcutaneous xenograft tumors is an important metric of disease progression and response to therapy in preclinical drug development. Noninvasive imaging technologies suitable for measuring xenograft volume are increasingly available, yet manual calipers, which are susceptible to inaccuracy and bias, are routinely used. The goal of this study was to quantify and compare the accuracy, precision, and inter-rater variability of xenograft tumor volume assessment by caliper measurements and ultrasound imaging. Subcutaneous xenograft tumors derived from human colorectal cancer cell lines (DLD1 and SW620) were generated in athymic nude mice. Experienced independent reviewers segmented 3-dimensional ultrasound data sets and collected manual caliper measurements resulting in tumor volumes. Imaging- and caliper-derived volumes were compared with the tumor mass, the reference standard, determined after resection. Bias, precision, and inter-rater differences were estimated for each mouse among reviewers. Bootstrapping was used to estimate mean and confidence intervals of variance components, intraclass correlation coefficients (ICCs), and confidence intervals for each source of variation. The average deviation from the true volume and inter-rater differences were significantly lower for ultrasound volumes compared with caliper volumes (P = .0005 and .001, respectively). Reviewer ICCs for ultrasound and caliper measurements were similarly low (1%), yet caliper volume variance was 1.3-fold higher than for ultrasound. Ultrasound imaging more accurately, precisely, and reproducibly reflects xenograft tumor volume than caliper measurements. These data suggest that preclinical studies using the xenograft burden as a surrogate end point measured by ultrasound imaging require up to 30% fewer animals to reach statistical significance compared with analogous studies using caliper measurements.
SPATIAL DISTRIBUTION OF THE AVERAGE RUNOFF IN THE IZA AND VIȘEU WATERSHEDS
Directory of Open Access Journals (Sweden)
HORVÁTH CS.
2015-03-01
Full Text Available The average runoff represents the main parameter with which one can best evaluate an area's water resources, and it is also an important characteristic in all river runoff research. In this paper we chose a GIS methodology for assessing the spatial evolution of the average runoff; using validity curves we identified three validity areas in which the runoff changes differently with altitude. The three curves were charted using the average runoff values of 16 hydrometric stations from the area, eight in the Vișeu and eight in the Iza river catchment. Identifying the appropriate areas of the obtained correlation curves (between specific average runoff and catchment mean altitude) allowed the assessment of potential runoff at catchment level and on altitudinal intervals. By integrating the curve functions into GIS we created an average runoff map for the area, from which one can easily extract runoff data using GIS spatial analyst functions. The study shows that of the three areas the highest runoff corresponds to the third zone, but because of its small area the water volume is also minor. It is also shown that with the use of the created runoff map we can compute relatively quickly correct runoff values for areas without hydrologic control.
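The altitude–runoff regression step can be sketched as follows. The station values below are invented, and a single linear curve stands in for the paper's three per-zone validity curves; the point is only how a fitted curve is applied cell-by-cell to a DEM to produce a runoff map.

```python
import numpy as np

# hypothetical station data: mean catchment altitude (m) vs specific runoff (l/s/km^2)
alt = np.array([400., 600., 800., 1000., 1200., 1400., 1600., 1800.])
q   = np.array([5.0, 8.0, 11.5, 14.0, 17.5, 21.0, 24.0, 27.5])

# fit the runoff-altitude relation q = a + b*H by least squares
b, a = np.polyfit(alt, q, 1)

# toy "DEM" of mean cell altitudes; applying the curve gives the runoff map
dem = np.array([[500., 900.], [1300., 1700.]])
runoff_map = a + b * dem
print(runoff_map.round(1))
```

In a GIS this last step is the map-algebra operation applied to the altitude raster, zone by zone, with each zone using its own fitted curve.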
A simple consensus algorithm for distributed averaging in random ...
Indian Academy of Sciences (India)
http://www.ias.ac.in/article/fulltext/pram/079/03/0493-0499. Keywords. Sensor networks; random geographical networks; distributed averaging; consensus algorithms. Abstract. Random geographical networks are realistic models for wireless sensor networks which are used in many applications. Achieving average ...
20 CFR 404.221 - Computing your average monthly wage.
2010-04-01
... is rounded down to $502. (e) “Deemed” average monthly wage for certain deceased veterans of World War II. Certain deceased veterans of World War II are “deemed” to have an average monthly wage of $160... years; or (iii) 1974, we count the years beginning with 1951 and ending with the year before you reached...
Averaged EMG profiles in jogging and running at different speeds
Gazendam, Marnix G. J.; Hof, At L.
EMGs were collected from 14 muscles with surface electrodes in 10 subjects walking at 1.25–2.25 m s⁻¹ and running at 1.25–4.5 m s⁻¹. The EMGs were rectified, interpolated in 100% of the stride, and averaged over all subjects to give an average profile. In running, these profiles could be decomposed
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
Exact Membership Functions for the Fuzzy Weighted Average
van den Broek, P.M.; Noppen, J.A.R.
2011-01-01
The problem of computing the fuzzy weighted average, where both attributes and weights are fuzzy numbers, is well studied in the literature. Generally, the approach is to apply Zadeh’s extension principle to compute α-cuts of the fuzzy weighted average from the α-cuts of the attributes and weights
The Average Covering Tree Value for Directed Graph Games
Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.
2012-01-01
Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all
Interpreting Bivariate Regression Coefficients: Going beyond the Average
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
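The note's central observation can be illustrated directly: each classical average is the intercept of a least-squares regression on a constant, applied to a suitable transform of the data. A minimal numpy sketch (sample values and weights invented):

```python
import numpy as np

y = np.array([2.0, 4.0, 8.0])
ones = np.ones_like(y)

# Arithmetic mean: intercept of an OLS regression of y on a constant
arith = np.linalg.lstsq(ones[:, None], y, rcond=None)[0][0]

# Geometric mean: run the same regression on log(y), then exponentiate
geom = np.exp(np.linalg.lstsq(ones[:, None], np.log(y), rcond=None)[0][0])

# Harmonic mean: regress 1/y on a constant, then invert the intercept
harm = 1.0 / np.linalg.lstsq(ones[:, None], 1.0 / y, rcond=None)[0][0]

# Weighted average: weighted least squares with a constant regressor
w = np.array([1.0, 2.0, 1.0])
sw = np.sqrt(w)
weighted = np.linalg.lstsq((sw * ones)[:, None], sw * y, rcond=None)[0][0]

print(arith, geom, harm, weighted)  # → 4.666..., 4.0, 3.428..., 4.5
```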
18 CFR 301.7 - Average System Cost methodology functionalization.
2010-04-01
... SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each Account... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost...
Naumchev, Alexandr; Meyer, Bertrand
2017-01-01
Popular notations for functional requirements specifications frequently ignore developers' needs, target specific development models, or require translation of requirements into tests for verification; the results can be out-of-sync or downright incompatible artifacts. Seamless Requirements, a new approach to specifying functional requirements, contributes to developers' understanding of requirements and to software quality regardless of the process, while the process itself becomes lighter...
Bootstrapping pre-averaged realized volatility under market microstructure noise
DEFF Research Database (Denmark)
Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour
The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...
Super convergence of ergodic averages for quasiperiodic orbits
Das, Suddhasattwa; Yorke, James A.
2018-02-01
The Birkhoff ergodic theorem asserts that, for ergodic dynamical systems, time averages of a function f evaluated along a trajectory of length N converge to the space average, the integral of f, as N → ∞. But that convergence can be slow. Instead of uniform averages that assign equal weights to points along the trajectory, we use an average with a non-uniform distribution of weights, weighting the early and late points of the trajectory much less than those near the midpoint N/2. We show that in quasiperiodic dynamical systems, our weighted averages converge far faster provided f is sufficiently differentiable. This result can be applied to obtain efficient numerical computation of rotation numbers, invariant densities and conjugacies of quasiperiodic systems.
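The weighting scheme described above can be sketched for a circle rotation. The exponential bump weight w(t) = exp(−1/(t(1−t))), vanishing to all orders at t = 0 and t = 1, is one standard choice from the weighted-Birkhoff literature and is an assumption here, as is the golden-mean rotation used as the test system.

```python
import math

def weighted_average(f, alpha, n):
    """Weighted time average of f along the orbit x -> x + alpha (mod 1),
    with bump weights that de-emphasise the start and end of the orbit."""
    w = [math.exp(-1.0 / (t * (1.0 - t)))
         for t in ((k + 0.5) / n for k in range(n))]
    x, s = 0.0, 0.0
    for k in range(n):
        s += w[k] * f(x)
        x = (x + alpha) % 1.0
    return s / sum(w)

alpha = (math.sqrt(5) - 1) / 2            # golden-mean rotation number
f = lambda x: math.cos(2 * math.pi * x)   # space average over [0,1) is 0

# far closer to 0 than the uniform average of the same orbit
print(abs(weighted_average(f, alpha, 2000)))
```

For comparison, the uniform average of the same 2000 orbit points decays only like 1/N; the weighted average reaches many more digits of accuracy at the same N.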
Some series of intuitionistic fuzzy interactive averaging aggregation operators.
Garg, Harish
2016-01-01
In this paper, some series of new intuitionistic fuzzy averaging aggregation operators have been presented under the intuitionistic fuzzy sets environment. For this, some shortcomings of the existing operators are firstly highlighted, and then new operational laws, which consider the hesitation degree between the membership functions, have been proposed to overcome these. Based on these new operational laws, some new averaging aggregation operators, namely the intuitionistic fuzzy Hamacher interactive weighted averaging, ordered weighted averaging and hybrid weighted averaging operators, labeled IFHIWA, IFHIOWA and IFHIHWA respectively, have been proposed. Furthermore, some desirable properties such as idempotency, boundedness and homogeneity are studied. Finally, a multi-criteria decision making method has been presented based on the proposed operators for selecting the best alternative. A comparison between the proposed operators and the existing operators is investigated in detail.
Changes in the air cell volume of artificially incubated ostrich eggs ...
African Journals Online (AJOL)
A total of 2160 images of candled, incubated ostrich eggs were digitized to determine the percentage of egg volume occupied by the air cell at different stages of incubation. The air cell on average occupied 2.5% of the volume of fresh eggs. For eggs that hatched successfully, this volume increased to an average of 24.4% ...
The growth of the mean average crossing number of equilateral polygons in confinement
International Nuclear Information System (INIS)
Arsuaga, J; Borgo, B; Scharein, R; Diao, Y
2009-01-01
The physical and biological properties of collapsed long polymer chains as well as of highly condensed biopolymers (such as DNA in all organisms) are known to be determined, at least in part, by their topological and geometrical properties. For the purpose of characterizing the topological properties of such condensed systems, equilateral random polygons restricted to confined volumes are often used. However, very few analytical results are known. In this paper, we investigate the effect of volume confinement on the mean average crossing number (ACN) of equilateral random polygons. The mean ACN of knots and links under confinement provides a simple alternative measurement of the topological complexity of knots and links in the statistical sense. For an equilateral random polygon of n segments without any volume confinement constraint, it is known that its mean ACN ⟨ACN⟩ is of the order (3/16) n log n + O(n). Here we model the confining volume as a simple sphere of radius R. We provide an analytical argument which shows that ⟨ACN⟩ of an equilateral random polygon of n segments under extreme confinement (small R) grows as O(n²). We propose to model the growth of ⟨ACN⟩ as a(R)n² + b(R)n ln(n) under a less-extreme confinement condition, where a(R) and b(R) are functions of R, with R being the radius of the confining sphere. Computer simulations performed show a fairly good fit using this model.
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log₂ k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches the maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and kⁿ pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
Application of Depth-Averaged Velocity Profile for Estimation of Longitudinal Dispersion in Rivers
Directory of Open Access Journals (Sweden)
Mohammad Givehchi
2010-01-01
Full Text Available River bed profiles and depth-averaged velocities are used as basic data in empirical and analytical equations for estimating the longitudinal dispersion coefficient which has always been a topic of great interest for researchers. The simple model proposed by Maghrebi is capable of predicting the normalized isovel contours in the cross section of rivers and channels as well as the depth-averaged velocity profiles. The required data in Maghrebi’s model are bed profile, shear stress, and roughness distributions. Comparison of depth-averaged velocities and longitudinal dispersion coefficients observed in the field data and those predicted by Maghrebi’s model revealed that Maghrebi’s model had an acceptable accuracy in predicting depth-averaged velocity.
Cauble, Galen D.; Wayne, David T.
2017-09-01
The growth of optical communication has created a need to correctly characterize the atmospheric channel. Atmospheric turbulence along a given channel can drastically affect optical communication signal quality. One means of characterizing atmospheric turbulence is through measurement of the refractive index structure parameter, Cn². When calculating Cn² from the scintillation index, σI², the point aperture scintillation index is required. Direct measurement of the point aperture scintillation index is difficult at long ranges due to the light collecting abilities of small apertures. When aperture size is increased past the atmospheric correlation width, aperture averaging decreases the scintillation index below that of the point aperture scintillation index. While the aperture averaging factor can be calculated from theory, it does not often agree with experimental results. Direct measurement of the aperture averaging factor via the pupil plane irradiance covariance function allows conversion from the aperture averaged scintillation index to the point aperture scintillation index. Using a finite aperture, camera, and detector, the aperture averaged scintillation index and aperture averaging factor are measured in parallel and the point aperture scintillation index is calculated. A new instrument built by SSC Pacific was used to collect scintillation data at the Townes Institute Science and Technology Experimentation Facility (TISTEF). This new instrument's data were then compared to BLS900 data. The results show that direct measurement of the aperture averaging factor is achievable using a camera and matches well with ground-truth instrumentation.
Average action for the N-component φ⁴ theory
International Nuclear Information System (INIS)
Ringwald, A.; Wetterich, C.
1990-01-01
The average action is a continuum version of the block spin action in lattice field theories. We compute the one-loop approximation to the average potential for the N-component φ⁴ theory in the spontaneously broken phase. For a finite (linear) block size ∝ k̄⁻¹ this potential is real and nonconvex. For small φ the average potential is quadratic, U_k = −½k̄²φ², and independent of the original mass parameter and quartic coupling constant. It approaches the convex effective potential as k̄ vanishes. (orig.)
Simultaneous inference for model averaging of derived parameters
DEFF Research Database (Denmark)
Jensen, Signe Marie; Ritz, Christian
2015-01-01
Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
Salecker-Wigner-Peres clock and average tunneling times
Energy Technology Data Exchange (ETDEWEB)
Lunardi, Jose T., E-mail: jttlunardi@uepg.b [Departamento de Matematica e Estatistica, Universidade Estadual de Ponta Grossa, Av. General Carlos Cavalcanti, 4748. Cep 84030-000, Ponta Grossa, PR (Brazil); Manzoni, Luiz A., E-mail: manzoni@cord.ed [Department of Physics, Concordia College, 901 8th St. S., Moorhead, MN 56562 (United States); Nystrom, Andrew T., E-mail: atnystro@cord.ed [Department of Physics, Concordia College, 901 8th St. S., Moorhead, MN 56562 (United States)
2011-01-17
The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated to the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).
Averaging underwater noise levels for environmental assessment of shipping.
Merchant, Nathan D; Blondel, Philippe; Dakin, D Tom; Dorocicz, John
2012-10-01
Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ~10⁷ 1-s samples in selected 1/3 octave bands were approximately stationary across one-month subsamples. Median and mode levels varied with averaging time. Mean sound pressure levels averaged in linear space, though susceptible to strong bias from outliers, are most relevant to cumulative impact assessment metrics.
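The distinction between averaging in linear (intensity) space and averaging the dB values directly can be made concrete; the sample levels below are invented:

```python
import math

def mean_spl_linear(levels_db):
    """Convert dB to linear intensity, take the arithmetic mean, convert back to dB."""
    linear = [10 ** (l / 10.0) for l in levels_db]
    return 10.0 * math.log10(sum(linear) / len(linear))

levels = [95.0, 100.0, 120.0]          # 1-s SPL samples, dB re 1 uPa
print(mean_spl_linear(levels))          # ~115.3 dB, dominated by the 120 dB sample
print(sum(levels) / len(levels))        # plain dB average: 105.0
```

The ten-decibel gap between the two results is the outlier sensitivity the abstract refers to: the linear-space mean tracks the loudest samples, which is exactly why it is the relevant quantity for cumulative exposure metrics.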
The asymptotic average-shadowing property and transitivity for flows
International Nuclear Information System (INIS)
Gu Rongbao
2009-01-01
The asymptotic average-shadowing property is introduced for flows and the relationships between this property and transitivity for flows are investigated. It is shown that a flow on a compact metric space is chain transitive if it has positively (or negatively) asymptotic average-shadowing property and a positively (resp. negatively) Lyapunov stable flow is positively (resp. negatively) topologically transitive provided it has positively (resp. negatively) asymptotic average-shadowing property. Furthermore, two conditions for which a flow is a minimal flow are obtained.
Energy Technology Data Exchange (ETDEWEB)
Beamesderfer, Raymond C.; Nigro, Anthony A. [Oregon Dept. of Fish and Wildlife, Clackamas, OR (US)
1995-01-01
This is the final report for research on white sturgeon Acipenser transmontanus conducted from 1986–92 by the National Marine Fisheries Service (NMFS), Oregon Department of Fish and Wildlife (ODFW), US Fish and Wildlife Service (USFWS), and Washington Department of Fisheries (WDF). Findings are presented as a series of papers, each detailing objectives, methods, results, and conclusions for a portion of this research. This volume includes supplemental papers which provide background information needed to support results of the primary investigations addressed in Volume 1. This study addresses measure 903(e)(1) of the Northwest Power Planning Council's 1987 Fish and Wildlife Program, which calls for "research to determine the impact of development and operation of the hydropower system on sturgeon in the Columbia River Basin." Study objectives correspond to those of the "White Sturgeon Research Program Implementation Plan" developed by BPA and approved by the Northwest Power Planning Council in 1985. Work was conducted on the Columbia River from McNary Dam to the estuary.
on the performance of Autoregressive Moving Average Polynomial
African Journals Online (AJOL)
Timothy Ademakinwa
Using a numerical example, DL, PDL, ARPDL and ARMAPDL models were fitted. The Autoregressive Moving Average Polynomial Distributed Lag (ARMAPDL) model performed better than the other models. Keywords: Distributed Lag Model, Selection Criterion, Parameter Estimation, Residual Variance.
Safety Impact of Average Speed Control in the UK
DEFF Research Database (Denmark)
Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert
2016-01-01
There is considerable safety potential in ensuring that motorists respect the speed limits. High speeds increase the number and severity of accidents. Technological development over the last 20 years has enabled the development of systems that allow automatic speed control. The first generation … or section control. This article discusses the different methods for automatic speed control and presents an evaluation of the safety effects of average speed control, documented through changes in speed levels and accidents before and after the implementation of average speed control at selected sites in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control.
High Average Power Fiber Laser for Satellite Communications, Phase I
National Aeronautics and Space Administration — Very high average power lasers with high electrical-top-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...
Medicare Part B Drug Average Sales Pricing Files
U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturers ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...
Time averaging, ageing and delay analysis of financial time series
Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf
2017-06-01
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
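The time averaged MSD that the three strategies build on can be sketched for a single trajectory; the geometric Brownian motion below (analysed on the log scale) is an illustrative stand-in for a stock index, not the paper's data.

```python
import random

def tamsd(x, lag):
    """Time averaged mean squared displacement of one trajectory x at a given lag:
    average of the squared increments (x[i+lag] - x[i])^2 along the series."""
    n = len(x)
    return sum((x[i + lag] - x[i]) ** 2 for i in range(n - lag)) / (n - lag)

# toy "price" series: geometric Brownian motion, i.e. Brownian motion in log-price
random.seed(1)
logp = [0.0]
for _ in range(5000):
    logp.append(logp[-1] + random.gauss(0.0005, 0.01))

msd = [tamsd(logp, d) for d in (1, 2, 4, 8)]
print(msd)  # grows roughly linearly with the lag for Brownian increments
```

The ageing and delay variants mentioned in the abstract are obtained from the same estimator by restricting the range of i, i.e. by averaging over a window that starts some delay time after the beginning of the series.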
GIS Tools to Estimate Average Annual Daily Traffic
2012-06-01
This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
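The evaporative-demand step can be sketched with the commonly cited Hargreaves–Samani form; the 0.0023 coefficient, the form of the equation, and the toy inputs are assumptions standing in for the study's actual implementation.

```python
import math

def hargreaves_et0(ra, tmax, tmin):
    """Hargreaves-Samani reference evapotranspiration estimate.
    ra: extraterrestrial (clear-sky exoatmospheric) radiation, expressed in its
    water equivalent (mm/day); tmax/tmin: monthly average max/min temperature (C)."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)

et0 = hargreaves_et0(ra=15.0, tmax=25.0, tmin=10.0)  # made-up monthly values
water_balance = 3.2 - et0   # precipitation minus atmospheric demand, as in the study
print(round(et0, 2))        # → 4.72
```

Run per month for each 1 km cell, the precipitation minus ET0 difference is exactly the gridded water balance the abstract describes.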
A high speed digital signal averager for pulsed NMR
International Nuclear Information System (INIS)
Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.
1978-01-01
A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a minimum sampling interval of 2.5 μs and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
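The 'stable averaging' idea, keeping a calibrated average after every sweep rather than a raw accumulating sum, can be sketched as an incremental mean update. This is a plausible reading of the abstract, not the instrument's actual firmware.

```python
def stable_average(sweeps):
    """Per-channel running mean: after sweep n the stored value is already the
    average of the first n sweeps, via avg += (x - avg) / n, so the display
    is calibrated at every step of the averaging process."""
    avg = [0.0] * len(sweeps[0])
    for n, sweep in enumerate(sweeps, start=1):
        for ch, x in enumerate(sweep):      # one sample per channel
            avg[ch] += (x - avg[ch]) / n
    return avg

sweeps = [[1.0, 10.0], [3.0, 20.0], [5.0, 30.0]]  # 3 sweeps, 2 channels
print(stable_average(sweeps))  # → [3.0, 20.0]
```

The update also keeps intermediate values bounded by the signal range, which matters for fixed-width (here 12-bit) memory words, unlike a raw sum that grows with the number of sweeps.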
Historical Data for Average Processing Time Until Hearing Held
Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...
Non-chain pulsed DF laser with an average power of the order of 100 W
Pan, Qikun; Xie, Jijiang; Wang, Chunrui; Shao, Chunlei; Shao, Mingzhen; Chen, Fei; Guo, Jin
2016-07-01
The design and performance of a closed-cycle repetitively pulsed DF laser are described. The Fitch circuit and thyratron switch are introduced to realize self-sustained volume discharge in SF6–D2 mixtures. The influences of gas parameters and charging voltage on the output characteristics of the non-chain pulsed DF laser are experimentally investigated. In order to improve the laser power stability over a long period of working time, zeolites with different apertures are used to scrub out the de-excitation particles produced in the electric discharge. An average output power of the order of 100 W was obtained at an operating repetition rate of 50 Hz, with pulse-to-pulse amplitude differences of <8 %. Under the action of microporous alkaline zeolites, the average power fell by 20 % after the laser had been working continuously for 100 s at a repetition frequency of 50 Hz.
Average geodesic distance of skeleton networks of Sierpinski tetrahedron
Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao
2018-04-01
The average distance is a central quantity in the study of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique of finite patterns for integrals of the geodesic distance with respect to the self-similar measure on the Sierpinski tetrahedron.
Annual average equivalent dose of workers form health area
International Nuclear Information System (INIS)
Daltro, T.F.L.; Campos, L.L.
1992-01-01
Personnel monitoring data from 1985 to 1991 for workers in the health area were studied, giving a general overview of how the annual average equivalent dose changed. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses for the same sectors across different hospitals. (C.G.C.)
The average action for scalar fields near phase transitions
International Nuclear Information System (INIS)
Wetterich, C.
1991-08-01
We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)
Bounds on the Average Sensitivity of Nested Canalizing Functions
Klotz, Johannes Georg; Heckel, Reinhard; Schober, Steffen
2012-01-01
Nested canalizing Boolean functions (NCFs) play an important role in biologically motivated regulatory networks and in signal processing, in particular in describing stack filters. It has been conjectured that NCFs have a stabilizing effect on the network dynamics. It is well known that the average sensitivity plays a central role for the stability of (random) Boolean networks. Here we provide a tight upper bound on the average sensitivity of NCFs as a function of the number of relevant input vari...
Bivariate copulas on the exponentially weighted moving average control chart
Directory of Open Access Journals (Sweden)
Sasigarn Kuvattana
2016-10-01
Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA control chart when observations are from an exponential distribution using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
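A Monte Carlo ARL estimate of the kind the paper relies on can be sketched as follows. The smoothing constant, control limit, and in-control mean are illustrative choices, not the paper's settings, and the copula dependence between variables is omitted here.

```python
import random

def ewma_arl(mean, lam=0.1, limit=1.5, z0=1.0, runs=2000):
    """Monte Carlo Average Run Length of an upper one-sided EWMA chart
    z_t = lam * x_t + (1 - lam) * z_{t-1} for exponential observations:
    average number of observations until z first exceeds the control limit."""
    lengths = []
    for _ in range(runs):
        z, t = z0, 0
        while z < limit:
            t += 1
            x = random.expovariate(1.0 / mean)   # exponential observation
            z = lam * x + (1.0 - lam) * z
        lengths.append(t)
    return sum(lengths) / runs

random.seed(0)
in_control = ewma_arl(mean=1.0)   # process at target: long runs desired
shifted = ewma_arl(mean=2.0)      # upward mean shift: should signal quickly
print(in_control > shifted)
```

Comparing ARLs across shift sizes, as done here for one shift, is exactly how the paper ranks the candidate copulas.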
A precise measurement of the average b hadron lifetime
Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; 
Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; 
Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G
1996-01-01
An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.
Demonstration of a Model Averaging Capability in FRAMES
Meyer, P. D.; Castleton, K. J.
2009-12-01
Uncertainty in model structure can be incorporated in risk assessment using multiple alternative models and model averaging. To facilitate application of this approach to regulatory applications based on risk or dose assessment, a model averaging capability was integrated with the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) version 2 software. FRAMES is a software platform that allows the non-parochial communication between disparate models, databases, and other frameworks. Users have the ability to implement and select environmental models for specific risk assessment and management problems. Standards are implemented so that models produce information that is readable by other downstream models and accept information from upstream models. Models can be linked across multiple media and from source terms to quantitative risk/dose estimates. Parameter sensitivity and uncertainty analysis tools are integrated. A model averaging module was implemented to accept output from multiple models and produce average results. These results can be deterministic quantities or probability distributions obtained from an analysis of parameter uncertainty. Output from alternative models is averaged using weights determined from user input and/or model calibration results. A model calibration module based on the PEST code was implemented to provide FRAMES with a general calibration capability. An application illustrates the implementation, user interfaces, execution, and results of the FRAMES model averaging capabilities.
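The weighted-averaging step described above can be sketched in a few lines; the weights and model outputs below are purely illustrative and are not values from FRAMES or PEST.

```python
def model_average(outputs, weights):
    """Weighted average of predictions from alternative conceptual models.

    Weights may come from user input or from model calibration; they are
    normalized here so they need not sum to one on entry.
    """
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, outputs)) / total

# Three hypothetical models predicting the same dose quantity:
dose = model_average([0.12, 0.20, 0.15], weights=[0.5, 0.3, 0.2])
```

The same helper applies whether the per-model outputs are deterministic values or summary statistics drawn from a parameter-uncertainty analysis.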
Fluctuations of trading volume in a stock market
Hong, Byoung Hee; Lee, Kyoung Eun; Hwang, Jun Kyung; Lee, Jae Woo
2009-03-01
We consider the probability distribution function of the trading volume and the volume changes in the Korean stock market. The probability distribution function of the trading volume shows double peaks and follows a power law, P(V/⟨V⟩) ∼ (V/⟨V⟩)^(−α), at the tail part of the distribution, with α=4.15(4) for the KOSPI (Korea composite Stock Price Index) and α=4.22(2) for the KOSDAQ (Korea Securities Dealers Automated Quotations), where V is the trading volume and ⟨V⟩ is the monthly average value of the trading volume. The second peaks originate from the increasing trends of the average volume. The probability distribution function of the volume changes also follows a power law, P(V_r) ∼ V_r^(−β), where V_r = V(t) − V(t−T) and T is a time lag. The exponents β depend on the time lag T. We observe that the exponents β for the KOSDAQ are larger than those for the KOSPI.
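Tail exponents like those quoted above come from fitting the distribution's tail. As an illustration of how such an exponent can be estimated (the paper's own fitting procedure is not specified here), a Hill-estimator sketch on synthetic Pareto data:

```python
import numpy as np

def hill_tail_exponent(samples, k):
    """Estimate the tail exponent alpha of a density P(x) ~ x^(-alpha)
    from the k largest order statistics (Hill estimator).

    The Hill estimator returns the tail index gamma of the survival
    function; the density exponent is alpha = 1 + 1/gamma.
    """
    x = np.sort(np.asarray(samples, dtype=float))[::-1]  # descending order
    gamma = np.mean(np.log(x[:k] / x[k]))
    return 1.0 + 1.0 / gamma

# Synthetic check: a Pareto density a*x^(-(a+1)) with a = 3 has density
# exponent 4, comparable in size to the exponents quoted above.
rng = np.random.default_rng(0)
data = rng.pareto(3.0, size=200_000) + 1.0   # classical Pareto, x >= 1
alpha = hill_tail_exponent(data, k=5_000)    # close to 4
```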
Maddix, Danielle C.; Sampaio, Luiz; Gerritsen, Margot
2018-05-01
The degenerate parabolic Generalized Porous Medium Equation (GPME) poses numerical challenges due to self-sharpening and its sharp corner solutions. For these problems, we show results for two subclasses of the GPME with differentiable k(p) with respect to p, namely the Porous Medium Equation (PME) and the superslow diffusion equation. Spurious temporal oscillations, and nonphysical locking and lagging have been reported in the literature. These issues have been attributed to harmonic averaging of the coefficient k(p) for small p, and arithmetic averaging has been suggested as an alternative. We show that harmonic averaging is not solely responsible and that an improved discretization can mitigate these issues. Here, we investigate the causes of these numerical artifacts using modified equation analysis. The modified equation framework can be used for any type of discretization. We show results for the second order finite volume method. The observed problems with harmonic averaging can be traced to two leading error terms in its modified equation. This is also illustrated numerically through a Modified Harmonic Method (MHM) that can locally modify the critical terms to remove the aforementioned numerical artifacts.
International Nuclear Information System (INIS)
Sato, Kaoru; Manabe, Kentaro; Endo, Akira
2012-01-01
Average adult Japanese male (JM-103) and female (JF-103) voxel (volume pixel) phantoms newly constructed at the Japan Atomic Energy Agency (JAEA) have the average characteristics of body sizes and organ masses of adult Japanese. In JM-103 and JF-103, several organs and tissues were newly modeled for dose assessments based on the tissue weighting factors of the 2007 Recommendations of the International Commission on Radiological Protection (ICRP). In this study, SAFs for the thyroid, stomach, lungs and lymphatic nodes of the JM-103 and JF-103 phantoms were calculated and compared with those of other adult Japanese phantoms based on individual medical images. In most cases, differences in SAFs between JM-103, JF-103 and the other phantoms were about several tens of percent, and were mainly attributed to mass differences of organs, tissues and contents. Therefore, it was concluded that the SAFs of JM-103 and JF-103 represent those of average adult Japanese, and that the two phantoms can be applied to dose assessment for average adult Japanese on the basis of the 2007 Recommendations. (author)
Confinement, average forces, and the Ehrenfest theorem for a one ...
Indian Academy of Sciences (India)
Home; Journals; Pramana – Journal of Physics; Volume 80; Issue 5. Confinement ... A free particle moving on the entire real line, which is then permanently confined to a line segment or 'a box' (this situation is achieved by taking the limit V₀ → ∞ in a finite well potential). This case is ...
Average weighted receiving time in recursive weighted Koch networks
Indian Academy of Sciences (India)
Home; Journals; Pramana – Journal of Physics; Volume 86; Issue 6 ... Nonlinear Scientific Research Center, Faculty of Science, Jiangsu University, Zhenjiang, Jiangsu, 212013, People's Republic of China; School of Computer Science and Telecommunication Engineering, Jiangsu University, Zhenjiang, 212013, People's ...
40 CFR 63.7943 - How do I determine the average VOHAP concentration of my remediation material?
2010-07-01
... concentration of my remediation material? 63.7943 Section 63.7943 Protection of Environment ENVIRONMENTAL... Remediation Performance Tests § 63.7943 How do I determine the average VOHAP concentration of my remediation material? (a) General requirements. You must determine the average total VOHAP concentration of a...
Estimating average glandular dose by measuring glandular rate in mammograms
International Nuclear Information System (INIS)
Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru
2003-01-01
The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
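As a toy illustration of the pixel-value-to-glandular-rate conversion described above, the sketch below uses linear interpolation through hypothetical phantom calibration points in place of the paper's neural network; every number here is invented.

```python
import numpy as np

# Hypothetical calibration: mean pixel value measured on breast-equivalent
# phantoms of known glandular rate (all values invented for illustration).
pixel_cal = np.array([80.0, 120.0, 160.0, 200.0, 240.0])  # pixel value
gland_cal = np.array([100.0, 75.0, 50.0, 25.0, 0.0])      # glandular rate, %

def glandular_rate(pixels):
    """Convert mammogram pixel values to glandular rate (%) by linear
    interpolation through the calibration points."""
    return np.interp(pixels, pixel_cal, gland_cal)

rates = glandular_rate(np.array([100.0, 150.0, 200.0]))
mean_rate = rates.mean()   # average glandular rate of the sampled pixels
```

The per-pixel rates obtained this way would then feed the standard mammography dosimetry formula to give an individual average glandular dose.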
Accurate phenotyping: Reconciling approaches through Bayesian model averaging.
Directory of Open Access Journals (Sweden)
Carla Chia-Ming Chen
Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
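A minimal sketch of the Bayesian model averaging idea, with posterior model weights derived from assumed log marginal likelihoods; the two-model setup and all values are illustrative, not from the study.

```python
import math

def bma_weights(log_marginal_likelihoods):
    """Posterior model probabilities from log marginal likelihoods,
    assuming equal prior probability for each model."""
    m = max(log_marginal_likelihoods)
    unnorm = [math.exp(l - m) for l in log_marginal_likelihoods]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def bma_estimate(estimates, log_mls):
    """Model-averaged point estimate: posterior weights times the
    per-model estimates."""
    return sum(w * e for w, e in zip(bma_weights(log_mls), estimates))

# Two hypothetical phenotype models (say, latent class analysis vs. grade
# of membership) with illustrative log marginal likelihoods:
log_mls = [-1050.0, -1052.0]
weights = bma_weights(log_mls)
combined = bma_estimate([0.8, 0.6], log_mls)
```

Subtracting the maximum before exponentiating keeps the weights numerically stable even for large negative log likelihoods.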
The state support of small and average entrepreneurship in Ukraine
Directory of Open Access Journals (Sweden)
Т.О. Melnyk
2015-03-01
The purposes, principles and basic directions of state policy for the development of small and average business in Ukraine are defined. Conditions and restrictions on granting state support to subjects of small and average business are outlined. The modern infrastructure of business support by region is considered. Different kinds of state support for small and average business are characterized: financial, informational, consulting, support in the sphere of innovations, science and industrial production, support for subjects who conduct export activity, and support in the sphere of training, retraining and improvement of the professional skills of administrative and business staff. Approaches to reforming state control of small and average business are generalized, especially in the aspects of estimating the risk degree of economic activities, the quantity and frequency of inspections, registration of certificates issued as a result of planned state control actions, and creation of an effective mechanism for coordinating state control bodies. The most promising directions of state support for small and average business in Ukraine under modern economic conditions are defined.
High average power parametric frequency conversion-new concepts and new pump sources
Energy Technology Data Exchange (ETDEWEB)
Velsko, S.P.; Webb, M.S.
1994-03-01
A number of applications, including long range remote sensing and antisensor technology, require high average power tunable radiation in several distinct spectral regions. Of the many issues which determine the deployability of optical parametric oscillators (OPOs) and related systems, efficiency and simplicity are among the most important. It is only recently that the advent of compact diode-laser-pumped solid state lasers has produced pump sources for parametric oscillators which can make compact, efficient, high average power tunable sources possible. In this paper we outline several different issues in parametric oscillator and pump laser development which are currently under study at Lawrence Livermore National Laboratory.
Object detection by correlation coefficients using azimuthally averaged reference projections.
Nicholson, William V
2004-11-01
A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
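A simplified sketch of scoring an image patch against an azimuthally averaged reference; a plain Pearson correlation stands in for the paper's rotational correlation coefficient, and the Gaussian "particle" is invented for illustration.

```python
import numpy as np

def azimuthal_average(img):
    """Return a rotationally symmetric image in which every pixel is
    replaced by the mean over its integer radius bin about the centre."""
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - (h - 1) / 2.0, x - (w - 1) / 2.0).astype(int)
    sums = np.bincount(r.ravel(), weights=img.ravel())
    counts = np.bincount(r.ravel())
    return (sums / counts)[r]

def corr_coeff(a, b):
    """Pearson correlation coefficient between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# A rotationally symmetric 'particle' scores highly against the azimuthally
# averaged reference regardless of its in-plane rotation.
yy, xx = np.indices((33, 33))
blob = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 40.0)
score = corr_coeff(blob, azimuthal_average(blob))
```

Because the reference is rotationally symmetric, no search over in-plane rotation angles is needed, which is the efficiency advantage the abstract alludes to.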
The Health Effects of Income Inequality: Averages and Disparities.
Truesdale, Beth C; Jencks, Christopher
2016-01-01
Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.
Bounds on the average sensitivity of nested canalizing functions.
Klotz, Johannes Georg; Heckel, Reinhard; Schober, Steffen
2013-01-01
Nested canalizing Boolean functions (NCF) play an important role in biologically motivated regulatory networks and in signal processing, in particular describing stack filters. It has been conjectured that NCFs have a stabilizing effect on the network dynamics. It is well known that the average sensitivity plays a central role for the stability of (random) Boolean networks. Here we provide a tight upper bound on the average sensitivity of NCFs as a function of the number of relevant input variables. As conjectured in literature this bound is smaller than 4/3. This shows that a large number of functions appearing in biological networks belong to a class that has low average sensitivity, which is even close to a tight lower bound.
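For a small nested canalizing function, the average sensitivity can be checked directly by brute force; the 3-input function below is just one example NCF, chosen for illustration.

```python
from itertools import product

def average_sensitivity(f, n):
    """Average, over all 2^n inputs, of the number of single-bit flips
    that change the function's output."""
    total = 0
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            if f(tuple(y)) != fx:
                total += 1
    return total / 2 ** n

# Example nested canalizing function: f(x) = x1 AND (x2 OR x3).
ncf = lambda x: x[0] & (x[1] | x[2])
s = average_sensitivity(ncf, 3)   # 1.25, below the 4/3 bound
```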
The Role of the Harmonic Vector Average in Motion Integration
Directory of Open Access Journals (Sweden)
Alan Johnston
2013-10-01
The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The harmonic vector average, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the harmonic vector average.
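A compact sketch of the HVA combination rule, assuming the definition "map each velocity v to v/|v|², average, and map back": for local normal velocities sampled symmetrically about the global direction it recovers the global velocity, while the plain vector average underestimates the speed. The global velocity and sample orientations are invented for illustration.

```python
import numpy as np

def harmonic_vector_average(velocities):
    """HVA: map each local velocity v to v/|v|^2, take the arithmetic
    mean, then map the mean back through the same transform."""
    v = np.asarray(velocities, dtype=float)
    inv = v / (v ** 2).sum(axis=1, keepdims=True)
    m = inv.mean(axis=0)
    return m / np.dot(m, m)

# Local normal velocities of a contour translating with global velocity V,
# sampled at orientations placed symmetrically about the motion direction:
V = np.array([2.0, 1.0])
phi = np.arctan2(V[1], V[0])
offsets = np.deg2rad([-60.0, -30.0, 0.0, 30.0, 60.0])
normals = np.stack([np.cos(phi + offsets), np.sin(phi + offsets)], axis=1)
local = (normals @ V)[:, None] * normals   # v_i = (V . n_i) n_i

hva = harmonic_vector_average(local)       # recovers V
vector_avg = local.mean(axis=0)            # underestimates the speed
```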
Optimal and fast rotational alignment of volumes with missing data in Fourier space.
Shatsky, Maxim; Arbelaez, Pablo; Glaeser, Robert M; Brenner, Steven E
2013-11-01
Electron tomography of intact cells has the potential to reveal the entire cellular content at a resolution corresponding to individual macromolecular complexes. Characterization of macromolecular complexes in tomograms is nevertheless an extremely challenging task due to the high level of noise, and due to the limited tilt angle that results in missing data in Fourier space. By identifying particles of the same type and averaging their 3D volumes, it is possible to obtain a structure at a more useful resolution for biological interpretation. Currently, classification and averaging of sub-tomograms is limited by the speed of computational methods that optimize alignment between two sub-tomographic volumes. The alignment optimization is hampered by the fact that the missing data in Fourier space has to be taken into account during the rotational search. A similar problem appears in single particle electron microscopy where the random conical tilt procedure may require averaging of volumes with a missing cone in Fourier space. We present a fast implementation of a method guaranteed to find an optimal rotational alignment that maximizes the constrained cross-correlation function (cCCF) computed over the actual overlap of data in Fourier space. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
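A simplified, zero-lag sketch of a correlation restricted to the overlap of two Fourier-space sampling masks; the full method additionally optimizes this score over rotations, which is omitted here, and the slab-shaped masks are stand-ins for real missing wedges.

```python
import numpy as np

def constrained_corr(a, b, mask_a, mask_b):
    """Zero-lag correlation between two volumes computed only over the
    overlap of their Fourier-space sampling masks."""
    overlap = mask_a & mask_b
    fa = np.fft.fftn(a) * overlap
    fb = np.fft.fftn(b) * overlap
    num = np.real(np.vdot(fa, fb))
    den = np.sqrt(np.real(np.vdot(fa, fa)) * np.real(np.vdot(fb, fb)))
    return num / den

# Identical volumes observed with different missing regions still give a
# perfect score over the shared (overlap) region of Fourier space.
rng = np.random.default_rng(0)
vol = rng.normal(size=(8, 8, 8))
mask_a = np.zeros((8, 8, 8), dtype=bool)
mask_a[:4] = True
mask_b = np.zeros((8, 8, 8), dtype=bool)
mask_b[2:6] = True
score = constrained_corr(vol, vol, mask_a, mask_b)   # 1.0 up to rounding
```

Restricting the normalization to the overlap is what prevents the missing data from biasing the score toward particular relative orientations.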
Investigation of a New Weighted Averaging Method to Improve SNR of Electrocochleography Recordings.
Kumaragamage, Chathura Lahiru; Lithgow, Brian John; Moussavi, Zahra Kazem
2016-02-01
The aim of this study was to investigate methods to improve the signal-to-noise ratio (SNR) of extratympanic electrocochleography (ET-ECOG), a low-SNR electrophysiological measurement technique. The current standard for ET-ECOG involves acquiring and uniformly averaging ∼1000 evoked responses to reveal the signal of interest. Weighted averaging is commonly employed to enhance the SNR of repetitive signals in the presence of nonstationary noise, yet its efficacy in ET-ECOG has not been explored to date, which was the focus of this study. Conventional techniques used to compute the signal statistics required for weighted averaging were found to be ineffective for ET-ECOG due to low SNR; therefore, a modified correlation coefficient-based approach was derived to quantify the "signal" component. Several variants of weighted averaging schemes were implemented and evaluated on 54 ECOG recordings obtained from seven healthy volunteers. The best weighted averaging scheme provided a statistically significant 17% improvement in SNR [measured as the ratio of signal to standard deviation (STD) of the noise] compared to uniform averaging, which further improved to 22% when the variance of the noise was incorporated as a cost factor. The implemented weighted averaging schemes were robust and effective for the variants of ET-ECOG recording protocols investigated. Weighted averaging improved the SNR of low-amplitude ET-ECOG recordings in the presence of nonstationary noise. SNR improvements for ECOG have significant benefits in clinical applications; the variability associated with extracted biofeatures can be reduced, and recordings may be shortened. The methods described in this study can easily be incorporated in other low-SNR repetitive electrophysiological measurement techniques.
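The classic inverse-noise-variance weighted average that motivates such schemes can be sketched as follows; this is the generic textbook weighting, not the paper's correlation-coefficient-based variant, and the simulated signal and noise levels are invented.

```python
import numpy as np

def weighted_average(epochs, noise_var):
    """Average repeated epochs with weights proportional to the inverse
    of each epoch's (estimated) noise variance."""
    w = 1.0 / np.asarray(noise_var, dtype=float)
    w /= w.sum()
    return np.tensordot(w, epochs, axes=1)   # sum_k w_k * epochs[k]

# Simulated evoked response buried in nonstationary noise: most epochs are
# clean, a burst of epochs is much noisier.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * 5 * t)
stds = np.array([0.5] * 80 + [3.0] * 20)
epochs = signal + rng.normal(size=(100, 200)) * stds[:, None]

err_uniform = np.mean((epochs.mean(axis=0) - signal) ** 2)
err_weighted = np.mean((weighted_average(epochs, stds ** 2) - signal) ** 2)
```

Down-weighting the noisy burst recovers the evoked waveform with a markedly lower error than uniform averaging; in practice the difficulty, as the abstract notes, lies in estimating the per-epoch noise statistics at low SNR.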
Vehicle target detection method based on the average optical flow
Ma, J. Y.; Jie, F. R.; Hu, Y. J.
2017-07-01
Moving target detection in image sequences of dynamic scenes is an important research topic in the field of computer vision. Block projection and matching are utilized for global motion estimation. Then, the background image is compensated by applying the estimated motion parameters so as to stabilize the image sequence. Consequently, background subtraction is employed in the stabilized image sequence to extract moving targets. Finally, the difference image is divided into uniform grids and the average optical flow is employed for motion analysis. Experiments show that the proposed average optical flow method can efficiently extract vehicle targets from dynamic scenes while decreasing the false alarm rate.
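The final grid-averaging step can be sketched as follows; the flow field, cell size, and motion threshold below are illustrative, not values from the paper.

```python
import numpy as np

def grid_average_flow(flow, cell):
    """Average a dense flow field of shape (H, W, 2) over non-overlapping
    cells of shape (ch, cw); H and W must be divisible by ch and cw."""
    h, w, _ = flow.shape
    ch, cw = cell
    return flow.reshape(h // ch, ch, w // cw, cw, 2).mean(axis=(1, 3))

# Illustrative field: the right half of the image moves with flow (3, 1),
# the rest is static, so only the right-hand cells show average motion.
flow = np.zeros((8, 8, 2))
flow[:, 4:] = (3.0, 1.0)
cells = grid_average_flow(flow, (4, 4))        # shape (2, 2, 2)
moving = np.linalg.norm(cells, axis=2) > 0.5   # per-cell motion flag
```

Thresholding the per-cell average flow magnitude is one simple way to turn the averaged field into candidate target regions.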
Bounce-averaged Fokker-Planck code for stellarator transport
International Nuclear Information System (INIS)
Mynick, H.E.; Hitchon, W.N.G.
1985-07-01
A computer code for solving the bounce-averaged Fokker-Planck equation appropriate to stellarator transport has been developed, and its first applications made. The code is much faster than the bounce-averaged Monte Carlo codes, which up to now have provided the most efficient numerical means for studying stellarator transport. Moreover, because the connection to analytic kinetic theory of the Fokker-Planck approach is more direct than for the Monte Carlo approach, a comparison of theory and numerical experiment is now possible at a considerably more detailed level than previously.
Research & development and growth: A Bayesian model averaging analysis
Czech Academy of Sciences Publication Activity Database
Horváth, Roman
2011-01-01
Vol. 28, No. 6 (2011), pp. 2669-2673. ISSN 0264-9993. [Society for Nonlinear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf
Non-self-averaging nucleation rate due to quenched disorder
International Nuclear Information System (INIS)
Sear, Richard P
2012-01-01
We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)
Spatial Averaging Combined with a Perturbation/Iteration Procedure
Directory of Open Access Journals (Sweden)
F. E. C. Culick
2012-09-01
have caused some confusion. The paper ends with a brief discussion answering a serious criticism, of the method, nearly fifteen years old. The basis for the criticism, arising from solution to a relatively simple problem, is shown to be a result of an omission of a term that arises when the average density in a flow changes abruptly. Presently, there is no known problem of combustion instability for which the kind of analysis discussed here is not applicable. The formalism is general; much effort is generally required to apply the analysis to a particular problem. A particularly significant point, not elaborated here, is the inextricable dependence on expansion of the equations and their boundary conditions, in two small parameters, measures of the steady and unsteady flows. Whether or not those Mach numbers are actually ‘small’ in fact, is really beside the point. Work out applications of the method as if they were! Then maybe to get more accurate results, resort to some form of CFD. It is a huge practical point that the approach taken and advocated here cannot be expected to give precise results, but however accurate they may be, they will be obtained with relative ease and will always be instructive. In any case, the expansions must be carried out carefully with faithful attention to the rules of systematic procedures. Otherwise, inadvertent errors may arise from inclusion or exclusion of contributions. I state without proof or further examples that the general method discussed here has been quite well and widely tested for practical systems much more complex than those normally studied in the laboratory. Every case has shown encouraging results. Thus the lifetimes of approximate analyses developed before computing resources became commonplace seem to be very long indeed.
A rotational integral formula for intrinsic volumes
DEFF Research Database (Denmark)
Jensen, Eva Bjørn Vedel; Rataj, J.
2008-01-01
A rotational version of the famous Crofton formula is derived. The motivation for deriving the formula comes from local stereology, a new branch of stereology based on sections through fixed reference points. The formula shows how rotational averages of intrinsic volumes measured on sections...
Disc volume reduction with percutaneous nucleoplasty in an animal model.
Directory of Open Access Journals (Sweden)
Richard Kasch
STUDY DESIGN: We assessed volume following nucleoplasty disc decompression in lower lumbar spines from cadaveric pigs using 7.1-Tesla magnetic resonance imaging (MRI). PURPOSE: To investigate coblation-induced volume reductions as a possible mechanism underlying nucleoplasty. METHODS: We assessed volume following nucleoplastic disc decompression in pig spines using 7.1-Tesla MRI. Volumetry was performed in lumbar discs of 21 postmortem pigs. A preoperative image data set was obtained, volume was determined, and either disc decompression or placebo therapy was performed in a randomized manner. Group 1 (nucleoplasty group) was treated according to the usual nucleoplasty protocol with coblation current applied to 6 channels for 10 seconds each in an application field of 360°; in group 2 (placebo group) the same procedure was performed but without coblation current. After the procedure, a second data set was generated and volumes calculated and matched with the preoperative measurements in a blinded manner. To analyze the effectiveness of nucleoplasty, volumes between treatment and placebo groups were compared. RESULTS: The average preoperative nucleus volume was 0.994 ml (SD: 0.298 ml). In the nucleoplasty group (n = 21) volume was reduced by an average of 0.087 ml (SD: 0.110 ml) or 7.14%. In the placebo group (n = 21) volume was increased by an average of 0.075 ml (SD: 0.075 ml) or 8.94%. The average nucleoplasty-induced volume reduction was 0.162 ml (SD: 0.124 ml) or 16.08%. Volume reduction in lumbar discs was significant in favor of the nucleoplasty group (p<0.0001). CONCLUSIONS: Our study demonstrates that nucleoplasty has a volume-reducing effect on the lumbar nucleus pulposus in an animal model. Furthermore, we show the volume reduction to be a coblation effect of nucleoplasty in porcine discs.
Directory of Open Access Journals (Sweden)
Rajagopalan Parameshwaran
2008-01-01
Full Text Available In the quest for energy-conservative building design, there is now a great opportunity for a flexible and sophisticated air conditioning system capable of delivering the better thermal comfort, indoor air quality, and energy efficiency that are strongly desired. The variable refrigerant volume air conditioning system provides considerable energy savings, cost effectiveness, and reduced space requirements. Applications of intelligent control, such as fuzzy logic controllers especially adapted to variable air volume air conditioning systems, have drawn more interest in recent years than classical control systems. An experimental analysis was performed to investigate the inherent operational characteristics of the combined variable refrigerant volume and variable air volume air conditioning systems under fixed ventilation, demand-controlled ventilation, and combined demand-controlled ventilation and economizer cycle techniques for two seasonal conditions. The test results of the variable refrigerant volume and variable air volume air conditioning system for each technique are presented. The test results infer that the system controlled by fuzzy logic methodology and operated under the CO2-based mechanical ventilation scheme effectively yields 37% and 56% per day of average energy savings in summer and winter conditions, respectively. Based on the experimental results, the fuzzy-based combined system can be considered an alternative energy-efficient air conditioning scheme, having significant energy-saving potential compared to the conventional constant air volume air conditioning system.
26 CFR 1.1301-1 - Averaging of farm income.
2010-04-01
... January 1, 2003, rental income based on a share of a tenant's production determined under an unwritten... the Collection of Income Tax at Source on Wages (Federal income tax withholding), or the amount of net... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Averaging of farm income. 1.1301-1 Section 1...
Error estimates in horocycle averages asymptotics: challenges from string theory
Cardella, M.A.
2010-01-01
For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotics to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth.
Average weighted receiving time in recursive weighted Koch networks
Indian Academy of Sciences (India)
https://www.ias.ac.in/article/fulltext/pram/086/06/1173-1182. Keywords: weighted Koch network; recursive division method; average weighted receiving time. Abstract: Motivated by the empirical observation in airport networks and metabolic networks, we introduce the model of the recursive weighted Koch networks created ...
Pareto Principle in Datamining: an Above-Average Fencing Algorithm
Directory of Open Access Journals (Sweden)
K. Macek
2008-01-01
Full Text Available This paper formulates a new datamining problem: which subset of input space has the relatively highest output where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.
Average Distance Travelled To School by Primary and Secondary ...
African Journals Online (AJOL)
This study investigated average distance travelled to school by students in primary and secondary schools in Anambra, Enugu, and Ebonyi States and effect on attendance. These are among the top ten densely populated and educationally advantaged States in Nigeria. Research evidences report high dropout rates in ...
Determination of the average lifetime of bottom hadrons
International Nuclear Information System (INIS)
Althoff, M.; Braunschweig, W.; Kirschfink, F.J.; Martyn, H.U.; Rosskamp, P.; Schmitz, D.; Siebke, H.; Wallraff, W.; Hilger, E.; Kracht, T.; Krasemann, H.L.; Leu, P.; Lohrmann, E.; Pandoulas, D.; Poelz, G.; Poesnecker, K.U.; Duchovni, E.; Eisenberg, Y.; Karshon, U.; Mikenberg, G.; Mir, R.; Revel, D.; Shapira, A.; Baranko, G.; Caldwell, A.; Cherney, M.; Izen, J.M.; Mermikides, M.; Ritz, S.; Rudolph, G.; Strom, D.; Takashima, M.; Venkataramania, H.; Wicklund, E.; Wu, S.L.; Zobernig, G.
1984-01-01
We have determined the average lifetime of hadrons containing b quarks produced in e⁺e⁻ annihilation to be τ(B) = 1.83×10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes. (orig./HSI)
A depth semi-averaged model for coastal dynamics
Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.
2017-05-01
The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.
Trend of Average Wages as Indicator of Hypothetical Money Illusion
Directory of Open Access Journals (Sweden)
Julian Daszkowski
2010-06-01
Full Text Available Before 1998, the definition of wages in Poland did not include the value of social security contributions. The changed definition creates a higher level of reported wages but was expected not to influence take-home pay. Nevertheless, the trend of average wages, after a short period, returned to its previous line. This effect is explained in terms of money illusion.
40 CFR 63.1332 - Emissions averaging provisions.
2010-07-01
... other controls for a Group 1 storage vessel, batch process vent, aggregate batch vent stream, continuous... calculated using the procedures in § 63.1323(b). (B) If the batch process vent is controlled using a control... pollution prevention in generating emissions averaging credits. (1) Storage vessels, batch process vents...
Implications of Methodist clergies' average lifespan and missional ...
African Journals Online (AJOL)
We are born, we touch the lives of others, we die – and then we are remembered. For the purpose of this article, I have assessed from obituaries the average lifespan of the clergy (ministers) in the Methodist Church of South Africa (MCSA), who died between 2003 and 2014. These obituaries were published in the ...
High Average Power UV Free Electron Laser Experiments At JLAB
International Nuclear Information System (INIS)
Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn
2012-01-01
Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.
Investigation of average daily water consumption and its impact on ...
African Journals Online (AJOL)
Investigation of average daily water consumption and its impact on weight gain in captive common buzzards ( Buteo buteo ) in Greece. ... At the end of 24 hours, the left over water was carefully brought out and re-measured to determine the quantity the birds have consumed. A control was set with a ceramic bowl with same ...
proposed average values of some engineering properties of palm ...
African Journals Online (AJOL)
2012-07-02
Jul 2, 2012 ... Coefficient of sliding friction of palm kernels. Gbadamosi [2] determined the coefficient of sliding friction of palm kernels using a bottomless four-sided container on an adjustable tilting surface of plywood, galvanized steel, and glass. The average values were 0.38, 0.45, and 0.44 for dura, tenera, and pisifera ...
40 CFR 80.67 - Compliance on average.
2010-07-01
... of this section apply to all reformulated gasoline and RBOB produced or imported for which compliance... use to ensure the gasoline is produced by the refiner or is imported by the importer and is used only... on average. (1) The VOC-controlled reformulated gasoline and RBOB produced at any refinery or...
Speckle averaging system for laser raster-scan image projection
Tiszauer, Detlev H.; Hackel, Lloyd A.
1998-03-17
The viewers' perception of laser speckle in a laser-scanned image projection system is modified or eliminated by the addition of an optical deflection system that effectively presents a new speckle realization at each point on the viewing screen to each viewer for every scan across the field. The speckle averaging is accomplished without introduction of spurious imaging artifacts.
Moving average rules as a source of market instability
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets
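For readers unfamiliar with the trading rules the abstract refers to, a minimal sketch of a generic moving-average crossover rule; the window lengths are arbitrary illustrations, and the paper's own stochastic dynamic market model is not reproduced here:

```python
def sma(prices, n):
    """Simple moving average over the trailing n prices (None until enough data)."""
    return [None if i + 1 < n else sum(prices[i + 1 - n:i + 1]) / n
            for i in range(len(prices))]

def crossover_signals(prices, short=2, long_=4):
    """+1 where the short MA crosses above the long MA, -1 where it crosses below."""
    s, l = sma(prices, short), sma(prices, long_)
    signals = [0] * len(prices)
    for i in range(1, len(prices)):
        if l[i - 1] is None:  # long MA not yet defined at the previous step
            continue
        prev, curr = s[i - 1] - l[i - 1], s[i] - l[i]
        if prev <= 0 < curr:
            signals[i] = 1   # buy signal
        elif prev >= 0 > curr:
            signals[i] = -1  # sell signal
    return signals
```

On a rising-then-falling price series the rule emits a sell signal once the short average dips below the long one, which is exactly the kind of trend-chasing behaviour the paper studies as a source of instability.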
The background effective average action approach to quantum gravity
DEFF Research Database (Denmark)
D’Odorico, G.; Codello, A.; Pagani, C.
2016-01-01
of a UV-attractive non-Gaussian fixed point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016....
On the average-case complexity of Shellsort
Vitányi, P.
We prove a lower bound expressed in the increment sequence on the average-case complexity of the number of inversions of Shellsort. This lower bound is sharp in every case where it could be checked. A special case of this lower bound yields the general Jiang-Li-Vitányi lower bound. We obtain new
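For readers unfamiliar with the algorithm whose average case is bounded here, a minimal Shellsort sketch; the increment sequence below is an arbitrary example, whereas the paper's bound is expressed in terms of the increment sequence itself:

```python
def shellsort(a, gaps=(5, 3, 1)):
    """Shellsort with a given decreasing increment (gap) sequence.

    Each pass is a gap-insertion sort; the final gap of 1 is a plain
    insertion sort, which guarantees a fully sorted result.
    """
    a = list(a)
    for gap in gaps:
        for i in range(gap, len(a)):
            x, j = a[i], i
            while j >= gap and a[j - gap] > x:
                a[j] = a[j - gap]  # shift larger element one gap to the right
                j -= gap
            a[j] = x
    return a
```

The number of element moves (inversions resolved) across all passes is the quantity whose average the paper lower-bounds as a function of the chosen gaps.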
Environmental stresses can alleviate the average deleterious effect of mutations
Directory of Open Access Journals (Sweden)
Leibler Stanislas
2003-05-01
Full Text Available Abstract Background Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite – that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions Our results show a qualitative difference between various environmental stresses ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.
Post-model selection inference and model averaging
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2011-07-01
Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0–1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
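As a concrete contrast to the 0-1 random weights of post-model-selection estimation, one widely used smooth averaging scheme (not necessarily among those compared in the paper) weights each candidate model by its Akaike weight; a minimal sketch:

```python
import math

def akaike_weights(aics):
    """Akaike weights: exp(-Δi/2) normalised across candidate models,
    where Δi is each model's AIC minus the best (smallest) AIC."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def averaged_estimate(estimates, aics):
    """Model-averaged point estimate: a smooth compromise between the
    candidate models' estimates, instead of picking a single winner."""
    return sum(w * e for w, e in zip(akaike_weights(aics), estimates))
```

Selection corresponds to forcing the weight vector to a 0-1 indicator of the winning model, which is the special case the paper uses to connect the two theories.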
Crystallographic extraction and averaging of data from small image areas
Perkins, GA; Downing, KH; Glaeser, RM
The accuracy of structure factor phases determined from electron microscope images is determined mainly by the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that
Establishment of Average Body Measurement and the Development ...
African Journals Online (AJOL)
cce
there is a change in their shape as well as in their size. This growth according to Aldrich ... Poverty according to Igbo (2002) is one of the reasons for food insecurity. Inaccessibility and ...
Maximum and average field strength in enclosed environments
Leferink, Frank Bernardus Johannes
2010-01-01
Electromagnetic fields in large enclosed environments are reflected many times and cannot be predicted anymore using conventional models. The common approach is to compare such environments with highly reflecting reverberation chambers. The average field strength can easily be predicted using the
arXiv Averaged Energy Conditions and Bouncing Universes
Giovannini, Massimo
2017-11-16
The dynamics of bouncing universes is characterized by violating certain coordinate-invariant restrictions on the total energy-momentum tensor, customarily referred to as energy conditions. Although there could be epochs in which the null energy condition is locally violated, it may perhaps be enforced in an averaged sense. Explicit examples of this possibility are investigated in different frameworks.
Comparing averaging limits for social cues over space and time.
Florey, Joseph; Dakin, Steven C; Mareschal, Isabelle
2017-08-01
Observers are able to extract summary statistics from groups of faces, such as their mean emotion or identity. This can be done for faces presented simultaneously and also from sequences of faces presented at a fixed location. Equivalent noise analysis, which estimates an observer's internal noise (the uncertainty in judging a single element) and effective sample size (ESS; the effective number of elements being used to judge the average), reveals what limits an observer's averaging performance. It has recently been shown that observers have lower ESSs and higher internal noise for judging the mean gaze direction of a group of spatially distributed faces compared to the mean head direction of the same faces. In this study, we use the equivalent noise technique to compare limits on these two cues to social attention under two presentation conditions: spatially distributed and sequentially presented. We find that the differences in ESS are replicated in spatial arrays but disappear when both cue types are averaged over time, suggesting that limited peripheral gaze perception prevents accurate averaging performance. Correlation analysis across participants revealed generic limits for internal noise that may act across stimulus and presentation types, but no clear shared limits for ESS. This result supports the idea of some shared neural mechanisms in early stages of visual processing.
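The equivalent-noise decomposition behind this analysis is commonly written as observed variance = internal variance + external (stimulus) variance divided by the effective sample size; a minimal sketch of that two-parameter model (the exact parameterisation used in the study is an assumption here):

```python
def predicted_threshold(sigma_ext, sigma_int, ess):
    """Equivalent-noise prediction for an averaging threshold:
    sqrt(internal variance + external variance / effective sample size).

    Fitting this curve to thresholds measured at several external noise
    levels recovers the two limiting parameters sigma_int and ess.
    """
    return (sigma_int ** 2 + sigma_ext ** 2 / ess) ** 0.5
```

At zero external noise the threshold is set entirely by internal noise; at high external noise it is dominated by how many elements are effectively averaged.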
Modeling of Sokoto Daily Average Temperature: A Fractional ...
African Journals Online (AJOL)
An extension of the class of ARIMA processes stemming from the Box and Jenkins methodology. One of their originalities is the explicit modeling of the long-term correlation structure (Diebolt and Guiraud, 2000). Autoregressive fractionally ...
Accuracy of averaged auditory brainstem response amplitude and latency estimates
DEFF Research Database (Denmark)
Madsen, Sara Miay Kim; M. Harte, James; Elberling, Claus
2017-01-01
Objective: The aims were to 1) establish which of the four algorithms for estimating residual noise level and signal-to-noise ratio (SNR) in auditory brainstem responses (ABRs) perform better in terms of post-average wave-V peak latency and amplitude errors and 2) determine whether SNR or noise...
Domain-averaged Fermi-hole Analysis for Solids
Czech Academy of Sciences Publication Activity Database
Baranov, A.; Ponec, Robert; Kohout, M.
2012-01-01
Roč. 137, č. 21 (2012), s. 214109 ISSN 0021-9606 R&D Projects: GA ČR GA203/09/0118 Institutional support: RVO:67985858 Keywords : bonding in solids * domain averaged fermi hole * natural orbitals Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.164, year: 2012
Significance of power average of sinusoidal and non-sinusoidal ...
Indian Academy of Sciences (India)
2016-06-08
Jun 8, 2016 ... of the total power average technique, one can say whether the chaos in that nonlinear system is to be suppressed or not. Keywords: chaos; controlling. ... the instantaneous values of power taken during one complete cycle T and is given as ...
94 GHz High-Average-Power Broadband Amplifier
National Research Council Canada - National Science Library
Luhmann, Neville
2003-01-01
A state-of-the-art gyro-TWT amplifier operating in the low loss TE01 mode has been developed with the objective of producing an average power of 140 kW in the W-Band with a predicted efficiency of 28%, 50dB gain, and 5% bandwidth...
Climate Prediction Center (CPC) Zonally Average 500 MB Temperature Anomalies
National Oceanic and Atmospheric Administration, Department of Commerce — This is one of the CPC's Monthly Atmospheric and SST Indices. It is the 500-hPa temperature anomalies averaged over the latitude band 20°N–20°S. The anomalies are...
Average subentropy, coherence and entanglement of random mixed quantum states
Energy Technology Data Exchange (ETDEWEB)
Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)
2017-02-15
Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states by invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
A simple consensus algorithm for distributed averaging in random ...
Indian Academy of Sciences (India)
guaranteed convergence with this simple algorithm. Keywords: sensor networks; random geographical networks; distributed averaging; consensus algorithms. PACS Nos 89.75.Hc; 89.75.Fb; 89.20.Ff. 1. Introduction. Wireless sensor networks are increasingly used in many applications ranging from environmental to ...
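A minimal sketch of the classic linear consensus iteration that such algorithms build on — each node repeatedly moves toward its neighbours' values, preserving the global average (the paper's own variant for random geographical networks is not reproduced here):

```python
def consensus_step(values, neighbors, eps):
    """One synchronous update: x_i <- x_i + eps * sum_j (x_j - x_i),
    summing over node i's neighbours. Symmetric links preserve the sum."""
    return [x + eps * sum(values[j] - x for j in neighbors[i])
            for i, x in enumerate(values)]

def run_consensus(values, neighbors, eps=0.2, steps=100):
    """Iterate the update; for small enough eps (below 1/max degree for this
    simple scheme) every node converges to the network-wide average."""
    for _ in range(steps):
        values = consensus_step(values, neighbors, eps)
    return values
```

On a three-node path graph the endpoint values flow through the middle node until all three agree on the mean, while the sum of all values stays fixed at every step.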
40 CFR 63.150 - Emissions averaging provisions.
2010-07-01
... (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES National Emission Standards for Organic Hazardous Air Pollutants From the Synthetic Organic Chemical Manufacturing Industry for Process Vents, Storage Vessels, Transfer Operations, and Wastewater § 63.150 Emissions averaging...
Calculation of average landslide frequency using climatic records
L. M. Reid
1998-01-01
Abstract - Aerial photographs are used to develop a relationship between the number of debris slides generated during a hydrologic event and the size of the event, and the long-term average debris-slide frequency is calculated from climate records using the relation.
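The calculation described — combining a slides-per-event relation with event frequencies taken from climate records — amounts to a simple expectation over event-size classes; a minimal sketch with hypothetical numbers:

```python
def long_term_slide_frequency(event_rates, slides_per_event):
    """Average debris slides per year: sum over event-size classes of
    (events of that size per year) x (slides generated per such event).

    event_rates and slides_per_event are parallel lists, one entry per
    hydrologic-event size class; all values here are illustrative only.
    """
    return sum(rate * n for rate, n in zip(event_rates, slides_per_event))
```

For example, frequent small storms contribute few slides each while rare large storms contribute many, and the long-term frequency is the sum of both contributions.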
Grade Point Average: What's Wrong and What's the Alternative?
Soh, Kay Cheng
2011-01-01
Grade point average (GPA) has been around for more than two centuries. However, it has created a lot of confusion, frustration, and anxiety to GPA-producers and users alike, especially when used across-nation for different purposes. This paper looks into the reasons for such a state of affairs from the perspective of educational measurement. It…
Energy Technology Data Exchange (ETDEWEB)
Crawfis, R.A.
1996-03-01
This paper presents a new technique for representing multivalued data sets defined on an integer lattice. It extends the state-of-the-art in volume rendering to include nonhomogeneous volume representations. That is, volume rendering of materials with very fine detail (e.g. translucent granite) within a voxel. Multivariate volume rendering is achieved by introducing controlled amounts of noise within the volume representation. Varying the local amount of noise within the volume is used to represent a separate scalar variable. The technique can also be used in image synthesis to create more realistic clouds and fog.
Ovarian volume throughout life
DEFF Research Database (Denmark)
Kelsey, Thomas W; Dodwell, Sarah K; Wilkinson, A Graham
2013-01-01
cancer. To date there is no normative model of ovarian volume throughout life. By searching the published literature for ovarian volume in healthy females, and using our own data from multiple sources (combined n=59,994) we have generated and robustly validated the first model of ovarian volume from...... to about 2.8 mL (95% CI 2.7-2.9 mL) at the menopause and smaller volumes thereafter. Our model allows us to generate normal values and ranges for ovarian volume throughout life. This is the first validated normative model of ovarian volume from conception to old age; it will be of use in the diagnosis...
An average salary: approaches to the index determination
Directory of Open Access Journals (Sweden)
T. M. Pozdnyakova
2017-01-01
Full Text Available The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both used by official state statistics of the Russian Federation and offered by modern researchers.The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index.The information base of the research is laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», as well as materials of scientific papers, describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of research. In the process of conducting the research, the following methods were used: analytical, statistical, calculated-mathematical and graphical.The main result of the research is an option of supplementing the method of calculating average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions of a wide range of organizations, when an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, the situation is frequent when the average salary at the enterprise is difficult to assess objectively because it consists of calculating multiple rates per staff member. In other words, the average salary of
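One reading of the proposed correction — dividing payroll by the number of rates (posts) actually worked rather than by headcount when staff hold internal secondary jobs — can be sketched as follows; the figures and this exact form of the correction are illustrative assumptions, not the article's formula:

```python
def average_salaries(total_payroll, employees, rates_worked):
    """Headline average (payroll per head) versus a corrected average
    (payroll per rate/post worked), for organisations where employees
    also fill internal secondary positions. Illustrative sketch only."""
    return total_payroll / employees, total_payroll / rates_worked
```

When ten employees between them occupy twelve rates, the per-head figure overstates what a single rate pays, which is the distortion the correction factor is meant to address.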
Redshift drift in an inhomogeneous universe: averaging and the backreaction conjecture
Energy Technology Data Exchange (ETDEWEB)
Koksbang, S.M.; Hannestad, S., E-mail: koksbang@phys.au.dk, E-mail: sth@phys.au.dk [Department of Physics and Astronomy, Aarhus University, 8000 Aarhus C (Denmark)
2016-01-01
An expression for the average redshift drift in a statistically homogeneous and isotropic dust universe is given. The expression takes the same form as the expression for the redshift drift in FLRW models. It is used for a proof-of-principle study of the effects of backreaction on redshift drift measurements by combining the expression with two-region models. The study shows that backreaction can lead to positive redshift drift at low redshifts, exemplifying that a positive redshift drift at low redshifts does not require dark energy. Moreover, the study illustrates that models without a dark energy component can have an average redshift drift observationally indistinguishable from that of the standard model according to the currently expected precision of ELT measurements. In an appendix, spherically symmetric solutions to Einstein's equations with inhomogeneous dark energy and matter are used to study deviations from the average redshift drift and effects of local voids.
Radon and radon daughters indoors, problems in the determination of the annual average
International Nuclear Information System (INIS)
Swedjemark, G.A.
1984-01-01
The annual average of the concentration of radon and radon daughters in indoor air is required both in studies such as determining the collective dose to a population and when comparing with limits. For practical reasons, measurements are often carried out during a time period shorter than a year. Methods are presented for estimating the uncertainties, due to temporal variations, in an annual average calculated from measurements carried out over sampling periods of various lengths. These methods have been applied to the results from long-term measurements of radon-222 in a few houses. The possibility of using correction factors in order to obtain a more adequate annual average has also been studied and some examples are given. (orig.)
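The correction-factor idea mentioned above — scaling a short-term measurement to an annual-average estimate — can be sketched as below; the seasonal factors are hypothetical placeholders, not values from the study:

```python
# Hypothetical seasonal correction factors (annual mean / seasonal mean).
# Indoor radon is typically higher in winter, so a winter measurement is
# scaled down and a summer measurement scaled up; real factors must be
# derived from long-term measurements such as those in the study.
SEASON_FACTOR = {"winter": 0.8, "summer": 1.3}

def annual_average_estimate(measured_bq_m3, season):
    """Scale a short-term indoor radon measurement to an annual-average estimate."""
    return measured_bq_m3 * SEASON_FACTOR[season]
```

The residual uncertainty of such an estimate is exactly what the methods in the abstract aim to quantify for different sampling-period lengths.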
Investigation of the impact of additional traffic volumes on existing arterials
Energy Technology Data Exchange (ETDEWEB)
Selman, W.A.
1986-01-01
New developments attract a large number of additional vehicular trips to the areas in which they are located. An assessment of the impact of these additional trips on surrounding street systems is often needed. The objective of this study is the development of a regression model that would assess this impact. The model predicts increases in average vehicular delay due to specific increases in traffic volumes. It should be very useful since it requires less data than computerized simulation models, it does not require the use of a computer, and it is expected to give reasonably accurate estimates of delays. To develop the proposed model, traffic flow characteristics on six arterials were simulated. The traffic flow model TRANSYT-7F was used to simulate traffic operations on the arterials for four values of increased volumes. These simulations generated the necessary data for the development of the prediction model. Using existing traffic characteristics as predictors, four regression equations were developed and validated. The equations predict the increase in average delay in seconds per vehicle, corresponding to increases in volume of 200, 300, 500, and 700 vehicles per hour.
Condition monitoring of gearboxes using synchronously averaged electric motor signals
Ottewill, J. R.; Orkisz, M.
2013-07-01
Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in the minimization of potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is the synchronous signal average, where vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, the application of the synchronous signal averaging methodology to electric drive signals, by synchronizing stator current signals with a shaft position estimated from current and voltage measurements is proposed. Initially, a test-rig is introduced based on an induction motor driving a two-stage reduction gearbox which is loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
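The core synchronous-average operation — partitioning an angle-synchronised signal into whole revolutions and averaging them sample-by-sample — can be sketched as below; resampling the raw signal to the estimated shaft angle is assumed to have been done already:

```python
def synchronous_average(signal, samples_per_rev):
    """Average whole revolutions of an angle-resampled signal sample-by-sample.

    Components locked to shaft rotation (e.g. a gear-tooth defect signature)
    are reinforced, while components not synchronous with the shaft tend to
    average toward zero as more revolutions are included.
    """
    revs = len(signal) // samples_per_rev  # only complete revolutions are used
    return [sum(signal[r * samples_per_rev + k] for r in range(revs)) / revs
            for k in range(samples_per_rev)]
```

Applied to a stator-current signal synchronised to the estimated rotor position, this is the operation the paper uses to make gearbox defects visible in electrical measurements.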
Rescuing Collective Wisdom when the Average Group Opinion Is Wrong
Directory of Open Access Journals (Sweden)
Andres Laan
2017-11-01
Full Text Available The total knowledge contained within a collective supersedes the knowledge of even its most intelligent member. Yet the collective knowledge will remain inaccessible to us unless we are able to find efficient knowledge aggregation methods that produce reliable decisions based on the behavior or opinions of the collective’s members. It is often stated that simple averaging of a pool of opinions is a good and in many cases the optimal way to extract knowledge from a crowd. The method of averaging has been applied to analysis of decision-making in very different fields, such as forecasting, collective animal behavior, individual psychology, and machine learning. Two mathematical theorems, Condorcet’s theorem and Jensen’s inequality, provide a general theoretical justification for the averaging procedure. Yet the necessary conditions which guarantee the applicability of these theorems are often not met in practice. Under such circumstances, averaging can lead to suboptimal and sometimes very poor performance. Practitioners in many different fields have independently developed procedures to counteract the failures of averaging. We review such knowledge aggregation procedures and interpret the methods in the light of a statistical decision theory framework to explain when their application is justified. Our analysis indicates that in the ideal case, there should be a matching between the aggregation procedure and the nature of the knowledge distribution, correlations, and associated error costs. This leads us to explore how machine learning techniques can be used to extract near-optimal decision rules in a data-driven manner. We end with a discussion of open frontiers in the domain of knowledge aggregation and collective intelligence in general.
Dowdall, A; Murphy, P; Pollard, D; Fenton, D
2017-04-01
In 2002, a National Radon Survey (NRS) in Ireland established that the geographically weighted national average indoor radon concentration was 89 Bq m⁻³. Since then a number of developments have taken place which are likely to have impacted on the national average radon level. Key among these was the introduction of amending Building Regulations in 1998 requiring radon preventive measures in new buildings in High Radon Areas (HRAs). In 2014, the Irish Government adopted the National Radon Control Strategy (NRCS) for Ireland. A knowledge gap identified in the NRCS was to update the national average for Ireland given the developments since 2002. The updated national average would also be used as a baseline metric to assess the effectiveness of the NRCS over time. A new national survey protocol was required that would measure radon in a sample of homes representative of radon risk and geographical location. The design of the survey protocol took into account that it is not feasible to repeat the 11,319 measurements carried out for the 2002 NRS due to time and resource constraints. However, the existence of that comprehensive survey allowed for a new protocol to be developed, involving measurements carried out in unbiased randomly selected volunteer homes. This paper sets out the development and application of that survey protocol. The results of the 2015 survey showed that the current national average indoor radon concentration for homes in Ireland is 77 Bq m⁻³, a decrease from the 89 Bq m⁻³ reported in the 2002 NRS. Analysis of the results by build date demonstrates that the introduction of the amending Building Regulations in 1998 has led to a reduction in the average indoor radon level in Ireland. Copyright © 2016 Elsevier Ltd. All rights reserved.
Xu, Zhoubing; Gertz, Adam L; Burke, Ryan P; Bansal, Neil; Kang, Hakmook; Landman, Bennett A; Abramson, Richard G
2016-10-01
Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomic structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically acquired computed tomography (CT) scans. Under an institutional review board approval, we obtained 294 de-identified (Health Insurance Portability and Accountability Act-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1 - manual segmentation of all scans, Pipeline 2 - automated segmentation of all scans, Pipeline 3 - automated segmentation of all scans with manual segmentation for outliers on a rudimentary visual quality check, and Pipelines 4 and 5 - volumes derived from a unidimensional measurement of craniocaudal spleen length and three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracies of Pipelines 2-5 (Dice similarity coefficient, Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) were compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1-5. Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, average absolute volume deviation of 23.7 cm³, and time cost of 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient of 0.98, absolute deviation of 46.92 cm³, and time cost of 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All
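The Dice similarity coefficient used as the accuracy metric above has a simple closed form, 2|A∩B| / (|A|+|B|); a minimal sketch with toy 1-D arrays standing in for binary voxel masks:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|); 1.0 means identical masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D "masks" (hypothetical): manual vs automated segmentation.
manual = np.array([0, 1, 1, 1, 0, 0])
auto = np.array([0, 1, 1, 0, 0, 0])
score = dice_coefficient(manual, auto)  # 2*2 / (3+2) = 0.8
```

The same formula applies unchanged to 3-D CT masks, since the arrays are flattened by the element-wise operations.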
International Nuclear Information System (INIS)
Konzek, G.J.; Smith, R.I.; Bierschbach, M.C.; McDuffie, P.N.
1995-11-01
With the issuance of the final Decommissioning Rule (July 27, 1988), owners and operators of licensed nuclear power plants are required to prepare, and submit to the US Nuclear Regulatory Commission (NRC) for review, decommissioning plans and cost estimates. The NRC staff is in need of bases documentation that will assist them in assessing the adequacy of the licensee submittals, from the viewpoint of both the planned actions, including occupational radiation exposure, and the probable costs. The purpose of this reevaluation study is to provide some of the needed bases documentation. This report contains the results of a review and reevaluation of the 1978 PNL decommissioning study of the Trojan nuclear power plant (NUREG/CR-0130), including all identifiable factors and cost assumptions which contribute significantly to the total cost of decommissioning the nuclear power plant for the DECON, SAFSTOR, and ENTOMB decommissioning alternatives. These alternatives now include an initial 5-7 year period during which time the spent fuel is stored in the spent fuel pool, prior to beginning major disassembly or extended safe storage of the plant. Included for information (but not presently part of the license termination cost) is an estimate of the cost to demolish the decontaminated and clean structures on the site and to restore the site to a 'green field' condition. This report also includes consideration of the NRC requirement that decontamination and decommissioning activities leading to termination of the nuclear license be completed within 60 years of final reactor shutdown, consideration of packaging and disposal requirements for materials whose radionuclide concentrations exceed the limits for Class C low-level waste (i.e., Greater-Than-Class C), and reflects 1993 costs for labor, materials, transport, and disposal activities.
2009-09-01
reaction 3 310.90 Alcohol dependence syndrome 2 303.90 Other and unspecified alcohol dependence 1 305.90 Other, mixed, or unspecified drug ...generation, nonsedating antihistamine which enhances the range of antihistamines available in the AS. It also satisfies the Clinical Requirement
SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging
Energy Technology Data Exchange (ETDEWEB)
Diamant, A; Ybarra, N; Seuntjens, J [McGill University, Montreal, Quebec (Canada); El Naqa, I [University of Michigan, Ann Arbor, MI (United States)
2016-06-15
Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual’s biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient’s characteristics and generate a robust model capable of predicting said patient’s treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was observed to be 7 (event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed including: dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov Chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes’ rule. Results: The optimal Bayesian model generated throughout this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance compared to competing methods in other literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible
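The graph-averaging step amounts to a posterior-weighted mean of per-model predictions (Bayesian model averaging). A toy sketch follows; the three model predictions and posterior scores are hypothetical illustrations, not the study's values:

```python
import numpy as np

def bma_predict(probs_per_model, posterior_weights):
    """Bayesian model averaging: posterior-weighted mean of the failure
    probabilities predicted by each candidate graph for one patient."""
    w = np.asarray(posterior_weights, dtype=float)
    w = w / w.sum()  # normalize the (possibly unnormalized) posterior scores
    return float(np.dot(w, np.asarray(probs_per_model, dtype=float)))

# Hypothetical: three graphs' predicted tumor-failure probabilities and
# their unnormalized posterior scores from an MCMC structure search.
p_models = [0.10, 0.30, 0.25]
scores = [5.0, 3.0, 2.0]
p_failure = bma_predict(p_models, scores)  # (5*0.10 + 3*0.30 + 2*0.25)/10 = 0.19
```

In the study the same averaging is carried out over the top 100 graphs rather than three.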
Microchannel-cooled heatsinks for high-average-power laser diode arrays
Benett, William J.; Freitas, Barry L.; Ciarlo, Dino R.; Beach, Raymond J.; Sutton, Steven B.; Emanuel, Mark A.; Solarz, Richard W.
1993-11-01
Detailed performance results for an efficient and low thermal impedance laser diode array heatsink are presented. High duty factor and even cw operation of fully filled laser diode arrays at high stacking densities are enabled at high average power. Low thermal impedance is achieved using a liquid coolant and laminar flow through microchannels. The microchannels are fabricated in silicon using an anisotropic chemical etching process. A modular rack-and-stack architecture is adopted for heatsink design, allowing arbitrarily large 2-D arrays to be fabricated and easily maintained. The excellent thermal control of the microchannel heatsinks is ideally suited to pump array requirements for high average power crystalline lasers because of the stringent temperature demands that are required to efficiently couple diode light to several-nanometer-wide absorption features characteristic of lasing ions in crystals.
DEFF Research Database (Denmark)
Mogensen, O.; Sørensen, Flemming Brandt; Bichel, P.
1999-01-01
We evaluated the following nine parameters with respect to their prognostic value in females with endometrial cancer: four stereologic parameters [mean nuclear volume (MNV), nuclear volume fraction, nuclear index and mitotic index], the immunohistochemical expression of cancer antigen (CA125...
van Wee, B.; Rietveld, P.; Meurs, H.
2006-01-01
Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably
2010-07-01
... section; or (ii) For alcohol-fueled model types, the fuel economy value calculated for that model type in...) For alcohol dual fuel model types, for model years 1993 through 2019, the harmonic average of the... combined model type fuel economy value for operation on alcohol fuel as determined in § 600.208-12(b)(5)(ii...
METHODS OF CONTROLLING THE AVERAGE DIAMETER OF THE THREAD WITH ASYMMETRICAL PROFILE
Directory of Open Access Journals (Sweden)
L. M. Aliomarov
2015-01-01
Full Text Available To machine threaded holes in hard materials used in marine machinery operating at high temperatures, under heavy loads and in aggressive environments, the authors have developed a combined core drill-tap tool with a special cutting scheme and an asymmetric thread profile on the tap section. To control the average diameter of the thread on the tap section of the combined tool, the three-wire method was used, which allows continuous measurement of the average diameter along the entire profile. Deviation of the average diameter from the reference sample is registered by an inductive sensor and recorded. Control schemes for the average diameter of threads with symmetrical and asymmetrical profiles are developed and presented. On the basis of these schemes, formulas are derived for calculating the theoretical positions of the wires in the thread profile when measuring the average diameter. Comprehensive research and the introduction of the combined core drill-tap tool into the production of marine engineering, shipbuilding and ship-repair power plant components made of hard materials showed the high efficiency of the proposed technology for machining high-quality small-diameter threaded holes that meet modern requirements.
The metric geometric mean transference and the problem of the average eye
Directory of Open Access Journals (Sweden)
W. F. Harris
2008-12-01
Full Text Available An average refractive error is readily obtained as an arithmetic average of refractive errors. But how does one characterize the first-order optical character of an average eye? Solutions have been offered, including via the exponential-mean-log transference. The exponential-mean-log transference appears to work well in practice but there is the niggling problem that the method does not work with all optical systems. Ideally one would like to be able to calculate an average for eyes in exactly the same way for all optical systems. This paper examines the potential of a relatively newly described mean, the metric geometric mean of positive definite (and, therefore, symmetric) matrices. We extend the definition of the metric geometric mean to matrices that are not symmetric and then apply it to ray transferences of optical systems. The metric geometric mean of two transferences is shown to satisfy the requirement that symplecticity be preserved. Numerical examples show that the mean seems to give a reasonable average for two eyes. Unfortunately, however, what seem reasonable generalizations to the mean of more than two eyes turn out not to be satisfactory in general. These generalizations do work well for thin systems. One concludes that, unless other generalizations can be found, the metric geometric mean suffers from more disadvantages than the exponential-mean-logarithm and has no advantages over it.
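For symmetric positive definite matrices the metric geometric mean has the closed form A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2). A small numerical sketch of that standard formula follows; the paper's extension to non-symmetric transferences is not attempted here.

```python
import numpy as np

def spd_sqrt(a):
    """Principal square root of a symmetric positive definite matrix,
    via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(a)
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

def metric_geometric_mean(a, b):
    """A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2} for SPD A, B."""
    ra = spd_sqrt(a)
    ra_inv = np.linalg.inv(ra)
    return ra @ spd_sqrt(ra_inv @ b @ ra_inv) @ ra

# Commuting example: for diagonal A and B the mean is the entrywise
# geometric mean, here diag(sqrt(4*1), sqrt(9*1)) = diag(2, 3).
A = np.array([[4.0, 0.0], [0.0, 9.0]])
B = np.eye(2)
M = metric_geometric_mean(A, B)
```

For non-commuting SPD matrices the formula still yields an SPD mean, which is what makes it attractive as a matrix average.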
SEASONAL AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN
Directory of Open Access Journals (Sweden)
VIGH MELINDA
2015-03-01
Full Text Available The Râul Negru hydrographic basin is a well-individualised physical-geographical unit inside the Braşov Depression. The flow is controlled by six hydrometric stations placed on the main river and on two important tributaries. The database for the seasonal flow analysis contains the discharges from 1950-2012. The results of the data analysis show that there are significant space-time differences between the multiannual seasonal averages. Some interesting conclusions can be obtained by comparing abundant and scarce periods. The flow analysis was made using seasonal charts Q = f(T). The similarities come from the basin’s relative homogeneity, and the differences from the flow’s evolution and trend. Flow variation is analysed using the variation coefficient. In some cases, significant differences in Cv values appear. Cv value trends are also analysed according to the basins’ average altitude.
Kumaraswamy autoregressive moving average models for double bounded environmental data
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.
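The Kumaraswamy distribution on (0,1) that underlies KARMA has a closed-form CDF, F(x) = 1 - (1 - x^a)^b, and hence a closed-form quantile function and median (the quantity KARMA models dynamically). A small sketch with hypothetical shape parameters, checking the median by inverse-CDF sampling:

```python
import random

def kuma_inv_cdf(u, a, b):
    """Quantile function of Kumaraswamy(a, b) on (0, 1):
    F^{-1}(u) = (1 - (1 - u)^{1/b})^{1/a}."""
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def kuma_median(a, b):
    """Closed-form median: (1 - 2^{-1/b})^{1/a}."""
    return (1.0 - 2.0 ** (-1.0 / b)) ** (1.0 / a)

# Draw samples by inverse-CDF sampling (hypothetical a = 2, b = 3).
rng = random.Random(42)
a, b = 2.0, 3.0
draws = [kuma_inv_cdf(rng.random(), a, b) for _ in range(10000)]
# The sample median should sit near the closed-form median (~0.454).
```

In KARMA the median, rather than the mean, is linked to the autoregressive moving average structure, which is convenient precisely because the median has this closed form.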
Thermal effects in high average power optical parametric amplifiers.
Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas
2013-03-01
Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.
Partial Averaged Navier-Stokes approach for cavitating flow
International Nuclear Information System (INIS)
Zhang, L; Zhang, Y N
2015-01-01
Partial Averaged Navier-Stokes (PANS) is a numerical approach developed for studying practical engineering problems (e.g. cavitating flow inside hydroturbines) with reasonable cost and accuracy. One of the advantages of PANS is that it is suitable for any filter width, providing a bridging method from traditional Reynolds Averaged Navier-Stokes (RANS) to direct numerical simulation through the choice of appropriate parameters. Compared with RANS, the PANS model inherits much of the physics of its parent RANS model but resolves more scales of motion in greater detail, making PANS superior to RANS. As an important step in the PANS approach, one needs to identify appropriate physical filter-width control parameters, e.g. the ratios of unresolved-to-total kinetic energy and dissipation. In the present paper, recent studies of cavitating flow based on the PANS approach are introduced, with a focus on the influences of the filter-width control parameters on the simulation results.
Results from the average power laser experiment photocathode injector test
International Nuclear Information System (INIS)
Dowell, D.H.; Bethel, S.Z.; Friddell, K.D.
1995-01-01
Tests of the electron beam injector for the Boeing/Los Alamos Average Power Laser Experiment (APLE) have demonstrated first-time operation of a photocathode RF gun accelerator at 25% duty factor. This exceeds previous photocathode operation by three orders of magnitude. The success of these tests was dependent upon the development of reliable and efficient photocathode preparation and processing. This paper describes the fabrication details for photocathodes with quantum efficiencies up to 12% which were used during electron beam operation. Measurements of photocathode lifetime as it depends upon the presence of water vapor are also presented. Observations of photocathode quantum efficiency rejuvenation and extended lifetime in the RF cavities are described. The importance of these effects upon photocathode lifetime during high average power operation is discussed.
Sample size for estimating average productive traits of pigeon pea
Directory of Open Access Journals (Sweden)
Giovani Facco
2016-04-01
Full Text Available ABSTRACT: The objectives of this study were to determine the sample size, in terms of number of plants, needed to estimate the average values of productive traits of the pigeon pea and to determine whether the sample size needed varies between traits and between crop years. Separate uniformity trials were conducted in 2011/2012 and 2012/2013. In each trial, 360 plants were demarcated, and the fresh and dry masses of roots, stems, and leaves and of shoots and the total plant were evaluated during blossoming for 10 productive traits. Descriptive statistics were calculated, normality and randomness were checked, and the sample size was calculated. There was variability in the sample size between the productive traits and crop years of the pigeon pea culture. To estimate the averages of the productive traits with a 20% maximum estimation error and 95% confidence level, 70 plants are sufficient.
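The abstract does not reproduce the paper's exact estimator, but the standard first-pass sample-size formula for estimating a mean with a given relative error, n = (z s / (E x̄))², gives the flavor of the calculation. The trait values below (mean 50 g, standard deviation 40 g) are hypothetical:

```python
import math

def sample_size(mean, sd, rel_error=0.20, z=1.96):
    """Number of plants needed so the confidence-interval half-width is
    rel_error * mean at ~95% confidence. Uses the normal quantile z in
    place of Student's t, a common first-pass approximation."""
    return math.ceil((z * sd / (rel_error * mean)) ** 2)

# Hypothetical trait: shoot dry mass with mean 50 g and sd 40 g.
n_20pct = sample_size(50.0, 40.0)                  # 20% max error -> 62 plants
n_10pct = sample_size(50.0, 40.0, rel_error=0.10)  # halving the error quadruples n
```

Because n grows with the squared coefficient of variation, more variable traits (and more variable crop years) demand larger samples, which is exactly the between-trait, between-year variability the study reports.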
Average diurnal variation of summer lightning over the Florida peninsula
Maier, L. M.; Krider, E. P.; Maier, M. W.
1984-01-01
Data derived from a large network of electric field mills are used to determine the average diurnal variation of lightning in a Florida seacoast environment. The variation at the NASA Kennedy Space Center and the Cape Canaveral Air Force Station area is compared with standard weather observations of thunder, and the variation of all discharges in this area is compared with the statistics of cloud-to-ground flashes over most of the South Florida peninsula and offshore waters. The results show average diurnal variations that are consistent with statistics of thunder start times and the times of maximum thunder frequency, but that the actual lightning tends to stop one to two hours before the recorded thunder. The variation is also consistent with previous determinations of the times of maximum rainfall and maximum rainfall rate.
The B-dot Earth Average Magnetic Field
Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon
2013-01-01
The average Earth's magnetic field is solved with complex mathematical models based on the mean square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic model; it does, however, depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. The solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
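The b-dot controller whose damping the technique exploits commands a magnetic dipole opposing the measured rate of change of the body-frame field, m = -k dB/dt. A minimal sketch of that classic law (the gain and magnetometer samples are hypothetical; the paper's averaging procedure itself is not reproduced):

```python
import numpy as np

def bdot_dipole(b_now, b_prev, dt, gain):
    """Classic b-dot law: commanded magnetic dipole m = -k * dB/dt,
    with dB/dt approximated by a finite difference of body-frame
    magnetometer samples. The resulting torque m x B damps tumbling."""
    b_dot = (np.asarray(b_now, dtype=float) - np.asarray(b_prev, dtype=float)) / dt
    return -gain * b_dot

# Hypothetical body-frame field samples (tesla) taken 0.1 s apart, gain k = 2.0.
m = bdot_dipole([1.0e-5, 0.0, 0.0], [0.9e-5, 0.0, 0.0], 0.1, 2.0)
```

Because the commanded dipole depends only on measured field samples and the torquer gain, it needs no onboard Earth magnetic model, which is the property the paper builds on.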
Pulsar average waveforms and hollow cone beam models
Backer, D. C.
1975-01-01
An analysis of pulsar average waveforms at radio frequencies from 40 MHz to 15 GHz is presented. The analysis is based on the hypothesis that the observer sees one cut of a hollow-cone beam pattern and that stationary properties of the emission vary over the cone. The distributions of apparent cone widths for different observed forms of the average pulse profiles (single, double/unresolved, double/resolved, triple and multiple) are in modest agreement with a model of a circular hollow-cone beam with random observer-spin axis orientation, a random cone axis-spin axis alignment, and a small range of physical hollow-cone parameters for all objects.
Using Mobile Device Samples to Estimate Traffic Volumes
2017-12-01
In this project, TTI worked with StreetLight Data to evaluate a beta version of its traffic volume estimates derived from global positioning system (GPS)-based mobile devices. TTI evaluated the accuracy of average annual daily traffic (AADT) volume estimates.
The definition and computation of average neutron lifetimes
International Nuclear Information System (INIS)
Henry, A.F.
1983-01-01
A precise physical definition is offered for a class of average lifetimes for neutrons in an assembly of materials, either multiplying or not, or if the former, critical or not. A compact theoretical expression for the general member of this class is derived in terms of solutions to the transport equation. Three specific definitions are considered. Particular exact expressions for these are derived and reduced to simple algebraic formulas for one-group and two-group homogeneous bare-core models
A simple consensus algorithm for distributed averaging in random ...
Indian Academy of Sciences (India)
Distributed averaging in random geographical networks. It can be simply proved that for values of the uniform step size σ in the range (0, 1/k_max], with k_max being the maximum degree of the graph, the above system is asymptotically globally convergent to [17]: ∀i, lim_{k→∞} x_i(k) = α = (1/N) ∑_{i=1}^{N} x_i(0), (3) which is ...
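The stated limit, every node's state converging to the average of the initial values for step sizes σ in (0, 1/k_max], can be checked numerically. A small sketch of the linear consensus iteration x_i(k+1) = x_i(k) + σ Σ_j a_ij (x_j(k) - x_i(k)) on a hypothetical 4-node path graph:

```python
import numpy as np

def consensus_average(adj, x0, sigma, iters=500):
    """Linear consensus iteration x(k+1) = x(k) - sigma * L x(k),
    where L is the graph Laplacian. Each node only combines its own
    state with its neighbors' states."""
    adj = np.asarray(adj, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    degrees = adj.sum(axis=1)
    for _ in range(iters):
        x = x + sigma * (adj @ x - degrees * x)
    return x

# 4-node path graph: max degree k_max = 2, so sigma = 1/k_max = 0.5 is admissible.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
x0 = [1.0, 2.0, 3.0, 10.0]
x = consensus_average(adj, x0, sigma=0.5)
# All states converge to the initial average (1+2+3+10)/4 = 4.0.
```

The sum of the states is conserved at every step (L has zero row sums under this symmetric update), which is why the common limit is exactly the initial average.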
Characterizations of Sobolev spaces via averages on balls
Czech Academy of Sciences Publication Activity Database
Dai, F.; Gogatishvili, Amiran; Yang, D.; Yuan, W.
2015-01-01
Roč. 128, November (2015), s. 86-99 ISSN 0362-546X R&D Projects: GA ČR GA13-14743S Institutional support: RVO:67985840 Keywords: Sobolev space * average on ball * difference * Euclidean space * space of homogeneous type Subject RIV: BA - General Mathematics Impact factor: 1.125, year: 2015 http://www.sciencedirect.com/science/article/pii/S0362546X15002618
Forecasting stock market averages to enhance profitable trading strategies
Haefke, Christian; Helmenstein, Christian
1995-01-01
In this paper we design a simple trading strategy to exploit the hypothesized distinct informational content of the arithmetic and geometric mean. The rejection of cointegration between the two stock market indicators supports this conjecture. The profits generated by this cheaply replicable trading scheme cannot be expected to persist. Therefore we forecast the averages using autoregressive linear and neural network models to gain a competitive advantage relative to other investors. Refining...
Average Case Analysis of Java 7's Dual Pivot Quicksort
Wild, Sebastian; Nebel, Markus E.
2013-01-01
Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting method for Oracle's Java 7 runtime library. The decision for the change was based on empirical studies showing that on average, the new algorithm is faster than the formerly used classic Quicksort. Surprisingly, the improvement was achieved by using a dual pivot approach, an idea that was considered not promising by several theoretical studies in the past. In this paper, we identify the reason for this unexpe...
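The dual-pivot idea can be sketched with a Yaroslavskiy-style three-way partition: two pivots p ≤ q split the array into elements < p, elements between p and q, and elements > q. This is an illustrative simplification, not Oracle's tuned Java 7 implementation:

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """In-place dual-pivot quicksort sketch (Yaroslavskiy-style partition)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]            # two pivots, p <= q
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i <= gt:
        if a[i] < p:               # goes to the left (< p) part
            a[i], a[lt] = a[lt], a[i]
            lt += 1
            i += 1
        elif a[i] > q:             # goes to the right (> q) part
            while a[gt] > q and i < gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]
            gt -= 1                # re-examine the swapped-in element
        else:                      # stays in the middle (p..q) part
            i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]    # move the pivots into place
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)
    return a

data = [9, 1, 8, 1, 7, 0, 3, 3, 5, 2]
dual_pivot_quicksort(data)
```

The three-way split is what distinguishes the scheme from classic single-pivot partitioning; the paper's analysis concerns the average comparison and swap counts of exactly this kind of partition.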
Modeling methane emission via the infinite moving average process
Czech Academy of Sciences Publication Activity Database
Jordanova, D.; Dušek, Jiří; Stehlík, M.
2013-01-01
Roč. 122, - (2013), s. 40-49 ISSN 0169-7439 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA ČR(CZ) GAP504/11/1151 Institutional support: RVO:67179843 Keywords: Environmental chemistry * Pareto tails * t-Hill estimator * Weak consistency * Moving average process * Methane emission model Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013
Averaging underwater noise levels for environmental assessment of shipping
Merchant, Nathan D.; Blondel, Philippe; Dakin, D. Tom; Dorocicz, John
2012-01-01
Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ∼10⁷ 1-s samples in selected 1/3-octave bands were approximately stationary across one-month subsamples. Median and mode levels v...
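The choice of averaging statistic matters because sound levels are logarithmic: an energy (intensity-domain) mean is dominated by a few loud ship passages, while the median is not. A sketch with invented sample values, not the Strait of Georgia data:

```python
import math
import statistics

def mean_level_db(levels_db):
    """Energy average: convert dB to linear intensity, average, convert back."""
    lin = [10 ** (L / 10) for L in levels_db]
    return 10 * math.log10(sum(lin) / len(lin))

# 95% quiet ambient samples plus 5% loud ship passages (illustrative values)
samples = [90.0] * 95 + [120.0] * 5
mean_db = mean_level_db(samples)     # pulled far above the ambient level
med_db = statistics.median(samples)  # stays at the ambient level, 90 dB
```

Here the energy mean lands near 107 dB while the median remains 90 dB, which is why the two statistics can tell very different stories about chronic exposure.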
Light-cone averages in a Swiss-cheese universe
International Nuclear Information System (INIS)
Marra, Valerio; Kolb, Edward W.; Matarrese, Sabino
2008-01-01
We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaitre-Tolman-Bondi solution of Einstein's equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the ΛCDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w 0 and w a follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model
Averaging approximation to singularly perturbed nonlinear stochastic wave equations
Lv, Yan; Roberts, A. J.
2012-06-01
An averaging method is applied to derive an effective approximation to a singularly perturbed nonlinear stochastic damped wave equation. A small parameter ν > 0 characterizes the singular perturbation, and ν^α, 0 ⩽ α ⩽ 1/2, parametrizes the strength of the noise. Some scaling transformations and the martingale representation theorem yield the effective approximation, a stochastic nonlinear heat equation, for small ν in the sense of distribution.
Analytical expressions for conditional averages: A numerical test
DEFF Research Database (Denmark)
Pécseli, H.L.; Trulsen, J.
1991-01-01
... Alternatively, for time stationary and homogeneous turbulence, analytical expressions, involving higher-order correlation functions R(n)(r, t), can be derived for the conditional averages. These expressions have the form of series expansions, which have to be truncated for practical applications. The convergence properties of these series are not known, except in the limit of Gaussian statistics. By applying the analysis to numerically simulated ion acoustic turbulence, we demonstrate that by keeping two or three terms in these series an acceptable approximation...
Average Likelihood Methods for Code Division Multiple Access (CDMA)
2014-05-01
the number of unknown variables grows, the averaging process becomes an extremely complex task. In multiuser detection, a closely related problem... Theoretical Background: The classification of DS/CDMA signals should not be confused with the problem of multiuser detection. Multiuser detection deals... beginning of the sequence. For simplicity, our approach will use similar assumptions to those used in multiuser detection, i.e., chip
General and Local: Averaged k-Dependence Bayesian Classifiers
Directory of Open Access Journals (Sweden)
Limin Wang
2015-06-01
Full Text Available The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify changes in interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California, Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.
On the average uncertainty for systems with nonlinear coupling
Nelson, Kenric P.; Umarov, Sabir R.; Kon, Mark A.
2017-02-01
The increased uncertainty and complexity of nonlinear systems have motivated investigators to consider generalized approaches to defining an entropy function. New insights are achieved by defining the average uncertainty in the probability domain as a transformation of entropy functions. The Shannon entropy when transformed to the probability domain is the weighted geometric mean of the probabilities. For the exponential and Gaussian distributions, we show that the weighted geometric mean of the distribution is equal to the density of the distribution at the location plus the scale (i.e. at the width of the distribution). The average uncertainty is generalized via the weighted generalized mean, in which the moment is a function of the nonlinear source. Both the Rényi and Tsallis entropies transform to this definition of the generalized average uncertainty in the probability domain. For the generalized Pareto and Student's t-distributions, which are the maximum entropy distributions for these generalized entropies, the appropriate weighted generalized mean also equals the density of the distribution at the location plus scale. A coupled entropy function is proposed, which is equal to the normalized Tsallis entropy divided by one plus the coupling.
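The stated identity, that Shannon entropy mapped to the probability domain equals the weighted geometric mean of the probabilities, can be checked numerically. A sketch; the discrete distribution and the exponential-rate example are illustrative, not from the paper:

```python
import math

def shannon_entropy(p):
    """Shannon entropy in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def weighted_geometric_mean(p):
    """Weighted geometric mean of the probabilities, weighted by themselves."""
    return math.prod(pi ** pi for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]
# Transforming entropy to the probability domain: exp(-H) equals the
# weighted geometric mean of p.
lhs = weighted_geometric_mean(p)
rhs = math.exp(-shannon_entropy(p))

# Continuous analogue for an exponential density f(x) = r*exp(-r*x):
# exp(E[ln f]) = r/e, which is f evaluated at location + scale = 1/r,
# matching the paper's claim about the density at "location plus scale".
r = 2.0
geo = math.exp(math.log(r) - r * (1.0 / r))
```

The second computation uses E[x] = 1/r for the exponential distribution, so the log-density average reduces to ln r − 1.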
Averaging kernels for DOAS total-column satellite retrievals
Directory of Open Access Journals (Sweden)
H. J. Eskes
2003-01-01
Full Text Available The Differential Optical Absorption Spectroscopy (DOAS) method is used extensively to retrieve total column amounts of trace gases based on UV-visible measurements of satellite spectrometers, such as ERS-2 GOME. In practice the sensitivity of the instrument to the tracer density is strongly height dependent, especially in the troposphere. The resulting tracer profile dependence may introduce large systematic errors in the retrieved columns that are difficult to quantify without proper additional information, as provided by the averaging kernel (AK). In this paper we discuss the DOAS retrieval method in the context of the general retrieval theory as developed by Rodgers. An expression is derived for the DOAS AK for optically thin absorbers. It is shown that the comparison with 3D chemistry-transport models and independent profile measurements, based on averaging kernels, is no longer influenced by errors resulting from a priori profile assumptions. The availability of averaging kernel information as part of the total column retrieval product is important for the interpretation of the observations, and for applications like chemical data assimilation and detailed satellite validation studies.
Directory of Open Access Journals (Sweden)
Yu-Chia Chang
2008-01-01
Full Text Available Three cruises with a shipboard Acoustic Doppler Current Profiler (ADCP) were performed along a transect across the Peng-hu Channel (PHC) in the Taiwan Strait during 2003 - 2004 in order to investigate the feasibility and accuracy of the phase-averaging method for eliminating tidal components from shipboard ADCP measurements of currents. In each cruise the measurement was repeated a number of times along the transect with a specified time lag of either 5, 6.21, or 8 hr, and the repeated data at the same location were averaged to eliminate the tidal currents; this is the so-called "phase-averaging method". We employed 5-phase-averaging, 4-phase-averaging, 3-phase-averaging, and 2-phase-averaging methods in this study. The residual currents and volume transport of the PHC derived from the various phase-averaging methods were intercompared and were also compared with results of the least-square harmonic reduction method proposed by Simpson et al. (1990) and the least-square interpolation method using a Gaussian function (Wang et al. 2004). The estimated uncertainty of the residual flow through the PHC derived from the 5-phase-averaging, 4-phase-averaging, 3-phase-averaging, and 2-phase-averaging methods is 0.3, 0.3, 1.3, and 4.6 cm s⁻¹, respectively. Procedures for choosing the best phase-averaging method to remove tidal currents in any particular region are also suggested.
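The cancellation behind a 2-phase average can be illustrated with a toy current model: sampling the same location half a semidiurnal (M2) period apart puts the tide in antiphase, so the two passes average to the residual flow. All numbers here are invented for illustration, not from the cruises:

```python
import math

M2_PERIOD = 12.42  # hours, principal lunar semidiurnal tidal period

def current(t, residual=0.10, tidal_amp=0.50):
    """Toy along-channel current (m/s): steady residual plus an M2 tide."""
    return residual + tidal_amp * math.sin(2 * math.pi * t / M2_PERIOD)

def phase_average(t0, n_phases, lag):
    """Average repeated passes separated by `lag` hours."""
    return sum(current(t0 + k * lag) for k in range(n_phases)) / n_phases

# Two passes 6.21 h apart: the tidal parts are equal and opposite,
# so the average recovers the 0.10 m/s residual exactly.
est = phase_average(t0=3.7, n_phases=2, lag=M2_PERIOD / 2)
```

With a single constituent the cancellation is exact; with mixed tides and measurement noise the residual uncertainty grows as fewer phases are averaged, consistent with the 0.3 to 4.6 cm s⁻¹ spread reported above.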
Time-dependent density functional theory with twist-averaged boundary conditions
Schuetrumpf, B.; Nazarewicz, W.; Reinhard, P.-G.
2016-05-01
Background: Time-dependent density functional theory is widely used to describe excitations of many-fermion systems. In its many applications, a three-dimensional (3D) coordinate-space representation is used, and infinite-domain calculations are limited to a finite volume represented by a spatial box. For finite quantum systems (atoms, molecules, nuclei, hadrons), the commonly used periodic or reflecting boundary conditions introduce spurious quantization of the continuum states and artificial reflections from the boundary; hence, an incorrect treatment of evaporated particles. Purpose: The finite-volume artifacts for finite systems can be practically cured by invoking an absorbing potential in a certain boundary region sufficiently far from the described system. However, such absorption cannot be applied in the calculations of infinite matter (crystal electrons, quantum fluids, neutron star crust), which suffer from unphysical effects stemming from the finite computational box used. Here, twist-averaged boundary conditions (TABC) have been used successfully to diminish the finite-volume effects. In this work, we extend TABC to time-dependent modes. Method: We use the 3D time-dependent density functional framework with the Skyrme energy density functional. The practical calculations are carried out for small- and large-amplitude electric dipole and quadrupole oscillations of ¹⁶O. We apply and compare three kinds of boundary conditions: periodic, absorbing, and twist-averaged. Results: Calculations employing absorbing boundary conditions (ABC) and TABC are superior to those based on periodic boundary conditions. For low-energy excitations, TABC and ABC variants yield very similar results. With only four twist phases per spatial direction in TABC, one obtains an excellent reduction of spurious fluctuations. In the nonlinear regime, one has to deal with evaporated particles. In TABC, the floating nucleon gas remains in the box; the amount of nucleons in the gas is found to be
Hull, Elizabeth; Dick, Jeremy
2011-01-01
Written for those who want to develop their knowledge of the requirements engineering process, whether practitioners or students. Using the latest research and driven by practical experience from industry, Requirements Engineering gives useful hints to practitioners on how to write and structure requirements. It explains the importance of Systems Engineering and the creation of effective solutions to problems. It describes the underlying representations used in system modeling and introduces UML2, and considers the relationship between requirements and modeling. Covering a generic multi-layer r
Wiegers, Karl E
2003-01-01
Without formal, verifiable software requirements, and an effective system for managing them, the programs that developers think they've agreed to build often will not be the same products their customers are expecting. In SOFTWARE REQUIREMENTS, Second Edition, requirements engineering authority Karl Wiegers amplifies the best practices presented in his original award-winning text, now a mainstay for anyone participating in the software development process. In this book, you'll discover effective techniques for managing the requirements engineering process all the way through the development cy
Functional intravascular volume deficit in patients before surgery
DEFF Research Database (Denmark)
Bundgaard-Nielsen, M; Jørgensen, C C; Secher, N H
2010-01-01
BACKGROUND: Stroke volume (SV) maximization with a colloid infusion, referred to as individualized goal-directed therapy, improves outcome in high-risk surgery. The fraction of patients who need intravascular volume to establish a maximal SV has, however, not been evaluated, and there are only limited data on the volume required to establish a maximal SV before the start of surgery. Therefore, we estimated the occurrence and size of the potential functional intravascular volume deficit in surgical patients. METHODS: Patients scheduled for mastectomy (n=20), open radical prostatectomy (n=20) ... deficit. RESULTS: Forty-two (70%) of the patients needed volume to establish a maximal SV. For the patients needing volume, the required amount was median 200 ml (range 200-600 ml), with no significant difference between the three groups of patients. The required volume was ≥400 ml in nine patients (15...
Proposed Volume Standards for 2018, and the Biomass-Based Diesel Volume for 2019
EPA proposed volume requirements under the Renewable Fuel Standard (RFS) program for 2018 for cellulosic biofuel, biomass-based diesel, advanced biofuel, and total renewable fuel, and biomass-based diesel for 2019 under the RFS.
Estimation of Daily Average Downward Shortwave Radiation over Antarctica
Directory of Open Access Journals (Sweden)
Yingji Zhou
2018-03-01
Full Text Available Surface shortwave (SW) irradiation is the primary driving force of energy exchange at the atmosphere-land interface. The global climate is profoundly influenced by irradiation changes due to the special climatic conditions in Antarctica. Remote-sensing retrieval can offer only instantaneous values in an area, whilst daily cycle and average values are necessary for further studies and applications, including climate change, ecology, and land surface processes. Considering the large values and small diurnal changes of the solar zenith angle and cloud coverage, we develop two methods for the temporal extension of remotely sensed downward SW irradiance over Antarctica. The first is an improved sinusoidal method, and the second is an interpolation method based on cloud fraction change. The instantaneous irradiance data and cloud products are used in both methods to extend the diurnal cycle and obtain the daily average value. Data from the South Pole and Georg von Neumayer stations are used to validate the estimated values. The coefficient of determination (R²) between the estimated daily averages and the measured values based on the first method is 0.93, and the root mean square error (RMSE) is 32.21 W/m² (8.52%). As for the traditional sinusoidal method, the R² and RMSE are 0.68 and 70.32 W/m² (18.59%), respectively. The R² and RMSE of the second method are 0.96 and 25.27 W/m² (6.98%), respectively. These values are better than those of the traditional linear interpolation (0.79 and 57.40 W/m² (15.87%)).
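The traditional sinusoidal method that serves as the baseline above can be sketched as follows: assume irradiance follows a half-sine between sunrise and sunset, scale the instantaneous retrieval to the implied amplitude, and average over 24 h. The function name and all numbers are illustrative assumptions, not from the paper:

```python
import math

def sinusoidal_daily_average(s_inst, t_obs, sunrise, sunset):
    """Sinusoidal temporal extension of one instantaneous SW retrieval.

    Assumes irradiance = A * sin(pi * (t - sunrise) / day_len) during
    daylight and zero at night; A is inferred from the observation.
    """
    day_len = sunset - sunrise                       # hours of daylight
    phase = math.sin(math.pi * (t_obs - sunrise) / day_len)
    amplitude = s_inst / phase                       # implied solar-noon peak
    # Mean of a half-sine over daylight is (2/pi) * amplitude; spread the
    # daylight-only mean over the full 24 h day.
    return amplitude * (2 / math.pi) * day_len / 24.0

# Overpass at local noon, 500 W/m2 instantaneous, 12 h of daylight
daily = sinusoidal_daily_average(500.0, 12.0, 6.0, 18.0)
```

The half-sine assumption is exactly what breaks down at high latitudes with large solar zenith angles and persistent cloud, which motivates the paper's two refined methods.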
On Averaging Timescales for the Surface Energy Budget Closure Problem
Grachev, A. A.; Fairall, C. W.; Persson, O. P. G.; Uttal, T.; Blomquist, B.; McCaffrey, K.
2017-12-01
An accurate determination of the surface energy budget (SEB) and all SEB components at the air-surface interface is of obvious relevance for the numerical modelling of the coupled atmosphere-land/ocean/snow system over different spatial and temporal scales, including climate modelling, weather forecasting, environmental impact studies, and many other applications. This study analyzes and discusses comprehensive measurements of the SEB and the surface energy fluxes (turbulent, radiative, and ground heat) made over different underlying surfaces based on data collected during several field campaigns: hourly-averaged, multiyear data sets collected at two terrestrial long-term research observatories located near the coast of the Arctic Ocean at Eureka (Canadian Archipelago) and Tiksi (East Siberia), and half-hourly averaged fluxes collected during a year-long field campaign (Wind Forecast Improvement Project 2, WFIP 2) at the Columbia River Gorge (Oregon) in areas of complex terrain. Our direct measurements of energy balance show that the sum of the turbulent sensible and latent heat fluxes systematically underestimates the available energy at half-hourly and hourly time scales by around 20-30% at these sites. This imbalance of the surface energy budget is comparable to other terrestrial sites. Surface energy balance closure is a formulation of the conservation of energy principle (the first law of thermodynamics). The lack of energy balance closure at hourly time scales is a fundamental and pervasive problem in micrometeorology and may be caused by inaccurate estimates of the energy storage terms in soils, air, and biomass in the layer below the measurement height and above the heat flux plates. However, the residual energy imbalance is significantly reduced at daily and monthly timescales. Increasing the averaging time to daily scales substantially reduces the storage terms because energy locally entering the soil, air column, and vegetation in the morning is
Exploring JLA supernova data with improved flux-averaging technique
International Nuclear Information System (INIS)
Wang, Shuang; Wen, Sixiang; Li, Miao
2017-01-01
In this work, we explore the cosmological consequences of the "Joint Light-curve Analysis" (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the criterion of figure of merit (FoM) and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the (z_cut, Δz) plane, where z_cut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z_cut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) The best FA recipe is (z_cut = 0.6, Δz = 0.06), which is insensitive to a specific DE parameterization. (2) Flux-averaging JLA samples at z_cut ≥ 0.4 will yield tighter DE constraints than the case without using FA. (3) Using FA can significantly reduce the redshift-evolution of β. (4) The best FA recipe favors a larger fractional matter density Ω_m. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
Average waiting time profiles of uniform DQDB model
Energy Technology Data Exchange (ETDEWEB)
Rao, N.S.V. [Oak Ridge National Lab., TN (United States); Maly, K.; Olariu, S.; Dharanikota, S.; Zhang, L.; Game, D. [Old Dominion Univ., Norfolk, VA (United States). Dept. of Computer Science
1993-09-07
The Distributed Queue Dual Bus (DQDB) system consists of a linear arrangement of N nodes that communicate with each other using two contra-flowing buses; the nodes use an extremely simple protocol to send messages on these buses. This simple, but elegant, system has been found to be very challenging to analyze. We consider a simple and uniform abstraction of this model to highlight the fairness issues in terms of average waiting time. We introduce a new approximation method to analyze the performance of the DQDB system in terms of the average waiting time of a node expressed as a function of its position. Our approach abstracts the intimate relationship between the load of the system and its fairness characteristics, and explains all basic behavior profiles of DQDB observed in previous simulations. For the uniform DQDB with equal distance between adjacent nodes, we show that the system operates under three basic behavior profiles and a finite number of their combinations that depend on the load of the network. Consequently, the system is not fair at any load in terms of the average waiting times. In the vicinity of a critical load of 1 − 4/N, the uniform network runs into a state akin to chaos, where its behavior fluctuates from one extreme to the other with a load variation of 2/N. Our analysis is supported by simulation results. We also show that the main theme of the analysis carries over to the general (non-uniform) DQDB; by suitably choosing the inter-node distances, the DQDB can be made fair around some loads, but such a system will become unfair as the load changes.
Analysis of nonlinear systems using ARMA [autoregressive moving average] models
International Nuclear Information System (INIS)
Hunter, N.F. Jr.
1990-01-01
While many vibration systems exhibit primarily linear behavior, a significant percentage of the systems encountered in vibration and model testing are mildly to severely nonlinear. Analysis methods for such nonlinear systems are not yet well developed and the response of such systems is not accurately predicted by linear models. Nonlinear ARMA (autoregressive moving average) models are one method for the analysis and response prediction of nonlinear vibratory systems. In this paper we review the background of linear and nonlinear ARMA models, and illustrate the application of these models to nonlinear vibration systems. We conclude by summarizing the advantages and disadvantages of ARMA models and emphasizing prospects for future development. 14 refs., 11 figs
Application of autoregressive moving average model in reactor noise analysis
International Nuclear Information System (INIS)
Tran Dinh Tri
1993-01-01
The application of an autoregressive (AR) model to estimating noise measurements has achieved many successes in reactor noise analysis in the last ten years. The physical processes that take place in the nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently more correct results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of the generalised Yule-Walker equations is derived from the equation of an ARMA model, then a method for its solution is given. Numerical results show the applications of the method proposed. (author)
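As a simpler companion to the generalized Yule-Walker system mentioned above, the ordinary Yule-Walker estimate for a pure AR(p) model can be sketched in a few lines: build the Toeplitz system of autocovariances and solve it. The AR(1) simulation, seed, and tolerance are illustrative assumptions, not from the paper:

```python
import random

def autocovariance(x, lag):
    """Biased sample autocovariance at the given lag."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n

def yule_walker_ar(x, p):
    """Solve the Yule-Walker system sum_j a_j r[|i-j|] = r[i+1] for AR(p)
    coefficients, using plain Gaussian elimination with partial pivoting."""
    r = [autocovariance(x, k) for k in range(p + 1)]
    A = [[r[abs(i - j)] for j in range(p)] for i in range(p)]
    b = [r[i + 1] for i in range(p)]
    for col in range(p):                      # forward elimination
        piv = max(range(col, p), key=lambda i: abs(A[i][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for i in range(col + 1, p):
            f = A[i][col] / A[col][col]
            for j in range(col, p):
                A[i][j] -= f * A[col][j]
            b[i] -= f * b[col]
    a = [0.0] * p                             # back substitution
    for i in range(p - 1, -1, -1):
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, p))) / A[i][i]
    return a

# Simulate AR(1): x_t = 0.7 x_{t-1} + e_t, then recover the coefficient.
random.seed(1)
x, prev = [], 0.0
for _ in range(20000):
    prev = 0.7 * prev + random.gauss(0, 1)
    x.append(prev)
phi = yule_walker_ar(x, 1)[0]
```

For a true ARMA process these equations are biased, which is exactly why the paper works with the generalized Yule-Walker system instead.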
Relaxing monotonicity in the identification of local average treatment effects
DEFF Research Database (Denmark)
Huber, Martin; Mellace, Giovanni
In heterogeneous treatment effect models with endogeneity, the identification of the local average treatment effect (LATE) typically relies on an instrument that satisfies two conditions: (i) joint independence of the potential post-instrument variables and the instrument and (ii) monotonicity of the treatment in the instrument, see Imbens and Angrist (1994). We show that identification is still feasible when replacing monotonicity by a strictly weaker local monotonicity condition. We demonstrate that the latter allows identifying the LATEs on the (i) compliers (whose treatment reacts to the instrument...
Effect of random edge failure on the average path length
Energy Technology Data Exchange (ETDEWEB)
Guo Dongchao; Liang Mangui; Li Dandan; Jiang Zhongyuan, E-mail: mgliang58@gmail.com, E-mail: 08112070@bjtu.edu.cn [Institute of Information Science, Beijing Jiaotong University, 100044, Beijing (China)
2011-10-14
We study the effect of random removal of edges on the average path length (APL) in a large class of uncorrelated random networks in which vertices are characterized by hidden variables controlling the attachment of edges between pairs of vertices. A formula for approximating the APL of networks suffering random edge removal is derived first. Then, the formula is confirmed by simulations for classical ER (Erdős and Rényi) random graphs, BA (Barabási and Albert) networks, networks with exponential degree distributions, as well as random networks with asymptotic power-law degree distributions with exponent α > 2. (paper)
Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms
Samir Khaled Safi
2014-01-01
The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. Firstly: when the disturbance terms follow the general covariance matrix structure Cov(w_i, w_j) = S with s_ij ≠ 0 ∀ i ≠ j. Secondly: when the diagonal elements of S are not all identical but s_ij = 0 ∀ i ≠ j, i.e. S = diag(s_11, s_22, ...
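For contrast with the heteroskedastic cases treated above, the classical homoskedastic ACF of an MA(q) process is easy to sketch: with θ_0 = 1, ρ(k) = Σ_i θ_i θ_{i+k} / Σ_i θ_i² for k ≤ q and ρ(k) = 0 beyond lag q. The MA(1) example is illustrative:

```python
def ma_acf(theta, max_lag):
    """Theoretical ACF of MA(q): x_t = w_t + theta_1 w_{t-1} + ... ,
    assuming the classical case of uncorrelated, equal-variance
    disturbances (the special case the paper generalizes away from)."""
    th = [1.0] + list(theta)      # theta_0 = 1
    q = len(th) - 1
    var = sum(t * t for t in th)  # proportional to Var(x_t)
    rho = []
    for k in range(max_lag + 1):
        if k > q:
            rho.append(0.0)       # ACF cuts off after lag q
        else:
            rho.append(sum(th[i] * th[i + k] for i in range(q - k + 1)) / var)
    return rho

# MA(1) with theta = 0.5: rho(1) = 0.5 / (1 + 0.25) = 0.4, rho(k>1) = 0
rho = ma_acf([0.5], 3)
```

The cut-off after lag q survives in the generalized setting; what changes under heteroskedasticity is the form of the nonzero lags.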
Increasing PS-SDOCT SNR using correlated coherent averaging
Petrie, Tracy C.; Ramamoorthy, Sripriya; Jacques, Steven L.; Nuttall, Alfred L.
2013-03-01
Using data from our previously described otoscope1 that uses 1310 nm phase-sensitive spectral domain optical coherence tomography (PS-SDOCT), we demonstrate a software technique for improving the signal-to-noise ratio (SNR). This method is a software post-processing algorithm applicable to generic PS-SDOCT data describing phase versus time at a specific depth position. By sub-sampling the time trace and shifting the phase of the subsamples to maximize their correlation, the subsamples can be coherently averaged, which increases the SNR.
Stochastic Optimal Prediction with Application to Averaged Euler Equations
Energy Technology Data Exchange (ETDEWEB)
Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2017-04-24
Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.
Image Denoising Using Interquartile Range Filter with Local Averaging
Jassim, Firas Ajil
2013-01-01
Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppressing noise from an image is conducted by applying the interquartile range (IQR), which is one of the statistical methods used to detect outliers in a dataset. A window of size k×k was implemented to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The estimation of the noisy pixels was obtained by local averaging. The essential...
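A minimal sketch of the described approach: flag pixels lying outside the local IQR fences of their k×k window and replace them with the average of the window's inlier pixels. The 1.5·IQR fences and clamped boundary handling are assumptions for illustration, not taken from the paper:

```python
import statistics

def iqr_denoise(img, k=3):
    """Replace outlier pixels (outside the local IQR fences of a k x k
    window) with the average of the window's inlier pixels."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            # k x k neighbourhood, clamped at the image borders
            win = [img[j][i]
                   for j in range(max(0, y - r), min(h, y + r + 1))
                   for i in range(max(0, x - r), min(w, x + r + 1))]
            q1, _, q3 = statistics.quantiles(win, n=4)
            lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
            if not lo <= img[y][x] <= hi:          # outlier -> local average
                inliers = [v for v in win if lo <= v <= hi]
                out[y][x] = sum(inliers) / len(inliers)
    return out

# Flat 5x5 image with a single impulse; the impulse is averaged away.
img = [[5] * 5 for _ in range(5)]
img[2][2] = 255
out = iqr_denoise(img, k=3)
```

Because only pixels flagged as outliers are modified, edges and smooth regions pass through unchanged, which is the advantage over plain mean filtering.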
Updated precision measurement of the average lifetime of B hadrons
Abreu, P; Adye, T; Agasi, E; Ajinenko, I; Aleksan, Roy; Alekseev, G D; Alemany, R; Allport, P P; Almehed, S; Amaldi, Ugo; Amato, S; Andreazza, A; Andrieux, M L; Antilogus, P; Apel, W D; Arnoud, Y; Åsman, B; Augustin, J E; Augustinus, A; Baillon, Paul; Bambade, P; Barate, R; Barbi, M S; Barbiellini, Guido; Bardin, Dimitri Yuri; Baroncelli, A; Bärring, O; Barrio, J A; Bartl, Walter; Bates, M J; Battaglia, Marco; Baubillier, M; Baudot, J; Becks, K H; Begalli, M; Beillière, P; Belokopytov, Yu A; Benvenuti, Alberto C; Berggren, M; Bertrand, D; Bianchi, F; Bigi, M; Bilenky, S M; Billoir, P; Bloch, D; Blume, M; Blyth, S; Bolognese, T; Bonesini, M; Bonivento, W; Booth, P S L; Borisov, G; Bosio, C; Bosworth, S; Botner, O; Boudinov, E; Bouquet, B; Bourdarios, C; Bowcock, T J V; Bozzo, M; Branchini, P; Brand, K D; Brenke, T; Brenner, R A; Bricman, C; Brillault, L; Brown, R C A; Brückman, P; Brunet, J M; Bugge, L; Buran, T; Burgsmüller, T; Buschmann, P; Buys, A; Cabrera, S; Caccia, M; Calvi, M; Camacho-Rozas, A J; Camporesi, T; Canale, V; Canepa, M; Cankocak, K; Cao, F; Carena, F; Carroll, L; Caso, Carlo; Castillo-Gimenez, M V; Cattai, A; Cavallo, F R; Cerrito, L; Chabaud, V; Charpentier, P; Chaussard, L; Chauveau, J; Checchia, P; Chelkov, G A; Chen, M; Chierici, R; Chliapnikov, P V; Chochula, P; Chorowicz, V; Chudoba, J; Cindro, V; Collins, P; Contreras, J L; Contri, R; Cortina, E; Cosme, G; Cossutti, F; Crawley, H B; Crennell, D J; Crosetti, G; Cuevas-Maestro, J; Czellar, S; Dahl-Jensen, Erik; Dahm, J; D'Almagne, B; Dam, M; Damgaard, G; Dauncey, P D; Davenport, Martyn; Da Silva, W; Defoix, C; Deghorain, A; Della Ricca, G; Delpierre, P A; Demaria, N; De Angelis, A; de Boer, Wim; De Brabandere, S; De Clercq, C; La Vaissière, C de; De Lotto, B; De Min, A; De Paula, L S; De Saint-Jean, C; Dijkstra, H; Di Ciaccio, Lucia; Djama, F; Dolbeau, J; Dönszelmann, M; Doroba, K; Dracos, M; Drees, J; Drees, K A; Dris, M; Dufour, Y; Edsall, D M; Ehret, R; Eigen, G; Ekelöf, T J C; Ekspong, 
Gösta; Elsing, M; Engel, J P; Ershaidat, N; Erzen, B; Espirito-Santo, M C; Falk, E; Fassouliotis, D; Feindt, Michael; Fenyuk, A; Ferrer, A; Filippas-Tassos, A; Firestone, A; Fischer, P A; Föth, H; Fokitis, E; Fontanelli, F; Formenti, F; Franek, B J; Frenkiel, P; Fries, D E C; Frodesen, A G; Frühwirth, R; Fulda-Quenzer, F; Fuster, J A; Galloni, A; Gamba, D; Gandelman, M; García, C; García, J; Gaspar, C; Gasparini, U; Gavillet, P; Gazis, E N; Gelé, D; Gerber, J P; Gibbs, M; Gokieli, R; Golob, B; Gopal, Gian P; Gorn, L; Górski, M; Guz, Yu; Gracco, Valerio; Graziani, E; Grosdidier, G; Grzelak, K; Gumenyuk, S A; Gunnarsson, P; Günther, M; Guy, J; Hahn, F; Hahn, S; Hajduk, Z; Hallgren, A; Hamacher, K; Hao, W; Harris, F J; Hedberg, V; Henriques, R P; Hernández, J J; Herquet, P; Herr, H; Hessing, T L; Higón, E; Hilke, Hans Jürgen; Hill, T S; Holmgren, S O; Holt, P J; Holthuizen, D J; Hoorelbeke, S; Houlden, M A; Hrubec, Josef; Huet, K; Hultqvist, K; Jackson, J N; Jacobsson, R; Jalocha, P; Janik, R; Jarlskog, C; Jarlskog, G; Jarry, P; Jean-Marie, B; Johansson, E K; Jönsson, L B; Jönsson, P E; Joram, Christian; Juillot, P; Kaiser, M; Kapusta, F; Karafasoulis, K; Karlsson, M; Karvelas, E; Katsanevas, S; Katsoufis, E C; Keränen, R; Khokhlov, Yu A; Khomenko, B A; Khovanskii, N N; King, B J; Kjaer, N J; Klein, H; Klovning, A; Kluit, P M; Köne, B; Kokkinias, P; Koratzinos, M; Korcyl, K; Kourkoumelis, C; Kuznetsov, O; Kramer, P H; Krammer, Manfred; Kreuter, C; Kronkvist, I J; Krumshtein, Z; Krupinski, W; Kubinec, P; Kucewicz, W; Kurvinen, K L; Lacasta, C; Laktineh, I; Lamblot, S; Lamsa, J; Lanceri, L; Lane, D W; Langefeld, P; Last, I; Laugier, J P; Lauhakangas, R; Leder, Gerhard; Ledroit, F; Lefébure, V; Legan, C K; Leitner, R; Lemoigne, Y; Lemonne, J; Lenzen, Georg; Lepeltier, V; Lesiak, T; Liko, D; Lindner, R; Lipniacka, A; Lippi, I; Lörstad, B; Loken, J G; López, J M; Loukas, D; Lutz, P; Lyons, L; MacNaughton, J N; Maehlum, G; Maio, A; Malychev, V; Mandl, F; Marco, J; Marco, R 
P; Maréchal, B; Margoni, M; Marin, J C; Mariotti, C; Markou, A; Maron, T; Martínez-Rivero, C; Martínez-Vidal, F; Martí i García, S; Masik, J; Matorras, F; Matteuzzi, C; Matthiae, Giorgio; Mazzucato, M; McCubbin, M L; McKay, R; McNulty, R; Medbo, J; Merk, M; Meroni, C; Meyer, S; Meyer, W T; Michelotto, M; Migliore, E; Mirabito, L; Mitaroff, Winfried A; Mjörnmark, U; Moa, T; Møller, R; Mönig, K; Monge, M R; Morettini, P; Müller, H; Mundim, L M; Murray, W J; Muryn, B; Myatt, Gerald; Naraghi, F; Navarria, Francesco Luigi; Navas, S; Nawrocki, K; Negri, P; Neumann, W; Nicolaidou, R; Nielsen, B S; Nieuwenhuizen, M; Nikolaenko, V; Niss, P; Nomerotski, A; Normand, Ainsley; Novák, M; Oberschulte-Beckmann, W; Obraztsov, V F; Olshevskii, A G; Onofre, A; Orava, Risto; Österberg, K; Ouraou, A; Paganini, P; Paganoni, M; Pagès, P; Palka, H; Papadopoulou, T D; Papageorgiou, K; Pape, L; Parkes, C; Parodi, F; Passeri, A; Pegoraro, M; Peralta, L; Pernegger, H; Pernicka, Manfred; Perrotta, A; Petridou, C; Petrolini, A; Petrovykh, M; Phillips, H T; Piana, G; Pierre, F; Pimenta, M; Pindo, M; Plaszczynski, S; Podobrin, O; Pol, M E; Polok, G; Poropat, P; Pozdnyakov, V; Prest, M; Privitera, P; Pukhaeva, N; Pullia, Antonio; Radojicic, D; Ragazzi, S; Rahmani, H; Ratoff, P N; Read, A L; Reale, M; Rebecchi, P; Redaelli, N G; Regler, Meinhard; Reid, D; Renton, P B; Resvanis, L K; Richard, F; Richardson, J; Rídky, J; Rinaudo, G; Ripp, I; Romero, A; Roncagliolo, I; Ronchese, P; Ronjin, V M; Roos, L; Rosenberg, E I; Rosso, E; Roudeau, Patrick; Rovelli, T; Rückstuhl, W; Ruhlmann-Kleider, V; Ruiz, A; Rybicki, K; Saarikko, H; Sacquin, Yu; Sadovskii, A; Sajot, G; Salt, J; Sánchez, J; Sannino, M; Schimmelpfennig, M; Schneider, H; Schwickerath, U; Schyns, M A E; Sciolla, G; Scuri, F; Seager, P; Sedykh, Yu; Segar, A M; Seitz, A; Sekulin, R L; Shellard, R C; Siccama, I; Siegrist, P; Simonetti, S; Simonetto, F; Sissakian, A N; Sitár, B; Skaali, T B; Smadja, G; Smirnov, N; Smirnova, O G; Smith, G R; 
Solovyanov, O; Sosnowski, R; Souza-Santos, D; Spassoff, Tz; Spiriti, E; Sponholz, P; Squarcia, S; Stanescu, C; Stapnes, Steinar; Stavitski, I; Stichelbaut, F; Stocchi, A; Strauss, J; Strub, R; Stugu, B; Szczekowski, M; Szeptycka, M; Tabarelli de Fatis, T; Tavernet, J P; Chikilev, O G; Tilquin, A; Timmermans, J; Tkatchev, L G; Todorov, T; Toet, D Z; Tomaradze, A G; Tomé, B; Tonazzo, A; Tortora, L; Tranströmer, G; Treille, D; Trischuk, W; Tristram, G; Trombini, A; Troncon, C; Tsirou, A L; Turluer, M L; Tyapkin, I A; Tyndel, M; Tzamarias, S; Überschär, B; Ullaland, O; Uvarov, V; Valenti, G; Vallazza, E; Van der Velde, C; van Apeldoorn, G W; van Dam, P; Van Doninck, W K; Van Eldik, J; Vassilopoulos, N; Vegni, G; Ventura, L; Venus, W A; Verbeure, F; Verlato, M; Vertogradov, L S; Vilanova, D; Vincent, P; Vitale, L; Vlasov, E; Vodopyanov, A S; Vrba, V; Wahlen, H; Walck, C; Weierstall, M; Weilhammer, Peter; Weiser, C; Wetherell, Alan M; Wicke, D; Wickens, J H; Wielers, M; Wilkinson, G R; Williams, W S C; Winter, M; Witek, M; Woschnagg, K; Yip, K; Yushchenko, O P; Zach, F; Zaitsev, A; Zalewska-Bak, A; Zalewski, Piotr; Zavrtanik, D; Zevgolatakos, E; Zimin, N I; Zito, M; Zontar, D; Zuberi, R; Zucchelli, G C; Zumerle, G; Belokopytov, Yu; Charpentier, Ph; Gavillet, Ph; Gouz, Yu; Jarlskog, Ch; Khokhlov, Yu; Papadopoulou, Th D
1996-01-01
The measurement of the average lifetime of B hadrons using inclusively reconstructed secondary vertices has been updated using both an improved processing of previous data and additional statistics from new data. This has reduced the statistical and systematic uncertainties and gives τ_B = 1.582 ± 0.011 (stat.) ± 0.027 (syst.) ps. Combining this result with the previous result based on charged particle impact parameter distributions yields τ_B = 1.575 ± 0.010 (stat.) ± 0.026 (syst.) ps.
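The published combination accounts for correlations between systematic uncertainties, which a simple formula cannot reproduce. As a rough illustration only, the inverse-variance weighted average of two independent measurements might be sketched as follows (the input values are hypothetical):

```python
def weighted_average(values, errors):
    """Inverse-variance weighted average of independent measurements.

    Returns the combined value and its uncertainty. This ignores the
    correlated systematic uncertainties that a real combination, such
    as the one quoted in the paper, must take into account.
    """
    weights = [1.0 / e ** 2 for e in errors]
    total = sum(weights)
    value = sum(v * w for v, w in zip(values, weights)) / total
    return value, total ** -0.5

# Two hypothetical lifetime measurements (ps) with total uncertainties
val, err = weighted_average([1.582, 1.564], [0.029, 0.035])
```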
Domain averaged Fermi hole analysis for open-shell systems.
Ponec, Robert; Feixas, Ferran
2009-05-14
This article reports the extension of the new original methodology for the analysis and visualization of bonding interactions, known as the analysis of domain averaged Fermi holes (DAFH), to open-shell systems. The proposed generalization is based on a straightforward reformulation of the original approach within the framework of the unrestricted Hartree-Fock (UHF) and/or unrestricted Kohn-Sham (UKS) levels of theory. The application of the new methodology is demonstrated in a detailed analysis of the picture of the bonding in several simple systems involving the doublet state of the radical cation NH3(+) and the triplet ground state of the O2 molecule.
Control of average spacing of OMCVD grown gold nanoparticles
Rezaee, Asad
The field of metallic nanostructures and their applications is rapidly expanding. Noble metals such as silver and gold have historically been used to demonstrate plasmon effects because of their strong resonances, which occur in the visible part of the electromagnetic spectrum. Localized surface plasmon resonance (LSPR) produces an enhanced electromagnetic field at the interface between a gold nanoparticle (Au NP) and the surrounding dielectric. This enhanced field can be used for metal-dielectric interface-sensitive optical interactions that form a powerful basis for optical sensing. In addition to the surrounding material, the LSPR spectral position and width depend on the size, shape, and average spacing between these particles. Au NP LSPR-based sensors show their highest sensitivity with optimized parameters and usually operate by investigating absorption peak shifts. The absorption peak of randomly deposited Au NPs on surfaces is mostly broad. As a result, the absorption peak shifts upon binding of a material onto Au NPs might not be very clear for further analysis. Therefore, novel methods based on three well-known techniques, self-assembly, ion irradiation, and organometallic chemical vapour deposition (OMCVD), are introduced to control the average spacing between Au NPs. In addition to covalent binding and other advantages of OMCVD-grown Au NPs, interesting optical features due to their non-spherical shapes are presented. The first step towards average-spacing control is to uniformly form self-assembled monolayers (SAMs) of octadecyltrichlorosilane (OTS) as resists for OMCVD Au NPs. The formation and optimization of the OTS SAMs are extensively studied. The optimized resist SAMs are ion-irradiated by a focused ion beam (FIB) and by ions generated by a Tandem accelerator. The irradiated areas are refilled with 3-mercaptopropyl-trimethoxysilane (MPTS) to provide nucleation sites for the OMCVD Au NP growth. Each step during sample preparation is monitored by
Concentration fluctuations and averaging time in vapor clouds
Wilson, David J
2010-01-01
This book contributes to more reliable and realistic predictions by focusing on sampling times from a few seconds to a few hours. Its objectives include developing clear definitions of statistical terms, such as plume sampling time, concentration averaging time, receptor exposure time, and other terms often confused with each other or incorrectly specified in hazard assessments; identifying and quantifying situations for which there is no adequate knowledge to predict concentration fluctuations in the near-field, close to sources, and far downwind where dispersion is dominated by atmospheric t
Edgeworth expansion for the pre-averaging estimator
DEFF Research Database (Denmark)
Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro
In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second order expansion for the joint density of the estimators of quadratic variation and its asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions.
Theory of oscillations in average crisis-induced transient lifetimes.
Kacperski, K; Hołyst, J A
1999-07-01
An analytical and numerical study is presented of the roughly periodic oscillations emerging on the background of the well-known power law governing the scaling of the average lifetimes of crisis-induced chaotic transients. The explicit formula giving the amplitude of "normal" oscillations in terms of the eigenvalues of the unstable orbits involved in the crisis is obtained using a simple geometrical model. We also discuss the commonly encountered situation in which normal oscillations appear together with "anomalous" ones caused by the fractal structure of the basins of attraction.
Energy Technology Data Exchange (ETDEWEB)
Alessi, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-11-01
Pulse compressors for ultrafast lasers have been identified as a technology gap in the push towards high peak power systems with high average powers for industrial and scientific applications. Gratings for ultrashort (sub-150fs) pulse compressors are metallic and can absorb a significant percentage of laser energy resulting in up to 40% loss as well as thermal issues which degrade on-target performance. We have developed a next generation gold grating technology which we have scaled to the petawatt-size. This resulted in improvements in efficiency, uniformity and processing as compared to previous substrate etched gratings for high average power. This new design has a deposited dielectric material for the grating ridge rather than etching directly into the glass substrate. It has been observed that average powers as low as 1W in a compressor can cause distortions in the on-target beam. We have developed and tested a method of actively cooling diffraction gratings which, in the case of gold gratings, can support a petawatt peak power laser with up to 600W average power. We demonstrated thermo-mechanical modeling of a grating in its use environment and benchmarked with experimental measurement. Multilayer dielectric (MLD) gratings are not yet used for these high peak power, ultrashort pulse durations due to their design challenges. We have designed and fabricated broad bandwidth, low dispersion MLD gratings suitable for delivering 30 fs pulses at high average power. This new grating design requires the use of a novel Out Of Plane (OOP) compressor, which we have modeled, designed, built and tested. This prototype compressor yielded a transmission of 90% for a pulse with 45 nm bandwidth, and free of spatial and angular chirp. In order to evaluate gratings and compressors built in this project we have commissioned a joule-class ultrafast Ti:Sapphire laser system. Combining the grating cooling and MLD technologies developed here could enable petawatt laser systems to
Averaged model to study long-term dynamics of a probe about Mercury
Tresaco, Eva; Carvalho, Jean Paulo S.; Prado, Antonio F. B. A.; Elipe, Antonio; de Moraes, Rodolpho Vilhena
2018-02-01
This paper provides a method for finding initial conditions of frozen orbits for a probe around Mercury. Frozen orbits are those whose orbital elements remain constant on average. Thus, at the same point in each orbit, the satellite always passes at the same altitude. This is very interesting for scientific missions that require close inspection of any celestial body. The orbital dynamics of an artificial satellite about Mercury is governed by the potential attraction of the main body. Besides the Keplerian attraction, we consider the inhomogeneities of the potential of the central body. We include secondary terms of Mercury's gravity field from J_2 up to J_6, and the tesseral harmonic \overline{C}_{22}, which is of the same magnitude as the zonal J_2. In the case of science missions about Mercury, it is also important to consider the third-body perturbation of the Sun. The circular restricted three-body problem cannot be applied to the Mercury-Sun system because of Mercury's non-negligible orbital eccentricity. Besides the harmonic coefficients of Mercury's gravitational potential and the Sun's gravitational perturbation, our averaged model also includes solar radiation pressure. This simplified model captures the majority of the dynamics of low and high orbits about Mercury. In order to capture the dominant characteristics of the dynamics, short-period terms of the system are removed by applying a double-averaging technique. This algorithm is a two-fold process which firstly averages over the period of the satellite, and secondly averages with respect to the period of the third body. This simplified Hamiltonian model is introduced into the Lagrange planetary equations. Thus, frozen orbits are characterized by a surface depending on three variables: the orbital semimajor axis, eccentricity and inclination. We find frozen orbits for an average altitude of 400 and 1000 km, which are the predicted values for the BepiColombo mission. Finally, the paper delves into the orbital stability of frozen
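The single-averaging step, removing short-period terms by averaging over the satellite's orbital period, can be illustrated numerically. A minimal sketch under simplifying assumptions (a Keplerian orbit, a midpoint rule over mean anomaly; all function names are illustrative):

```python
import math

def kepler_E(M, e, tol=1e-13):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M
    for _ in range(60):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def orbit_average(f, e, n=2000):
    """Time average of f(r/a) over one orbit.

    Because dt is proportional to dM, averaging over the mean anomaly M
    equals averaging over time (midpoint rule over one period).
    """
    total = 0.0
    for i in range(n):
        M = 2.0 * math.pi * (i + 0.5) / n
        E = kepler_E(M, e)
        total += f(1.0 - e * math.cos(E))  # r/a = 1 - e*cos(E)
    return total / n

# Classical checks: <a/r> = 1 exactly, and <r/a> = 1 + e^2/2
avg_inv = orbit_average(lambda r: 1.0 / r, 0.2)
avg_r = orbit_average(lambda r: r, 0.2)
```

The same idea, applied analytically to the disturbing function and then repeated over the third body's period, is what the double-averaging technique in the paper automates.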
Steady state rheology from homogeneous and locally averaged simple shear simulations
Shi, Hao; Luding, Stefan; Magnanimo, Vanessa
2017-06-01
Granular materials and particulate matter are ubiquitous in our daily life and display interesting bulk behaviors, from static to dynamic, solid- to fluid- or gas-like states, or even all of these states together. Understanding how the microstructure and inter-particle forces influence the macroscopic bulk behavior is still a great challenge today. This short paper presents stress-controlled homogeneous simple shear results in a 3D cuboidal box using the MercuryDPM software. An improved rheological model is proposed for the macroscopic friction, volume fraction and coordination number as functions of the inertial number and pressure. In addition, the results are compared with locally averaged data from steady-state shear bands in a split-bottom ring shear cell, and very good agreement is observed in the low to intermediate inertia regime at various confining pressures, but not for high-inertia collisional granular flow.
International Nuclear Information System (INIS)
Azcoiti, V.; Cruz, A.; Di Carlo, G.; Grillo, A.F.; Vladikas, A.
1991-01-01
We attempt to increase the efficiency of simulations of dynamical fermions on the lattice by calculating the fermionic determinant just once for all the values of the theory's gauge coupling and flavor number. Our proposal is based on the determination of an effective fermionic action by the calculation of the fermionic determinant averaged over configurations at fixed gauge energy. The feasibility of our method is justified by the observed volume dependence of the fluctuations of the logarithm of the determinant. The algorithm we have used in order to calculate the fermionic determinant, based on the determination of all the eigenvalues of the fermionic matrix at zero mass, also enables us to obtain results at any fermion mass, with a single fermionic simulation. We test the method by simulating compact lattice QED, finding good agreement with other standard calculations. New results on the phase transition of compact QED with massless fermions on 6^4 and 8^4 lattices are also presented.
MONTHLY AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN
Directory of Open Access Journals (Sweden)
VIGH MELINDA
2014-03-01
Full Text Available The Râul Negru hydrographic basin is a well-individualised and relatively homogeneous physical-geographical unit within the Braşov Depression. The flow is monitored by six hydrometric stations placed on the main collector and on two of the most powerful tributaries. The analysis period covers the last 25 years (1988-2012), which is sufficient to draw pertinent conclusions. The maximum-discharge month is April, which falls within the high-flow period of March-June. Minimum discharges appear in November, because of the lack of pluvial precipitation, and in January, because precipitation falls mostly as snow and water volume is retained in ice. The frequencies of extreme discharges vary according to basin position: small basin surfaces in the mountain area, large basin surfaces in the depression. The variation coefficients follow very similar patterns, showing a relative homogeneity of the flow processes.
Vibrationally averaged dipole moments of methane and benzene isotopologues
Energy Technology Data Exchange (ETDEWEB)
Arapiraca, A. F. C. [Laboratório de Átomos e Moléculas Especiais, Departamento de Física, ICEx, Universidade Federal de Minas Gerais, P. O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Centro Federal de Educação Tecnológica de Minas Gerais, Coordenação de Ciências, CEFET-MG, Campus I, 30.421-169 Belo Horizonte, MG (Brazil); Mohallem, J. R., E-mail: rachid@fisica.ufmg.br [Laboratório de Átomos e Moléculas Especiais, Departamento de Física, ICEx, Universidade Federal de Minas Gerais, P. O. Box 702, 30123-970 Belo Horizonte, MG (Brazil)
2016-04-14
DFT-B3LYP post-Born-Oppenheimer (finite-nuclear-mass-correction (FNMC)) calculations of vibrationally averaged isotopic dipole moments of methane and benzene, which compare well with experimental values, are reported. For methane, in addition to the principal vibrational contribution to the molecular asymmetry, FNMC accounts for the surprisingly large Born-Oppenheimer error of about 34% to the dipole moments. This unexpected result is explained in terms of concurrent electronic and vibrational contributions. The calculated dipole moment of C6H3D3 is about twice as large as the measured dipole moment of C6H5D. Computational progress is advanced concerning applications to larger systems and the choice of appropriate basis sets. The simpler procedure of performing vibrational averaging on the Born-Oppenheimer level and then adding the FNMC contribution evaluated at the equilibrium distance is shown to be appropriate. Also, the basis set choice is made by heuristic analysis of the physical behavior of the systems, instead of by comparison with experiments.
Face averages enhance user recognition for smartphone security.
Directory of Open Access Journals (Sweden)
David J Robertson
Full Text Available Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms
Directory of Open Access Journals (Sweden)
Samir Khaled Safi
2014-02-01
Full Text Available The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. Firstly, the disturbance terms follow the general covariance matrix structure Cov(w_i, w_j) = S with s_ij ≠ 0 for all i ≠ j. Secondly, the diagonal elements of S are not all identical but s_ij = 0 for all i ≠ j, i.e. S = diag(s_11, s_22, …, s_tt). The forms of the explicit equations depend essentially on the moving average coefficients and the covariance structure of the disturbance terms.
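For the diagonal-covariance case, the autocovariance of an MA(q) process can be written down directly from the moving average coefficients and the time-varying disturbance variances. A minimal sketch (assuming θ_0 = 1; the function name and calling convention are illustrative, not the paper's notation):

```python
def ma_autocovariance(theta, sigma2, t, k):
    """Cov(y_t, y_{t+k}) for y_t = sum_i theta[i] * w_{t-i}, with
    uncorrelated disturbances of time-varying variance sigma2[t]
    (the diagonal case, S = diag(s_11, ..., s_tt)).

    theta includes theta[0] = 1; requires t >= len(theta) - 1.
    """
    q = len(theta) - 1
    if k > q:
        return 0.0  # an MA(q) autocovariance vanishes beyond lag q
    # only disturbances shared by y_t and y_{t+k} contribute
    return sum(theta[i] * theta[i + k] * sigma2[t - i]
               for i in range(q - k + 1))

# Homoskedastic sanity check: MA(1) with theta_1 = 0.5
g0 = ma_autocovariance([1.0, 0.5], [1.0] * 10, 5, 0)  # variance
g1 = ma_autocovariance([1.0, 0.5], [1.0] * 10, 5, 1)  # lag-1 cov
```

In the homoskedastic limit this reduces to the classical result ACF(1) = θ/(1 + θ²); with unequal variances the value depends on t, which is exactly the heteroskedastic effect the paper makes explicit.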
Large interface simulation in an averaged two-fluid code
International Nuclear Information System (INIS)
Henriques, A.
2006-01-01
Different ranges of sizes of interfaces and eddies are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of sizes. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to separate Large Interface (LI) simulation from Small Interface (SI) modelling. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms is done by calling on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop an LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer term modelling and the LI recognition are validated on analytical and experimental tests. A square-base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibited regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author) [fr]
Image Compression Using Moving Average Histogram and RBF Network
Directory of Open Access Journals (Sweden)
Sandar khowaja
2016-04-01
Full Text Available Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have the lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing color intensity levels using a moving average histogram technique, followed by the correction of color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low-resolution images for testing purposes, but the proposed method has been tested on various image resolutions to give a clear assessment of the technique. The proposed method has been tested on 35 images of varying resolution and compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio) and computational complexity. The outcome shows that the proposed methodology is a better trade-off technique in terms of compression ratio, PSNR, which determines the quality of the image, and computational complexity.
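The first stage, smoothing an intensity histogram with a moving average before reducing the number of levels, might be sketched as follows. This is a simplified interpretation only: the paper's exact level-reduction procedure and the RBF correction stage are not reproduced, and both function names are illustrative.

```python
def moving_average_histogram(hist, window=3):
    """Smooth a histogram with a centered moving average,
    shrinking the window at the edges."""
    half = window // 2
    smoothed = []
    for i in range(len(hist)):
        lo, hi = max(0, i - half), min(len(hist), i + half + 1)
        smoothed.append(sum(hist[lo:hi]) / (hi - lo))
    return smoothed

def quantize_levels(value, levels=16, maximum=256):
    """Map an 8-bit intensity to one of `levels` uniformly spaced
    levels -- a crude stand-in for intensity-level reduction."""
    step = maximum // levels
    return (value // step) * step

smooth = moving_average_histogram([0, 0, 4, 0, 0], window=3)
```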
Eighth CW and High Average Power RF Workshop
2014-01-01
We are pleased to announce the next Continuous Wave and High Average RF Power Workshop, CWRF2014, to take place at Hotel NH Trieste, Trieste, Italy from 13 to 16 May, 2014. This is the eighth in the CWRF workshop series and will be hosted by Elettra - Sincrotrone Trieste S.C.p.A. (www.elettra.eu). CWRF2014 will provide an opportunity for designers and users of CW and high average power RF systems to meet and interact in a convivial environment to share experiences and ideas on applications which utilize high-power klystrons, gridded tubes, combined solid-state architectures, high-voltage power supplies, high-voltage modulators, high-power combiners, circulators, cavities, power couplers and tuners. New ideas for high-power RF system upgrades and novel ways of RF power generation and distribution will also be discussed. CWRF2014 sessions will start on Tuesday morning and will conclude on Friday lunchtime. A visit to Elettra and FERMI will be organized during the workshop. ORGANIZING COMMITTEE (OC): Al...
Average gluon and quark jet multiplicities at higher orders
Energy Technology Data Exchange (ETDEWEB)
Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics
2013-05-15
We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q^2 terms by the renormalization group, in excellent agreement with the present world average.
Monthly streamflow forecasting with auto-regressive integrated moving average
Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani
2017-09-01
Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang was gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
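The core ARIMA idea, differencing the series and then fitting an autoregression, can be sketched in a few lines. This toy ARIMA(1,1,0) one-step forecast fits the AR coefficient by least squares; it is a hedged stand-in for the full ARIMA and SSA machinery used in the paper, not the authors' implementation:

```python
def arima_110_forecast(series):
    """One-step forecast from an ARIMA(1,1,0) sketch:
    difference once, fit AR(1) by least squares on the differences,
    then integrate the predicted difference back."""
    d = [series[i + 1] - series[i] for i in range(len(series) - 1)]
    x, y = d[:-1], d[1:]  # regress each difference on its predecessor
    phi = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    return series[-1] + phi * d[-1]

# A series with a constant trend is forecast exactly by this sketch
forecast = arima_110_forecast([10.0, 12.0, 14.0, 16.0, 18.0])
```

A production model would instead estimate (p, d, q) orders from the data, e.g. with a library such as statsmodels, and evaluate forecasts with RMSE and MAE as the paper does.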
Face averages enhance user recognition for smartphone security.
Robertson, David J; Kramer, Robin S S; Burton, A Mike
2015-01-01
Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
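At its core, the 'face-average' representation is a pixel-wise mean over images of the same person. A minimal sketch, assuming pre-aligned, equal-sized grayscale images (real systems first align facial landmarks; the function name is illustrative):

```python
def face_average(images):
    """Pixel-wise mean of a list of equal-sized 2D grayscale images
    (each a list of rows). Assumes the faces are already aligned."""
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(cols)]
            for r in range(rows)]

# Two tiny 1x2 "images" average to [[1.0, 3.0]]
avg = face_average([[[0, 2]], [[2, 4]]])
```

Averaging washes out image-specific variation (lighting, pose, expression) while preserving the stable identity signal, which is why the stored average generalises better than any single enrolment image.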
Variability of average SUV from several hottest voxels is lower than that of SUVmax and SUVpeak
Energy Technology Data Exchange (ETDEWEB)
Laffon, E. [CHU de Bordeaux, Service de Medecine Nucleaire, Hopital du Haut-Leveque, Pessac (France); Universite de Bordeaux 2, Centre de Recherche Cardio-Thoracique, Bordeaux (France); INSERM U 1045, Centre de Recherche Cardio-Thoracique, Bordeaux (France); Lamare, F.; Clermont, H. de [CHU de Bordeaux, Service de Medecine Nucleaire, Hopital du Haut-Leveque, Pessac (France); Burger, I.A. [University Hospital of Zurich, Division of Nuclear Medicine, Department Medical Radiology, Zurich (Switzerland); Marthan, R. [Universite de Bordeaux 2, Centre de Recherche Cardio-Thoracique, Bordeaux (France); INSERM U 1045, Centre de Recherche Cardio-Thoracique, Bordeaux (France)
2014-08-15
To assess variability of the average standard uptake value (SUV) computed by varying the number of hottest voxels within an {sup 18}F-fluorodeoxyglucose ({sup 18}F-FDG)-positive lesion. This SUV metric was compared with the maximal SUV (SUV{sub max}: the hottest voxel) and peak SUV (SUV{sub peak}: SUV{sub max} and its 26 neighbouring voxels). Twelve lung cancer patients (20 lesions) were analysed using PET dynamic acquisition involving ten successive 2.5-min frames. In each frame and lesion, the average SUV obtained from the N = 5, 10, 15, 20, 25 or 30 hottest voxels (SUV{sub max-N}), SUV{sub max} and SUV{sub peak} were assessed. The relative standard deviations (SDr) over the ten frames were calculated for each SUV metric and lesion, yielding the mean relative SD over the 20 lesions for each SUV metric (SDr{sub N}, SDr{sub max} and SDr{sub peak}), and hence the relative measurement error and repeatability (MEr-R). For each N, SDr{sub N} was significantly lower than SDr{sub max} and SDr{sub peak}. SDr{sub N} correlated strongly with N: 6.471 x N{sup -0.103} (r = 0.994; P < 0.01). MEr-R of SUV{sub max-30} was 8.94-12.63 % (95 % CL), versus 13.86-19.59 % and 13.41-18.95 % for SUV{sub max} and SUV{sub peak} respectively. Variability of SUV{sub max-N} is significantly lower than that of SUV{sub max} and SUV{sub peak}. Further prospective studies should be performed to determine the optimal total hottest volume, as voxel volume may depend on the PET system. (orig.)
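The SUV(max-N) metric described above reduces to averaging the N hottest voxels in a lesion. A minimal sketch, with synthetic SUV values standing in for real PET data:

```python
import numpy as np

def suv_max_n(suv_voxels, n):
    """Average SUV of the n hottest voxels in a lesion."""
    flat = np.sort(np.ravel(suv_voxels))[::-1]  # sort voxel SUVs, hottest first
    return float(flat[:n].mean())

# Toy lesion: ten voxel SUVs (illustrative values, not patient data)
lesion = np.array([2.1, 9.8, 3.4, 8.7, 7.9, 4.2, 8.1, 5.5, 9.1, 6.6])

suv_max = suv_max_n(lesion, 1)    # n = 1 recovers SUVmax (the hottest voxel) -> 9.8
suv_max_5 = suv_max_n(lesion, 5)  # mean of the 5 hottest: (9.8+9.1+8.7+8.1+7.9)/5 = 8.72
print(suv_max, suv_max_5)
```

Averaging over several hot voxels damps the single-voxel noise that makes SUVmax variable, which is the mechanism behind the lower SDr reported above.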
Average rainwater pH, concepts of atmospheric acidity, and buffering in open systems
Liljestrand, Howard M.
The system of water equilibrated with a constant partial pressure of CO2, as a reference point for pH acidity-alkalinity relationships, has nonvolatile acidity and alkalinity components as conservative quantities, but not [H+]. Simple algorithms are presented for the determination of the average pH for combinations of samples both above and below pH 5.6. Averaging the nonconservative quantity [H+] yields erroneously low mean pH values. To extend the open CO2 system to include other volatile atmospheric acids and bases distributed among the gas, liquid and particulate matter phases, a theoretical framework for atmospheric acidity is presented. Within certain oxidation-reduction limitations, the total atmospheric acidity (but not free acidity) is a conservative quantity. The concept of atmospheric acidity is applied to air-water systems approximating aerosols, fogwater, cloudwater and rainwater. The buffer intensity in hydrometeors is described as a function of net strong acidity, partial pressures of acid and base gases, and the water to air ratio. For high liquid to air volume ratios, the equilibrium partial pressures of trace acid and base gases are set by the pH or net acidity controlled by the nonvolatile acid and base concentrations. For low water to air volume ratios, as well as stationary-state systems such as precipitation scavenging with continuous emissions, the partial pressures of trace gases (NH3, HCl, HNO3, SO2 and CH3COOH) appear to be of equal or greater importance than carbonate species as buffers in the aqueous phase.
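The bias from averaging the nonconservative quantity [H+] is easy to demonstrate. A minimal sketch with illustrative pH samples (this shows only the pitfall; the paper's algorithms instead average the conservative acidity-alkalinity components):

```python
import numpy as np

# Three rain samples with illustrative pH values
ph_samples = np.array([4.0, 5.0, 6.0])
h_conc = 10.0 ** (-ph_samples)           # [H+] in mol/L

ph_of_mean_h = -np.log10(h_conc.mean())  # naive: average [H+], then convert to pH
mean_of_ph = ph_samples.mean()           # arithmetic mean of the pH values

# The pH-4 sample dominates the mean [H+], dragging the "average pH" low.
print(round(ph_of_mean_h, 2), mean_of_ph)
```

Because [H+] spans orders of magnitude, the most acidic sample dominates its arithmetic mean, producing the erroneously low mean pH values the abstract warns about.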
Average rainwater pH, concepts of atmospheric acidity, and buffering in open systems
Energy Technology Data Exchange (ETDEWEB)
Liljestrand, H.M.
1985-01-01
The system of water equilibrated with a constant partial pressure of CO/sub 2/, as a reference point for pH acidity-alkalinity relationships, has nonvolatile acidity and alkalinity components as conservative quantities, but not (H/sup +/). Simple algorithms are presented for the determination of the average pH for combinations of samples both above and below pH 5.6. Averaging the nonconservative quantity (H/sup +/) yields erroneously low mean pH values. To extend the open CO/sub 2/ system to include other volatile atmospheric acids and bases distributed among the gas, liquid and particulate matter phases, a theoretical framework for atmospheric acidity is presented. Within certain oxidation-reduction limitations, the total atmospheric acidity (but not free acidity) is a conservative quantity. The concept of atmospheric acidity is applied to air-water systems approximating aerosols, fogwater, cloudwater and rainwater. The buffer intensity in hydrometeors is described as a function of net strong acidity, partial pressures of acid and base gases, and the water to air ratio. For high liquid to air volume ratios, the equilibrium partial pressures of trace acid and base gases are set by the pH or net acidity controlled by the nonvolatile acid and base concentrations. For low water to air volume ratios, as well as stationary-state systems such as precipitation scavenging with continuous emissions, the partial pressures of trace gases (NH/sub 3/, HCl, HNO/sub 3/, SO/sub 2/, and CH/sub 3/COOH) appear to be of equal or greater importance than carbonate species as buffers in the aqueous phase.
Site Environmental Report for 1999 - Volume 1
Energy Technology Data Exchange (ETDEWEB)
Ruggieri, M
2000-08-12
Each year, Ernest Orlando Lawrence Berkeley National Laboratory prepares an integrated report on its environmental programs to satisfy the requirements of United States Department of Energy Order 231.1. The Site Environmental Report for 1999 is intended to summarize Berkeley Lab's compliance with environmental standards and requirements, characterize environmental management efforts through surveillance and monitoring activities, and highlight significant programs and efforts for calendar year 1999. The report is separated into two volumes. Volume I contains a general overview of the Laboratory, the status of environmental programs, and summary results from surveillance and monitoring activities. Each chapter in Volume I begins with an outline of the sections that follow, including any tables or figures found in the chapter. Readers should use section numbers (e.g., §1.5) as navigational tools to find topics of interest in either the printed or the electronic version of the report. Volume II contains the individual data results from monitoring programs.
Volume definition system for treatment planning
International Nuclear Information System (INIS)
Alakuijala, Jyrki; Pekkarinen, Ari; Puurunen, Harri
1997-01-01
volume definition process dramatically. Its true 3D nature allows the algorithm to find the regions quickly with high accuracy. The competitive volume growing requires only a small amount of user input for initial seeding. The simultaneous growing of multiple segments mitigates volume leaking, which is a major problem in normal region growing. The automatic tool finds the body outline, air, and couch reliably in 30 s for a volume image of 30 slices. The other algorithms run at near-interactive speeds. CTV interpolation is an excellent feature for defining a CTV spanning several slices. Real-time verification in 2D and 3D visualization supports the operator and thus speeds up the contouring process. Conclusions: The contouring process requires a rich set of tools to comply with the multi-faceted requirements of volume definition. Body, specific anatomic volumes and target are best defined using a set of tools specifically built for that purpose. The execution speed of both the algorithms and the visualization is very important for operator satisfaction
76 FR 43534 - Alternative to Minimum Days Off Requirements
2011-07-21
... number of days off specified in this paragraph, or comply with the requirements for maximum average work....205(d)(3)); or (2) comply with the requirements for maximum average work hours in Sec. 26.205(d)(7... days off specified in this paragraph, or comply with the requirements for maximum average work hours in...
Energy Technology Data Exchange (ETDEWEB)
C. K. Sinclair; P. A. Adderley; B. M. Dunham; J. C. Hansknecht; P. Hartmann; M. Poelker; J. S. Price; P. M. Rutt; W. J. Schneider; M. Steigerwald
2007-02-01
Substantially more than half of the electromagnetic nuclear physics experiments conducted at the Continuous Electron Beam Accelerator Facility of the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory) require highly polarized electron beams, often at high average current. Spin-polarized electrons are produced by photoemission from various GaAs-based semiconductor photocathodes, using circularly polarized laser light with photon energy slightly larger than the semiconductor band gap. The photocathodes are prepared by activation of the clean semiconductor surface to negative electron affinity using cesium and oxidation. Historically, in many laboratories worldwide, these photocathodes have had short operational lifetimes at high average current, and have often deteriorated fairly quickly in ultrahigh vacuum even without electron beam delivery. At Jefferson Lab, we have developed a polarized electron source in which the photocathodes degrade exceptionally slowly without electron emission, and in which ion back bombardment is the predominant mechanism limiting the operational lifetime of the cathodes during electron emission. We have reproducibly obtained cathode 1/e dark lifetimes over two years, and 1/e charge density and charge lifetimes during electron beam delivery of over 2 × 10⁵ C/cm² and 200 C, respectively. This source is able to support uninterrupted high average current polarized beam delivery to three experimental halls simultaneously for many months at a time. Many of the techniques we report here are directly applicable to the development of GaAs photoemission electron guns to deliver high average current, high brightness unpolarized beams.
Negating Tissue Contracture Improves Volume Maintenance and Longevity of In Vivo Engineered Tissues.
Lytle, Ian F; Kozlow, Jeffrey H; Zhang, Wen X; Buffington, Deborah A; Humes, H David; Brown, David L
2015-10-01
Engineering large, complex tissues in vivo requires robust vascularization to optimize survival, growth, and function. Previously, the authors used a "chamber" model that promotes intense angiogenesis in vivo as a platform for functional three-dimensional muscle and renal engineering. A silicone membrane used to define the structure and to contain the constructs is successful in the short term. However, over time, generated tissues contract and decrease in size in a manner similar to capsular contracture seen around many commonly used surgical implants. The authors hypothesized that modification of the chamber structure or internal surface would promote tissue adherence and maintain construct volume. Three chamber configurations were tested for volume maintenance. Previously studied, smooth silicone surfaces were compared to chambers modified for improved tissue adherence, with multiple transmembrane perforations or lined with a commercially available textured surface. Tissues were allowed to mature long term in a rat model before analysis. On explantation, average tissue masses were 49, 102, and 122 mg; average volumes were 74, 158 and 176 μl; and average cross-sectional areas were 1.6, 6.7, and 8.7 mm² for the smooth, perforated, and textured groups, respectively. Both perforated and textured designs demonstrated significantly greater measures than the smooth-surfaced constructs in all respects. By modifying the design of chambers supporting vascularized, three-dimensional, in vivo tissue engineering constructs, generated tissue mass, volume, and area can be maintained over a long time course. Successful progress in the scale-up of construct size should follow, leading to improved potential for development of increasingly complex engineered tissues.
Anomalous atomic volume of alpha-Pu
DEFF Research Database (Denmark)
Kollar, J.; Vitos, Levente; Skriver, Hans Lomholt
1997-01-01
We have performed full charge-density calculations for the equilibrium atomic volumes of the alpha-phase light actinide metals using the local density approximation (LDA) and the generalized gradient approximation (GGA). The average deviation between the experimental and the GGA atomic radii is 1.3%. The comparison between the LDA and GGA results show that the anomalously large atomic volume of alpha-Pu relative to alpha-Np can be ascribed to exchange-correlation effects connected with the presence of low coordinated sites in the structure where the f electrons are close to the onset of localization...
Multifractal detrending moving-average cross-correlation analysis.
Jiang, Zhi-Qiang; Zhou, Wei-Xing
2011-07-01
There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of the multifractal nature. In all cases, the scaling exponents h(xy) extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparative performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparative performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q < 0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of h(xy)(q) since its h(xy)(2) is closest to 0.5, as expected, and
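The core of the method above is a moving-average-detrended cross fluctuation function whose log-log slope gives the bivariate scaling exponent. A minimal sketch of the centered variant, restricted to the q = 2 case (the function name, scale choices, and kernel handling are ours, not the authors' full MFXDMA code):

```python
import numpy as np

def dma_cross_hurst(x, y, scales):
    """Estimate h_xy(2) via centered detrending moving-average cross-correlation."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())  # profiles
    F = []
    for s in scales:
        kernel = np.ones(s) / s
        # Centered moving averages serve as the local trends of the two profiles
        Xm = np.convolve(X, kernel, mode="same")
        Ym = np.convolve(Y, kernel, mode="same")
        resid = (X - Xm) * (Y - Ym)
        F.append(np.sqrt(np.abs(resid[s:-s]).mean()))  # trim edge effects
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(42)
x = rng.normal(size=2**14)
h = dma_cross_hurst(x, x, scales=[8, 16, 32, 64, 128])
print(round(h, 2))  # uncorrelated white noise, auto-case: expect roughly 0.5
```

Running the estimator on a series against itself reduces it to an ordinary DMA Hurst estimate, which is a convenient sanity check before applying it to pairs of signals.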
Baseline estimate of the retained gas volume in Tank 241-C-106
International Nuclear Information System (INIS)
Stewart, C.W.; Chen, G.
1998-06-01
This report presents the results of a study of the retained gas volume in Hanford Tank 241-C-106 (C-106) using the barometric pressure effect method. This estimate is required to establish the baseline conditions for sluicing the waste from C-106 into AY-102, scheduled to begin in the fall of 1998. The barometric pressure effect model is described, and the data reduction and detrending techniques are detailed. Based on the response of the waste level to the larger barometric pressure swings that occurred between October 27, 1997, and March 4, 1998, the best estimate and conservative (99% confidence) retained gas volumes in C-106 are 24 scm (840 scf) and 50 scm (1,770 scf), respectively. This is equivalent to average void fractions of 0.025 and 0.053, respectively
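The barometric pressure effect method infers the retained gas volume from how the waste level responds to pressure swings: for an ideal isothermal gas pocket, dL/dP = -V_gas / (A·P), so V_gas = -A·P·(dL/dP). A hedged sketch on synthetic data (the surface area, noise level, and pressure record below are illustrative assumptions, not the C-106 measurements):

```python
import numpy as np

A = 411.0          # waste surface area, m^2 (hypothetical value)
P0 = 101325.0      # mean barometric pressure, Pa

rng = np.random.default_rng(1)
pressure = P0 + rng.normal(scale=500.0, size=200)    # barometric record, Pa
v_gas_true = 24.0                                    # m^3, mimicking the best estimate
level = -v_gas_true / (A * P0) * (pressure - P0)     # linearized level response, m
level += rng.normal(scale=1e-7, size=200)            # small instrument noise

slope = np.polyfit(pressure, level, 1)[0]            # dL/dP from linear regression
v_gas = -A * P0 * slope                              # invert the compressibility relation
print(round(v_gas, 1))
```

In practice the level record must first be detrended (e.g. for steady gas generation) before the regression, which is what the report's data-reduction step handles.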
What Does Average Really Mean? Making Sense of Statistics
DeAngelis, Karen J.; Ayers, Steven
2009-01-01
The recent shift toward greater accountability has put many educational leaders in a position where they are expected to collect and use increasing amounts of data to inform their decision making. Yet, because many programs that prepare administrators, including school business officials, do not require a statistics course or a course that is more…
Average-case analysis of incremental topological ordering
DEFF Research Database (Denmark)
Ajwani, Deepak; Friedrich, Tobias
2010-01-01
Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated...
Intrinsic Grassmann Averages for Online Linear and Robust Subspace Learning
DEFF Research Database (Denmark)
Chakraborty, Rudrasis; Hauberg, Søren; Vemuri, Baba C.
2017-01-01
proposed online subspace algorithm on one synthetic and two real data sets. Experimental results depicting the stability of our proposed method are also presented. Furthermore, on two real outlier-corrupted datasets, we present comparison experiments showing lower reconstruction error using our online...... RPCA algorithm. In terms of reconstruction error and time required, both our algorithms outperform the competition....
Grade Point Averages: How Students Navigate the System
Uribe, Patricia E.; Garcia, Marco A.
2012-01-01
This case exemplifies the unintended divisive cause and effect dynamic that can occur as a direct result of a seemingly innocuous school board policy modification. A change in school board policy at a local school district in Laredo, Texas, was designed to facilitate the fulfillment of a foreign language requirement for high school students. A…
Forecasting natural gas consumption in China by Bayesian Model Averaging
Directory of Open Access Journals (Sweden)
Wei Zhang
2015-11-01
Full Text Available With the rapid growth of natural gas consumption in China, more accurate and reliable forecasting models are urgently needed. Considering the limitations of single models and model uncertainty, this paper presents a combinative method to forecast natural gas consumption by Bayesian Model Averaging (BMA). It effectively handles the uncertainty associated with model structure and parameters, and thus improves forecasting accuracy. This paper chooses six variables for forecasting natural gas consumption: GDP, urban population, energy consumption structure, industrial structure, energy efficiency, and exports of goods and services. The results show that, compared with the Grey prediction model, the linear regression model and artificial neural networks, the BMA method provides a flexible tool to forecast natural gas consumption, which is expected to continue growing rapidly. This study can provide insightful information on future natural gas consumption.
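The combination step in BMA weights each candidate model by its (approximate) posterior probability and averages their forecasts. A minimal sketch using BIC-based weights, a common approximation; the three polynomial "models" and toy consumption series below are illustrative, not the paper's six-variable specification:

```python
import numpy as np

t = np.arange(12, dtype=float)  # 12 periods of toy consumption data
consumption = 50 + 4.0 * t + np.random.default_rng(7).normal(scale=2.0, size=12)

def fit_and_forecast(degree, horizon=12.0):
    """Fit a polynomial trend model; return its one-step forecast and BIC."""
    coeffs = np.polyfit(t, consumption, degree)
    resid = consumption - np.polyval(coeffs, t)
    n, k = len(t), degree + 1
    bic = n * np.log(resid.var()) + k * np.log(n)
    return np.polyval(coeffs, horizon), bic

forecasts, bics = zip(*(fit_and_forecast(d) for d in (1, 2, 3)))

# Approximate posterior model weights: proportional to exp(-BIC/2)
weights = np.exp(-0.5 * (np.array(bics) - min(bics)))
weights /= weights.sum()

bma_forecast = float(np.dot(weights, forecasts))  # weighted model average
print(weights.round(3), round(bma_forecast, 1))
```

Because the weights are non-negative and sum to one, the BMA forecast always lies between the most optimistic and most pessimistic candidate forecasts, while down-weighting poorly fitting models.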
A Note on Functional Averages over Gaussian Ensembles
Directory of Open Access Journals (Sweden)
Gabriel H. Tucci
2013-01-01
Full Text Available We find a new formula for matrix averages over the Gaussian ensemble. Let H be an n×n Gaussian random matrix with complex, independent, and identically distributed entries of zero mean and unit variance. Given an n×n positive definite matrix A and a continuous function f:ℝ⁺→ℝ such that ∫₀^∞ e^(−αt)|f(t)|² dt < ∞ for every α > 0, we find a new formula for the expectation E[Tr(f(HAH*))]. Taking f(x) = log(1+x) gives another formula for the capacity of the MIMO communication channel, and taking f(x) = (1+x)^(−1) gives the MMSE achieved by a linear receiver.
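The quantity in question, E[Tr f(HAH*)], can be checked numerically by Monte Carlo for any concrete f and A. A sketch for f(x) = log(1+x) (the MIMO-capacity choice) with A = I; this only estimates the expectation, whereas the paper's contribution is a closed-form formula for it:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = np.eye(n)  # a simple positive definite choice

def sample_trace():
    """One draw of Tr log(I + H A H*) for complex Gaussian H with unit-variance entries."""
    H = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    M = H @ A @ H.conj().T              # Hermitian positive semidefinite
    eig = np.linalg.eigvalsh(M)         # real, non-negative eigenvalues
    return np.log1p(eig).sum()          # Tr f(M) via the spectral mapping

estimate = np.mean([sample_trace() for _ in range(2000)])
print(round(estimate, 2))
```

Dividing each entry by sqrt(2) makes the complex entries have unit variance (E|h|² = 1), matching the normalization in the abstract; with A = I, HAH* is a complex Wishart matrix.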
Data Point Averaging for Computational Fluid Dynamics Data
Norman, Jr., David (Inventor)
2016-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
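The grouping-and-averaging step described above can be sketched briefly. This is a hedged illustration of the principle, not the patented system: the surface geometry, the sub-area partition (vertical strips), and the toy heat-flux field are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
points = rng.uniform(0.0, 4.0, size=(500, 2))  # (x, y) locations of CFD data points
heat_flux = 10.0 + points[:, 0]                # toy flow parameter, increasing with x

# Four sub-areas: vertical strips x in [0,1), [1,2), [2,3), [3,4)
strip = np.digitize(points[:, 0], bins=[1.0, 2.0, 3.0])

# Each sub-area's value is the average over the data points that fall inside it
sub_area_values = np.array([heat_flux[strip == i].mean() for i in range(4)])
print(sub_area_values.round(2))
```

The resulting per-sub-area averages are the condensed parameter values that would then feed the aerodynamic heating analysis.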
Averaged multivalued solutions and time discretization for conservation laws
International Nuclear Information System (INIS)
Brenier, Y.
1985-01-01
It is noted that the correct shock solutions can be approximated by averaging in some sense the multivalued solution given by the method of characteristics for the nonlinear scalar conservation law (NSCL). A time discretization for the NSCL equation based on this principle is considered. An equivalent analytical formulation is shown to lead quite easily to a convergence result, and a third formulation is introduced which can be generalized for the systems of conservation laws. Various numerical schemes are constructed from the proposed time discretization. The first family of schemes is obtained by using a spatial grid and projecting the results of the time discretization. Many known schemes are then recognized (mainly schemes by Osher, Roe, and LeVeque). A second way to discretize leads to a particle scheme without space grid, which is very efficient (at least in the scalar case). Finally, a close relationship between the proposed method and the Boltzmann type schemes is established. 14 references
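The averaging principle above can be illustrated with a crude particle computation for the scalar conservation law u_t + (u²/2)_x = 0: characteristics are advanced exactly, and where they fold over (the multivalued region), the cell average of the carried values is taken. This is only an illustration of the idea, not the paper's scheme; the initial data and grid are arbitrary choices.

```python
import numpy as np

x0 = np.linspace(-2.0, 2.0, 4001)
u0 = -np.tanh(2.0 * x0)          # decreasing data, so characteristics cross
t = 1.0
x_t = x0 + u0 * t                # method-of-characteristics particle positions

edges = np.linspace(-2.0, 2.0, 81)
cells = np.digitize(x_t, edges)
# Average the multivalued branches within each non-empty spatial cell
u_avg = np.array([u0[cells == i].mean() for i in range(1, len(edges))
                  if np.any(cells == i)])
# The averaged profile is single-valued across the folded (shock) region.
print(len(u_avg), float(u_avg.max().round(2)), float(u_avg.min().round(2)))
```

Averaging the folded branches collapses the multivalued fan into a single-valued profile connecting the left state (near +1) to the right state (near -1), which is the sense in which the correct shock solution is approximated.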
Direct determination approach for the multifractal detrending moving average analysis
Xu, Hai-Chuan; Gu, Gao-Feng; Zhou, Wei-Xing
2017-11-01
In the canonical framework, we propose an alternative approach for the multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent τ(q) is related to the partition function and the multifractal spectrum f(α) can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared based on three synthetic multifractal and monofractal measures generated from the one-dimensional p-model, the two-dimensional p-model, and the fractional Brownian motions. We find that both approaches have comparable performances to unveil the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum f(α) can be directly determined using the new approach with less computation cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.
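The direct-determination idea can be sketched as follows: build a canonical measure from the local moving-average-detrended fluctuations and read α(q) and f(α) off as log-log slopes, instead of Legendre-transforming τ(q). This is a rough sketch under assumptions (backward moving average, dyadic scales, a toy heavy-tailed signal standing in for real data), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(9)
x = np.cumsum(rng.normal(size=2**13) * np.exp(rng.normal(size=2**13)))  # toy profile

def local_fluct(profile, s):
    """Segment-wise DMA fluctuations F_i(s) with a backward moving average."""
    ma = np.convolve(profile, np.ones(s) / s, mode="valid")
    resid = profile[s - 1:] - ma                 # backward-DMA residual
    nseg = len(resid) // s
    seg = resid[:nseg * s].reshape(nseg, s)
    return np.sqrt((seg ** 2).mean(axis=1))

def alpha_f(q, scales):
    """Direct determination: alpha(q), f(q) from slopes of canonical-measure sums."""
    num_a, num_f, logs = [], [], []
    for s in scales:
        F = local_fluct(x, s)
        mu = F ** q / (F ** q).sum()             # canonical measure over segments
        num_a.append((mu * np.log(F)).sum())     # -> alpha(q) * log s
        num_f.append((mu * np.log(mu)).sum())    # -> f(q) * log s
        logs.append(np.log(s))
    return np.polyfit(logs, num_a, 1)[0], np.polyfit(logs, num_f, 1)[0]

a2, f2 = alpha_f(2.0, scales=[16, 32, 64, 128])
a0, f0 = alpha_f(0.0, scales=[16, 32, 64, 128])
print(round(a2, 2), round(f2, 2), round(a0, 2), round(f0, 2))
```

A built-in sanity check: at q = 0 the measure is uniform over segments, so f(0) should come out near 1 (the dimension of the support), independently of the signal.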
Suicide attempts, platelet monoamine oxidase and the average evoked response
International Nuclear Information System (INIS)
Buchsbaum, M.S.; Haier, R.J.; Murphy, D.L.
1977-01-01
The relationship between suicides and suicide attempts and two biological measures, platelet monoamine oxidase levels (MAO) and average evoked response (AER) augmenting was examined in 79 off-medication psychiatric patients and in 68 college student volunteers chosen from the upper and lower deciles of MAO activity levels. In the patient sample, male individuals with low MAO and AER augmenting, a pattern previously associated with bipolar affective disorders, showed a significantly increased incidence of suicide attempts in comparison with either non-augmenting low MAO or high MAO patients. Within the normal volunteer group, all male low MAO probands with a family history of suicide or suicide attempts were AER augmenters themselves. Four completed suicides were found among relatives of low MAO probands whereas no high MAO proband had a relative who committed suicide. These findings suggest that the combination of low platelet MAO activity and AER augmenting may be associated with a possible genetic vulnerability to psychiatric disorders. (author)
Quantum gravity unification via transfinite arithmetic and geometrical averaging
International Nuclear Information System (INIS)
El Naschie, M.S.
2008-01-01
In E-Infinity theory, we have not only infinitely many dimensions but also infinitely many fundamental forces. However, due to the hierarchical structure of ε(∞) spacetime we have a finite expectation number for its dimensionality and likewise a finite expectation number for the corresponding interactions. Starting from the preceding fundamental principles and using the experimental findings as well as the theoretical values of the coupling constants of the electroweak and the strong forces, we present an extremely simple averaging procedure for determining the quantum gravity unification coupling constant with and without supersymmetry. The work draws heavily on previous results, in particular a paper by the Slovenian Prof. Marek-Crnjac [Marek-Crnjac L. On the unification of all fundamental forces in a fundamentally fuzzy Cantorian ε(∞) manifold and high energy physics. Chaos, Solitons and Fractals 2004;4:657-68].