Cosmological measure with volume averaging and the vacuum energy problem
Astashenok, Artyom V.; del Popolo, Antonino
2012-04-01
In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative volume-averaging measure instead of volume weighting can explain why the cosmological constant is non-zero.
Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.
Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D
2018-04-19
The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013 to June 2016. The ANN was constructed using temporal factors (time, day of week) and weather factors (daily high, active precipitation) to predict four measures of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross-validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018 ± 0.002; validation r = 0.8899 ± 0.005; testing r = 0.8940 ± 0.006). We were able to successfully predict trauma volume, emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10) and a mean ISS of 15.99 (actual: 13.12); see figure 3. This may prove useful for predicting trauma needs across the system and for hospital administration when allocating limited resources. Level III STUDY TYPE: Prognostic/Epidemiological.
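The network architecture described in this abstract can be sketched briefly. This is a hedged illustration: the feature and target arrays are synthetic stand-ins for the TRACS/NOAA data, and scikit-learn's `lbfgs` solver stands in for Levenberg-Marquardt backpropagation, which scikit-learn does not provide.

```python
# Sketch of the abstract's ANN setup: a feed-forward network with one
# hidden layer of 10 sigmoid ("logistic") neurons mapping temporal/weather
# features to four daily trauma-activity targets. Data here are random
# placeholders, NOT the study's data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Features: [time, day_of_week, daily_high_temp, active_precipitation]
X = rng.random((1096, 4))
# Targets: [n_traumas, n_penetrating, mean_ISS, n_OR_cases] (synthetic)
y = rng.random((1096, 4))

net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                   solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X, y)
pred = net.predict(X[:1])   # one row of four daily-activity predictions
```

The study additionally ran k-fold cross-validation over 100 random partitions and reported correlation coefficients rather than raw predictions.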
Medicare Part B Drug Average Sales Pricing Files
U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturers ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...
Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes
2014-08-01
The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesized there is a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing 100 ml of water by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significant, moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but determination of the average volume per swallow is much quicker and simpler.
Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging
International Nuclear Information System (INIS)
Soltanian-Zadeh, H.; Windham, J.P.; Yagle, A.E.
1993-01-01
Segmentation of a feature of interest while correcting for partial volume averaging effects is a major tool for identification of hidden abnormalities, fast and accurate volume calculation, and three-dimensional visualization in the field of magnetic resonance imaging (MRI). The authors present the optimal transformation for simultaneous segmentation of a desired feature and correction of partial volume averaging effects, while maximizing the signal-to-noise ratio (SNR) of the desired feature. It is proved that correction of partial volume averaging effects requires the removal of the interfering features from the scene. It is also proved that correction of partial volume averaging effects can be achieved merely by a linear transformation. It is finally shown that the optimal transformation matrix is easily obtained using the Gram-Schmidt orthogonalization procedure, which is numerically stable. Applications of the technique to MRI simulation, phantom, and brain images are shown. They show that in all cases the desired feature is segmented from the interfering features and partial volume information is visualized in the resulting transformed images.
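The Gram-Schmidt construction the authors describe can be illustrated with a small linear-algebra sketch. The signature vectors below are invented for illustration (not MRI data), and NumPy's QR factorization plays the role of a numerically stable Gram-Schmidt orthogonalization:

```python
# Core idea: given per-image signature vectors for a desired feature and
# for interfering features, build a linear combination of the images whose
# response to every interfering signature is zero while the desired
# feature keeps unit gain. This removes interferers via a purely linear
# transformation, as the abstract states.
import numpy as np

def suppression_vector(desired, interferers):
    """Weights w with w . s_i = 0 for each interfering signature s_i,
    normalized so the desired feature has unit response."""
    A = np.asarray(interferers, dtype=float)   # (k, n) interferer signatures
    Q, _ = np.linalg.qr(A.T)                   # orthonormal basis of interferer span
    d = np.asarray(desired, dtype=float)
    w = d - Q @ (Q.T @ d)                      # project out interferer components
    return w / (w @ d)                         # unit gain on desired feature

# Hypothetical 3-image signatures (illustrative values only)
w = suppression_vector([1.0, 0.2, 0.1],
                       [[0.3, 1.0, 0.0], [0.1, 0.0, 1.0]])
```

Applying `w` across the image stack yields a transformed image in which the interfering features cancel and the desired feature's partial-volume information is preserved.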
Derivation of a volume-averaged neutron diffusion equation; Atomos para el desarrollo de Mexico
Energy Technology Data Exchange (ETDEWEB)
Vazquez R, R.; Espinosa P, G. [UAM-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Mexico D.F. 09340 (Mexico); Morales S, Jaime B. [UNAM, Laboratorio de Analisis en Ingenieria de Reactores Nucleares, Paseo Cuauhnahuac 8532, Jiutepec, Morelos 62550 (Mexico)]. e-mail: rvr@xanum.uam.mx
2008-07-01
This paper presents a general theoretical analysis of the problem of neutron motion in a nuclear reactor, where large variations in neutron cross sections normally preclude the use of the classical neutron diffusion equation. A volume-averaged neutron diffusion equation is derived which includes correction terms for diffusion and nuclear reaction effects. A method is presented to determine closure relationships for the volume-averaged neutron diffusion equation (e.g., effective neutron diffusivity). In order to describe the distribution of neutrons in a highly heterogeneous configuration, it was necessary to extend the classical neutron diffusion equation. Thus, the volume-averaged diffusion equation includes two correction factors: the first is related to the neutron absorption process, and the second is a contribution to neutron diffusion; both parameters account for neutron effects at the interfaces of a heterogeneous configuration. (Author)
Drug and Alcohol Studies (Volume 5: Interventions)
MacGregor, S; Thom, B
2014-01-01
VOLUME FIVE: INTERVENTIONS Natural Recovery from Alcohol Problems Harald Klingemann School-Based Programmes to Prevent Alcohol, Tobacco and Other Drug Use Gilbert Botvin and Kenneth Griffin Community Prevention of Alcohol Problems Harold Holder Can Screening and Brief Intervention Lead to Population-Level Reductions in Alcohol-Related Harm? Nick Heather Sharpening the Focus of Alcohol Policy from Aggregate Consumption to Harm and Risk Reduction Tim Stockwell et al A Review of the Efficacy and...
Modelling lidar volume-averaging and its significance to wind turbine wake measurements
Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.
2017-05-01
Lidar velocity measurements need to be interpreted differently from conventional in-situ readings. A commonly ignored factor is “volume-averaging”, which refers to the fact that lidars do not sample at a single, distinct point but along the entire beam length. Especially in regions with large velocity gradients, such as the rotor wake, this effect can be significant. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Even with very few points discretising the lidar beam, volume-averaging is captured accurately. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
Studies concerning average volume flow and waterpacking anomalies in thermal-hydraulics codes
International Nuclear Information System (INIS)
Lyczkowski, R.W.; Ching, J.T.; Mecham, D.C.
1977-01-01
One-dimensional hydrodynamic codes have been observed to exhibit anomalous behavior in the form of non-physical pressure oscillations and spikes. In our experience, this anomalous behavior can sometimes result in mass depletion, steam table failure and, in severe cases, problem abortion. In addition, these non-physical pressure spikes can result in long running times when small time steps are needed in an attempt to cope with anomalous solution behavior. The source of these pressure spikes has been conjectured to be nonuniform enthalpy distribution, wave reflection off the closed end of a pipe, or abrupt changes in pressure history when the fluid changes from subcooled to two-phase conditions. It is demonstrated in this paper that many of these faults can be attributed to inadequate modeling of the average volume flow and of the sharp fluid density front crossing a junction. General corrective models are difficult to devise since the causes of the problems touch on the very theoretical bases of the differential field equations and the associated solution scheme. For example, the fluid homogeneity assumption and the numerical extrapolation scheme have placed severe restrictions on the capability of a code to adequately model certain physical phenomena involving fluid discontinuities. The need for accurate junction and local properties to describe phenomena internal to a control volume often points to additional lengthy computations that are difficult to justify in terms of computational efficiency. Corrective models that are economical to implement and use are developed. When incorporated into the one-dimensional, homogeneous transient thermal-hydraulic analysis computer code RELAP4, they help mitigate many of the code's difficulties related to average volume flow and water-packing anomalies. An average volume flow model and a critical density model are presented. Computational improvements due to these models are also demonstrated.
Energy Technology Data Exchange (ETDEWEB)
Calloo, A.; Vidal, J.F.; Le Tellier, R.; Rimpault, G., E-mail: ansar.calloo@cea.fr, E-mail: jean-francois.vidal@cea.fr, E-mail: romain.le-tellier@cea.fr, E-mail: gerald.rimpault@cea.fr [CEA, DEN, DER/SPRC/LEPh, Saint-Paul-lez-Durance (France)
2011-07-01
This paper deals with solving the multigroup integro-differential form of the transport equation for fine energy group structures. In that case, multigroup transfer cross sections display strongly peaked shapes for light scatterers, and the current Legendre polynomial expansion is not well suited to represent them. Furthermore, even with an exact representation of the scattering cross sections, the scattering source in the discrete ordinates method (also known as the Sn method), being calculated by sampling the angular flux at given directions, may be wrongly computed due to a lack of angular support for the angular flux. Hence, following the work of Gerts and Matthews, an angular finite volume solver has been developed for 2D Cartesian geometries. It integrates the multigroup transport equation over discrete volume elements obtained by meshing the unit sphere with a product grid over the polar and azimuthal coordinates and by considering the integrated flux per solid angle element. The convergence of this method has been compared to the Sn method for a highly anisotropic benchmark. Besides, piecewise-average scattering cross sections have been produced for non-bound hydrogen atoms using a free gas model for thermal neutrons. LWR lattice calculations are carried out comparing Legendre representations of the hydrogen scattering multigroup cross section at various orders against piecewise-average cross sections for this same atom (while keeping a Legendre representation for all other isotopes). (author)
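The product-grid meshing of the unit sphere described above can be sketched as follows. The grid resolution is illustrative, and each weight is simply the exact solid angle of its (θ, φ) cell:

```python
# Product grid over polar (theta) and azimuthal (phi) coordinates: the
# unit sphere is partitioned into cells whose solid angles are
#   dOmega = dphi * (cos(theta_low) - cos(theta_high)).
# These weights are what an angular finite volume scheme integrates the
# flux against, one value per solid-angle element.
import numpy as np

n_theta, n_phi = 8, 16                       # illustrative resolution
theta_edges = np.linspace(0.0, np.pi, n_theta + 1)
dphi = 2.0 * np.pi / n_phi

# Solid angle of each cell; rows are polar bands, columns azimuthal sectors.
domega = np.outer(np.cos(theta_edges[:-1]) - np.cos(theta_edges[1:]),
                  np.full(n_phi, dphi))
total = domega.sum()                         # should recover the full sphere, 4*pi
```

Summing the weights recovers 4π exactly (up to rounding), which is a convenient sanity check on any such angular mesh.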
Davit, Yohan
2013-12-01
A wide variety of techniques have been developed to homogenize transport equations in multiscale and multiphase systems. This has yielded a rich and diverse field, but has also resulted in the emergence of isolated scientific communities and disconnected bodies of literature. Here, our goal is to bridge the gap between formal multiscale asymptotics and the volume averaging theory. We illustrate the methodologies via a simple example application describing a parabolic transport problem and, in so doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points with great care, and reference many fundamental works. © 2013 Elsevier Ltd.
Measurement of average density and relative volumes in a dispersed two-phase fluid
Sreepada, Sastry R.; Rippel, Robert R.
1992-01-01
An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density, and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser Doppler anemometry.
The effect of temperature on the average volume of Barkhausen jump on Q235 carbon steel
Guo, Lei; Shu, Di; Yin, Liang; Chen, Juan; Qi, Xin
2016-06-01
On the basis of the average volume of Barkhausen jump (AVBJ) v̄ generated by irreversible displacement of magnetic domain walls under an incentive magnetic field applied to ferromagnetic materials, the functional relationship between saturation magnetization Ms and temperature T is employed in this paper to deduce an explicit mathematical expression relating AVBJ v̄, stress σ, incentive magnetic field H and temperature T. The dependence of AVBJ v̄ on temperature T is then investigated using this expression. Moreover, tensile and compressive stress experiments are carried out on Q235 carbon steel specimens at different temperatures to verify the theory. This paper offers a theoretical basis for solving the temperature compensation problem of the Barkhausen testing method.
METHODOLOGY OF THE DRUGS MARKET VOLUME MODELING ON THE EXAMPLE OF HEMOPHILIA A
Directory of Open Access Journals (Sweden)
N. B. Molchanova
2015-01-01
Hemophilia A is a serious genetic disease which may lead to disability, even at an early age, without the required therapy. The only therapeutic approach is replacement therapy with drugs of blood coagulation factor VIII (FVIII). Modeling the market volume of coagulation drugs allows evaluation of the level of patients' provision with the necessary therapy. The purpose of the study was to model a “perfect” market of drugs and compare it with the real one. In modeling the market volume we used data on the number of hemophilia A patients from the federal registry, Russian and international morbidity indices, real-practice data on the average consumption of blood coagulation factor drugs, and data on drug prescription according to the standards and protocols of care. According to the standards of care delivery, the average annual volume of FVIII drug consumption amounted to 406 325 244 IU for children and 964 578 678 IU for adults, i.e. the average volume of a “perfect” market is 1 370 903 922 IU for all patients. This market volume is 1.8 times larger than the real volume of FVIII drugs which, according to data from the IMS marketing agency, amounted to 765 000 000 IU in 2013. The modeling shows that, despite relatively high patient coverage, there is potential for almost twofold growth.
International Nuclear Information System (INIS)
Espinosa-Paredes, Gilberto
2010-01-01
The aim of this paper is to propose a framework to obtain a new formulation for multiphase flow conservation equations without length-scale restrictions, based on the non-local form of the averaged volume conservation equations. The simplification of the local averaging volume of the conservation equations to obtain practical equations is subject to the following length-scale restrictions: d << l << L, where d is the characteristic length of the dispersed phases, l is the characteristic length of the averaging volume, and L is the characteristic length of the physical system. If the foregoing inequality does not hold, or if the scale of the problem of interest is of the order of l, the averaging technique and, therefore, the macroscopic theories of multiphase flow should be modified to include appropriate considerations and terms in the corresponding equations. In these cases the local form of the averaged volume conservation equations is not appropriate to describe the multiphase system. As an example of conservation equations without length-scale restrictions, the natural circulation boiling water reactor was considered in order to study the non-local effects on the thermal-hydraulic core performance during steady-state and transient behavior, and the results were compared with the classic local averaging volume conservation equations.
Averages, Areas and Volumes; Cambridge Conference on School Mathematics Feasibility Study No. 45.
Cambridge Conference on School Mathematics, Newton, MA.
Presented is an elementary approach to areas, volumes and other mathematical concepts usually treated in calculus. The approach is based on the idea of average, and this concept is utilized throughout the report. In the beginning the average (arithmetic mean) of a set of numbers is considered and two properties of the average which often simplify…
International Nuclear Information System (INIS)
Whitcher, Ralph
2007-01-01
1 - Description of program or function: SACALC2B calculates the average solid angle subtended by a rectangular or circular detector window to a coaxial or non-coaxial rectangular, circular or point source, including cases where the source and detector planes are not parallel. SACALC_CYL calculates the average solid angle subtended by a cylinder to a rectangular or circular source, plane or thick, at any location and orientation. This is needed, for example, in calculating the intrinsic gamma efficiency of a detector such as a GM tube. The program also calculates the number of hits on the cylinder side and on each end, and the average path length through the detector volume (assuming no scattering or absorption). Point sources can be modelled by using a circular source of zero radius. NEA-1688/03: Documentation has been updated (January 2006). 2 - Methods: The program uses a Monte Carlo method to calculate the average solid angle for source-detector geometries that are difficult to analyse by analytical methods. The values of solid angle are calculated to accuracies typically better than 0.1%. The values calculated by the Monte Carlo method agree closely with those produced by polygon approximation and numerical integration by Gardner and Verghese, and others. 3 - Restrictions on the complexity of the problem: The program models a circular or rectangular detector in planes that are not necessarily coaxial or parallel. Point sources can be modelled by using a circular source of zero radius. The sources are assumed to be uniformly distributed. NEA-1688/04: In SACALC_CYL, to avoid rounding errors, differences less than 1E-12 are assumed to be zero.
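The Monte Carlo approach described under "Methods" can be sketched for the simplest geometry: a point source on the axis of a circular detector window. The function below is a hedged illustration, not the SACALC implementation, and is checked against the closed-form solid angle for this special case:

```python
# Monte Carlo estimate of the solid angle subtended by a coaxial circular
# window (radius R, at axial distance h) at a point source: sample
# isotropic directions over the source's upper hemisphere and count hits
# on the window. For this geometry the exact answer is
#   Omega = 2*pi*(1 - h / sqrt(h**2 + R**2)),
# which serves as a check.
import math, random

def mc_solid_angle(R, h, n=200_000, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(n):
        cos_t = 1.0 - random.random()        # cos(theta) uniform in (0, 1]
        phi = 2.0 * math.pi * random.random()
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        # Where the ray crosses the detector plane z = h
        x = h * sin_t * math.cos(phi) / cos_t
        y = h * sin_t * math.sin(phi) / cos_t
        if x * x + y * y <= R * R:
            hits += 1
    return 2.0 * math.pi * hits / n          # hemisphere spans 2*pi steradians

omega = mc_solid_angle(R=2.0, h=4.0)
exact = 2.0 * math.pi * (1.0 - 4.0 / math.sqrt(4.0**2 + 2.0**2))
```

For non-coaxial or tilted geometries, where no closed form exists, the same hit-counting idea still applies, which is precisely why a Monte Carlo treatment is attractive.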
He, Ning; Sun, Hechun; Dai, Miaomiao
2014-05-01
To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, drug degradation extent, the number of humidity and temperature levels, the humidity and temperature range, and the average humidity and temperature on the accuracy and precision of kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from this proposed method were comparable to those from a classical isothermal experiment at constant humidity. The estimates were more accurate and precise when controlling the extent of drug degradation, changing the humidity and temperature range, or setting the average temperature closer to room temperature. Compared with isothermal experiments at constant humidity, our proposed method saves time, labor, and materials.
2010-11-15
..., Multiple Source Drug Definition, and Upper Limits for Multiple Source Drugs AGENCY: Centers for Medicare... withdrawing the definition of ``multiple source drug'' as it was revised in the ``Medicaid Program; Multiple Source Drug Definition'' final rule published in the October 7, 2008 Federal Register. DATES: Effective...
Directory of Open Access Journals (Sweden)
Chieh-Fan Chen
2011-01-01
This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. Accuracy was evaluated by comparing model forecasts to actual values using the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, and non-trauma and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
Energy Technology Data Exchange (ETDEWEB)
Reimold, M.; Mueller-Schauenburg, W.; Dohmen, B.M.; Bares, R. [Department of Nuclear Medicine, University of Tuebingen, Otfried-Mueller-Strasse 14, 72076, Tuebingen (Germany); Becker, G.A. [Nuclear Medicine, University of Leipzig, Leipzig (Germany); Reischl, G. [Radiopharmacy, University of Tuebingen, Tuebingen (Germany)
2004-04-01
Due to the stochastic nature of radioactive decay, any measurement of radioactivity concentration requires spatial averaging. In pharmacokinetic analysis of time-activity curves (TAC), such averaging over heterogeneous tissues may introduce a systematic error (heterogeneity error) but may also improve the accuracy and precision of parameter estimation. In addition to spatial averaging (inevitable due to limited scanner resolution and intended in ROI analysis), interindividual averaging may theoretically be beneficial, too. The aim of this study was to investigate the effect of such averaging on the binding potential (BP) calculated with Logan's non-invasive graphical analysis and the “simplified reference tissue method” (SRTM) proposed by Lammertsma and Hume, on the basis of simulated and measured positron emission tomography data: [11C]d-threo-methylphenidate (dMP) and [11C]raclopride (RAC) PET. dMP was not quantified with SRTM since the low k2 (washout rate constant from the first tissue compartment) introduced a high noise sensitivity. Even for considerably different shapes of TAC (dMP PET in parkinsonian patients and healthy controls, [11C]raclopride in patients with and without haloperidol medication) and a high variance in the rate constants (e.g. a simulated standard deviation of K1 = 25%), the BP obtained from the average TAC was close to the mean BP (<5%). However, unfavourably distributed parameters, especially a correlated large variance in two or more parameters, may lead to larger errors. In Monte Carlo simulations, interindividual averaging before quantification reduced the variance from the SRTM (beyond a critical signal-to-noise ratio) and the bias in Logan's method. Interindividual averaging may further increase accuracy when there is an error term in the reference tissue assumption E = DV2 - DV' (DV2 = distribution volume of the first tissue compartment, DV' ...)
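Logan's graphical analysis mentioned above can be sketched on synthetic curves. This is a simplified, hedged illustration: it omits the reference-tissue washout correction term of the full non-invasive method, and the mono-exponential TACs are invented, not PET data:

```python
# Logan plot (simplified): beyond a settling time t*, plotting
#   y = int_0^t C(s) ds / C(t)   against   x = int_0^t Cref(s) ds / C(t)
# gives a line whose slope approximates the distribution volume ratio
# (DVR); the binding potential is BP = DVR - 1.
import numpy as np

t = np.linspace(0.1, 90.0, 200)                     # minutes
cref = np.exp(-0.05 * t) - np.exp(-0.5 * t)         # reference-tissue TAC
ct = 2.5 * (np.exp(-0.03 * t) - np.exp(-0.5 * t))   # target-tissue TAC

dt = t[1] - t[0]
int_ct = np.cumsum(ct) * dt                         # crude running integrals
int_cref = np.cumsum(cref) * dt

mask = t > 30.0                                     # linear tail only (t > t*)
x = int_cref[mask] / ct[mask]
y = int_ct[mask] / ct[mask]
slope, intercept = np.polyfit(x, y, 1)              # slope ~ DVR
bp = slope - 1.0                                    # binding potential
```

Averaging several noisy TACs before running this fit, rather than fitting each and averaging the BPs, is exactly the interindividual-averaging strategy whose bias and variance the study examines.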
Henault, M.; Wattieaux, G.; Lecas, T.; Renouard, J. P.; Boufendi, L.
2016-02-01
Nanoparticles growing or injected in a low-pressure cold plasma generated by a radiofrequency capacitively coupled discharge induce strong modifications in the electrical parameters of both the plasma and the discharge. In this paper, a non-intrusive method, based on the measurement of the plasma impedance, is used to determine the volume-averaged electron density and the effective power coupled to the plasma bulk. Good agreement is found when the results are compared to those given by other well-known and established methods.
The disk averaged star formation relation for Local Volume dwarf galaxies
López-Sánchez, Á. R.; Lagos, C. D. P.; Young, T.; Jerjen, H.
2018-05-01
Spatially resolved H I studies of dwarf galaxies have provided a wealth of precision data. However, these high-quality, resolved observations are only possible for a handful of dwarf galaxies in the Local Volume, and future H I surveys are unlikely to improve the current situation. We therefore explore a method for estimating the surface density of atomic gas from global H I parameters, which are, conversely, widely available. We perform empirical tests using galaxies with resolved H I maps, and find that our approximation produces values for the surface density of atomic hydrogen typically within 0.5 dex of the true value. We apply this method to a sample of 147 galaxies drawn from modern near-infrared stellar photometric surveys. With this sample we confirm a strict correlation between the atomic gas surface density and the star formation rate surface density that is vertically offset from the Kennicutt-Schmidt relation by a factor of 10-30, and significantly steeper than the classical N = 1.4 of Kennicutt (1998). We further infer the molecular fraction in this sample of low surface brightness, predominantly dwarf galaxies by assuming that the star formation relationship with molecular gas observed for spiral galaxies also holds in these galaxies, finding a molecular-to-atomic gas mass fraction within the range of 5-15%. Comparison of the data to available models shows that a model in which the thermal pressure balances the vertical gravitational field better captures the shape of the ΣSFR-Σgas relationship. However, such models fail to reproduce the data completely, suggesting that thermal pressure plays an important role in the disks of dwarf galaxies.
International Nuclear Information System (INIS)
Wang Dalun; Li Benci; Wang Xiuchun; Li Yijun; Zhang Shaohua; He Yongwu
1991-07-01
The average fission fraction of ²³⁸U caused by 14 MeV neutrons in assemblies of large-volume depleted uranium has been determined. The measured value of Pf(²³⁸U, R∞, depleted) for 14 MeV neutrons was 0.897 ± 0.036. Measurements were also completed for the neutron flux distribution and the average fission fraction of the ²³⁵U isotope in the depleted uranium sphere. Values of Pf(²³⁸U, R, depleted) have been obtained using a series of uranium spheres. For a sphere of Φ600, Pf(²³⁸U, R₃₀₀, depleted) is 0.823 ± 0.041; the density of the depleted uranium assembly is 18.8 g/cm³ and the total weight of the assembly is about 2.8 t.
Directory of Open Access Journals (Sweden)
Björn Nitzsche
2015-06-01
Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals, including non-human primates, dogs and rodents. However, they are lacking for other species relevant to experimental neuroscience, including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offers a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries; 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis, including examinations with respect to body weight, age and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation of GM volume and body weight explained about 15% of the variance of GM, while a positive correlation between WM and age was found. Absolute tissue volume differences were not detected; however, ewes showed significantly more GM per body weight compared with neutered rams. The created framework, including the spatial brain template and TPM, represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species.
Directory of Open Access Journals (Sweden)
Peihua Wang
Full Text Available After the implementation of the universal salt iodization (USI) program in 1996, seven cross-sectional school-based surveys were conducted to monitor iodine deficiency disorders (IDD) among children in eastern China. This study aimed to examine the correlation of total goiter rate (TGR) with average thyroid volume (Tvol) and urinary iodine concentration (UIC) in Jiangsu province after IDD elimination. Probability-proportional-to-size sampling was applied to select 1,200 children aged 8-10 years in 30 clusters for each survey in 1995, 1997, 1999, 2001, 2002, 2005, 2009 and 2011. We measured Tvol using ultrasonography in 8,314 children and measured UIC (4,767 subjects) and salt iodine (10,184 samples) using methods recommended by the World Health Organization. Tvol was used to calculate TGR based on the reference criteria specified for sex and body surface area (BSA). TGR decreased from 55.2% in 1997 to 1.0% in 2009, and geometric means of Tvol decreased from 3.63 mL to 1.33 mL, while UIC increased from 83 μg/L in 1995 to 407 μg/L in 1999, decreased to 243 μg/L in 2005, and then increased to 345 μg/L in 2011. In the low goiter population, a UIC above 300 μg/L was associated with a smaller average Tvol in children. After IDD elimination in Jiangsu province in 2001, lower TGR was associated with smaller average Tvol. Average Tvol was more sensitive than TGR in detecting fluctuations of UIC. A UIC of 300 μg/L may be defined as a critical value for population-level iodine status monitoring.
International Nuclear Information System (INIS)
Hirata, Akimasa; Takano, Yukinori; Fujiwara, Osamu; Kamimura, Yoshitsugu
2010-01-01
The present study quantified the volume-averaged in situ electric field in nerve tissues of anatomically based numeric Japanese male and female models for exposure to extremely low-frequency electric and magnetic fields. A quasi-static finite-difference time-domain method was applied to analyze this problem. The motivation for our investigation is that the dependence of the electric field induced in nerve tissue on the averaging volume/distance is not clear, although a cubical volume of 5 × 5 × 5 mm³ or a straight-line segment of 5 mm is suggested in some documents. The influence of non-nerve tissue surrounding nerve tissue is also discussed by considering three algorithms for calculating the averaged in situ electric field in nerve tissue. The computational results obtained herein reveal that the volume-averaged electric field in the nerve tissue decreases with increasing averaging volume. In addition, the 99th percentile value of the volume-averaged in situ electric field in nerve tissue is more stable than the maximal value for different averaging volumes. When non-nerve tissue surrounding nerve tissue was included in the averaging volume, the resultant in situ electric fields were less dependent on the averaging volume than when non-nerve tissue was excluded. In situ electric fields averaged over a distance of 5 mm were comparable to or larger than those for a 5 × 5 × 5 mm³ cube, depending on the algorithm, the nerve tissue considered and the exposure scenario. (note)
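The averaging-volume dependence described in this abstract can be illustrated with a minimal sketch (synthetic field values and a 1 mm voxel grid are assumptions for illustration; the study itself used anatomical body models):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Hypothetical in situ electric-field magnitudes on a 1 mm voxel grid
# (arbitrary units); purely synthetic stand-in for a body model result.
rng = np.random.default_rng(0)
e_field = rng.lognormal(mean=0.0, sigma=0.5, size=(40, 40, 40))

# Average over a 5 x 5 x 5 mm^3 cube (5 voxels at 1 mm resolution),
# the cubical averaging volume mentioned in the abstract.
e_avg = uniform_filter(e_field, size=5, mode="nearest")

# The 99th-percentile metric is more stable than the maximum, which is
# dominated by isolated high-field voxels.
print(e_avg.max() <= e_field.max())            # averaging cannot raise the peak
print(np.percentile(e_avg, 99) <= e_avg.max())
```

Larger averaging cubes (a bigger `size`) further lower the peak, which is the trend the study reports.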
Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.
2012-04-01
We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large-scale dispersive processes which are embedded in a pore-scale advection-diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scale. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainty giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called "mobile-immobile" conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high-velocity region (mobile zone), while convective effects are neglected in a low-velocity region (immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by first-order kinetics. An extension of these ideas is the two-equation "mobile-mobile" model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two-region "mobile-mobile" meso-scale models through a rigorous upscaling of the pore-scale advection-diffusion equation. Among the available upscaling methodologies, we employ the volume averaging technique. In this approach, the heterogeneous porous medium is assumed to be pseudo-periodic and can be represented through a (spatially) periodic unit cell.
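The first-order kinetic mass exchange at the core of the mobile-immobile conceptualization can be sketched in a zero-dimensional toy model (advection and dispersion omitted; all parameter values are illustrative, not taken from the paper):

```python
# Zero-dimensional sketch of the first-order kinetic mass exchange
# coupling the mobile and immobile regions; advection and dispersion
# are omitted, and all parameter values are illustrative.
alpha = 0.5        # exchange rate coefficient (1/day)
beta = 0.25        # ratio of immobile to mobile water content (-)
dt, n_steps = 0.01, 2000

c_m, c_im = 1.0, 0.0   # initial concentrations in the two regions
for _ in range(n_steps):
    flux = alpha * (c_m - c_im)   # first-order kinetic exchange
    c_m -= flux * dt
    c_im += flux * dt / beta      # smaller immobile storage capacity

# Both regions relax toward a common equilibrium concentration while
# the total mass (c_m + beta * c_im) is conserved.
print(round(c_m, 3), round(c_im, 3))
```

With these values the common equilibrium is 1/(1 + beta) = 0.8; adding advective and dispersive terms per region turns this into the two-equation model the abstract describes.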
2010-07-01
... volume fraction of HAP in the actual solvent loss? 63.2854 Section 63.2854 Protection of Environment... AIR POLLUTANTS FOR SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutants: Solvent... average volume fraction of HAP in the actual solvent loss? (a) This section describes the information and...
International Nuclear Information System (INIS)
Tuite, P.; Tuite, K.; O'Kelley, M.; Ely, P.
1991-08-01
This study provides a quantitative framework for bounding unpackaged greater-than-Class C low-level radioactive waste types as a function of concentration averaging. The study defines the three concentration averaging scenarios that lead to base, high, and low volumetric projections; identifies those waste types that could be greater-than-Class C under the high-volume, or worst-case, concentration averaging scenario; and quantifies the impact of these scenarios on the identified waste types relative to the base case scenario. The base volume scenario was assumed to reflect current requirements at the disposal sites as well as regulatory views. The high volume scenario was assumed to reflect the most conservative criteria as incorporated in some compact host state requirements. The low volume scenario was assumed to reflect the 10 CFR Part 61 criteria as applicable both to shallow land burial facilities and to practices that could be employed to reduce the generation of Class C waste types.
Kemaneci, E.H.; Carbone, E.A.D.; Booth, J.P.; Graef, W.A.A.D.; Dijk, van J.; Kroesen, G.M.W.
An inductively coupled radio-frequency plasma in chlorine is investigated via a global (volume-averaged) model, both in continuous and square wave modulated power input modes. After the power is switched off (in a pulsed mode) an ion–ion plasma appears. In order to model this phenomenon, a novel
Robinson, J C; Luft, H S
1985-12-01
A variety of recent proposals rely heavily on market forces as a means of controlling hospital cost inflation. Sceptics argue, however, that increased competition might lead to cost-increasing acquisitions of specialized clinical services and other forms of non-price competition as means of attracting physicians and patients. Using data from hospitals in 1972 we analyzed the impact of market structure on average hospital costs, measured in terms of both cost per patient and cost per patient day. Under the retrospective reimbursement system in place at the time, hospitals in more competitive environments exhibited significantly higher costs of production than did those in less competitive environments.
Patil, Vishal; Liburdy, James
2012-11-01
Turbulent porous media flows are encountered in catalytic bed reactors and heat exchangers. Dispersion and mixing properties of these flows play an essential role in efficiency and performance. In an effort to understand these flows, pore scale time resolved PIV measurements in a refractive index matched porous bed were made. Pore Reynolds numbers, based on hydraulic diameter and pore average velocity, were varied from 400-4000. Jet-like flows and recirculation regions associated with large scale structures were found to exist. Coherent vortical structures which convect at approximately 0.8 times the pore average velocity were identified. These different flow regions exhibited different turbulent characteristics and hence contributed unequally to global transport properties of the bed. The heterogeneity present within a pore and also from pore to pore can be accounted for in estimating transport properties using the method of volume averaging. Eddy viscosity maps and mean velocity field maps, both obtained from PIV measurements, along with the method of volume averaging were used to predict the dispersion tensor versus Reynolds number. Asymptotic values of dispersion compare well to existing correlations. The role of molecular diffusion was explored by varying the Schmidt number and molecular diffusion was found to play an important role in tracer transport, especially in recirculation regions. Funding by NSF grant 0933857, Particulate and Multiphase Processing.
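The volume-averaging decomposition underlying such dispersion estimates, u = ⟨u⟩ + u′, can be sketched with a synthetic velocity field (the values below are illustrative, not the PIV data of the study):

```python
import numpy as np

# Volume-averaging decomposition u = <u> + u' for a synthetic 2-D
# pore-scale velocity field (illustrative values, not PIV data).
rng = np.random.default_rng(1)
u = rng.normal(loc=0.1, scale=0.03, size=(64, 64))   # m/s

u_avg = u.mean()        # intrinsic (pore-averaged) velocity
u_dev = u - u_avg       # spatial deviation field

# The deviation field has zero mean by construction; its variance feeds
# the mechanical contribution to volume-averaged dispersion estimates.
print(abs(u_dev.mean()) < 1e-12, u_dev.var() > 0.0)
```

In the study this decomposition is applied pore by pore to measured PIV fields, with the deviation statistics entering the dispersion tensor.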
Directory of Open Access Journals (Sweden)
Okuda Miyuki
2012-09-01
Full Text Available Abstract Introduction We were able to treat a patient with acute exacerbation of chronic obstructive pulmonary disease who also suffered from sleep-disordered breathing by using the average volume-assured pressure support mode of a Respironics V60 Ventilator (Philips Respironics: United States. This allows a target tidal volume to be set based on automatic changes in inspiratory positive airway pressure. This removed the need to change the noninvasive positive pressure ventilation settings during the day and during sleep. The Respironics V60 Ventilator, in the average volume-assured pressure support mode, was attached to our patient and improved and stabilized his sleep-related hypoventilation by automatically adjusting force to within an acceptable range. Case presentation Our patient was a 74-year-old Japanese man who was hospitalized for treatment due to worsening of dyspnea and hypoxemia. He was diagnosed with acute exacerbation of chronic obstructive pulmonary disease and full-time biphasic positive airway pressure support ventilation was initiated. Our patient was temporarily provided with portable noninvasive positive pressure ventilation at night-time following an improvement in his condition, but his chronic obstructive pulmonary disease again worsened due to the recurrence of a respiratory infection. During the initial exacerbation, his tidal volume was significantly lower during sleep (378.9 ± 72.9mL than while awake (446.5 ± 63.3mL. A ventilator that allows ventilation to be maintained by automatically adjusting the inspiratory force to within an acceptable range was attached in average volume-assured pressure support mode, improving his sleep-related hypoventilation, which is often associated with the use of the Respironics V60 Ventilator. Polysomnography performed while our patient was on noninvasive positive pressure ventilation revealed obstructive sleep apnea syndrome (apnea-hypopnea index = 14, suggesting that his chronic
International Nuclear Information System (INIS)
Fugal, M; McDonald, D; Jacqmin, D; Koch, N; Ellis, A; Peng, J; Ashenafi, M; Vanek, K
2015-01-01
Purpose: This study explores novel methods to address two significant challenges affecting measurement of patient-specific quality assurance (QA) with IBA's Matrixx Evolution™ ionization chamber array. First, dose calculation algorithms often struggle to accurately determine dose to the chamber array due to CT artifact and algorithm limitations. Second, finite chamber size and volume averaging effects cause additional deviation from the calculated dose. Methods: QA measurements were taken with the Matrixx positioned on the treatment table in a solid-water Multi-Cube™ phantom. To reduce the effect of CT artifact, the Matrixx CT image set was masked with appropriate materials and densities. Individual ionization chambers were masked as air, while the high-Z electronic backplane and remaining solid-water material were masked as aluminum and water, respectively. Dose calculation was done using Varian's Acuros XB™ (V11) algorithm, which is capable of predicting dose more accurately in non-biologic materials due to its consideration of each material's atomic properties. Finally, the exported TPS dose was processed using an in-house algorithm (MATLAB) to assign the volume-averaged TPS dose to each element of a corresponding 2-D matrix. This matrix was used for comparison with the measured dose. Square fields at regularly spaced gantry angles, as well as selected patient plans, were analyzed. Results: Analyzed plans showed improved agreement, with the average gamma passing rate increasing from 94 to 98%. Correction factors necessary for chamber angular dependence were reduced by 67% compared to factors measured previously, indicating that the previously measured factors corrected for dose calculation errors in addition to true chamber angular dependence. Conclusion: By comparing volume-averaged dose, calculated with a capable dose engine, on a phantom masked with correct materials and densities, QA results obtained with the Matrixx Evolution™ can be significantly
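The step of assigning a volume-averaged TPS dose to each chamber element can be sketched as follows (done here in Python rather than the study's MATLAB; grid dimensions and the 8 × 8 chamber layout are illustrative assumptions):

```python
import numpy as np

# Collapse a 3-D TPS dose grid to a 2-D matrix by averaging the dose
# over each chamber volume. Grid spacing and the 8 x 8 chamber layout
# are illustrative, not the Matrixx geometry.
dose = np.random.default_rng(2).uniform(1.8, 2.2, size=(32, 32, 4))  # Gy

depth_avg = dose.mean(axis=2)   # average over the chamber depth
# Average 4 x 4 voxel patches approximating each chamber cross-section.
chamber_matrix = depth_avg.reshape(8, 4, 8, 4).mean(axis=(1, 3))

print(chamber_matrix.shape)  # -> (8, 8)
```

The resulting matrix, one value per chamber, is what gets compared element-wise against the measured array dose.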
DEFF Research Database (Denmark)
Faralli, Adele; Melander, Fredrik; Larsen, Esben Kjær Unmack
2014-01-01
Polyethylene glycol (PEG)-based hydrogels are widely used for biomedical applications, including matrices for controlled drug release. We present a method for defining drug dosing in screening assays by light-activated cross-linking of PEG-diacrylate hydrogels with embedded drug-loaded liposome...
Ranamukhaarachchi, Sahan A.; Padeste, Celestino; Dübner, Matthias; Häfeli, Urs O.; Stoeber, Boris; Cadarso, Victor J.
2016-07-01
Therapeutic drug monitoring (TDM) typically requires painful blood draws from patients. We propose a painless and minimally invasive alternative for TDM using hollow microneedles suitable for extracting extremely small sample volumes. The microneedle is functionalized to be used as a micro-reactor during sample collection, trapping and binding target drug candidates during extraction without requiring sample transfer. An optofluidic device is integrated with this microneedle to rapidly quantify drug analytes with high sensitivity using a straightforward absorbance scheme. Vancomycin is currently detected using sample volumes ranging between 50 and 100 μL, with a limit of detection (LoD) of 1.35 μM. The proposed microneedle-optofluidic biosensor can detect vancomycin with a sample volume of 0.6 nL and a LoD of <100 nM, validating this painless point-of-care system with significant potential to reduce healthcare costs and patient suffering.
Drug research methodology. Volume 2, The identification of drugs of interest in highway safety
1980-03-01
This report presents findings of a workshop on the identification of drugs that should be the focus of near-term highway safety research. Drugs of interest are those that have a potential to increase the likelihood of traffic crashes and their attend...
Yaromina, Ala; Granzier, Marlies; Biemans, Rianne; Lieuwes, Natasja; van Elmpt, Wouter; Shakirin, Georgy; Dubois, Ludwig; Lambin, Philippe
2017-09-01
We tested a novel treatment approach combining (1) targeting radioresistant hypoxic tumour cells with the hypoxia-activated prodrug TH-302 and (2) inverse radiation dose-painting to selectively boost non-hypoxic tumour sub-volumes having no/low drug uptake. ¹⁸F-HX4 hypoxia tracer uptake measured with a clinical PET/CT scanner was used as a surrogate of TH-302 activity in rhabdomyosarcomas growing in immunocompetent rats. The low or high drug uptake volume (LDUV/HDUV) was defined as 40% of the GTV with the lowest or highest ¹⁸F-HX4 uptake, respectively. Two hours post TH-302/saline administration, animals received either single-dose radiotherapy (RT) delivered uniformly (15 or 18.5 Gy) or dose-painted non-uniform radiation (15 Gy) with a 50% higher dose to the LDUV or HDUV (18.5 Gy). Treatment plans were created using the Eclipse treatment planning system, and radiation was delivered using VMAT. Tumour response was quantified as the time to reach 3 times the starting tumour volume. Non-uniform RT boosting the tumour sub-volume with low TH-302 uptake (LDUV) was superior to the same dose escalation to HDUV, demonstrating the feasibility of selectively boosting the tumour sub-volume with no/low activity of hypoxia-activated prodrugs. This strategy applies, on average, a lower radiation dose and is as effective as uniform dose escalation to the entire tumour. It could be applied to other types of drugs, provided that their distribution can be imaged. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Haufe, Stefan; Huang, Yu; Parra, Lucas C
2015-08-01
In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, 6 tissue type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, by using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.
METHODOLOGY OF THE DRUGS MARKET VOLUME MODELING ON THE EXAMPLE OF HEMOPHILIA A
N. B. Molchanova
2015-01-01
Hemophilia A is a serious genetic disease, which may lead to disability of a patient even at an early age without the required therapy. The only therapeutic approach is replacement therapy with drugs of blood coagulation factor VIII (FVIII). Modeling the coagulation drugs market volume allows evaluation of the level of patients' provision with the necessary therapy. Modeling of a "perfect" market of drugs and its comparison with the real one was the purpose of the study. During the mode...
Directory of Open Access Journals (Sweden)
Yao-Ching Wang
Full Text Available Respiratory motion causes uncertainties in tumor edges on either computed tomography (CT) or positron emission tomography (PET) images and causes misalignment when registering PET and CT images. This phenomenon may cause radiation oncologists to delineate tumor volume inaccurately in radiotherapy treatment planning. The purpose of this study was to analyze radiology applications using interpolated average CT (IACT) as attenuation correction (AC) to diminish the occurrence of this scenario. Thirteen non-small cell lung cancer patients were recruited for the present comparison study. Each patient had full-inspiration and full-expiration CT images and free-breathing PET images from an integrated PET/CT scan. IACT for AC in PET(IACT) was used to reduce the PET/CT misalignment. The standardized uptake value (SUV) correction with a low radiation dose was applied, and its tumor volume delineation was compared to those from HCT/PET(HCT). The misalignment between PET(IACT) and IACT was reduced when compared to the difference between PET(HCT) and HCT. The range of tumor motion was from 4 to 17 mm in the patient cohort. For HCT and PET(HCT), correction was from 72% to 91%, while for IACT and PET(IACT), correction was from 73% to 93% (*p < 0.0001). The maximum and minimum differences in SUVmax were 0.18% and 27.27% for PET(HCT) and PET(IACT), respectively. The largest percentage differences in the tumor volumes between HCT/PET and IACT/PET were observed in tumors located in the lowest lobe of the lung. Internal tumor volume defined by functional information using IACT/PET(IACT) fusion images for lung cancer would reduce the inaccuracy of tumor delineation in radiation therapy planning.
Methodology to Forecast Volume and Cost of Cancer Drugs in Low- and Middle-Income Countries
Directory of Open Access Journals (Sweden)
Yehoda M. Martei
2018-02-01
Full Text Available Purpose: In low- and middle-income countries (LMICs), frequent outages of the stock of cancer drugs undermine cancer care delivery and are potentially fatal for patients with cancer. The aim of this study is to describe a methodologic approach to forecast chemotherapy volume and estimate cost that can be readily updated and applied in most LMICs. Methods: Prerequisite data for forecasting are population-based incidence data and cost estimates per unit of drug to be ordered. We used the supplementary guidelines from the WHO list of essential medicines for cancer to predict treatment plans and ordering patterns. We used de-identified aggregate data from the Botswana National Cancer Registry to estimate incident cases. The WHO Management Sciences for Health International Price Indicator was used to estimate unit costs per drug. Results: Chemotherapy volume required for incident cancer cases was estimated as the product of the standardized dose required to complete a full treatment regimen per patient, with a given cancer diagnosis and stage, multiplied by the total number of incident cancer cases with the respective diagnosis. The estimated chemotherapy cost to treat the 10 most common cancers in the public health care sector of Botswana is approximately 2.3 million US dollars. An estimated 66% of the budget is allocated to costs of rituximab and trastuzumab alone, which are used by approximately 10% of the cancer population. Conclusion: This method provides a reproducible approach to forecast chemotherapy volume and cost in LMICs. The chemotherapy volume and cost outputs of this methodology provide key stakeholders with valuable information that can guide budget estimation, resource allocation, and drug-price negotiations for cancer treatment. Ultimately, this will minimize drug shortages or outages and reduce the potential loss of lives that results from an erratic drug supply.
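The forecasting arithmetic described above (full-regimen dose per patient × incident cases, then × unit price) can be sketched as follows; the drugs, doses, case counts, and prices below are hypothetical placeholders, not the Botswana figures:

```python
# Forecast drug volume and cost: full-regimen dose per patient times
# incident cases, times unit price. All figures are hypothetical
# placeholders, not the study's estimates.
regimens = {
    # drug: (mg per full regimen per patient, incident cases, USD/mg)
    "rituximab":   (4200.0,  40, 5.00),
    "doxorubicin": ( 600.0, 250, 0.10),
}

total_cost = 0.0
for drug, (mg_per_patient, cases, usd_per_mg) in regimens.items():
    volume_mg = mg_per_patient * cases    # forecast chemotherapy volume
    total_cost += volume_mg * usd_per_mg

print(round(total_cost))  # -> 855000
```

Updating the forecast for a new year only requires refreshed incidence counts and price-indicator unit costs, which is what makes the approach readily reusable.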
International Nuclear Information System (INIS)
Kemaneci, Efe; Graef, Wouter; Rahimi, Sara; Van Dijk, Jan; Kroesen, Gerrit; Carbone, Emile; Jimenez-Diaz, Manuel
2015-01-01
A microwave-induced oxygen plasma is simulated using both stationary and time-resolved modelling strategies. The stationary model is spatially resolved and it is self-consistently coupled to the microwaves (Jimenez-Diaz et al 2012 J. Phys. D: Appl. Phys. 45 335204), whereas the time-resolved description is based on a global (volume-averaged) model (Kemaneci et al 2014 Plasma Sources Sci. Technol. 23 045002). We observe agreement of the global model data with several published measurements of microwave-induced oxygen plasmas in both continuous and modulated power inputs. Properties of the microwave plasma reactor are investigated and corresponding simulation data based on two distinct models shows agreement on the common parameters. The role of the square wave modulated power input is also investigated within the time-resolved description. (paper)
International Nuclear Information System (INIS)
Zhao, W.H.; Cox, S.F.J.
1980-07-01
In the NMR measurement of dynamic nuclear polarization, a volume average is obtained where the contribution from different parts of the sample is weighted according to the local intensity of the RF field component perpendicular to the large static field. A method of mapping this quantity is described. A small metallic object whose geometry is chosen to perturb the appropriate RF component is scanned through the region to be occupied by the sample. The response of the phase angle of the impedance of a tuned circuit comprising the NMR coil gives a direct measurement of the local weighting factor. The correlation between theory and experiment was obtained by using a circular coil. The measuring method, checked in this way, was then used to investigate the field profiles of practical coils which are required to be rectangular for a proposed experimental neutron polarizing filter. This method can be used to evaluate other practical RF coils. (author)
International Nuclear Information System (INIS)
Chi, Pai-Chun Melinda; Mawlawi, Osama; Luo Dershan; Liao Zhongxing; Macapinlac, Homer A.; Pan Tinsu
2008-01-01
Purpose: Patient respiratory motion can cause image artifacts in positron emission tomography (PET) from PET/computed tomography (CT) and change the quantification of PET for thoracic patients. In this study, respiration-averaged CT (ACT) was used to remove the artifacts, and the changes in standardized uptake value (SUV) and gross tumor volume (GTV) were quantified. Methods and Materials: We incorporated the ACT acquisition in a PET/CT session for 216 lung patients, generating two PET/CT data sets for each patient. The first data set (PET(HCT)/HCT) contained the clinical PET/CT in which PET was attenuation corrected with a helical CT (HCT). The second data set (PET(ACT)/ACT) contained the PET/CT in which PET was corrected with ACT. We quantified the differences between the two data sets in image alignment, maximum SUV (SUVmax), and GTV contours. Results: Of the patients, 68% demonstrated respiratory artifacts in PET(HCT), and for all patients the artifact was removed or reduced in the corresponding PET(ACT). The impact of the respiration artifact was worst for lesions smaller than 50 cm³ and located below the dome of the diaphragm. For lesions in this group, the mean SUVmax difference, GTV volume change, shift in GTV centroid location, and concordance index were 21%, 154%, 2.4 mm, and 0.61, respectively. Conclusion: This study benchmarked the differences between the PET data with and without artifacts. It is important to pay attention to the potential existence of these artifacts during GTV contouring, as such artifacts may increase the uncertainties in the lesion volume and the centroid location.
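A concordance index between two GTV contours can be sketched as the overlap of binary masks (computed here as intersection over union; the study's exact definition may differ, and the contours below are synthetic):

```python
import numpy as np

# Concordance of two synthetic binary GTV masks, computed here as
# intersection over union. The masks are illustrative squares, not
# clinical contours.
a = np.zeros((50, 50), dtype=bool)
a[10:30, 10:30] = True
b = np.zeros((50, 50), dtype=bool)
b[15:35, 15:35] = True

concordance = (a & b).sum() / (a | b).sum()
print(round(concordance, 2))  # -> 0.39
```

A value of 1 means identical contours; values well below 1, like the 0.61 reported above, indicate a substantial shift between artifact-affected and corrected GTVs.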
International Nuclear Information System (INIS)
Nakamura, Mitsuhiro; Miyabe, Yuki; Matsuo, Yukinori; Kamomae, Takeshi; Nakata, Manabu; Yano, Shinsuke; Sawada, Akira; Mizowaki, Takashi; Hiraoka, Masahiro
2012-01-01
The purpose of this study was to experimentally assess the validity of heterogeneity-corrected dose-volume prescription on respiratory-averaged computed tomography (RACT) images in stereotactic body radiotherapy (SBRT) for moving tumors. Four-dimensional computed tomography (CT) data were acquired while a dynamic anthropomorphic thorax phantom with a solitary target moved. The motion pattern was based on cos(t) with a constant respiration period of 4.0 sec along the longitudinal axis of the CT couch. The extent of motion (A₁) was set in the range of 0.0-12.0 mm at 3.0-mm intervals. Treatment planning with the heterogeneity-corrected dose-volume prescription was designed on RACT images. A new commercially available Monte Carlo algorithm for a well-commissioned 6-MV photon beam was used for dose calculation. Dosimetric effects of intrafractional tumor motion were then investigated experimentally under the same conditions as the 4D CT simulation using the dynamic anthropomorphic thorax phantom, films, and an ionization chamber. The passing rate of the γ index was 98.18% with the criteria of 3 mm/3%. The dose error between the planned and the measured isocenter dose under moving conditions was within ±0.7%. From the dose-area histograms on the film, the mean ± standard deviation of the dose covering 100% of the cross section of the target was 102.32 ± 1.20% (range, 100.59-103.49%). By contrast, the irradiated areas receiving more than 95% dose for A₁ = 12 mm were 1.46 and 1.33 times larger than those for A₁ = 0 mm in the coronal and sagittal planes, respectively. This phantom study demonstrated that the cross section of the target received 100% dose under moving conditions in both the coronal and sagittal planes, suggesting that the heterogeneity-corrected dose-volume prescription on RACT images is acceptable in SBRT for moving tumors.
Comparing Generic Drug Markets in Europe and the United States: Prices, Volumes, and Spending.
Wouters, Olivier J; Kanavos, Panos G; McKee, Martin
2017-09-01
Policy Points: Our study indicates that there are opportunities for cost savings in generic drug markets in Europe and the United States. Regulators should make it easier for generic drugs to reach the market. Regulators and payers should apply measures to stimulate price competition among generic drugmakers and to increase generic drug use. To meaningfully evaluate policy options, it is important to analyze historical context and understand why similar initiatives failed previously. Rising drug prices are putting pressure on health care budgets. Policymakers are assessing how they can save money through generic drugs. We compared generic drug prices and market shares in 13 European countries, using data from 2013, to assess the amount of variation that exists between countries. To place these results in context, we reviewed evidence from recent studies on the prices and use of generics in Europe and the United States. We also surveyed peer-reviewed studies, gray literature, and books published since 2000 to (1) outline existing generic drug policies in European countries and the United States; (2) identify ways to increase generic drug use and to promote price competition among generic drug companies; and (3) explore barriers to implementing reform of generic drug policies, using a historical example from the United States as a case study. The prices and market shares of generics vary widely across Europe. For example, prices charged by manufacturers in Switzerland are, on average, more than 2.5 times those in Germany and more than 6 times those in the United Kingdom, based on the results of a commonly used price index. The proportion of prescriptions filled with generics ranges from 17% in Switzerland to 83% in the United Kingdom. By comparison, the United States has historically had low generic drug prices and high rates of generic drug use (84% in 2013), but has in recent years experienced sharp price increases for some off-patent products. There are policy
Energy Technology Data Exchange (ETDEWEB)
Alexoff, David L., E-mail: alexoff@bnl.gov; Dewey, Stephen L.; Vaska, Paul; Krishnamoorthy, Srilalan; Ferrieri, Richard; Schueller, Michael; Schlyer, David J.; Fowler, Joanna S.
2011-02-15
Introduction: PET imaging in plants is receiving increased interest as a new strategy to measure plant responses to environmental stimuli and as a tool for phenotyping genetically engineered plants. PET imaging in plants, however, poses new challenges. In particular, the leaves of most plants are so thin that a large fraction of positrons emitted from PET isotopes (¹⁸F, ¹¹C, ¹³N) escape, while even state-of-the-art PET cameras have significant partial-volume errors for such thin objects. Although these limitations are acknowledged by researchers, little data have been published on them. Methods: Here we measured the magnitude and distribution of escaping positrons from the leaf of Nicotiana tabacum for the radionuclides ¹⁸F, ¹¹C and ¹³N using a commercial small-animal PET scanner. Imaging results were compared to radionuclide concentrations measured from dissection and counting and to a Monte Carlo simulation using GATE (Geant4 Application for Tomographic Emission). Results: Simulated and experimentally determined escape fractions were consistent. The fractions of positrons (mean ± S.D.) escaping the leaf parenchyma were measured to be 59 ± 1.1%, 64 ± 4.4% and 67 ± 1.9% for ¹⁸F, ¹¹C and ¹³N, respectively. Escape fractions were lower in thicker leaf areas like the midrib. Partial-volume averaging underestimated activity concentrations in the leaf blade by a factor of 10 to 15. Conclusions: The foregoing effects combine to yield PET images whose contrast does not reflect the actual activity concentrations. These errors can be largely corrected by integrating activity along the PET axis perpendicular to the leaf surface, including detection of escaped positrons, and calculating concentration using a measured leaf thickness.
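The correction outlined in the conclusions, integrating activity along the PET axis perpendicular to the leaf and dividing by a measured leaf thickness, can be sketched numerically (voxel size, thickness, and the image column below are illustrative assumptions):

```python
import numpy as np

# Recover a leaf activity concentration by integrating along the PET
# axis perpendicular to the leaf and dividing by the measured leaf
# thickness. Voxel size, thickness, and image values are illustrative.
voxel_mm = 0.8                 # PET voxel size along the leaf normal
leaf_thickness_mm = 0.3        # measured thickness, thinner than a voxel

# Image column through the leaf: activity partial-volume diluted
# across several voxels (arbitrary concentration units per voxel).
column = np.array([0.0, 0.1, 0.5, 0.2, 0.0])

areal_activity = column.sum() * voxel_mm             # activity per area
concentration = areal_activity / leaf_thickness_mm   # over true thickness

print(concentration > column.max())  # integration undoes the dilution
```

Summing along the axis makes the result insensitive to how the scanner spreads the thin leaf's signal across voxels, which is why the study recommends it over reading per-voxel values.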
Regulatory volume decrease in Leishmania mexicana: effect of anti-microtubule drugs

Directory of Open Access Journals (Sweden)
Francehuli Dagger
2013-02-01
Full Text Available The trypanosomatid cytoskeleton is responsible for the parasite's shape and it is modulated throughout the different stages of the parasite's life cycle. When parasites are exposed to media with reduced osmolarity, they initially swell, but subsequently undergo compensatory shrinking referred to as regulatory volume decrease (RVD). We studied the effects of anti-microtubule (Mt) drugs on the proliferation of Leishmania mexicana promastigotes and their capacity to undergo RVD. All of the drugs tested exerted antiproliferative effects of varying magnitudes [ansamitocin P3 (AP3) > trifluoperazine > taxol > rhizoxin > chlorpromazine]. No direct relationship was found between antiproliferative drug treatment and RVD. Similarly, Mt stability was not affected by drug treatment. Ansamitocin P3, which is effective at nanomolar concentrations, blocked amastigote-promastigote differentiation and was the only drug that impeded RVD, as measured by light dispersion. AP3 induced cells with two kinetoplasts (Kt) and one nucleus that had numerous flagella-associated Kts throughout the cell. These results suggest that the dramatic morphological changes induced by AP3 alter the spatial organisation and directionality of the Mts that are necessary for the parasite's hypotonic stress-induced shape change, as well as its recovery.
International Nuclear Information System (INIS)
Barraclough, B; Li, J; Liu, C; Yan, G
2014-01-01
Purpose: Fourier-based deconvolution approaches used to eliminate ion chamber volume averaging effect (VAE) suffer from measurement noise. This work aims to investigate a novel method to account for ion chamber VAE through convolution in a commercial treatment planning system (TPS). Methods: Beam profiles of various field sizes and depths of an Elekta Synergy were collected with a finite size ion chamber (CC13) to derive a clinically acceptable beam model for a commercial TPS (Pinnacle 3 ), following the vendor-recommended modeling process. The TPS-calculated profiles were then externally convolved with a Gaussian function representing the chamber (σ = chamber radius). The agreement between the convolved profiles and measured profiles was evaluated with a one-dimensional Gamma analysis (1%/1mm) as an objective function for optimization. TPS beam model parameters for focal and extra-focal sources were optimized and loaded back into the TPS for new calculation. This process was repeated until the objective function converged using a Simplex optimization method. Planar doses of 30 IMRT beams were calculated with both the clinical and the re-optimized beam models and compared with MapCHECK™ measurements to evaluate the new beam model. Results: After re-optimization, the two orthogonal source sizes for the focal source reduced from 0.20/0.16 cm to 0.01/0.01 cm, which were the minimal allowed values in Pinnacle. No significant change in the parameters for the extra-focal source was observed. With the re-optimized beam model, the average Gamma passing rate for the 30 IMRT beams increased from 92.1% to 99.5% with a 3%/3mm criterion and from 82.6% to 97.2% with a 2%/2mm criterion. Conclusion: We proposed a novel method to account for ion chamber VAE in a commercial TPS through convolution. The re-optimized beam model, with VAE accounted for through a reliable and easy-to-implement convolution and optimization approach, outperforms the original beam model in standard IMRT QA.
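The externally applied convolution step can be sketched as follows. This is a minimal, hypothetical illustration of blurring a calculated profile with a Gaussian whose σ equals the chamber radius; the grid spacing, profile shape, and the ~3 mm radius used for CC13 are assumptions, not the authors' exact parameters:

```python
import numpy as np

def convolve_with_chamber(profile, spacing_mm, sigma_mm):
    """Convolve a calculated beam profile with a normalized Gaussian
    detector response (sigma = chamber radius) to mimic the volume
    averaging a finite-size ion chamber imposes on the measurement."""
    half_width = int(np.ceil(4 * sigma_mm / spacing_mm))
    xs = np.arange(-half_width, half_width + 1) * spacing_mm
    kernel = np.exp(-0.5 * (xs / sigma_mm) ** 2)
    kernel /= kernel.sum()
    # mode="same" keeps the profile length; edges assume the profile is
    # flat well beyond the penumbra region of interest
    return np.convolve(profile, kernel, mode="same")

# Idealized sharp penumbra: the convolution broadens it, just as a
# finite chamber would broaden the measured profile.
x = np.linspace(-20, 20, 401)           # mm, 0.1 mm spacing
ideal = 0.5 * (1 - np.tanh(x / 1.0))    # sharp field edge at x = 0
blurred = convolve_with_chamber(ideal, 0.1, 3.0)
```

Comparing the 20%-80% penumbra widths of `ideal` and `blurred` shows the volume averaging effect the optimization loop compensates for.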
Energy Technology Data Exchange (ETDEWEB)
Barraclough, Brendan; Lebron, Sharon [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32608 and J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611 (United States); Li, Jonathan G.; Fan, Qiyong; Liu, Chihray; Yan, Guanghua, E-mail: yangua@shands.ufl.edu [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32608 (United States)
2016-05-15
Purpose: To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). Methods: A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit “real” ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. The accuracy of the beam models was evaluated by assessing the 20%–80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. Results: The convolution-based approach was found to be effective for all three ionization chambers with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Conclusions: Although all three DRFs were found adequate to
Muenster, Uwe; Mueck, Wolfgang; van der Mey, Dorina; Schlemmer, Karl-Heinz; Greschat-Schade, Susanne; Haerter, Michael; Pelzetter, Christian; Pruemper, Christian; Verlage, Joerg; Göller, Andreas H; Ohm, Andreas
2016-05-01
The purpose of the study was to experimentally deduce pH-dependent critical volumes to dissolve applied dose (VDAD) that determine whether a drug candidate can be developed as an immediate release (IR) tablet containing crystalline API, or whether solubilization technology is needed to allow for sufficient oral bioavailability. pH-dependent VDADs of 22 and 83 compounds were plotted vs. the relative oral bioavailability (AUC solid vs. AUC solution formulation, Frel) in humans and rats, respectively. Furthermore, in order to investigate to what extent Frel rat may predict issues with solubility-limited absorption in humans, Frel rat was plotted vs. Frel human. Additionally, the impact of bile salts and lecithin on in vitro dissolution of poorly soluble compounds was tested and the data compared to Frel rat and human. Respective in vitro - in vivo and in vivo - in vivo correlations were generated and used to build developability criteria. As a result, based on pH-dependent VDAD, Frel rat and in vitro dissolution in simulated intestinal fluid, the IR formulation strategy within Pharmaceutical Research and Development organizations can already be set at a late stage of drug discovery. Copyright © 2016 Elsevier B.V. All rights reserved.
A Critical Review of the Drug/Performance Literature. Volume II.
1979-12-01
Cannabis Drug Tolerance Drugs and Stress Amphetamine Depressants Drug Users Ethanol Behavior Drug Abuse Drug Withdrawal Hallucinogens Cannabinoids Drug...Therapeutics Clin Tox Clinical Toxicology Couw Psychopharm Communications in Psychopharmacology Comp Psychiat Comprehensive Psychiatry Current Ther Res Current...European Journal of Toxicology Exp Neurol Experimental Neurology EEG Clin Neurophys EEG Clinical Neurophysiology EEG J EEG Journal Ger Med German Medicine
DEFF Research Database (Denmark)
Hoffmann, Else Kay; Sørensen, Belinda Halling; Sauter, Daniel Rafael Peter
2015-01-01
to be an essential component of both VRAC and VSOAC. Reduced VRAC and VSOAC activities are seen in drug-resistant cancer cells. ANO1 is a calcium-activated chloride channel expressed on the plasma membrane of e.g. secretory epithelia. ANO1 is amplified and highly expressed in a large number of carcinomas. The gene ... functions as well as their role in cancer and drug resistance.
1980-03-01
This report presents the findings of a workshop on the chemical analysis of human body fluids for drugs of interest in highway safety. A cross-disciplinary panel of experts reviewed the list of drugs of interest developed in a previous workshop and d...
Liu, Q.; Lange, R.
2003-12-01
Ferric iron is an important component in magmatic liquids, especially in those formed at subduction zones. Although it has long been known that Fe3+ occurs in four-, five- and six-fold coordination in crystalline compounds, only recently have all three Fe3+ coordination sites been confirmed in silicate glasses utilizing XANES spectroscopy at the Fe K-edge (Farges et al., 2003). Because the density of a magmatic liquid is largely determined by the geometrical packing of its network-forming cations (e.g., Si4+, Al3+, Ti4+, and Fe3+), the capacity of Fe3+ to undergo composition-induced coordination change affects the partial molar volume of the Fe2O3 component, which must be known to calculate how the ferric-ferrous ratio in magmatic liquids changes with pressure. Previous work has shown that the partial molar volume of Fe2O3 (VFe2O3) varies between calcic vs. sodic silicate melts (Mo et al., 1982; Dingwell and Brearley, 1988; Dingwell et al., 1988). The purpose of this study is to extend the data set in order to search for systematic variations in VFe2O3 with melt composition. High temperature (867-1534 °C) density measurements were performed on eleven liquids in the Na2O-Fe2O3-FeO-SiO2 (NFS) system and five liquids in the K2O-Fe2O3-FeO-SiO2 (KFS) system using the Pt double-bob Archimedean method. The ferric-ferrous ratios in the sodic and potassic liquids at each temperature of density measurement were calculated from the experimentally calibrated models of Lange and Carmichael (1989) and Tangeman et al. (2001), respectively. Compositions range (in mol%) from 4-18 Fe2O3, 0-3 FeO, 12-39 Na2O, 25-37 K2O, and 43-78 SiO2. Our density data are consistent with those of Dingwell et al. (1988) on similar sodic liquids. Our results indicate that for all five KFS liquids and for eight of eleven NFS liquids, the partial molar volume of the Fe2O3 component is a constant (41.57 ± 0.14 cm3/mol) and exhibits zero thermal expansivity (similar to that for the SiO2 component). This value
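The extraction of a partial molar volume from density data rests on the (near-)linearity of molar volume in composition, V = Σ xᵢ V̄ᵢ, so a least-squares fit over several melt compositions recovers the component volumes. A hedged numerical sketch: the 41.57 cm3/mol figure is taken from the abstract, but the compositions and the other component volumes below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical mol fractions (Na2O, Fe2O3, SiO2) for four melts
X = np.array([
    [0.20, 0.10, 0.70],
    [0.30, 0.08, 0.62],
    [0.25, 0.15, 0.60],
    [0.35, 0.12, 0.53],
])

# Assumed linear mixing model V = sum_i x_i * Vbar_i (cm^3/mol);
# only the Fe2O3 value comes from the abstract, the rest are illustrative
v_true = np.array([28.0, 41.57, 26.9])
V = X @ v_true   # "measured" molar volumes of the four melts

# Least squares recovers the partial molar volumes from V and X
v_fit, *_ = np.linalg.lstsq(X, V, rcond=None)
```

In practice V comes from measured density and mean molar mass, and the residuals of the fit are what reveal composition-induced coordination changes of Fe3+.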
Institute of Scientific and Technical Information of China (English)
谢佑卿
2011-01-01
In the framework of the systematic science of alloys, the average molar property (volume and potential energy) functions of disordered alloys were established. From these functions, the average molar volume function, partial molar property functions, derivative functions with respect to composition, the general equation relating partial and average molar properties of components, the difference equation and constraining equation between partial and average molar properties, as well as the general Gibbs-Duhem formula were derived. It was proved that the partial molar properties calculated from the various combinative functions of the average molar properties of alloys are equal, but that in general the partial molar properties are not equal to the average molar properties of a given component; that is, the partial molar properties cannot represent the corresponding molar properties of the component. All the equations and functions established in this work were verified by calculating the partial and average atomic volumes of the components, as well as the average atomic volumes of the alloys, in the Au-Ni system.
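The relations stated above (the average molar property is recovered from the mole-fraction-weighted partials, and the partials obey the Gibbs-Duhem constraint) can be checked numerically for a hypothetical binary system. The polynomial V(x) below is purely illustrative, not an Au-Ni fit:

```python
import numpy as np

# Hypothetical average molar volume V(x) of a binary alloy,
# x = mole fraction of component 2 (cm^3/mol; illustrative only)
x = np.linspace(0.05, 0.95, 1801)
V = 10.0 + 4.0 * x + 1.5 * x * (1 - x)

# Intercept (tangent) method for the partial molar volumes
dVdx = np.gradient(V, x)
V1_bar = V - x * dVdx          # partial molar volume of component 1
V2_bar = V + (1 - x) * dVdx    # partial molar volume of component 2

# The average property is recovered from the partials:
recon = (1 - x) * V1_bar + x * V2_bar
```

The reconstruction holds identically, and in the interior of the grid the Gibbs-Duhem constraint x₁ dV̄₁/dx + x₂ dV̄₂/dx = 0 holds to numerical precision.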
Bhalla, Amneet Pal Singh; Johansen, Hans; Graves, Dan; Martin, Dan; Colella, Phillip; Applied Numerical Algorithms Group Team
2017-11-01
We present a consistent cell-averaged discretization for incompressible Navier-Stokes equations on complex domains using embedded boundaries. The embedded boundary is allowed to freely cut the locally-refined background Cartesian grid. Implicit-function representation is used for the embedded boundary, which allows us to convert the required geometric moments in the Taylor series expansion (up to arbitrary order) of polynomials into an algebraic problem in lower dimensions. The computed geometric moments are then used to construct stencils for various operators like the Laplacian, divergence, gradient, etc., by solving a least-squares system locally. We also construct the inter-level data-transfer operators like prolongation and restriction for multigrid solvers using the same least-squares system approach. This allows us to retain high-order of accuracy near coarse-fine interface and near embedded boundaries. Canonical problems like Taylor-Green vortex flow and flow past bluff bodies will be presented to demonstrate the proposed method. U.S. Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231).
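The least-squares stencil construction can be illustrated in its simplest one-dimensional form: fit monomials at the stencil points and read off the weights for the desired operator. This toy sketch is not the authors' embedded-boundary code, just the same idea stripped to 1D:

```python
import numpy as np

def laplacian_stencil(points, h=1.0):
    """Least-squares stencil for u''(0) from point values u(x_i):
    build the Vandermonde matrix of monomials 1, x, x^2 at the stencil
    points and extract the second-derivative weights via the
    pseudo-inverse (a 1D analogue of the moment-based construction)."""
    pts = np.asarray(points, dtype=float) * h
    A = np.vander(pts, 3, increasing=True)   # columns: 1, x, x^2
    # u''(0) = 2 * (coefficient of x^2); the corresponding row of the
    # pseudo-inverse gives the stencil weights at the points
    return 2.0 * np.linalg.pinv(A)[2]

# Symmetric points recover the classic [1, -2, 1] / h^2 stencil
w = laplacian_stencil([-1, 0, 1], h=0.5)
```

With more points than monomials the pseudo-inverse gives a genuine least-squares stencil, which is how irregular cut-cell geometries near the embedded boundary are handled.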
1980-06-01
This report presents the findings of a workshop on experimental research in the area of drugs and highway safety. Complementing studies of drug use in different driving populations, experimentation here refers to studies performed under controlled co...
1980-06-01
This report presents the findings of a workshop on epidemiology in drugs and highway safety. A cross-disciplinary panel of experts (1) identified methodological issues and constraints present in research to define the nature and magnitude of the drug...
Partial molar volumes of some drug and pro-drug substances in 1-octanol at T = 298.15 K
International Nuclear Information System (INIS)
Manin, Alex N.; Shmukler, Liudmila E.; Safonova, Liubov P.; Perlovich, German L.
2010-01-01
The article deals with measuring the densities of phenol, acetanilide, benzamide, benzoic acid, phenacetin, i-(acetylamino)-benzoic acid, i-hydroxy-benzamide, and i-acetaminophen (where i = 1, 2, 3) in 1-octanol in the wide concentration interval at T = 298.15 K. It also concerns the evaluation of apparent molar volumes and partial molar volumes at infinite dilution, V{sub 2}{sup 0}-bar, as well as comparative analysis of the free volumes per molecule in the octanolic solutions, V{sub 2}{sup free}, and in the crystal lattices, V{sub 2}{sup free} (cr), from the nature and position of the substitutes. Also described is the evaluation of the increments of V{sub 2}{sup 0}-bar and V{sub 2}{sup free} for the unsubstituted molecules and isomers and the methods to obtain partial molar volumes for various functional groups at infinite dilution in 1-octanol at T = 298.15 K. Also considered is the limiting partial molar volume of the solutes in terms of the scaled particle theory.
Partial molar volumes of some drug and pro-drug substances in 1-octanol at T = 298.15 K
Energy Technology Data Exchange (ETDEWEB)
Manin, Alex N.; Shmukler, Liudmila E.; Safonova, Liubov P. [Institute of Solution Chemistry, Russian Academy of Sciences, 153045 Ivanovo (Russian Federation); Perlovich, German L., E-mail: glp@isc-ras.r [Institute of Solution Chemistry, Russian Academy of Sciences, 153045 Ivanovo (Russian Federation)
2010-03-15
The article deals with measuring the densities of phenol, acetanilide, benzamide, benzoic acid, phenacetin, i-(acetylamino)-benzoic acid, i-hydroxy-benzamide, and i-acetaminophen (where i = 1, 2, 3) in 1-octanol in the wide concentration interval at T = 298.15 K. It also concerns the evaluation of apparent molar volumes and partial molar volumes at infinite dilution, V{sub 2}{sup 0}-bar as well as comparative analysis of the free volumes per molecule in the octanolic solutions, V{sub 2}{sup free}, and in the crystal lattices, V{sub 2}{sup free} (cr), from the nature and position of the substitutes. Also described is the evaluation of the increments of V{sub 2}{sup 0}-bar andV{sub 2}{sup free} for the unsubstituted molecules and isomers and the methods to obtain partial molar volumes for various functional groups at infinite dilution in 1-octanol at T = 298.15 K. Also considered is the limiting partial molar volume of the solutes in terms of the scaled particle theory.
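The apparent molar volume referred to in both records follows from measured densities via the standard relation φV = M₂/ρ − 1000(ρ − ρ₀)/(m·ρ·ρ₀). A hedged sketch; the molar mass of benzoic acid is standard, but the densities and molality below are assumed for illustration, not the paper's measurements:

```python
def apparent_molar_volume(M2, m, rho, rho0):
    """Apparent molar volume (cm^3/mol) from solution density.
    M2: solute molar mass (g/mol); m: molality (mol/kg);
    rho, rho0: solution and solvent densities (g/cm^3)."""
    return M2 / rho - 1000.0 * (rho - rho0) / (m * rho * rho0)

# Benzoic acid (M2 = 122.12 g/mol) in 1-octanol at 298.15 K, with an
# assumed solvent density of 0.8262 g/cm^3 and an illustrative
# solution density at 0.1 mol/kg:
phi_v = apparent_molar_volume(122.12, 0.1, 0.8270, 0.8262)
```

Extrapolating φV to infinite dilution (m → 0) yields the partial molar volume V{sub 2}{sup 0}-bar discussed in the abstract.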
DemQSAR: predicting human volume of distribution and clearance of drugs.
Demir-Kavuk, Ozgur; Bentzien, Jörg; Muegge, Ingo; Knapp, Ernst-Walter
2011-12-01
In silico methods characterizing molecular compounds with respect to pharmacologically relevant properties can accelerate the identification of new drugs and reduce their development costs. Quantitative structure-activity/-property relationships (QSAR/QSPR) correlate structure and physico-chemical properties of molecular compounds with a specific functional activity/property under study. Typically a large number of molecular features are generated for the compounds. In many cases the number of generated features exceeds the number of molecular compounds with known property values that are available for learning. Machine learning methods tend to overfit the training data in such situations, i.e. the method adjusts to very specific features of the training data, which are not characteristic for the considered property. This problem can be alleviated by diminishing the influence of unimportant, redundant or even misleading features. A better strategy is to eliminate such features completely. Ideally, a molecular property can be described by a small number of features that are chemically interpretable. The purpose of the present contribution is to provide a predictive modeling approach, which combines feature generation, feature selection, model building and control of overtraining into a single application called DemQSAR. DemQSAR is used to predict human volume of distribution (VD(ss)) and human clearance (CL). To control overtraining, quadratic and linear regularization terms were employed. A recursive feature selection approach is used to reduce the number of descriptors. The prediction performance is as good as the best predictions reported in the recent literature. The example presented here demonstrates that DemQSAR can generate a model that uses very few features while maintaining high predictive power. A standalone DemQSAR Java application for model building of any user defined property as well as a web interface for the prediction of human VD(ss) and CL is
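The two ingredients highlighted in the abstract, quadratic (L2) regularization against overtraining and recursive feature selection, can be sketched generically. This is not DemQSAR itself, just a minimal ridge-plus-RFE loop on synthetic data:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form L2-regularized least squares (a quadratic
    regularization term controls overfitting when features outnumber
    compounds with known property values)."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

def recursive_feature_elimination(X, y, lam, n_keep):
    """Repeatedly refit and drop the feature with the smallest
    |coefficient| until only n_keep descriptors remain."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        w = ridge_fit(X[:, keep], y, lam)
        keep.pop(int(np.argmin(np.abs(w))))
    return keep

# Synthetic data: the property depends on features 0 and 1 only;
# the remaining eight descriptors are pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)
selected = recursive_feature_elimination(X, y, lam=1.0, n_keep=2)
```

On this toy data the elimination loop retains exactly the two informative descriptors, illustrating why a small, interpretable feature set can keep predictive power.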
Caltrans Average Annual Daily Traffic Volumes (2004)
California Environmental Health Tracking Program — [ from http://www.ehib.org/cma/topic.jsp?topic_key=79 ] Traffic exhaust pollutants include compounds such as carbon monoxide, nitrogen oxides, particulates (fine...
Berezhkovskiy, Leonid M
2013-02-01
The steady state, V(ss), terminal volume of distribution, V(β), and the terminal half-life, t(1/2), are commonly obtained from the drug plasma concentration-time profile, C(p)(t), following intravenous dosing. Unlike V(ss), which can be calculated based on the physicochemical properties of drugs considering the equilibrium partitioning between plasma and organ tissues, t(1/2) and V(β) cannot be calculated that way because they depend on the rates of drug transfer between blood and tissues. Considering the physiological pharmacokinetic model pertinent to the terminal phase of drug elimination, a novel equation that calculates t(1/2) (and consequently V(β)) was derived. It turns out that V(ss), the total body clearance, Cl, the equilibrium blood-plasma concentration ratio, r, and the physiological parameters of the body such as cardiac output, and blood and tissue volumes are sufficient for determination of terminal kinetics. Calculation of t(1/2) by the obtained equation appears to be in good agreement with the experimentally observed values of this parameter in pharmacokinetic studies in rat, monkey, dog, and human. The equation for the determination of the pre-exponent of the terminal phase of C(p)(t) is also found. The obtained equation allows one to predict t(1/2) in humans, assuming that V(ss) and Cl were either obtained by allometric scaling or, respectively, calculated in silico or based on in vitro drug stability measurements. For compounds that have high clearance, the derived equation may be applied to calculate r just using the routine data on Cl, V(ss), and t(1/2), rather than doing the in vitro assay to measure this parameter. Copyright © 2012 Wiley Periodicals, Inc.
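For orientation, in the one-compartment limit the quantities in this abstract collapse to the classical relations t(1/2) = ln 2 · V/Cl and V(β) = t(1/2) · Cl / ln 2; the paper's physiological equation generalizes this by bringing in blood-tissue transfer rates. A minimal sketch with a hypothetical drug:

```python
import math

def half_life(v_ss_l, cl_l_per_h):
    """One-compartment limiting case t1/2 = ln(2) * Vss / Cl.
    (In this limit Vss and V(beta) coincide; the paper's physiological
    model is needed when blood-tissue transfer rates matter.)"""
    return math.log(2.0) * v_ss_l / cl_l_per_h

def terminal_volume(t_half_h, cl_l_per_h):
    """V(beta) recovered from the terminal slope:
    V(beta) = t1/2 * Cl / ln(2)."""
    return t_half_h * cl_l_per_h / math.log(2.0)

# Hypothetical drug: Vss = 50 L, Cl = 5 L/h
t_half = half_life(50.0, 5.0)   # roughly 6.9 h
```

The round trip `terminal_volume(t_half, 5.0)` returns the 50 L we started from, mirroring how the abstract's equation ties V(β) and t(1/2) together through Cl.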
Śliwczyński, Andrzej; Brzozowska, Melania; Jacyna, Andrzej; Iltchev, Petre; Iwańczuk, Tymoteusz; Wierzba, Waldemar; Marczak, Michał; Orlewska, Katarzyna; Szymański, Piotr; Orlewska, Ewa
2017-01-01
To investigate the drug-class-specific changes in the volume and cost of antidiabetic medications in Poland in 2012-2015, a retrospective analysis was conducted based on the National Health Fund database covering the entire Polish population. The volume of antidiabetic medications is reported according to ATC/DDD methodology, and costs in current international dollars, based on purchasing power parity. During a 4-year observational period the number of patients, consumption of antidiabetic drugs and costs increased by 17%, 21% and 20%, respectively. Biguanides are the basic diabetes medication with a 39% market share. The insulin market is still dominated by human insulins; new antidiabetics (incretins, thiazolidinediones) are practically absent. Insulins had the largest share in diabetes medications expenditures (67% in 2015). The increase in antidiabetic medications costs over the analysed period of time was mainly caused by the increased use of insulin analogues. The observed tendencies correspond to the evidence-based HTA recommendations. The reimbursement status, the ratio of cost to clinical outcomes and data on the long-term safety have a deciding impact on how a drug is used.
Lagrangian averaging with geodesic mean.
Oliver, Marcel
2017-11-01
This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
Alcohol, other Drugs, and Obesity: Plan-Of-The-Day-Notes, Volume 2
1993-08-11
crashes, drowning, etc., can result from memory loss, distorted sight perception, poor judgment or loss of coordination. Also, the painkilling effects may...in the presentation of alcohol and other drug abuse awareness education. An effective information program is essential to all prevention efforts. One...filling!" than for his efforts on the field and decided to quit the Miller Lite team. "I don’t like the effect I was having on a lot of little people
International Nuclear Information System (INIS)
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs
Averaging in spherically symmetric cosmology
International Nuclear Information System (INIS)
Coley, A. A.; Pelavas, N.
2007-01-01
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a significant effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
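The distinction the article draws can be seen already for planar rotations: the barycenter of rotation matrices leaves the rotation group, while the Riemannian (geodesic) mean stays on it. A small sketch, using the fact that for SO(2) with angles inside a half-circle the geodesic mean reduces to the plain angle mean:

```python
import numpy as np

def rot(theta):
    """2D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

angles = np.array([0.0, 0.4, 1.2])   # three sample rotations

# Naive barycenter of the matrices: generally NOT a rotation;
# its determinant drops below 1, i.e. it falls off SO(2).
barycenter = sum(rot(a) for a in angles) / len(angles)

# Riemannian (geodesic) mean: the point minimizing summed squared
# geodesic distances -- for SO(2) and nearby angles, the angle mean.
geodesic_mean = rot(angles.mean())
```

Checking the determinant and orthogonality of the two results makes the article's point concrete: only the geodesic mean is itself a rotation.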
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 19; Issue 3. Drug Metabolism: A Fascinating Link Between Chemistry and Biology. Nikhil Taxak Prasad V Bharatam. General Article Volume 19 Issue 3 March 2014 pp 259-282 ...
International Nuclear Information System (INIS)
Ichiguchi, Katsuji
1998-01-01
A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
Watson, Jane; Chick, Helen
2012-01-01
This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…
Averaging operations on matrices
Indian Academy of Sciences (India)
2014-07-03
Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of a brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices.
Directory of Open Access Journals (Sweden)
Patricia Bouyer
2015-09-01
Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
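The objective itself, the long-run average of the accumulated energy, is easy to state in code. A sketch for a play that settles into a fixed cycle of weights (memoryless strategies induce exactly such lasso-shaped plays; the weights below are illustrative):

```python
def average_energy(cycle_weights, steps=3000):
    """Long-run average of the accumulated energy along a play that
    repeats the given weight cycle -- the quantity optimized in
    average-energy games."""
    energy = 0   # accumulated energy after each action
    total = 0    # running sum of the energy levels
    for t in range(steps):
        energy += cycle_weights[t % len(cycle_weights)]
        total += energy
    return total / steps

# A zero-mean cycle keeps the energy bounded; the average energy then
# depends on where along the cycle the gains and losses fall:
# [2, -1, -1] visits energy levels 2, 1, 0 -> average energy 1.
```

Note that both example cycles have mean-payoff zero yet different average energies, which is exactly why the average-energy objective refines the mean-payoff one.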
DEFF Research Database (Denmark)
Glenthoj, Andreas; Glenthøj, Birte Yding; Mackeprang, Torben
2007-01-01
of exposure to medication and in controls at baseline. Caudate nucleus, nucleus accumbens, and putamen volumes were measured. Compared with controls, absolute volumes of interest (VOIs) were smaller in patients at baseline and increased after treatment. However, with controls for age, gender and whole brain or intracranial volume, the only significant difference between patients and controls was a Hemisphere x Group interaction for the caudate nucleus at baseline, with controls having larger left than right caudate nuclei and patients having marginally larger right than left caudate volumes. Within patients, the two...
DEFF Research Database (Denmark)
Glenthoj, Andreas; Glenthoj, Birte Y; Mackeprang, Torben
2007-01-01
of exposure to medication and in controls at baseline. Caudate nucleus, nucleus accumbens, and putamen volumes were measured. Compared with controls, absolute volumes of interest (VOIs) were smaller in patients at baseline and increased after treatment. However, with controls for age, gender and whole brain or intracranial volume, the only significant difference between patients and controls was a Hemisphere x Group interaction for the caudate nucleus at baseline, with controls having larger left than right caudate nuclei and patients having marginally larger right than left caudate volumes. Within patients, the two...
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
Market power and state costs of HIV/AIDS drugs.
Leibowitz, Arleen A; Sood, Neeraj
2007-03-01
We examine whether U.S. states can use their market power to reduce the costs of supplying prescription drugs to uninsured and underinsured persons with HIV through a public program, the AIDS Drug Assistance Program (ADAP). Among states that purchase drugs from manufacturers and distribute them directly to clients, those that purchase a greater volume pay lower average costs per prescription. Among states depending on retail pharmacies to distribute drugs and then claiming rebates from manufacturers, those that contract with smaller numbers of pharmacy networks have lower average costs. Average costs per prescription do not differ between the two purchase methods.
Westbrook, J I; Li, L; Raban, M Z; Baysari, M T; Mumford, V; Prgomet, M; Georgiou, A; Kim, T; Lake, R; McCullagh, C; Dalla-Pozza, L; Karnon, J; O'Brien, T A; Ambler, G; Day, R; Cowell, C T; Gazarian, M; Worthington, R; Lehmann, C U; White, L; Barbaric, D; Gardo, A; Kelly, M; Kennedy, P
2016-10-21
Medication errors are the most frequent cause of preventable harm in hospitals. Medication management in paediatric patients is particularly complex and consequently the potential for harm is greater than in adults. Electronic medication management (eMM) systems are heralded as a highly effective intervention to reduce adverse drug events (ADEs), yet internationally, evidence of their effectiveness in paediatric populations is limited. This study will assess the effectiveness of an eMM system to reduce medication errors, ADEs and length of stay (LOS). The study will also investigate system impact on clinical work processes. A stepped-wedge cluster randomised controlled trial (SWCRCT) will measure changes pre-eMM and post-eMM system implementation in prescribing and medication administration error (MAE) rates, potential and actual ADEs, and average LOS. In stage 1, 8 wards within the first paediatric hospital will be randomised to receive the eMM system 1 week apart. In stage 2, the second paediatric hospital will randomise implementation of a modified eMM and outcomes will be assessed. Prescribing errors will be identified through record reviews, and MAEs through direct observation of nurses and record reviews. Actual and potential severity will be assigned. Outcomes will be assessed at the patient-level using mixed models, taking into account correlation of admissions within wards and multiple admissions for the same patient, with adjustment for potential confounders. Interviews and direct observation of clinicians will investigate the effects of the system on workflow. Data from site 1 will be used to develop improvements in the eMM and implemented at site 2, where the SWCRCT design will be repeated (stage 2). The research has been approved by the Human Research Ethics Committee of the Sydney Children's Hospitals Network and Macquarie University. Results will be reported through academic journals and seminar and conference presentations. Australian New Zealand
Average nuclear surface properties
International Nuclear Information System (INIS)
Groote, H. von.
1979-01-01
The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is also extended to the case in which there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The interaction parameters of this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, past the neutron-drip line, to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)
Americans' Average Radiation Exposure
International Nuclear Information System (INIS)
2000-01-01
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Johnston, Lloyd D.; O'Malley, Patrick M.; Bachman, Jerald G.; Schulenberg, John E.
2011-01-01
The Monitoring the Future (MTF) study involves an ongoing series of national surveys of American adolescents and adults that has provided the nation with a vital window into the important, but largely hidden, problem behaviors of illegal drug use, alcohol use, tobacco use, anabolic steroid use, and psychotherapeutic drug use. For more than a third…
Hu, Maorong; Li, Jun; Eyler, Lisa; Guo, Xiaofeng; Wei, Qingling; Tang, Jingsong; Liu, Feng; He, Zhong; Li, Lihua; Jin, Hua; Liu, Zhening; Wang, Juan; Liu, Fang; Chen, Huafu; Zhao, Jingping
2013-03-01
The shared neuropathological characteristics of patients with schizophrenia and their siblings might represent intermediate phenotypes that could be used to investigate genetic susceptibility to the illness. We sought to discover gray matter volume differences in patients with schizophrenia and their unaffected siblings with voxel-based morphometry (VBM). We recruited antipsychotic drug-naive, first-episode schizophrenia (FES) patients, their unaffected siblings and age-, sex- and handedness-matched healthy controls. We used VBM to investigate differences in gray matter volume among the 3 groups. There were significant gray matter volumetric differences among the 3 groups in bilateral hippocampal and parahippocampal gyri, bilateral middle temporal gyri, and superior temporal gyri (FDR p < ...) ... temporal gyrus, and volume of this region was not different between siblings and patients. Our findings confirm and extend previous VBM analyses in schizophrenia and indicate that schizophrenia may be characterized by an abnormal development of cerebral lateralization. Furthermore, these data argue that patients and their unaffected siblings might share decreases in the gray matter volume of the left middle temporal gyrus, and this regional reduction might be a potential endophenotype for schizophrenia. Copyright © 2013 Elsevier B.V. All rights reserved.
Messori, Andrea
2016-08-01
Several cases of expensive drugs designed for large patient populations (e.g. sofosbuvir) have raised a complex question in terms of drug pricing. Even assuming value-based pricing, the treatment with these drugs of all eligible patients would have an immense budgetary impact, which is unsustainable even for the richest countries. This raises the need to reduce the prices of these agents in comparison with those suggested by the value-based approach and to devise new pricing methods that can achieve this goal. The present study discusses in detail the following two methods: (i) the approach based on setting nation-wide budget thresholds for individual innovative agents, in which a fixed proportion of the historical pharmaceutical expenditure represents the maximum budget attributable to an innovative treatment; (ii) the approach based on nation-wide price-volume agreements, in which drug prices are progressively reduced as more patients receive the treatment. The first approach has been developed in the USA by the Institute for Clinical and Economic Review and has been applied to PCSK9 inhibitors (alirocumab and evolocumab). The second approach has been designed for the Italian market and has found a systematic application to manage the price of ranibizumab, sofosbuvir, and PCSK9 inhibitors. While, in the past, price-volume agreements have been applied only on an empirical basis (i.e. in the absence of any quantitative theoretical rule), more recently some explicit mathematical models have been described. The performance of these models is now being evaluated on the basis of the real-world experiences conducted in some European countries, especially Italy.
The difference between alternative averages
Directory of Open Access Journals (Sweden)
James Vaupel
2012-09-01
Full Text Available BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
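The stated identity — the difference between two weighted averages equals the covariance between the variable and the ratio of the weighting functions, divided by the average of that ratio — is easy to verify numerically. A minimal sketch with toy data (not from the paper), where all covariances and averages are taken under the baseline weights f:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0])   # the variable (e.g. an age-specific rate)
f = np.array([0.4, 0.3, 0.2, 0.1])   # baseline weighting function
g = np.array([0.1, 0.2, 0.3, 0.4])   # alternative weighting function

r = g / f                            # ratio of the weighting functions

avg_f = np.sum(f * x) / np.sum(f)    # average of x under f
avg_g = np.sum(g * x) / np.sum(g)    # average of x under g

# Covariance of x and r, and average of r, both computed under f
mean_r = np.sum(f * r) / np.sum(f)
cov_xr = np.sum(f * x * r) / np.sum(f) - avg_f * mean_r

# Identity: avg_g - avg_f = Cov_f(x, r) / E_f(r)
assert np.isclose(avg_g - avg_f, cov_xr / mean_r)
```

The identity follows from avg_g = E_f(r·x)/E_f(r): subtracting avg_f and clearing the denominator leaves exactly the covariance term.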
The flattening of the average potential in models with fermions
International Nuclear Information System (INIS)
Bornholdt, S.
1993-01-01
The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)
DEFF Research Database (Denmark)
Sunesen, Vibeke Hougaard; Vedelsdal, Rune; Kristensen, Henning Gjelstrup
2005-01-01
The influence of liquid intake and a lipid-rich meal on the bioavailability of a lipophilic drug was investigated. Danazol was used as the model substance. In a randomized four-way crossover study, eight healthy male volunteers received four different treatments with danazol at 2-week intervals following an overnight fast (one I.V. infusion and three oral treatments). The I.V. formulation contained 50 mg danazol solubilized in 40% hydroxypropyl-beta-cyclodextrin. The oral treatments were a Standard treatment, a Standard + 800 ml water treatment and a Standard + lipid-rich meal treatment... The lipid-rich meal or extra 800 ml water increased the bioavailability by 400 and 55%, respectively. Gastric emptying times increased in the following order: Standard...
Acheampong, Paul; Cooper, Gill; Khazaeli, Behshad; Lupton, David J; White, Sue; May, Margaret T; Thomas, Simon H L
2013-01-01
Aims To ascertain the effects of the Medicines and Healthcare products Regulatory Agency's (MHRA) safety update in June 2010 on the volume of prescribing of quinine and on indices of quinine toxicity. Methods We analysed quarterly primary care total and quinine prescribing data for England and quinine prescribing volume for individual Primary Care Trusts in the North East of England from 2007/8 to 2011/12 obtained from the ePACT.net database. We also analysed quinine toxicity enquiries to the National Poisons Information Service (NPIS) via Toxbase® and by telephone between 2004/5 and 2011/12. Joinpoint regression and Pearson's correlation tests were used to ascertain changes in trends in prescribing and indices of toxicity and associations between prescribing and indices of toxicity, respectively. Results Total prescribing continued to increase, but annual growth in quinine prescribing in England declined from 6.0 to −0.6% following the MHRA update [difference −0.04 (95% confidence interval −0.07 to −0.01) quinine prescriptions per 100 patients per quarter, P = 0.0111]. Much larger reductions were observed in Primary Care Trusts that introduced comprehensive prescribing reviews. The previously increasing trend in Toxbase® quinine searches was reversed [difference −19.76 (95% confidence interval −39.28 to −9.20) user sessions per quarter, P = 0.0575]. Telephone enquiries to NPIS for quinine have declined, with stabilization of the proportion of moderate to severe cases of quinine poisoning since the update. Conclusions The MHRA advice was followed by limited reductions in the growth in quinine prescribing and in indicators of quinine overdose and toxicity. Quinine prescribing, however, remains common, and further efforts are needed to reduce availability and use. PMID:23594200
Regional averaging and scaling in relativistic cosmology
International Nuclear Information System (INIS)
Buchert, Thomas; Carfora, Mauro
2002-01-01
Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias'...
How to average logarithmic retrievals?
Directory of Open Access Journals (Sweden)
B. Funke
2012-04-01
Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of the one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
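One of the biasing mechanisms can be illustrated with a toy calculation (this is not the paper's retrieval simulator): for data with large variability, averaging logarithms and then exponentiating — i.e. taking the geometric mean — systematically undershoots the linear mean:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "true" trace-gas abundances: lognormal, i.e. large natural variability
true_vals = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

linear_mean = true_vals.mean()               # average the abundances directly
log_mean = np.exp(np.log(true_vals).mean())  # average the logarithms, then exponentiate

# The log-average (geometric mean) underestimates the linear mean; for a
# lognormal the ratio is exp(sigma^2 / 2), about 1.65 with sigma = 1 here.
print(linear_mean / log_mean)
```

The bias grows with the variability (sigma) and vanishes as sigma goes to zero, matching the abstract's rule of thumb that the choice of averaging matters most when local natural variability is large.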
Baseline-dependent averaging in radio interferometry
Wijnholds, S. J.; Willis, A. G.; Salvini, S.
2018-05-01
This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
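The closed-form decorrelation loss mentioned above can be illustrated with a simple model — a toy, not SKA-specific: time-averaging a visibility amounts to averaging a rotating phasor, and longer baselines rotate the phase faster, so they tolerate shorter averaging intervals. The phase rate and averaging time below are arbitrary illustrative values:

```python
import numpy as np

def avg_amplitude(omega, T, n=10_000):
    # Numerically average a unit phasor exp(i*omega*t) over an interval of length T
    t = np.linspace(0.0, T, n)
    return abs(np.mean(np.exp(1j * omega * t)))

omega, T = 2.0, 1.5  # fringe/phase rate (rad/s) grows with baseline length; T is the averaging time

loss_numeric = avg_amplitude(omega, T)
# Closed form: |sin(omega*T/2) / (omega*T/2)|; np.sinc(x) = sin(pi x)/(pi x)
loss_closed = abs(np.sinc(omega * T / (2 * np.pi)))

print(loss_numeric, loss_closed)
```

Because the loss depends only on the product omega*T, short baselines (small omega) can be averaged over longer intervals for the same decorrelation budget — which is exactly why baseline-dependent averaging shrinks the visibility data volume.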
Averaging models: parameters estimation with the R-Average procedure
Directory of Open Access Journals (Sweden)
S. Noventa
2010-01-01
Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that accounts for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
Sinha, Vikash K; Vaarties, Karin; De Buck, Stefan S; Fenu, Luca A; Nijsen, Marjoleen; Gilissen, Ron A H J; Sanderson, Wendy; Van Uytsel, Kelly; Hoeben, Eva; Van Peer, Achiel; Mackie, Claire E; Smit, Johan W
2011-05-01
It is imperative that new drugs demonstrate adequate pharmacokinetic properties, allowing an optimal safety margin and convenient dosing regimens in clinical practice, which then lead to better patient compliance. Such pharmacokinetic properties include suitable peak (maximum) plasma drug concentration (C(max)), area under the plasma concentration-time curve (AUC) and a suitable half-life (t(½)). The C(max) and t(½) following oral drug administration are functions of the oral clearance (CL/F) and apparent volume of distribution during the terminal phase by the oral route (V(z)/F), each of which may be predicted and combined to estimate C(max) and t(½). Allometric scaling is a widely used methodology in the pharmaceutical industry to predict human pharmacokinetic parameters such as clearance and volume of distribution. In our previous published work, we have evaluated the use of allometry for prediction of CL/F and AUC. In this paper we describe the evaluation of different allometric scaling approaches for the prediction of C(max), V(z)/F and t(½) after oral drug administration in man. Twenty-nine compounds developed at Janssen Research and Development (a division of Janssen Pharmaceutica NV), covering a wide range of physicochemical and pharmacokinetic properties, were selected. The C(max) following oral dosing of a compound was predicted using (i) simple allometry alone; (ii) simple allometry along with correction factors such as plasma protein binding (PPB), maximum life-span potential or brain weight (reverse rule of exponents, unbound C(max) approach); and (iii) an indirect approach using allometrically predicted CL/F and V(z)/F and absorption rate constant (k(a)). The k(a) was estimated from (i) in vivo pharmacokinetic experiments in preclinical species; and (ii) predicted effective permeability in man (P(eff)), using a Caco-2 permeability assay. The V(z)/F was predicted using allometric scaling with or without PPB correction. The t(½) was estimated from
2010-07-01
... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
Evaluations of average level spacings
International Nuclear Information System (INIS)
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables
Ergodic averages via dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.
High average power supercontinuum sources
Indian Academy of Sciences (India)
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.
When good = better than average
Directory of Open Access Journals (Sweden)
Don A. Moore
2007-10-01
Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.
Autoregressive Moving Average Graph Filtering
Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert
2016-01-01
One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
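A minimal sketch of an ARMA(1)-style graph-filter recursion, in the spirit of the abstract but with the toy path graph and the coefficients psi, phi chosen here purely for illustration. Each iteration exchanges values only between graph neighbours (one multiplication by the Laplacian), so it is distributable, yet the fixed point applies a rational function of the Laplacian to the input signal:

```python
import numpy as np

# Toy graph: a path on 5 nodes; L is its combinatorial Laplacian
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, 0.0, 0.0, 0.0, 1.0])  # graph signal to be filtered
psi, phi = 0.2, 1.0                       # stable: spectral radius of psi*L is < 1

# ARMA(1) recursion: y <- psi * L @ y + phi * x.
# Each step is a local neighbour exchange; the iteration converges to the
# fixed point (I - psi*L)^(-1) @ (phi*x), a rational graph frequency response.
y = np.zeros_like(x)
for _ in range(200):
    y = psi * (L @ y) + phi * x

expected = np.linalg.solve(np.eye(5) - psi * L, phi * x)
print(np.allclose(y, expected))  # True
```

The rational response 1/(1 - psi*lambda) per graph frequency lambda is what distinguishes ARMA designs from finite polynomial (FIR-like) graph filters.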
Averaging Robertson-Walker cosmologies
International Nuclear Information System (INIS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-01-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.
Topological quantization of ensemble averages
International Nuclear Information System (INIS)
Prodan, Emil
2009-01-01
We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states
Flexible time domain averaging technique
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of FTDA, which can improve the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction of rotating machinery.
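For contrast with FTDA, conventional time domain averaging can be sketched in a few lines (a toy signal, not the paper's experiments): folding a record into N periods and averaging coherently suppresses incoherent noise by roughly a factor of √N, which is the comb-filter behaviour the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(1)
period = 128      # samples per period (assumed known exactly here, so no PCE)
n_periods = 64

t = np.arange(period)
clean = np.sin(2 * np.pi * t / period)  # one period of the periodic component

# Noisy record: the periodic component repeated, buried in white noise
noisy = np.tile(clean, n_periods) + rng.normal(0.0, 1.0, period * n_periods)

# Conventional TDA: fold the record into periods and average them
tda = noisy.reshape(n_periods, period).mean(axis=0)

residual_before = np.std(noisy[:period] - clean)  # ~1.0 (raw noise level)
residual_after = np.std(tda - clean)              # ~1/sqrt(64) = 0.125
print(residual_before, residual_after)
```

The sketch assumes the period is an exact integer number of samples; when it is not, the folding introduces exactly the period cutting error (PCE) that FTDA is designed to avoid.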
The average Indian female nose.
Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh
2011-12-01
This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.
Effect of tank geometry on its average performance
Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.
2018-03-01
The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given for the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their height-to-radius (H/R) ratio, as well as for the average productivity, filling degree, and filling time of a horizontally ribbed tank of volume 6×10⁻² m³ as the central hole diameter of the ribs is varied. It has been shown that growth of the H/R ratio in tanks with smooth inner walls up to limiting values significantly increases average tank productivity and reduces filling time. Raising the H/R ratio of a 1.0 m³ tank to the limiting values (in comparison with the standard tank having H/R equal to 3.49) increases tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that maximum average productivity and minimum filling time are reached for the 6×10⁻² m³ tank with a rib central hole diameter of 6.4×10⁻² m.
A laser based reusable microjet injector for transdermal drug delivery
Han, Tae-hee; Yoh, Jack J.
2010-05-01
A laser-based needle-free liquid drug injection device has been developed. A laser beam is focused inside the liquid contained in a microscale rubber chamber. The focused laser beam causes explosive bubble growth, and the sudden volume increase in the sealed chamber drives a microjet of liquid drug through a micronozzle. The exit diameter of the nozzle is 125 μm, and the injected microjet reaches an average velocity of 264 m/s. This device adds a time-varying microjet capability to the current state of liquid injection for drug delivery.
Averaging of nonlinearity-managed pulses
International Nuclear Information System (INIS)
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-01-01
We consider the nonlinear Schroedinger equation with nonlinearity management, which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons.
Directory of Open Access Journals (Sweden)
Zonglin Shen
2016-01-01
Conclusions: The GMV of brain areas related to mood regulation was decreased in first-episode, drug-naive adult patients with MDD. Adult patients with EOD and LOD exhibited different GMV changes relative to each age-matched comparison group, suggesting that depressed adult patients with different ages of onset might have different pathological mechanisms.
The average size of ordered binary subgraphs
van Leeuwen, J.; Hartel, Pieter H.
To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a
Crystallography and Drug Design
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 19, Issue 12, December 2014, pp 1093-1103. Crystallography and Drug Design. K Suguna. General Article. Permanent link: https://www.ias.ac.in/article/fulltext/reso/019/12/1093-1103
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at the daily and hourly levels; the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
Averaging for solitons with nonlinearity management
International Nuclear Information System (INIS)
Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.
2003-01-01
We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations
DSCOVR Magnetometer Level 2 One Minute Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data
DSCOVR Magnetometer Level 2 One Second Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data
Spacetime averaging of exotic singularity universes
International Nuclear Information System (INIS)
Dabrowski, Mariusz P.
2011-01-01
Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
NOAA Average Annual Salinity (3-Zone)
California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
Improving consensus structure by eliminating averaging artifacts
Directory of Open Access Journals (Sweden)
KC Dukka B
2009-03-01
Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structures becomes worse by a mere 0.08 Å compared to that of the averaged structures (3.28 Å for refined structures and 3.36 Å for averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
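The driving idea, a Metropolis Monte Carlo walk pulled toward the averaged coordinates by a harmonic pseudo-energy, can be illustrated with a toy sketch. The coordinates, temperature parameter, and step size below are invented, and the local-geometry restraints that let the real method avoid unphysical bonds are omitted entirely.

```python
import math
import random

def mc_drive_to_average(start, target, k=1.0, beta=50.0, step=0.1,
                        n_steps=5000, seed=3):
    """Metropolis sketch: harmonic pseudo-energy E(x) = k * sum((x_i - t_i)^2)
    pulls the starting coordinates toward the 'averaged structure' t.
    (The real method adds local-geometry terms; omitted here.)"""
    rng = random.Random(seed)
    x = list(start)

    def energy(v):
        return k * sum((a - b) ** 2 for a, b in zip(v, target))

    e = energy(x)
    for _ in range(n_steps):
        i = rng.randrange(len(x))          # perturb one coordinate at a time
        trial = list(x)
        trial[i] += rng.uniform(-step, step)
        e_trial = energy(trial)
        # Accept downhill moves always, uphill moves with Boltzmann probability
        if e_trial <= e or rng.random() < math.exp(-beta * (e_trial - e)):
            x, e = trial, e_trial
    return x

start = [0.0, 0.0, 0.0, 0.0]
target = [1.0, 2.0, 3.0, 4.0]              # stand-in "averaged structure"
refined = mc_drive_to_average(start, target)
rmsd = math.sqrt(sum((a - b) ** 2 for a, b in zip(refined, target)) / len(target))
print(rmsd < 0.5)
```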
40 CFR 76.11 - Emissions averaging.
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...
Determinants of College Grade Point Averages
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…
Computation of the bounce-average code
International Nuclear Information System (INIS)
Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.
1977-01-01
The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
Multiple-level defect species evaluation from average carrier decay
Debuf, Didier
2003-10-01
An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple-defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average, or volume-integrated, decay. Implicit in the above is the requirement for good surface passivation such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of assessing the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground-excited-state multiple-level defect system. Also, minority carrier trapping is investigated.
Rotational averaging of multiphoton absorption cross sections
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic
Should the average tax rate be marginalized?
Czech Academy of Sciences Publication Activity Database
Feldman, N. E.; Katuščák, Peter
-, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Average Bandwidth Allocation Model of WFQ
Directory of Open Access Journals (Sweden)
Tomáš Balogh
2012-01-01
Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained using the NS2 simulator.
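A minimal sketch of the iterative fair-share idea the abstract describes: flows whose weighted fair share exceeds their arrival rate are capped at that rate, and leftover capacity is redistributed among the remaining flows in proportion to their weights. The flow weights, rates, and the cap-and-redistribute loop are illustrative assumptions, not the paper's exact model.

```python
def wfq_average_bandwidth(link_rate, weights, arrival_rates, tol=1e-9):
    """Iteratively compute the average bandwidth each flow receives
    under weighted fair queueing with known arrival rates."""
    n = len(weights)
    alloc = [0.0] * n
    unsettled = set(range(n))
    capacity = link_rate
    while unsettled:
        total_w = sum(weights[i] for i in unsettled)
        share = {i: capacity * weights[i] / total_w for i in unsettled}
        # Flows that demand less than their fair share are capped at demand
        capped = {i for i in unsettled if arrival_rates[i] <= share[i] + tol}
        if not capped:
            for i in unsettled:          # everyone is backlogged: fair shares
                alloc[i] = share[i]
            break
        for i in capped:
            alloc[i] = arrival_rates[i]
            capacity -= arrival_rates[i]  # redistribute unused capacity
        unsettled -= capped
    return alloc

# Hypothetical 10 Mbit/s link, three flows weighted 1:2:3;
# flow 0 only sends 1 Mbit/s, so the 9 Mbit/s remainder splits 2:3
alloc = wfq_average_bandwidth(10.0, [1, 2, 3], [1.0, 8.0, 8.0])
print(alloc)
```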
Nonequilibrium statistical averages and thermo field dynamics
International Nuclear Information System (INIS)
Marinaro, A.; Scarpetta, Q.
1984-01-01
An extension of thermo field dynamics is proposed which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
40 CFR 80.825 - How is the refinery or importer annual average toxics value determined?
2010-07-01
... volume of applicable gasoline produced or imported in batch i. Ti = The toxics value of batch i. n = The number of batches of gasoline produced or imported during the averaging period. i = Individual batch of gasoline produced or imported during the averaging period. (b) The calculation specified in paragraph (a...
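Reading the symbol definitions above, the annual average appears to be the volume-weighted mean of the batch toxics values, sum(V_i * T_i) / sum(V_i) over all n batches. A sketch under that assumption, with invented batch numbers:

```python
def annual_average_toxics(volumes, toxics_values):
    """Volume-weighted annual average toxics value, following the symbols
    in the rule text: V_i is the volume of batch i, T_i its toxics value,
    and the average is sum(V_i * T_i) / sum(V_i) over all n batches.
    (The weighted-mean form is inferred from the variable definitions.)"""
    if len(volumes) != len(toxics_values):
        raise ValueError("each batch needs both a volume and a toxics value")
    total_volume = sum(volumes)
    return sum(v * t for v, t in zip(volumes, toxics_values)) / total_volume

# Three hypothetical batches: (volume in gallons, toxics value)
avg = annual_average_toxics([10000, 25000, 5000], [0.95, 1.10, 1.30])
print(round(avg, 4))  # 1.0875: the large middle batch dominates
```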
Improved averaging for non-null interferometry
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
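A simplified version of the robust averaging idea: reject any whole map whose pixelwise deviation from the ensemble's median map is large over too many pixels, then compute the per-pixel mean and standard deviation from the survivors. The thresholds and test data are illustrative, not the paper's actual rejection criteria.

```python
import statistics

def robust_phase_average(phase_maps, spread_limit=0.5, max_bad_fraction=0.05):
    """Average a set of phase maps (flat lists of pixels) while rejecting
    maps containing large-area defects.  A pixel is 'bad' if it deviates
    from the per-pixel median by more than spread_limit; a map is rejected
    if more than max_bad_fraction of its pixels are bad."""
    n_pix = len(phase_maps[0])
    median_map = [statistics.median(m[i] for m in phase_maps)
                  for i in range(n_pix)]
    kept = []
    for m in phase_maps:
        bad = sum(1 for i in range(n_pix)
                  if abs(m[i] - median_map[i]) > spread_limit)
        if bad <= max_bad_fraction * n_pix:
            kept.append(m)
    avg = [sum(m[i] for m in kept) / len(kept) for i in range(n_pix)]
    std = [statistics.pstdev([m[i] for m in kept]) for i in range(n_pix)]
    return avg, std, len(kept)

# Nine well-behaved maps plus one with a large-area unwrapping defect
good = [[0.1 * i + 0.001 * k for i in range(100)] for k in range(9)]
defective = [0.1 * i + (3.14 if i > 40 else 0.0) for i in range(100)]
avg, std, n_kept = robust_phase_average(good + [defective])
print(n_kept)  # the defective map is rejected, leaving 9
```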
Averaging processes in granular flows driven by gravity
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in local averaging (in order to describe some instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
Determination of clothing microclimate volume
Daanen, Hein; Hatcher, Kent; Havenith, George
2005-01-01
The average air layer thickness between human skin and clothing is an important factor in heat transfer. The trapped volume between skin and clothing is an estimator for average air layer thickness. Several techniques are available to determine trapped volume. This study investigates the reliability
Asynchronous Gossip for Averaging and Spectral Ranking
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
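The classical pairwise gossip baseline that the first variant builds on can be simulated directly: at each step two randomly chosen nodes replace their values with their mutual mean. The global sum is conserved, so in this simple fully-connected, reliably-updating setting the states converge to the network-wide average (the abstract's point is precisely that realistic asynchronous variants may not). Node values and round count below are arbitrary.

```python
import random

def gossip_average(values, n_rounds=2000, seed=1):
    """Classical pairwise gossip: each round, two random nodes average
    their values.  The sum is invariant, so all nodes converge to the
    global average in this idealized setting."""
    vals = list(values)
    rng = random.Random(seed)
    n = len(vals)
    for _ in range(n_rounds):
        i, j = rng.sample(range(n), 2)     # pick two distinct nodes
        m = (vals[i] + vals[j]) / 2
        vals[i] = vals[j] = m
    return vals

values = [float(v) for v in range(10)]     # true average is 4.5
final = gossip_average(values)
print(max(abs(v - 4.5) for v in final) < 1e-6)
```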
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
An approach to averaging digitized plantagram curves.
Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B
1994-07-01
The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
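The equiangular-ray averaging can be sketched if we simplify the construction to outlines already sampled as radial distances at equal angles from a common centre; the paper's alignment axis, two ray centres, and overlapping-arc machinery are omitted, and the test outlines are invented.

```python
import math

def radial_average(outlines, n_rays):
    """Average closed outlines given as radius samples at n_rays equal
    angles from a shared centre: the mean outline is the per-ray mean
    of the radial distances."""
    return [sum(o[i] for o in outlines) / len(outlines)
            for i in range(n_rays)]

# Three near-circular "footprints" of slightly different overall size
n = 72
outlines = [[10.0 + s + 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]
            for s in (0.0, 0.2, 0.4)]
mean_outline = radial_average(outlines, n)
print(abs(mean_outline[0] - 10.7) < 1e-9)  # ray 0: mean of 10.5, 10.7, 10.9
```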
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
Books Average Previous Decade of Economic Misery
Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
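The moving average underlying the 'literary misery' fit is the plain trailing mean of the misery index (inflation plus unemployment) over a fixed window. A sketch with invented inflation and unemployment figures, not the study's data:

```python
def moving_average(series, window):
    """Trailing moving average: element t is the mean of the `window`
    values up to and including t.  The study's best fit used an
    11-year window on the annual economic misery index."""
    return [sum(series[t - window + 1:t + 1]) / window
            for t in range(window - 1, len(series))]

# Misery index = inflation + unemployment (illustrative numbers only)
inflation    = [3.2, 2.8, 1.6, 2.3, 2.7, 3.4, 3.2, 2.9, 3.8, 0.4, 1.6]
unemployment = [4.0, 4.7, 5.8, 6.0, 5.5, 5.1, 4.6, 4.6, 5.8, 9.3, 9.6]
misery = [i + u for i, u in zip(inflation, unemployment)]
smoothed = moving_average(misery, 11)
print(smoothed)  # eleven years of data, window 11: a single averaged value
```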
Exploiting scale dependence in cosmological averaging
International Nuclear Information System (INIS)
Mattsson, Teppo; Ronkainen, Maria
2008-01-01
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
Aperture averaging in strong oceanic turbulence
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
Receiver aperture averaging is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations over a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Average-case analysis of numerical problems
2000-01-01
The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.
Grassmann Averages for Scalable Robust PCA
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
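As an illustrative sketch only (the function name and trimming fraction are assumptions, not the authors' implementation), an element-wise trimmed average over a stack of observations discards the extreme values of each element before averaging, which is what makes it robust to pixel outliers:

```python
import numpy as np

def trimmed_average(X, trim_frac=0.1):
    """Element-wise trimmed mean over axis 0.

    X: (n_observations, n_pixels) data matrix. For each pixel, the
    trim_frac smallest and largest observations are discarded before
    averaging, so isolated pixel outliers cannot dominate the result.
    """
    n = X.shape[0]
    k = int(np.floor(trim_frac * n))
    Xs = np.sort(X, axis=0)            # sort each pixel's values independently
    return Xs[k:n - k].mean(axis=0)    # average only the central values

# One gross outlier (100.0) in the first pixel is trimmed away.
X = np.array([[1.0, 1.0], [2.0, 1.0], [3.0, 1.0], [100.0, 1.0]])
print(trimmed_average(X, trim_frac=0.25))   # -> [2.5 1. ]
```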
Taylor, Richard D; MacCoss, Malcolm; Lawson, Alastair D G
2014-07-24
We have analyzed the rings, ring systems, and frameworks in drugs listed in the FDA Orange Book to understand the frequency, timelines, molecular property space, and the application of these rings in different therapeutic areas and target classes. This analysis shows that there are only 351 ring systems and 1197 frameworks in drugs that came onto the market before 2013. Furthermore, on average six new ring systems enter drug space each year and approximately 28% of new drugs contain a new ring system. Moreover, it is very unusual for a drug to contain more than one new ring system and the majority of the most frequently used ring systems (83%) were first used in drugs developed prior to 1983. These observations give insight into the chemical novelty of drugs and potentially efficient ways to assess compound libraries and develop compounds from hit identification to lead optimization and beyond.
Model averaging, optimal inference and habit formation
Directory of Open Access Journals (Sweden)
Thomas H B FitzGerald
2014-06-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
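The core mechanics of Bayesian model averaging – weighting each model's predictions by its (normalized) evidence – can be sketched as follows. This is a generic textbook illustration, not the paper's neuronal implementation; the function and variable names are invented for the example:

```python
import numpy as np

def model_average(predictions, log_evidences):
    """Evidence-weighted average of model predictions.

    predictions:   (n_models, n_points) array of per-model predictions.
    log_evidences: (n_models,) array of log p(D | M_i).
    Under a uniform prior over models, posterior model weights are
    proportional to exp(log evidence).
    """
    log_w = log_evidences - np.max(log_evidences)   # stabilize the exponent
    w = np.exp(log_w)
    w /= w.sum()                                    # posterior model weights
    return w @ predictions                          # weighted average

preds = np.array([[1.0, 2.0], [3.0, 4.0]])
# Equal evidence -> plain average of the two models' predictions.
print(model_average(preds, np.array([0.0, 0.0])))   # -> [2. 3.]
```

Note that a model with much higher evidence dominates the average, which is how the accuracy/complexity trade-off in the evidence feeds through to behaviour.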
Generalized Jackknife Estimators of Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...
Average beta measurement in EXTRAP T1
International Nuclear Information System (INIS)
Hedin, E.R.
1988-12-01
Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS
International Nuclear Information System (INIS)
2005-01-01
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Bayesian Averaging is Well-Temperated
DEFF Research Database (Denmark)
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic, just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation is l...
Gibbs equilibrium averages and Bogolyubov measure
International Nuclear Information System (INIS)
Sankovich, D.P.
2011-01-01
Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
Function reconstruction from noisy local averages
International Nuclear Information System (INIS)
Chen Yu; Huang Jianguo; Han Weimin
2008-01-01
A regularization method is proposed for function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.
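A minimal one-dimensional sketch of the idea (Tikhonov regularization is used here as a generic stand-in; the grid, averaging operator, and parameter value are assumptions for illustration, not the paper's scheme):

```python
import numpy as np

# Recover a function sampled on a grid from noisy local averages over
# disjoint subintervals, via min_f ||A f - b||^2 + lam * ||f||^2.
n, m = 50, 10
x = np.linspace(0, 1, n)
f_true = np.sin(2 * np.pi * x)

A = np.zeros((m, n))
for i in range(m):
    lo, hi = i * (n // m), (i + 1) * (n // m)
    A[i, lo:hi] = 1.0 / (hi - lo)          # local average over each cell

rng = np.random.default_rng(0)
b = A @ f_true + 0.01 * rng.standard_normal(m)   # noisy local averages

lam = 1e-3                                  # regularization parameter
f_rec = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print(float(np.abs(A @ f_rec - b).max()))   # small data residual
```

In practice the regularization parameter would be chosen by a strategy such as the discrepancy principle, which is the kind of selection rule the abstract alludes to.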
A singularity theorem based on spatial averages
Indian Academy of Sciences (India)
Journal of Physics, July 2007, pp. 31–47. In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no. ...
Multiphase averaging of periodic soliton equations
International Nuclear Information System (INIS)
Forest, M.G.
1979-01-01
The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
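A moving-average rule of the kind studied in such models can be sketched as follows (the window lengths and the long/short signal convention are illustrative assumptions, not the paper's specification):

```python
import numpy as np

def ma_signal(prices, short=3, long=5):
    """Simple MA crossover rule: +1 (long) when the short-window average
    is above the long-window average, -1 (short) otherwise."""
    def sma(p, w):
        return np.convolve(p, np.ones(w) / w, mode="valid")
    s = sma(prices, short)[long - short:]   # align the two averaged series
    l = sma(prices, long)
    return np.where(s > l, 1, -1)

prices = np.array([1.0, 2, 3, 4, 5, 6, 7, 8])
print(ma_signal(prices))   # steady uptrend -> [1 1 1 1]
```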
Essays on model averaging and political economics
Wang, W.
2013-01-01
This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple
On average (7 CFR § 1209.12)
2010-01-01
Definition of "on average" under the Mushroom Promotion, Research, and Consumer Information Order, Agricultural Marketing Service (Marketing Agreements), Regulations of the Department of Agriculture.
Average Costs versus Net Present Value
E.A. van der Laan (Erwin); R.H. Teunter (Ruud)
2000-01-01
While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives
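For reference, the average-cost side of the EOQ comparison uses the standard textbook formulas (this is a generic sketch, not the authors' NPV derivation): for demand rate D, fixed order cost K, and holding cost h, the average cost per unit time is AC(Q) = KD/Q + hQ/2, minimized at the EOQ.

```python
from math import sqrt

def average_cost(Q, D, K, h):
    """Average cost per unit time for order quantity Q: ordering + holding."""
    return K * D / Q + h * Q / 2.0

def eoq(D, K, h):
    """Economic order quantity minimizing average_cost."""
    return sqrt(2.0 * D * K / h)

D, K, h = 1000.0, 50.0, 4.0      # illustrative parameter values
Q_star = eoq(D, K, h)            # sqrt(2*1000*50/4) ~ 158.11
print(Q_star, average_cost(Q_star, D, K, h))
# At the optimum the two cost terms balance, so AC(Q*) = h * Q*.
```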
Average beta-beating from random errors
Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department
2018-01-01
The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors, quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.
Reliability Estimates for Undergraduate Grade Point Average
Westrick, Paul A.
2017-01-01
Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…
Tendon surveillance requirements - average tendon force
International Nuclear Information System (INIS)
Fulton, J.F.
1982-01-01
Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
Directory of Open Access Journals (Sweden)
G. H. de Rooij
2009-07-01
Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression of the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.
Wynne, Hilary
2005-06-01
Older people are major consumers of drugs and because of this, as well as co-morbidity and age-related changes in pharmacokinetics and pharmacodynamics, are at risk of associated adverse drug reactions. While age does not alter drug absorption in a clinically significant way, and age-related changes in volume of drug distribution and protein binding are not of concern in chronic therapy, reduction in hepatic drug clearance is clinically important. Liver blood flow falls by about 35% between young adulthood and old age, and liver size by about 24-35% over the same period. First-pass metabolism of oral drugs avidly cleared by the liver and clearance of capacity-limited hepatically metabolized drugs fall in parallel with the fall in liver size, and clearance of drugs with a high hepatic extraction ratio falls in parallel with the fall in hepatic blood flow. In normal ageing, in general, activity of the cytochrome P450 enzymes is preserved, although a decline in frail older people has been noted, as well as in association with liver disease, cancer, trauma, sepsis, critical illness and renal failure. As the contribution of age, co-morbidity and concurrent drug therapy to altered drug clearance is impossible to predict in an individual older patient, it is wise to start any drug at a low dose and increase this slowly, monitoring carefully for beneficial and adverse effects.
Statistics on exponential averaging of periodograms
Energy Technology Data Exchange (ETDEWEB)
Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
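The exponential averaging of periodograms can be sketched as the recursion S_k = (1 − α) S_{k−1} + α P_k, where P_k is the k-th raw periodogram (the forgetting factor α and the normalization below are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def exp_avg_psd(x, nseg, alpha=0.1):
    """Exponentially averaged periodogram estimate of the PSD.

    x:     input signal; nseg: samples per segment;
    alpha: forgetting factor of the exponential average.
    """
    n = len(x) // nseg
    S = None
    for k in range(n):
        seg = x[k * nseg:(k + 1) * nseg]
        P = np.abs(np.fft.rfft(seg)) ** 2 / nseg      # raw periodogram
        S = P if S is None else (1 - alpha) * S + alpha * P
    return S

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)          # white noise: flat true PSD
S = exp_avg_psd(x, nseg=256, alpha=0.2)
print(S.shape)                          # -> (129,)  (rfft bins for nseg=256)
```

Smaller α (a longer averaging time constant) reduces the variance of the estimate, which is the regime in which the abstract's Gaussian approximation applies.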
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Directory of Open Access Journals (Sweden)
Carmen BOGHEAN
2013-12-01
Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis takes into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of the average work productivity by the factors affecting it is conducted by means of the u-substitution method.
Harima, S; Tanaka, Y
1991-01-01
Since most of the crude drug rhubarb was imported in the Taisho era, similarly to the Meiji era, it was possible to refer to the government statistic data by investigating and elucidating the import volume of the same in the treaty ports at that time. Upon examining the import status in the period from the late part of the Taisho era to the early part of Showa era (around 1920 to 1930), the following transitions were observed. 1) In 1923 (Taisho 12), the Kantoh earthquake happened and it attacked Yokohama where the function as main trade port was almost totally destroyed. ... 2) The earthquake happened just at the time when the tariff protectionism was increasingly promoted in every country. ... 3) Since the port of Yokohama, as trade port, was damaged by the earthquake at that time and the government import statistic data were limited to a partial summing-up, our investigation was referred to the statistic data at the ports of Osaka and Kobe. ...
Weighted estimates for the averaging integral operator
Czech Academy of Sciences Publication Activity Database
Opic, Bohumír; Rákosník, Jiří
2010-01-01
Collectanea Mathematica, Roč. 61, č. 3 (2010), s. 253-262, ISSN 0010-0757. R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383. Institutional research plan: CEZ:AV0Z10190503. Keywords: averaging integral operator; weighted Lebesgue spaces; weights. Subject RIV: BA - General Mathematics. Impact factor: 0.474, year: 2010. http://link.springer.com/article/10.1007%2FBF03191231
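For context, a hedged sketch of the standard objects named in the keywords (textbook definitions, not taken from the paper itself): the averaging integral operator is typically the Hardy-type operator, and weighted estimates for it are norm inequalities between weighted Lebesgue spaces,

```latex
(Af)(x) = \frac{1}{x}\int_0^x f(t)\,\mathrm{d}t, \qquad x > 0,
\qquad
\|Af\|_{L^q((0,\infty);\,w)} \le C\,\|f\|_{L^p((0,\infty);\,v)},
```

where $v, w$ are weight functions and the question is for which parameters $p, q$ and weights $v, w$ a finite constant $C$ exists.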
Average Transverse Momentum Quantities Approaching the Lightfront
Boer, Daniel
2015-01-01
In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis
2012-01-01
We study the statistical properties of the time-averaged mean-square displacement (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
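The TAMSD itself is simple to state and compute: for a trajectory x(t), it is the average of (x(t+Δ) − x(t))² over the observation window. A minimal sketch for a simulated 1-D Brownian path (the simulation parameters are illustrative assumptions):

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged MSD of a single trajectory at integer lag."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 2)

# 1-D Brownian motion: increments with variance 2*D*dt per step, so
# the expected TAMSD grows linearly: E[TAMSD(lag)] = 2*D*lag*dt.
rng = np.random.default_rng(2)
D, dt, n = 0.5, 1.0, 100_000
x = np.cumsum(np.sqrt(2 * D * dt) * rng.standard_normal(n))

print(tamsd(x, lag=10))   # fluctuates around 2*D*10*dt = 10
```

The fluctuations of this single-trajectory estimate around 2DΔ are exactly what the paper's distributional results characterize.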
Average configuration of the geomagnetic tail
International Nuclear Information System (INIS)
Fairfield, D.H.
1979-01-01
Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.1γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis, due to flaring and solar wind aberration, by an angle α = −0.9 Y_SM − 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on the interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between the northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed.
Unscrambling The "Average User" Of Habbo Hotel
Directory of Open Access Journals (Sweden)
Mikael Johnson
2007-01-01
The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer in disregarding marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.
Changing mortality and average cohort life expectancy
Directory of Open Access Journals (Sweden)
Robert Schoen
2005-10-01
Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality, the cross-sectional average length of life (CAL), which has been seen as less sensitive to period changes, has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
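As background for the aggregate measures being compared, period life expectancy at birth can be computed from age-specific death rates with elementary life-table arithmetic. This is a toy sketch of that standard calculation only (the hazard values and half-interval convention are illustrative assumptions, and it is not the paper's CAL or ACLE formula):

```python
import numpy as np

def life_expectancy(m):
    """Period life expectancy at birth from age-specific death rates m_x."""
    q = 1 - np.exp(-m)           # probability of dying within each year of age
    # Survivors alive at the start of each age x (radix 1.0).
    l = np.cumprod(np.concatenate(([1.0], 1 - q)))[:-1]
    L = l * (1 - q / 2)          # person-years lived in each age interval
    return L.sum()

# Constant hazard of 1.2% per year over ages 0..109.
m = np.full(110, 0.012)
print(round(life_expectancy(m), 1))
```

Raising mortality at any age lowers the result, which is the sensitivity to period changes that motivates the alternative measures discussed in the abstract.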
Average properties of bidisperse bubbly flows
Serrano-García, J. C.; Mendez-Díaz, S.; Zenit, R.
2018-03-01
Experiments were performed in a vertical channel to study the properties of a bubbly flow composed of two distinct bubble size species. Bubbles were produced using a capillary bank with tubes of two distinct inner diameters; the flow through each capillary size was controlled such that the amount of large or small bubbles could be adjusted. Using water and water-glycerin mixtures, a wide range of Reynolds and Weber numbers was investigated. The gas volume fraction ranged between 0.5% and 6%. Measurements of the mean bubble velocity of each species and the liquid velocity variance were obtained and contrasted with monodisperse flows with equivalent gas volume fractions. We found that bidispersity can induce a reduction of the mean bubble velocity of the large species; for the small species, the bubble velocity can be increased, decreased, or remain unaffected depending on the flow conditions. The liquid velocity variance of the bidisperse flows is, in general, bounded by the values of the small and large monodisperse cases; interestingly, in some cases, the liquid velocity fluctuations can be larger than in either monodisperse case. A simple model for the liquid agitation in bidisperse flows is proposed, with good agreement with the experimental measurements.
Ovarian volume throughout life
DEFF Research Database (Denmark)
Kelsey, Thomas W; Dodwell, Sarah K; Wilkinson, A Graham
2013-01-01
conception to 82 years of age. This model shows that 69% of the variation in ovarian volume is due to age alone. We have shown that in the average case ovarian volume rises from 0.7 mL (95% CI 0.4-1.1 mL) at 2 years of age to a peak of 7.7 mL (95% CI 6.5-9.2 mL) at 20 years of age with a subsequent decline...... to about 2.8 mL (95% CI 2.7-2.9 mL) at the menopause and smaller volumes thereafter. Our model allows us to generate normal values and ranges for ovarian volume throughout life. This is the first validated normative model of ovarian volume from conception to old age; it will be of use in the diagnosis......The measurement of ovarian volume has been shown to be a useful indirect indicator of the ovarian reserve in women of reproductive age, in the diagnosis and management of a number of disorders of puberty and adult reproductive function, and is under investigation as a screening tool for ovarian...
Operator product expansion and its thermal average
Energy Technology Data Exchange (ETDEWEB)
Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)
1998-05-01
QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules when the temperature is not too low. (orig.) 7 refs.
Fluctuations of wavefunctions about their classical average
International Nuclear Information System (INIS)
Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H
2003-01-01
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics
Phase-averaged transport for quasiperiodic Hamiltonians
Bellissard, J; Schulz-Baldes, H
2002-01-01
For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.
Multistage parallel-serial time averaging filters
International Nuclear Information System (INIS)
Theodosiou, G.E.
1980-01-01
Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter as a result. The main advantages of such a filter over a serial one are much smaller electronic gate jitter and time delay for the same amount of total time-uncertainty reduction. (orig.)
Time-averaged MSD of Brownian motion
International Nuclear Information System (INIS)
Andreanov, Alexei; Grebenkov, Denis S
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
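As an illustration of the quantity under study, a minimal numerical sketch (with an assumed diffusion coefficient and time step, not taken from the paper) computes the TAMSD of one simulated Brownian trajectory and compares it with its ensemble mean 2Dt:

```python
import numpy as np

rng = np.random.default_rng(0)

def tamsd(x, lag):
    """Time-averaged mean-square displacement of one trajectory at a given lag."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

# Single 1D Brownian trajectory: increments with variance 2*D*dt
D, dt, n = 1.0, 0.01, 100_000
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), n))

# For Brownian motion the ensemble mean of the TAMSD is 2*D*lag*dt,
# but each single-trajectory estimate scatters around it
for lag in (1, 10, 100):
    print(lag, tamsd(x, lag), 2 * D * lag * dt)
```

Each single-trajectory estimate fluctuates around 2·D·lag·dt; it is exactly the distribution of this scatter that the paper characterizes.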
Time-dependent angularly averaged inverse transport
International Nuclear Information System (INIS)
Bal, Guillaume; Jollivet, Alexandre
2009-01-01
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain
Independence, Odd Girth, and Average Degree
DEFF Research Database (Denmark)
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.
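The final bound can be checked by brute force on a tiny instance. The sketch below (an illustration only; the 7-cycle is an arbitrary choice of connected triangle-free test graph) computes the independence number exhaustively and compares it with (4n−m−1)/7:

```python
from itertools import combinations

# Exhaustive independence number of a small graph
def independence_number(n, edges):
    edge_set = set(map(frozenset, edges))
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                return size
    return 0

# The 7-cycle C7: connected, triangle-free, n = 7, m = 7
n = 7
edges = [(i, (i + 1) % n) for i in range(n)]

alpha = independence_number(n, edges)
bound = (4 * n - len(edges) - 1) / 7   # = 20/7, so alpha >= 3
```

For C7 the bound guarantees an independent set of size at least ⌈20/7⌉ = 3, which is exactly α(C7).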
Bootstrapping Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...
Average Nuclear properties based on statistical model
International Nuclear Information System (INIS)
El-Jaick, L.J.
1974-01-01
The rough properties of nuclei were investigated with a statistical model, for systems with equal and with different numbers of protons and neutrons separately, taking the Coulomb energy into account in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis S.
2012-07-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
De Luca, G.; Magnus, J.R.
2011-01-01
In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
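Trajectory averaging itself is easy to illustrate outside the MCMC setting. The sketch below applies it to a plain Robbins-Monro recursion on a toy root-finding problem; the target function, gain sequence, and burn-in choice are assumptions for illustration, not the paper's SAMCMC setting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Robbins-Monro: find the root of h(theta) = theta - 2 from noisy
# evaluations h(theta) + noise, with a slowly decaying gain a_k = k^{-0.7}.
def robbins_monro(n_iter=5000):
    theta = 0.0
    trajectory = []
    for k in range(1, n_iter + 1):
        noisy_h = (theta - 2.0) + rng.normal()
        theta -= noisy_h / k**0.7
        trajectory.append(theta)
    # Trajectory-averaging estimator: average the iterates
    # (here over the second half, discarding the transient)
    return theta, float(np.mean(trajectory[n_iter // 2 :]))

last, averaged = robbins_monro()
```

The averaged iterate is a less noisy estimate of the root than the final iterate alone; the paper establishes asymptotic efficiency of this kind of estimator in the SAMCMC context.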
Averaged null energy condition from causality
Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein
2017-07-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.
Beta-energy averaging and beta spectra
International Nuclear Information System (INIS)
Stamatelatos, M.G.; England, T.R.
1976-07-01
A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by "exact" methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
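The quantity being approximated is the spectrum-averaged energy ⟨E⟩ = ∫ E N(E) dE / ∫ N(E) dE. A minimal numerical sketch, using a toy allowed-shape spectrum with the Fermi function omitted and a hypothetical endpoint energy (both assumptions, not the paper's method):

```python
import numpy as np

# Toy allowed-shape beta spectrum: statistical factor only,
# N(E) ∝ E (E0 - E)^2, with the Fermi function omitted.
E0 = 2.0                              # hypothetical endpoint energy (MeV)
E = np.linspace(0.0, E0, 200_001)
N = E * (E0 - E) ** 2                 # toy spectral shape

# Spectrum-averaged energy as a ratio of Riemann sums on a uniform grid
E_avg = np.sum(E * N) / np.sum(N)

# For this particular shape the average is analytically 0.4 * E0
```

For N(E) = E(E0−E)², the integrals give ⟨E⟩ = (E0⁵/30)/(E0⁴/12) = 0.4 E0, so the numeric average lands at 0.8 MeV for the assumed 2 MeV endpoint.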
Asymptotic Time Averages and Frequency Distributions
Directory of Open Access Journals (Sweden)
Muhammad El-Taha
2016-01-01
Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (−∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results offer the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
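In the discrete-time, finite-state case the equivalence is easy to see numerically. The sketch below (the two-state chain, its transition probabilities, and the function f are arbitrary choices for illustration) compares the time average of f along one sample path with the expectation of f under the path's empirical frequency distribution:

```python
import numpy as np

rng = np.random.default_rng(3)

# One fixed sample path of a two-state chain: flip 0->1 w.p. p, 1->0 w.p. q
def sample_path(n=100_000, p=0.3, q=0.6):
    s, path = 0, []
    for _ in range(n):
        path.append(s)
        if rng.random() < (p if s == 0 else q):
            s = 1 - s
    return np.array(path)

path = sample_path()

def f(s):                       # any measurable function of the state
    return s ** 2 + 1.0

# Long-run time average of f along the path
time_avg = f(path).mean()

# Expectation of f under the path's empirical frequency distribution
freq = np.bincount(path) / len(path)
freq_avg = sum(f(s) * freq[s] for s in range(len(freq)))
```

For a finite state space the two quantities agree identically along any fixed path; the paper's conditions address when this equivalence survives for general (unbounded, continuous-time) processes.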
Chaotic Universe, Friedmannian on the average 2
Energy Technology Data Exchange (ETDEWEB)
Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij
1980-11-01
The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.
Averaging in the presence of sliding errors
International Nuclear Information System (INIS)
Yost, G.P.
1991-08-01
In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
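A minimal sketch of the kind of bias at issue, under an assumed error model in which each experiment's reported uncertainty is proportional to its own measured value (the specific iteration below is an illustration of the idea, not the paper's exact procedure):

```python
import numpy as np

# Each experiment reports sigma_i = r_i * x_i (relative error r_i known).
# Weighting by the reported variances pulls the average toward the low
# measurements; re-evaluating each variance at the average itself, and
# iterating to self-consistency, removes this bias.
def naive_weighted_mean(x, sigma):
    w = 1.0 / sigma**2
    return np.sum(w * x) / np.sum(w)

def self_consistent_mean(x, r, n_iter=50):
    mean = np.mean(x)
    for _ in range(n_iter):
        w = 1.0 / (r * mean) ** 2        # variances evaluated at the average
        mean = np.sum(w * x) / np.sum(w)
    return mean

x = np.array([9.0, 10.0, 11.0])          # three measurements of one quantity
r = np.array([0.1, 0.1, 0.1])            # common 10% relative error

biased = naive_weighted_mean(x, r * x)   # weights from each reported error
fixed = self_consistent_mean(x, r)
```

With identical relative errors the unbiased answer is the plain mean, 10.0; the naive variance weighting lands below it because the low measurement reports the smallest error and so receives the largest weight.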
High average power linear induction accelerator development
International Nuclear Information System (INIS)
Bayless, J.R.; Adler, R.J.
1987-07-01
There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs
FEL system with homogeneous average output
Energy Technology Data Exchange (ETDEWEB)
Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph
2018-01-16
A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest, and accelerating the particles to full energy, resulting in distinct and independently controlled (by the choice of phase offset) phase-energy correlations, or chirps, on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy-recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M₅₆ selected to compress all three bunch trains at the FEL, with higher order terms managed.
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach in devising a ratings scheme that we apply to the data from the Netflix prize, and find a significant improvement using our method over a baseline.
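The basic averaging operation is simple to state in code. A minimal sketch of the weighted harmonic mean itself (not the paper's full GEN recursion over a graph):

```python
import numpy as np

def weighted_harmonic_average(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / v)."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return weights.sum() / (weights / values).sum()

# The harmonic average is dominated by the smallest values, so one
# strong (low-"distance") connection outweighs many weak ones --
# the intuition behind closeness measures of this kind.
print(weighted_harmonic_average([1.0, 100.0], [1.0, 1.0]))
```

With equal weights on the values 1 and 100, the result is 2/(1 + 1/100) ≈ 1.98, far closer to the strong connection than the arithmetic mean of 50.5.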
Angle-averaged Compton cross sections
International Nuclear Information System (INIS)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
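A numerically checkable special case of such angle averaging is the β = 0 limit (electron at rest), where averaging the Klein-Nishina differential cross section over the polar angle gives the total cross section; in the low-energy limit this recovers the Thomson value 8π/3 in units of the squared classical electron radius. A sketch (this rest-frame case only, not the paper's moving-electron average):

```python
import numpy as np

# Klein-Nishina differential cross section for photon energy alpha
# (in units of m0*c^2) on an electron at rest; r_e^2 set to 1.
def klein_nishina(alpha, theta):
    ratio = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))   # E'/E
    return 0.5 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta) ** 2)

# Integrate over the polar angle with the trapezoid rule
def total_cross_section(alpha, n=200_001):
    theta = np.linspace(0.0, np.pi, n)
    f = klein_nishina(alpha, theta) * 2.0 * np.pi * np.sin(theta)
    return np.sum((f[1:] + f[:-1]) / 2.0) * (np.pi / (n - 1))

# alpha -> 0 recovers the Thomson cross section 8*pi/3;
# the cross section falls with increasing photon energy
```
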
Average Gait Differential Image Based Human Recognition
Directory of Open Access Journals (Sweden)
Jinyan Chen
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient and less memory-intensive feature extraction method in gait-based recognition.
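The accumulation step that defines the AGDI can be sketched in a few lines; the toy "silhouette" sequence below is an assumption for illustration, not CASIA data:

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """Average the absolute differences between adjacent binary
    silhouette frames into a single feature image."""
    frames = np.asarray(silhouettes, dtype=float)
    diffs = np.abs(frames[1:] - frames[:-1])
    return diffs.mean(axis=0)

# Toy sequence of 4 frames (5x5): a one-column 'silhouette' shifting
# one column to the right per frame
seq = np.zeros((4, 5, 5))
for t in range(4):
    seq[t, :, t] = 1.0

agdi = average_gait_differential_image(seq)
```

Columns visited near the middle of the sequence accumulate more frame-to-frame change than the edges, so the resulting image encodes where (and how much) the silhouette moved, which is the feature the method exploits.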
Reynolds averaged simulation of unsteady separated flow
International Nuclear Information System (INIS)
Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.
2003-01-01
The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flows around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation.
Angle-averaged Compton cross sections
Energy Technology Data Exchange (ETDEWEB)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-05-07
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
Industrial Applications of High Average Power FELS
Shinn, Michelle D
2005-01-01
The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
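The unconstrained variant of the idea, dA/dξ = −⟨F_ξ⟩ integrated along the coordinate, can be illustrated on a toy 1D system where the free energy coincides with the potential (the potential and noise model below are assumptions for illustration, not the paper's test systems):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1D system with U(x) = x^4 - 2 x^2: for a single particle the free
# energy along x is U itself, and the "measured" mean force at each
# constrained x is the exact force plus sampling noise.
def mean_force(x, n_samples=4000):
    exact = -(4.0 * x**3 - 4.0 * x)          # F = -dU/dx
    return exact + rng.normal(0.0, 1.0, n_samples).mean()

xs = np.linspace(-1.5, 1.5, 61)
dA = np.array([-mean_force(x) for x in xs])  # dA/dx = -<F> estimates

# Trapezoid-integrate dA/dx to reconstruct the free energy profile
A = np.concatenate(([0.0],
                    np.cumsum((dA[1:] + dA[:-1]) / 2.0 * np.diff(xs))))

U = xs**4 - 2.0 * xs**2
# A should track U up to the additive constant U(xs[0])
```

The reconstructed profile recovers the double-well shape to within the sampling noise, which is the essence of computing free energies from average forces.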
Geographic Gossip: Efficient Averaging for Sensor Networks
Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
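For contrast with the geographic scheme, the standard pairwise gossip baseline that the paper improves on can be sketched in a few lines (a complete communication graph is assumed here, i.e. any two nodes may exchange values):

```python
import numpy as np

rng = np.random.default_rng(2)

def pairwise_gossip(values, n_rounds=2000):
    """Standard pairwise gossip: two random nodes repeatedly replace
    their values with the pair average. The global average is
    conserved at every step, and all values converge to it."""
    x = np.array(values, dtype=float)
    n = len(x)
    for _ in range(n_rounds):
        i, j = rng.choice(n, size=2, replace=False)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

x0 = rng.normal(size=20)
xf = pairwise_gossip(x0)
```

On grids and random geometric graphs, pair selection is restricted to neighbors and mixes slowly; the geographic scheme's gains come from routing pair exchanges over longer distances.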
High-average-power solid state lasers
International Nuclear Information System (INIS)
Summers, M.A.
1989-01-01
In 1987, a broad-based, aggressive R&D program was initiated, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs
The concept of average LET values determination
International Nuclear Information System (INIS)
Makarewicz, M.
1981-01-01
The concept of average LET (linear energy transfer) values determination, i.e., of the ordinary moments of LET in the absorbed-dose distribution, is presented for ionizing radiation of any kind and any spectrum (even unknown ones). The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or of ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values, one obtains coefficients that can be interpreted as values of the LET moments. One advantage of the method is its experimental and computational simplicity. It is shown that for numerical estimation of certain LET-dependent effects it is not necessary to know the full dose distribution but only a number of its parameters, i.e., the LET moments. (author)
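The underlying quantity, a dose-weighted moment of LET, can be illustrated with a short sketch. The function and its discretization are hypothetical and stand in for the abstract's definition, not for the paper's recombination-chamber procedure:

```python
def let_moments(let_values, dose_weights, orders=(1, 2)):
    """Ordinary dose-weighted moments of LET:
       <L^k> = sum_i d_i * L_i^k / sum_i d_i
    for a discretized absorbed-dose distribution d over LET values L."""
    total = sum(dose_weights)
    return [sum(d * L ** k for L, d in zip(let_values, dose_weights)) / total
            for k in orders]

# Two-component example: equal dose at LET 10 and 30 gives a
# dose-average LET of 20 and a second moment of 500.
m1, m2 = let_moments([10.0, 30.0], [1.0, 1.0])
```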
On spectral averages in nuclear spectroscopy
International Nuclear Information System (INIS)
Verbaarschot, J.J.M.
1982-01-01
In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed angular-momentum-projection traces into fixed angular-momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)
Prescription drugs: issues of cost, coverage, and quality.
Copeland, C
1999-04-01
This Issue Brief closely examines expenditures on prescription drugs, and discusses their potential to substitute for other types of health care services. In addition, it describes employer coverage of prescription drugs, direct-to-consumer advertising of prescription drugs, and potential legislation affecting the prescription drug market. Prescription drug expenditures grew at double-digit rates during almost every year since 1980, accelerating to 14.1 percent in 1997. In contrast, total national health expenditures, hospital service expenditures, and physician service expenditures growth rates decreased from approximately 13 percent in 1980 to less than 5 percent in 1997. Private insurance payments for prescription drugs increased 17.7 percent in 1997, after growing 22.1 percent in 1995 and 18.3 percent in 1996. This growth in prescription drug payments compares with 4 percent or less overall annual growth in private insurance payments for each of those three years. From 1993 to 1997, the overwhelming majority of the increases in expenditures on prescription drugs were attributable to increased volume, mix, and availability of pharmaceutical products. In 1997, these factors accounted for more than 80 percent of the growth in prescription drug expenditures. A leading explanation for the sharp growth in drug expenditures is that prescription drugs are a substitute for other forms of health care. While it is difficult to determine the extent to which this substitution occurs, various studies have associated cost savings with the use of pharmaceutical products in treating specific diseases. Evidence suggests that more appropriate utilization of prescription drugs has the potential to lower total expenditures and improve the quality of care. Also, some studies indicate the U.S. health care system needs to improve the way patients use and physicians prescribe current medications. Prescription drug plans offered by employers are likely to undergo changes to ensure that
Davit, Yohan; Bell, Christopher G.; Byrne, Helen M.; Chapman, Lloyd A.C.; Kimpton, Laura S.; Lang, Georgina E.; Leonard, Katherine H.L.; Oliver, James M.; Pearson, Natalie C.; Shipley, Rebecca J.; Waters, Sarah L.; Whiteley, Jonathan P.; Wood, Brian D.; Quintard, Michel
2013-01-01
doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points
Modelling lidar volume-averaging and its significance to wind turbine wake measurements
DEFF Research Database (Denmark)
Meyer Forsting, Alexander Raul; Troldborg, Niels; Borraccino, Antoine
2017-01-01
gradients, like the rotor wake, can it be detrimental. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continous-wave lidar weighting functions. The flow-field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination...
de Priester, JA; den Boer, JA; Giele, ELW; Christiaans, MHL; Kessels, A; Hasman, A; van Engelshoven, JMA
We evaluated a mathematical algorithm for the generation of medullary signal from raw dynamic magnetic resonance (MR) data. Five healthy volunteers were studied. MR examination consisted of a run of 100 T1-weighted coronal scans (gradient echo: TR/TE 11/3.4 msec, flip angle 60 degrees; slice
A drug utilisation study investigating prescribed daily doses of ...
African Journals Online (AJOL)
and drug groups. Design. Retrospective drug utilisation study using data .... drugs that were prescribed 20 or fewer times during the period under ... occurs in women and men at different ages and with different severity. group. On average, men ...
Medicaid NADAC Pharmacy Drug Pricing
U.S. Department of Health & Human Services — National Average Drug Acquisition Cost (NADAC) - Below are the NADAC weekly files and the weekly comparison files. Please note that the NADAC file is updated on a...
Compositional dependences of average positron lifetime in binary As-S/Se glasses
International Nuclear Information System (INIS)
Ingram, A.; Golovchak, R.; Kostrzewa, M.; Wacke, S.; Shpotyuk, M.; Shpotyuk, O.
2012-01-01
Compositional dependence of the average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to be opposite in trend to the evolution of the molar volume. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally intrinsic nanovoids in a glass.
Compositional dependences of average positron lifetime in binary As-S/Se glasses
Energy Technology Data Exchange (ETDEWEB)
Ingram, A. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Golovchak, R., E-mail: roman_ya@yahoo.com [Department of Materials Science and Engineering, Lehigh University, 5 East Packer Avenue, Bethlehem, PA 18015-3195 (United States); Kostrzewa, M.; Wacke, S. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Shpotyuk, M. [Lviv Polytechnic National University, 12, Bandery str., Lviv, UA-79013 (Ukraine); Shpotyuk, O. [Institute of Physics of Jan Dlugosz University, 13/15al. Armii Krajowej, Czestochowa, PL-42201 (Poland)
2012-02-15
Compositional dependence of the average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to be opposite in trend to the evolution of the molar volume. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally intrinsic nanovoids in a glass.
Actuator disk model of wind farms based on the rotor average wind speed
DEFF Research Database (Denmark)
Han, Xing Xing; Xu, Chang; Liu, De You
2016-01-01
Due to the difficulty of estimating the reference wind speed for wake modeling in a wind farm, this paper proposes a new method to calculate the momentum source based on the rotor average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition of ...
African Journals Online (AJOL)
Prof Ezechukwu
2012-02-28
Feb 28, 2012 ... Disorders affecting fluid volume and electrolyte composition are common ... knowledge of the mechanism of action of diuretic drugs and appropriate ... Presence of non-permeable solute will oppose H2O extraction. NaCl is actively .... loop not affected. • In oral administration rate and extent of absorption.
Aarthi, G.; Ramachandra Reddy, G.
2018-03-01
In this paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA scheme has the advantage of improving ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.
International Nuclear Information System (INIS)
Eimerl, D.
1985-01-01
High-average-power frequency conversion using solid state nonlinear materials is discussed. Recent laboratory experience and new developments in design concepts show that current technology, a few tens of watts, may be extended by several orders of magnitude. For example, using KD*P, efficient doubling (>70%) of Nd:YAG at average powers approaching 100 kW is possible; and for doubling to the blue or ultraviolet regions, the average power may approach 1 MW. Configurations using segmented apertures permit essentially unlimited scaling of average power. High average power is achieved by configuring the nonlinear material as a set of thin plates with a large ratio of surface area to volume and by cooling the exposed surfaces with a flowing gas. The design and material fabrication of such a harmonic generator are well within current technology.
Drug plan design incentives among Medicare prescription drug plans.
Huskamp, Haiden A; Keating, Nancy L; Dalton, Jesse B; Chernew, Michael E; Newhouse, Joseph P
2014-07-01
Medicare Advantage prescription drug plans (MA-PDs) and standalone prescription drug plans (PDPs) face different incentives for plan design resulting from the scope of covered benefits (only outpatient drugs for PDPs versus all drug and nondrug services for Medicare Advantage [MA]/MA-PDs). The objective is to begin to explore how MA-PDs and PDPs may be responding to their different incentives related to benefit design. We compared 2012 PDP and MA-PD average formulary coverage, prior authorization (PA) or step therapy use, and copayment requirements for drugs in 6 classes used commonly among Medicare beneficiaries. We primarily used 2012 Prescription Drug Plan Formulary and Pharmacy Network Files and MA enrollment data. 2011 Truven Health MarketScan claims were used to estimate drug prices and to compute drug market share. Average coverage and PA/step rates, and average copayment requirements, were weighted by plan enrollment and drug market share. MA-PDs are generally more likely to cover and less likely to require PA/step for brand name drugs with generic alternatives than PDPs, and MA-PDs often have lower copayment requirements for these drugs. For brands without generics, we generally found no differences in average rates of coverage or PA/step, but MA-PDs were more likely to cover all brands without generics in a class. We found modest, confirmatory evidence suggesting that PDPs and MA-PDs respond to different incentives for plan design. Future research is needed to understand the factors that influence Medicare drug plan design decisions.
Sustainable medication: Microtechnology for personalizing drug treatment
DEFF Research Database (Denmark)
Faralli, Adele; Melander, Fredrik; Andresen, Thomas Lars
2014-01-01
drug dosing” using light-polymerizable polymer hydrogels as carriers for free or nanoparticle-encapsulated drugs. The total dose is simply controlled by the volume of drug-loaded cross-linked hydrogel defined by patterned light from a standard projector (Fig. 1). The concept enables simple...
Factors That Predict Marijuana Use and Grade Point Average among Undergraduate College Students
Coco, Marlena B.
2017-01-01
The purpose of this study was to analyze factors that predict marijuana use and grade point average among undergraduate college students using the Core Institute national database. The Core Alcohol and Drug Survey was used to collect data on students' attitudes, beliefs, and experiences related to substance use in college. The sample used in this…
Pharmacogenomics of GPCR Drug Targets
DEFF Research Database (Denmark)
Hauser, Alexander Sebastian; Chavali, Sreenivas; Masuho, Ikuo
2018-01-01
Natural genetic variation in the human genome is a cause of individual differences in responses to medications and is an underappreciated burden on public health. Although 108 G-protein-coupled receptors (GPCRs) are the targets of 475 (∼34%) Food and Drug Administration (FDA)-approved drugs and account for a global sales volume of over 180 billion US dollars annually, the prevalence of genetic variation among GPCRs targeted by drugs is unknown. By analyzing data from 68,496 individuals, we find that GPCRs targeted by drugs show genetic variation within functional regions such as drug- and effector-binding sites in the human population. We experimentally show that certain variants of μ-opioid and Cholecystokinin-A receptors could lead to altered or adverse drug response. By analyzing UK National Health Service drug prescription and sales data, we suggest that characterizing GPCR variants...
To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space
International Nuclear Information System (INIS)
Khrennikov, Andrei
2007-01-01
We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'
Determining average path length and average trapping time on generalized dual dendrimer
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
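The average path length studied above can be computed exactly on any small network by breadth-first search; a minimal illustration (the helper name is ours, and no dendrimer-specific structure is assumed):

```python
from collections import deque

def average_path_length(adj):
    """Exact average shortest-path length over all ordered node pairs,
    computed by breadth-first search from every node.  `adj` maps each
    node to the list of its neighbors."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in adj:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs
```

For the 3-node path graph `{0: [1], 1: [0, 2], 2: [1]}` this returns 4/3, since the six ordered pairs have distances 1, 2, 1, 1, 2, 1.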
Potential of high-average-power solid state lasers
International Nuclear Information System (INIS)
Emmett, J.L.; Krupke, W.F.; Sooy, W.R.
1984-01-01
We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels
DEFF Research Database (Denmark)
Pedersen, Sune; Galatius, Soren; Mogelvang, Rasmus
2011-01-01
Use of drug-eluting stents (DES) in patients with ST-elevation myocardial infarction (STEMI) during routine primary percutaneous coronary intervention (pPCI) is controversial.
Substance use - prescription drugs
Substance use disorder - prescription drugs; Substance abuse - prescription drugs; Drug abuse - prescription drugs; Drug use - prescription drugs; Narcotics - substance use; Opioid - substance use; Sedative - substance ...
Leviton, Harvey S.
1975-01-01
This article attempts to assemble pertinent information about the drug problem, particularily marihuana. It also focuses on the need for an educational program for drug control with the public schools as the main arena. (Author/HMV)
Goločorbin-Kon, Svetlana; Vojinović, Aleksandra; Lalić-Popović, Mladena; Pavlović, Nebojša; Mikov, Momir
2013-01-01
Introduction. Drugs used for treatment of rare diseases are known worldwide under the term of orphan drugs because pharmaceutical companies have not been interested in "adopting" them, that is, in investing in research, developing and producing these drugs. This kind of policy has been justified by the fact that these drugs are targeted for small markets, that only a small number of patients is available for clinical trials, and that large investments are required for the development of ...
Expressing intrinsic volumes as rotational integrals
DEFF Research Database (Denmark)
Auneau, Jeremy Michel; Jensen, Eva Bjørn Vedel
2010-01-01
A new rotational formula of Crofton type is derived for intrinsic volumes of a compact subset of positive reach. The formula provides a functional defined on the section of X with a j-dimensional linear subspace with rotational average equal to the intrinsic volumes of X. Simplified forms of the ...
2010-09-03
... hospital that is located outside of a Metropolitan Statistical Area for Medicare payment regulations and... reserved. 4. Section 447.510 is amended by-- A. Republishing paragraph (a) introductory text. B. Revising.... Revising the introductory text of paragraph (b). C. Revising paragraph (c). The revisions read as follows...
20 CFR 404.221 - Computing your average monthly wage.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...
... testing, substance abuse testing, toxicology screen, tox screen, sports doping tests What is it used for? Drug screening is used to find out whether or not a person has taken a certain drug or drugs. It ... Sports organizations. Professional and collegiate athletes usually need to ...
Average and local structure of α-CuI by configurational averaging
International Nuclear Information System (INIS)
Mohn, Chris E; Stoelen, Svein
2007-01-01
Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion pathways is expected due to the large variation in the local motifs.
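A pair radial distribution function of the kind computed above starts from a histogram of interatomic distances. A minimal sketch follows; the normalization by shell volume and number density that turns this into g(r) is omitted, and the function name is ours:

```python
import math
from collections import Counter

def pair_distance_histogram(coords, bin_width=0.25):
    """Histogram of all pairwise distances, the raw ingredient of a pair
    radial distribution function g(r); normalization by shell volume and
    number density is omitted for brevity."""
    hist = Counter()
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            hist[int(math.dist(coords[i], coords[j]) / bin_width)] += 1
    return hist

# Four atoms on a unit square: four side pairs at distance 1.0 and
# two diagonal pairs at distance sqrt(2).
h = pair_distance_histogram([(0, 0), (1, 0), (0, 1), (1, 1)])
```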
African Journals Online (AJOL)
2009-04-12
Apr 12, 2009 ... ABStrAct. Since drugs became both a public and social issue in Nigeria, fear about both the real and .... drugs as being morally reprehensible, and ..... tice system (see for instance, Shaw, 1995; ..... A cut throat business:.
Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing
2017-09-01
The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders the timing to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which a fuzzy logic rule is used to determine the strength of the trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use one of four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experiment data. Each experiment is repeated 20 times. The results show that, firstly, the fuzzy moving average strategy can obtain a more stable rate of return than the moving average strategies. Secondly, the holding-amount series is highly sensitive to the price series. Thirdly, simple moving average methods are more efficient. Lastly, the fuzzy extents of extremely low, high, and very high are more popular. These results are helpful in investment decisions.
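A drastically simplified stand-in for the graded-signal idea: the relative gap between a fast and a slow moving average is mapped to a signal strength that could scale trading volume. The paper's actual fuzzy rule set, GA optimization, and parameter values are not reproduced here; all names and constants are illustrative:

```python
def moving_average(prices, window):
    """Simple moving averages of the given window over a price series."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def graded_signal(prices, fast=3, slow=5):
    """Crossover signal whose magnitude (a crude stand-in for the paper's
    fuzzy rating level) can scale the suggested trading volume."""
    f = moving_average(prices, fast)[-1]
    s = moving_average(prices, slow)[-1]
    gap = (f - s) / s                      # relative distance of the averages
    direction = 1 if gap > 0 else -1 if gap < 0 else 0
    strength = min(abs(gap) * 10.0, 1.0)   # clip the rating to [0, 1]
    return direction, strength
```

For a steadily rising series such as `[1, 2, 3, 4, 5, 6, 7]` the fast average sits above the slow one, giving a buy signal at full strength.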
Stimuli-Responsive Liposomes for Controlled Drug Delivery
Li, Wengang
2014-01-01
Liposomes are promising drug delivery vesicles due to their biodegradability, large volume and biocompatibility towards both hydrophilic and hydrophobic drugs. They suffer, however, from poor stability, which limits their use in controlled delivery
Drugs and Crime: The Relationship of Drug Use and Concomitant Criminal Behavior. Research Issues 17.
Austin, Gregory A., Ed.; Lettieri, Dan J., Ed.
This volume of abstracts of major research and theoretical studies dealing with the relationship between drug use, criminal behavior and the law is concerned with criminal acts other than the possession of, or trafficking in, illicit drugs. Included are 107 selected studies categorized into seven major topic areas: Reviews and Theories, Drug Use…
International Nuclear Information System (INIS)
Connell, P.S.; Kinnison, D.E.; Wuebbles, D.J.; Burley, J.D.; Johnston, H.S.
1992-01-01
We have investigated the effects of incorporating representations of heterogeneous chemical processes associated with stratospheric sulfuric acid aerosol into the LLNL two-dimensional, zonally averaged model of the troposphere and stratosphere. Using distributions of aerosol surface area and volume density derived from SAGE II satellite observations, we were primarily interested in changes in partitioning within the Cl- and N- families in the lower stratosphere, compared to a model including only gas phase photochemical reactions
21 CFR 201.323 - Aluminum in large and small volume parenterals used in total parenteral nutrition.
2010-04-01
... 21 Food and Drugs 4 2010-04-01 Aluminum in large and small volume parenterals... for Specific Drug Products § 201.323 Aluminum in large and small volume parenterals used in total parenteral nutrition. (a) The aluminum content of large volume parenteral (LVP) drug products used in total...
Chronic obstructive pulmonary disease - control drugs; Bronchodilators - COPD - control drugs; Beta agonist inhaler - COPD - control drugs; Anticholinergic inhaler - COPD - control drugs; Long-acting inhaler - COPD - control drugs; ...
Analytical expressions for conditional averages: A numerical test
DEFF Research Database (Denmark)
Pécseli, H.L.; Trulsen, J.
1991-01-01
Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...
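The conditional-sampling recipe described above, averaging the waveform around every sample where a trigger condition (here an upward threshold crossing) is met, can be sketched as follows. This is a generic illustration, not the authors' analysis:

```python
def conditional_average(signal, threshold, half_window):
    """Average the waveform around every sample where the signal crosses
    the threshold upward: the basic conditional-sampling recipe."""
    segments = []
    for i in range(half_window, len(signal) - half_window):
        if signal[i - 1] < threshold <= signal[i]:   # upward crossing
            segments.append(signal[i - half_window:i + half_window + 1])
    if not segments:
        return None
    # Average the aligned segments column by column.
    return [sum(col) / len(segments) for col in zip(*segments)]
```

For a toy signal with two identical spikes, `conditional_average([0, 0, 1, 0, 0, 0, 1, 0, 0], 0.5, 1)` recovers the spike shape `[0.0, 1.0, 0.0]`.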
Experimental demonstration of squeezed-state quantum averaging
DEFF Research Database (Denmark)
Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...
Golocorbin Kon, Svetlana; Vojinović, Aleksandra; Lalić-Popović, Mladena; Pavlović, Nebojsa; Mikov, Momir
2013-01-01
Drugs used for the treatment of rare diseases are known worldwide as orphan drugs because pharmaceutical companies have not been interested in "adopting" them, that is, in investing in their research, development and production. This policy has been justified by the fact that these drugs target small markets, that only a small number of patients is available for clinical trials, and that large investments are required to develop drugs for diseases whose pathogenesis has, in the majority of cases, not yet been clarified. The aim of this paper is to present the past and present status of orphan drugs in Serbia and other countries. THE BEGINNING OF ORPHAN DRUG DEVELOPMENT: This problem was first recognized by the Congress of the United States of America in January 1983, when the passage of the "Orphan Drug Act" marked a turning point in the development of orphan drugs. This law provides pharmaceutical companies with a series of reliefs, both financial ones, which allow them to recover the funds invested in research and development, and regulatory ones. Seven years of marketing exclusivity, a type of patent monopoly, is the most important relief, enabling companies to make large profits. There are insufficient funds and institutions to give financial support to patients. It is therefore necessary to make health professionals much more aware of rare diseases, in order to avoid losing time in reaching the right diagnosis and thus gain more time to treat these diseases. The importance of the discovery, development and production of orphan drugs lies in the number of patients whose quality of life can be improved significantly by these drugs, as well as in the number of potential survivals resulting from treatment with them.
20 CFR 404.220 - Average-monthly-wage method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...
A time-averaged cosmic ray propagation theory
International Nuclear Information System (INIS)
Klimas, A.J.
1975-01-01
An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.)
7 CFR 51.2561 - Average moisture content.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...
Averaging in SU(2) open quantum random walk
International Nuclear Information System (INIS)
Ampadu Clement
2014-01-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.
Averaging in SU(2) open quantum random walk
Clement, Ampadu
2014-03-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.
21 CFR 864.5950 - Blood volume measuring device.
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Blood volume measuring device. 864.5950 Section 864.5950 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES HEMATOLOGY AND PATHOLOGY DEVICES Automated and Semi-Automated Hematology Devices...
Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems
Tobasco, Ian; Goluskin, David; Doering, Charles R.
2018-02-01
For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
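The auxiliary-function bound described above can be stated compactly. For a system $\dot{x} = F(x)$ with bounded trajectories and a quantity of interest $\Phi$, any differentiable auxiliary function $V$ gives an a priori bound on the long-time average, because the time average of $\nabla V \cdot F = \mathrm{d}V/\mathrm{d}t$ vanishes along bounded trajectories:

```latex
\overline{\Phi}
  \;=\; \limsup_{T\to\infty} \frac{1}{T}\int_0^T \Phi\big(x(t)\big)\,\mathrm{d}t
  \;\le\; U
\qquad\text{whenever}\qquad
\Phi(x) + \nabla V(x)\cdot F(x) \;\le\; U
\quad\text{for all } x .
```

For polynomial $F$ and $\Phi$, the pointwise inequality can be imposed as a sum-of-squares constraint and the smallest such $U$ found by semidefinite programming, which is the convex optimization the abstract refers to.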
Sadée, Wolfgang; El Sayed, Yousry Mahmoud
The limited scope of therapeutic drug-level monitoring in cancer chemotherapy results from the often complex biochemical mechanisms that contribute to antineoplastic activity and obscure the relationships among drug serum levels and therapeutic benefits. Moreover, new agents for cancer chemotherapy are being introduced at a more rapid rate than for the treatment of other diseases, although the successful application of therapeutic drug-level monitoring may require several years of intensive study of the significance of serum drug levels. However, drug level monitoring can be of considerable value during phase I clinical trials of new antineoplastic agents in order to assess drug metabolism, bioavailability, and intersubject variability; these are important parameters in the interpretation of clinical studies, but have no immediate benefit to the patient. High performance liquid chromatography (HPLC) probably represents the most versatile and easily adaptable analytical technique for drug metabolite screening (1). HPLC may therefore now be the method of choice during phase I clinical trials of antineoplastic drugs. For example, within a single week we developed an HPLC assay—using a C18 reverse-phase column, UV detection, and direct serum injection after protein precipitation—for the new radiosensitizer, misonidazole (2).
Data-driven prediction of adverse drug reactions induced by drug-drug interactions.
Liu, Ruifeng; AbdulHameed, Mohamed Diwan M; Kumar, Kamal; Yu, Xueping; Wallqvist, Anders; Reifman, Jaques
2017-06-08
The expanded use of multiple drugs has increased the occurrence of adverse drug reactions (ADRs) induced by drug-drug interactions (DDIs). However, such reactions are typically not observed in clinical drug-development studies because most of them focus on single-drug therapies. ADR reporting systems collect information on adverse health effects caused by both single drugs and DDIs. A major challenge is to unambiguously identify the effects caused by DDIs and to attribute them to specific drug interactions. A computational method that provides prospective predictions of potential DDI-induced ADRs will help to identify and mitigate these adverse health effects. We hypothesize that drug-protein interactions can be used as independent variables in predicting ADRs. We constructed drug pair-protein interaction profiles for ~800 drugs using drug-protein interaction information in the public domain. We then constructed statistical models to score drug pairs for their potential to induce ADRs based on drug pair-protein interaction profiles. We used extensive clinical database information to construct categorical prediction models for drug pairs that are likely to induce ADRs via synergistic DDIs and showed that model performance deteriorated only slightly, with a moderate amount of false positives and false negatives in the training samples, as evaluated by our cross-validation analysis. The cross validation calculations showed an average prediction accuracy of 89% across 1,096 ADR models that captured the deleterious effects of synergistic DDIs. Because the models rely on drug-protein interactions, we made predictions for pairwise combinations of 764 drugs that are currently on the market and for which drug-protein interaction information is available. These predictions are publicly accessible at http://avoid-db.bhsai.org . We used the predictive models to analyze broader aspects of DDI-induced ADRs, showing that ~10% of all combinations have the potential to induce ADRs
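As a toy illustration of the drug pair-protein interaction profiles the models are built on (the drug names and protein targets below are invented, not from the paper):

```python
# Invented toy data: the set of protein targets each drug interacts with.
drug_targets = {
    "drugA": {"CYP3A4", "ABCB1"},
    "drugB": {"CYP3A4", "SLCO1B1"},
}
# Fixed protein ordering so every pair profile has the same layout.
proteins = sorted(set().union(*drug_targets.values()))

def pair_profile(d1, d2):
    """Binary drug-pair-protein profile: 1 if either drug hits the protein."""
    joint = drug_targets[d1] | drug_targets[d2]
    return [1 if p in joint else 0 for p in proteins]

# Feature vector for the pair, usable as independent variables in an ADR model.
print(list(zip(proteins, pair_profile("drugA", "drugB"))))
```

Vectors like this, one per drug pair, are the kind of independent variables the statistical ADR models score.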
Bioinformatics in translational drug discovery.
Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G
2017-08-31
Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).
Agrawal, Vineet; Paul, Manash K; Mukhopadhyay, Anup K
2005-01-01
This article addresses and investigates the dual incorporation of daunorubicin (DR) and 6-mercaptopurine (6-MP) in liposomes for better chemotherapy. These drugs are potential candidates for interaction due to the quinone (H acceptor) and hydroxyl (H donor) groups on DR and 6-MP, respectively. Interactions between the two drugs in solution were monitored by UV/Vis and fluorescence spectroscopy. Interaction between the two drugs inside the liposomes was evaluated by HPLC (for 6-MP) and by fluorescence spectroscopy (for daunorubicin) after phospholipase-mediated liposome lysis. Our results provide evidence for the lack of interaction between the two drugs in solution and in liposomes. The entrapment efficiencies of 6-MP in the neutral phosphatidylcholine (PC):cholesterol (Chol) :: 2:1 and anionic PC:Chol:cardiolipin (CL) :: 4:5:1 single- and double-drug liposomes were found to be 0.4% and 1.5% (on average), respectively. The entrapment efficiencies of DR in the neutral and anionic double-drug liposomes were found to be 55% and 31%, respectively. The corresponding entrapment of daunorubicin in the single-drug liposomes was found to be 62% on average. Our thin layer chromatography (TLC) and transmission electron microscopy (TEM) results suggest the stability of the lipids and liposomes, thus pointing to the plausible existence of double-drug liposomes. Cytotoxicity experiments were performed using both single-drug and double-drug liposomes. By comparing the results of phase contrast and fluorescence microscopy, it was observed that the double-drug liposomes were internalized in the Jurkat and Hut78 (highly resistant cell line) leukemia cells, as viewed by the fluorescence of daunorubicin. The cytotoxicity was dose dependent and showed a synergistic effect when double-drug liposomes were used.
Averaging and sampling for magnetic-observatory hourly data
Directory of Open Access Journals (Sweden)
J. J. Love
2010-11-01
Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
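The two standard hourly-value types compared above can be sketched on synthetic data; the sinusoidal "field" below is an assumed stand-in for real 1-min observatory data:

```python
import math

# Assumed stand-in for continuous geomagnetic variation: 24 h of 1-min
# samples of a 50 nT daily sinusoid.
field = [50.0 * math.sin(2 * math.pi * t / 1440.0) for t in range(1440)]

# Instantaneous "spot" hourly value: the single sample at the top of the hour.
spot = [field[h * 60] for h in range(24)]

# Simple 1-h "boxcar" hourly value: the mean of the 60 samples in the hour.
boxcar = [sum(field[h * 60:(h + 1) * 60]) / 60.0 for h in range(24)]

# Spot values preserve the amplitude range of the variation; boxcar
# averages attenuate it (the abstract's "amplitude distortion").
print(max(spot), max(boxcar))
```

On real data, spot values would additionally suffer the aliasing the abstract describes, which this smooth single-frequency toy signal cannot show.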
Dictionary Based Segmentation in Volumes
DEFF Research Database (Denmark)
Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley
2015-01-01
We present a method for supervised volumetric segmentation based on a dictionary of small cubes composed of pairs of intensity and label cubes. Intensity cubes are small image volumes where each voxel contains an image intensity. Label cubes are volumes with voxelwise probabilities for a given...... label. The segmentation process is done by matching a cube from the volume, of the same size as the dictionary intensity cubes, to the most similar intensity dictionary cube, and from the associated label cube we get voxel-wise label probabilities. Probabilities from overlapping cubes are averaged...... and hereby we obtain a robust label probability encoding. The dictionary is computed from labeled volumetric image data based on weighted clustering. We experimentally demonstrate our method using two data sets from material science – a phantom data set of a solid oxide fuel cell simulation for detecting...
Drug repurposing based on drug-drug interaction.
Zhou, Bin; Wang, Rong; Wu, Ping; Kong, De-Xin
2015-02-01
Given the high risk and lengthy procedure of traditional drug development, drug repurposing is gaining more and more attention. Although many types of drug information have been used to repurpose drugs, drug-drug interaction data, which imply possible physiological effects or targets of drugs, remain unexploited. In this work, similarity of drug interaction was employed to infer similarity of the physiological effects or targets for the drugs. We collected 10,835 drug-drug interactions concerning 1074 drugs, and for 700 of them, drug similarity scores based on drug interaction profiles were computed and rendered using a drug association network with 589 nodes (drugs) and 2375 edges (drug similarity scores). The 589 drugs were clustered into 98 groups with Markov Clustering Algorithm, most of which were significantly correlated with certain drug functions. This indicates that the network can be used to infer the physiological effects of drugs. Furthermore, we evaluated the ability of this drug association network to predict drug targets. The results show that the method is effective for 317 of 561 drugs that have known targets. Comparison of this method with the structure-based approach shows that they are complementary. In summary, this study demonstrates the feasibility of drug repurposing based on drug-drug interaction data. © 2014 John Wiley & Sons A/S.
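A minimal sketch of the core idea, scoring drug similarity from drug-drug interaction profiles; the Jaccard index below is an illustrative choice, not necessarily the paper's exact scoring scheme, and the data are invented:

```python
# Toy interaction profiles: each drug maps to the set of drugs it is
# reported to interact with.
interactions = {
    "d1": {"x", "y", "z"},
    "d2": {"x", "y", "w"},
    "d3": {"q"},
}

def similarity(a, b):
    """Jaccard index of two interaction profiles (illustrative score)."""
    inter = interactions[a] & interactions[b]
    union = interactions[a] | interactions[b]
    return len(inter) / len(union)

print(similarity("d1", "d2"))  # shares x, y out of {w, x, y, z} -> 0.5
print(similarity("d1", "d3"))  # no overlap -> 0.0
```

Pairwise scores like these form the edges of the drug association network, which is then clustered to infer shared physiological effects or targets.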
Guerreiro, Diogo Frasquilho; Carmo, Ana Lisa; da Silva, Joaquim Alves; Navarro, Rita; Góis, Carlos
2011-01-01
Club drugs are the following substances: Methylenedioxymethamphetamine (MDMA); Methamphetamine; Lysergic Acid Diethylamide (LSD); Ketamine; Gamma-hydroxybutyrate (GHB) and Flunitrazepam. These substances are mainly used by adolescents and young adults, mostly in recreational settings like dance clubs and rave parties. These drugs have diverse psychotropic effects, are associated with several degrees of toxicity, dependence and long term adverse effects. Some have been used for several decades, while others are relatively recent substances of abuse. They have distinct pharmacodynamic and pharmacokinetic properties, are not easy to detect and, many times, the use of club drugs is under diagnosed. Although the use of these drugs is increasingly common, few health professionals feel comfortable with the diagnosis and treatment. The authors performed a systematic literature review, with the goal of synthesising the existing knowledge about club drugs, namely epidemiology, mechanism of action, detection, adverse reactions and treatment. The purpose of this article is creating in Portuguese language a knowledge data base on club drugs, that health professionals of various specialties can use as a reference when dealing with individual with this kind of drug abuse.
Development of Automatic Visceral Fat Volume Calculation Software for CT Volume Data
Directory of Open Access Journals (Sweden)
Mitsutaka Nemoto
2014-01-01
Full Text Available Objective. To develop automatic visceral fat volume calculation software for computed tomography (CT) volume data and to evaluate its feasibility. Methods. A total of 24 sets of whole-body CT volume data and anthropometric measurements were obtained, with three sets for each of four BMI categories (under 20, 20 to 25, 25 to 30, and over 30) in both sexes. True visceral fat volumes were defined on the basis of manual segmentation of the whole-body CT volume data by an experienced radiologist. Software to automatically calculate visceral fat volumes was developed using a region segmentation technique based on morphological analysis with CT value threshold. Automatically calculated visceral fat volumes were evaluated in terms of the correlation coefficient with the true volumes and the error relative to the true volume. Results. Automatic visceral fat volume calculation results of all 24 data sets were obtained successfully and the average calculation time was 252.7 seconds/case. The correlation coefficients between the true visceral fat volume and the automatically calculated visceral fat volume were over 0.999. Conclusions. The newly developed software is feasible for calculating visceral fat volumes in a reasonable time and was proved to have high accuracy.
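The CT-value thresholding step underlying such software can be sketched as follows; the Hounsfield-unit window for fat and the voxel spacing are assumed values for illustration, and the actual method adds morphological analysis on top of the threshold:

```python
# Assumed HU window for adipose tissue and assumed voxel spacing (mm);
# real software would read these from the CT series metadata.
FAT_HU = (-190, -30)
VOXEL_MM3 = 0.7 * 0.7 * 5.0

def fat_volume_ml(volume):
    """Count voxels inside the fat HU window and convert to millilitres."""
    n = sum(1 for slice_ in volume for row in slice_ for hu in row
            if FAT_HU[0] <= hu <= FAT_HU[1])
    return n * VOXEL_MM3 / 1000.0

# One toy 2x3 slice of HU values: three voxels fall in the fat window.
toy = [[[-100, -50, 40], [-200, -90, 0]]]
print(fat_volume_ml(toy))
```

Separating visceral from subcutaneous fat is the harder part, which is where the morphological region segmentation comes in.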
Safety Impact of Average Speed Control in the UK
DEFF Research Database (Denmark)
Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert
2016-01-01
of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....
on the performance of Autoregressive Moving Average Polynomial
African Journals Online (AJOL)
Timothy Ademakinwa
Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
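The headline figure is easy to check arithmetically: 620160 is the total depth summed over all 8! input orderings, giving the stated minimum average depth:

```python
import math

# The stated minimum average depth for sorting 8 elements:
# total depth 620160 over all 8! = 40320 input orderings.
total_depth = 620160
n_orders = math.factorial(8)          # 40320
avg_depth = total_depth / n_orders    # ~15.3810 comparisons on average

# Consistency check: the information-theoretic lower bound log2(8!)
# is ~15.299, just below the achievable minimum average depth.
lower_bound = math.log2(n_orders)
print(avg_depth, lower_bound)
```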
Comparison of Interpolation Methods as Applied to Time Synchronous Averaging
National Research Council Canada - National Science Library
Decker, Harry
1999-01-01
Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
Light-cone averaging in cosmology: formalism and applications
International Nuclear Information System (INIS)
Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.
2011-01-01
We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe
Indian Academy of Sciences (India)
IAS Admin
behind metabolic reactions, importance, and consequences with several ... required for drug action. ... lism, which is catalyzed by enzymes present in the above-men- ... catalyze the transfer of one atom of oxygen to a substrate produc-.
Delineation of facial archetypes by 3d averaging.
Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G
2004-10-01
The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method consisted of averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques, there was no warping or filling-in of spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn, this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in identifying the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in educating clinicians in the recognition and identification of such syndromes.
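The z-coordinate averaging described in the method can be sketched for registered depth maps; the toy 2×2 maps below are invented and assumed already aligned:

```python
# Toy 2x2 depth maps (z in mm) for two registered face scans.
scans = [
    [[10.0, 12.0], [14.0, 16.0]],
    [[12.0, 14.0], [16.0, 18.0]],
]

def average_depth(depth_maps):
    """Average the corresponding z coordinate at every (row, col) position."""
    rows, cols = len(depth_maps[0]), len(depth_maps[0][0])
    return [[sum(m[r][c] for m in depth_maps) / len(depth_maps)
             for c in range(cols)] for r in range(rows)]

print(average_depth(scans))  # [[11.0, 13.0], [15.0, 17.0]]
```

Because each output voxel is a plain per-position mean, no warping or interpolation is involved, which matches the trade-off the abstract notes.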
Proposed average values of some engineering properties of palm
African Journals Online (AJOL)
2012-07-02
Jul 2, 2012 ... ture resistance, toughness, deformation, and hardness of palm ... useful in the determination of the amount of heat required for ... to measure the volume required for the investigation. A result of .... vanized steel, and glass.
SPATIAL DISTRIBUTION OF THE AVERAGE RUNOFF IN THE IZA AND VIȘEU WATERSHEDS
Directory of Open Access Journals (Sweden)
HORVÁTH CS.
2015-03-01
Full Text Available The average runoff represents the main parameter with which one can best evaluate an area's water resources, and it is also an important characteristic in all river-runoff research. In this paper we chose a GIS methodology for assessing the spatial evolution of the average runoff; using validity curves we identified three validity areas in which the runoff changes differently with altitude. The three curves were charted using the average runoff values of 16 hydrometric stations in the area, eight in the Vișeu and eight in the Iza river catchment. Identifying the appropriate areas of the obtained correlation curves (between specific average runoff and catchment mean altitude) allowed the assessment of potential runoff at catchment level and over altitudinal intervals. By integrating the curves' functions into GIS we created an average runoff map for the area, from which one can easily extract runoff data using GIS spatial-analyst functions. The study shows that, of the three areas, the highest runoff corresponds to the third zone, but because of its small area the water volume is also minor. It is also shown that, with the created runoff map, we can compute relatively quickly correct runoff values for areas without hydrologic control.
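A hedged sketch of the altitude-based estimation step; the three zone curves below use invented linear coefficients for illustration, not the values actually fitted from the 16 hydrometric stations:

```python
# Invented zone-specific curves relating specific average runoff
# (l/s/km^2) to mean catchment altitude (m). The paper fits one such
# curve per validity area.
curves = {
    "zone1": lambda h: 2.0 + 0.010 * h,
    "zone2": lambda h: 1.0 + 0.015 * h,
    "zone3": lambda h: 0.5 + 0.020 * h,
}

def specific_runoff(zone, mean_altitude_m):
    """Estimate specific runoff from the curve of the catchment's zone."""
    return curves[zone](mean_altitude_m)

print(specific_runoff("zone3", 1500.0))  # about 30.5 l/s/km^2
```

Applied per grid cell in a GIS raster, this is what lets the map return runoff estimates for catchments without hydrologic control.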
Ferguson, Patricia, Ed.; And Others
The National Institute on Drug Abuse presents this report as the fifth in a series intended to summarize the empirical research findings and major theoretical approaches relating to the the issues of drug use and abuse. Included in this volume are summaries of the major research findings concerning the effects of nonmedical drug use on pregnancy.…
Interpreting Bivariate Regression Coefficients: Going beyond the Average
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
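Two of the cases the note covers can be sketched directly: an intercept-only regression recovers the arithmetic mean, and the same regression on log-transformed data recovers the geometric mean:

```python
import math

# Intercept-only OLS: minimizing sum((y_i - b)^2) over b gives the
# sample mean, so the fitted intercept *is* the arithmetic mean.
y = [2.0, 8.0]

def intercept_only_ols(values):
    return sum(values) / len(values)

arith = intercept_only_ols(y)                                   # 5.0
# Regressing log(y) on a constant and exponentiating the fitted
# intercept gives the geometric mean.
geom = math.exp(intercept_only_ols([math.log(v) for v in y]))   # ~4.0
print(arith, geom)
```

The harmonic and other weighted means fit the same framework via weighted least squares, per the note's framing.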
Average stress in a Stokes suspension of disks
Prosperetti, Andrea
2004-01-01
The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is
47 CFR 1.959 - Computation of average terrain elevation.
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...
47 CFR 80.759 - Average terrain elevation.
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...
The average covering tree value for directed graph games
Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf
We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering
The Average Covering Tree Value for Directed Graph Games
Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.
2012-01-01
Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all
18 CFR 301.7 - Average System Cost methodology functionalization.
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...
Analytic computation of average energy of neutrons inducing fission
International Nuclear Information System (INIS)
Clark, Alexander Rich
2016-01-01
The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
An alternative scheme of the Bogolyubov's average method
International Nuclear Information System (INIS)
Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.
1990-01-01
In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and afterwards the average is performed. This scheme is more economical, in terms of time and algebraic calculations, than the usual procedure of Bogolyubov's method. (Author)
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail
2015-01-01
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees
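As a quick numerical check of the value quoted in this abstract (only 620160/8! comes from the abstract; the entropy bound is the standard information-theoretic lower bound for comparison sorting):

```python
import math

n = 8
total_depth = 620160                          # value reported in the abstract
avg_depth = total_depth / math.factorial(n)   # minimum average depth per permutation
lower_bound = math.log2(math.factorial(n))    # entropy lower bound log2(8!)

print(avg_depth, lower_bound)
```

The minimum average depth (about 15.38) sits just above the entropy bound log2(8!) ≈ 15.30, within one comparison of it.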
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
Self-similarity of higher-order moving averages
Arianos, Sergio; Carbone, Anna; Türk, Christian
2011-10-01
In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window; thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
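A rough sketch of the first-order detrending-moving-average variance this abstract builds on (the trailing-window alignment and window sizes are illustrative assumptions, not the authors' exact setup); for an ordinary Brownian path the fitted exponent should come out near H = 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(10000))  # Brownian path, true H = 0.5

def dma_variance(series, n):
    # first-order DMA: variance of the series around its trailing moving average
    kernel = np.ones(n) / n
    trend = np.convolve(series, kernel, mode="valid")
    resid = series[n - 1:] - trend  # last point of each window minus its average
    return np.mean(resid ** 2)

windows = np.array([4, 8, 16, 32, 64, 128])
sigma2 = np.array([dma_variance(y, n) for n in windows])
# sigma_DMA ~ n**H, so H is the slope of log(sigma) versus log(n)
H = np.polyfit(np.log(windows), 0.5 * np.log(sigma2), 1)[0]
print(round(H, 2))
```

The fitted slope recovers the Hurst exponent of the series, which is the quantity the paper compares across polynomial orders.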
Anomalous behavior of q-averages in nonextensive statistical mechanics
International Nuclear Information System (INIS)
Abe, Sumiyoshi
2009-01-01
A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L₁-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases
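The q-average discussed in this abstract is the escort-distribution expectation ⟨Q⟩_q = Σᵢ pᵢ^q Qᵢ / Σⱼ pⱼ^q; a minimal sketch (the sample numbers are illustrative only):

```python
def q_average(values, probs, q):
    # escort (q-)average of nonextensive statistical mechanics:
    # <Q>_q = sum_i p_i**q * Q_i / sum_j p_j**q; q = 1 recovers the ordinary mean
    weights = [p ** q for p in probs]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

print(q_average([1.0, 2.0], [0.9, 0.1], 1.0))  # q = 1: the ordinary average
print(q_average([1.0, 2.0], [0.9, 0.1], 2.0))  # q = 2: weights the peak more
```

Raising q concentrates the effective weight on the most probable states, which is what makes the q-average sensitive to small deformations of the distribution.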
Bootstrapping pre-averaged realized volatility under market microstructure noise
DEFF Research Database (Denmark)
Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour
The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kₙ-dependent with kₙ growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...
Changes in the air cell volume of artificially incubated ostrich eggs ...
African Journals Online (AJOL)
A total of 2160 images of candled, incubated ostrich eggs were digitized to determine the percentage of egg volume occupied by the air cell at different stages of incubation. The air cell on average occupied 2.5% of the volume of fresh eggs. For eggs that hatched successfully, this volume increased to an average of 24.4% ...
The importance of the mean platelet volume in the diagnosis of ...
African Journals Online (AJOL)
EB
2013-09-03
The importance of the mean platelet volume in the diagnosis of ... terms of the focus of the study: hemoglobin, neutrophil count, mean cell volume (MCV), red cell distribution ... hormone replacement therapy and some drugs.
21 CFR 876.1800 - Urine flow or volume measuring system.
2010-04-01
... volume measuring system. (a) Identification. A urine flow or volume measuring system is a device that measures directly or indirectly the volume or flow of urine from a patient, either during the course of... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Urine flow or volume measuring system. 876.1800...
The growth of the mean average crossing number of equilateral polygons in confinement
International Nuclear Information System (INIS)
Arsuaga, J; Borgo, B; Scharein, R; Diao, Y
2009-01-01
The physical and biological properties of collapsed long polymer chains, as well as of highly condensed biopolymers (such as DNA in all organisms), are known to be determined, at least in part, by their topological and geometrical properties. For the purpose of characterizing the topological properties of such condensed systems, equilateral random polygons restricted to confined volumes are often used. However, very few analytical results are known. In this paper, we investigate the effect of volume confinement on the mean average crossing number (ACN) of equilateral random polygons. The mean ACN of knots and links under confinement provides a simple alternative measurement for the topological complexity of knots and links in the statistical sense. For an equilateral random polygon of n segments without any volume confinement constraint, it is known that its mean ACN, ⟨ACN⟩, is of the order (3/16) n ln n + O(n). Here we model the confining volume as a simple sphere of radius R. We provide an analytical argument which shows that ⟨ACN⟩ of an equilateral random polygon of n segments under extreme confinement grows as O(n²). We propose to model the growth of ⟨ACN⟩ as a(R)n² + b(R)n ln(n) under a less-extreme confinement condition, where a(R) and b(R) are functions of R, with R being the radius of the confining sphere. Computer simulations performed show a fairly good fit using this model.
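The two growth laws quoted in this abstract can be written down directly (a sketch; the coefficients a and b passed in below are hypothetical placeholders, since a(R) and b(R) are only determined by the paper's fits):

```python
import math

def acn_unconfined(n):
    # leading term of the known asymptotic <ACN> = (3/16) n ln n + O(n)
    return (3.0 / 16.0) * n * math.log(n)

def acn_confined_model(n, a, b):
    # model form from the abstract: a(R) n^2 + b(R) n ln n (a, b hypothetical here)
    return a * n * n + b * n * math.log(n)

print(acn_unconfined(1000), acn_confined_model(1000, 0.01, 3.0 / 16.0))
```

With any positive a, the confined model's n² term eventually dominates the unconfined n ln n growth, which is the qualitative effect of confinement described above.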
Cremers, Serge; Aronson, Jeffrey K
2017-08-01
Estimates of the frequencies of rare disorders vary from country to country; the global average defined prevalence is 40 per 100 000 (0.04%). Some occur in only one or a few patients. However, collectively rare disorders are fairly common, affecting 6-8% of the US population, or about 30 million people, and a similar number in the European Union. Most of them affect children and most are genetically determined. Diagnosis can be difficult, partly because of variable presentations and partly because few clinicians have experience of individual rare disorders, although they may be assisted by searching databases. Relatively few rare disorders have specific pharmacological treatments (so-called orphan drugs), partly because of difficulties in designing trials large enough to determine benefits and harms alike. Incentives have been introduced to encourage the development of orphan drugs, including tax credits and research aids, simplification of marketing authorization procedures and exemption from fees, and extended market exclusivity. Consequently, the number of applications for orphan drugs has grown, as have the costs of using them, so much so that treatments may not be cost-effective. It has therefore been suggested that not-for-profit organizations that are socially motivated to reduce those costs should be tasked with producing them. A growing role for patient organizations, improved clinical and translational infrastructures, and developments in genetics have also contributed to successful drug development. The translational discipline of clinical pharmacology is an essential component in drug development, including orphan drugs. Clinical pharmacologists, skilled in basic pharmacology and its links to clinical medicine, can be involved at all stages. They can contribute to the delineation of genetic factors that determine clinical outcomes of pharmacological interventions, develop biomarkers, design and perform clinical trials, assist regulatory decision
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log₂ k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches the maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and kⁿ pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
Lateral dispersion coefficients as functions of averaging time
International Nuclear Information System (INIS)
Sheih, C.M.
1980-01-01
Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate various processes in studies of plume dispersion
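The power-law adjustment attributed to Turner in this abstract has the usual form C_new = C_ref (t_ref/t_new)^p; a minimal sketch (the exponent p ≈ 0.2 is a commonly used value and is an assumption here, not a number taken from this paper):

```python
def concentration_for_averaging_time(c_ref, t_ref, t_new, p=0.2):
    # Power-law scaling of a pollutant concentration from one averaging
    # time to another (p = 0.2 is an assumed, commonly quoted exponent).
    return c_ref * (t_ref / t_new) ** p

# e.g. convert a 15-min average of 100 units to a 1-h averaging time
print(concentration_for_averaging_time(100.0, 15.0, 60.0))
```

Longer averaging times smooth out concentration peaks, so the scaled value is lower than the short-time average.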
2010-07-01
... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
Average inactivity time model, associated orderings and reliability properties
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.
Average L-shell fluorescence, Auger, and electron yields
International Nuclear Information System (INIS)
Krause, M.O.
1980-01-01
The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization
Simultaneous inference for model averaging of derived parameters
DEFF Research Database (Denmark)
Jensen, Signe Marie; Ritz, Christian
2015-01-01
Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
Salecker-Wigner-Peres clock and average tunneling times
International Nuclear Information System (INIS)
Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.
2011-01-01
The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated to the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).
Time average vibration fringe analysis using Hilbert transformation
International Nuclear Information System (INIS)
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-01-01
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
Average multiplications in deep inelastic processes and their interpretation
International Nuclear Information System (INIS)
Kiselev, A.V.; Petrov, V.A.
1983-01-01
Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. With the increase of the final hadron state energy, the leading contribution to the average multiplicity comes from a parton subprocess due to production of massive quark and gluon jets and their further fragmentation, as the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e⁺e⁻ annihilation at high energies tends to unity
Fitting a function to time-dependent ensemble averaged data
DEFF Research Database (Denmark)
Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders
2018-01-01
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion ...) ... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software....
Average wind statistics for SRP area meteorological towers
International Nuclear Information System (INIS)
Laurinat, J.E.
1987-01-01
A quality-assured set of average wind statistics for the seven SRP-area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated by the averaged statistics
Drug use in first pregnancy and lactation
DEFF Research Database (Denmark)
Olesen, Charlotte; Steffensen, Flemming Hald; Nielsen, Gunnar Lauge
1999-01-01
pregnancy 44.2% of the women received prescriptions for at least one drug. Users received 2.6 prescriptions on average during pregnancy: 5% of the users redeemed 24.2% of all prescriptions. The proportion of women who redeemed prescriptions for more than three different drugs was 2.7%. The majority.......7%). CONCLUSION: A high proportion of the women received drugs during pregnancy. The pattern of drug use within the Anatomical Therapeutical Chemical (ATC) groups changed, i.e. the amount of broad spectrum antibiotics decreased and the proportion of prescriptions for local use increased. A small proportion...... of women redeemed prescriptions for more than three different drugs during pregnancy....
Legal Drugs Are Good Drugs and Illegal Drugs Are Bad Drugs
Indrati, Dina; Prasetyo, Herry
2011-01-01
ABSTRACT: Labelling drugs is an important issue nowadays in modern society. Although it is generally believed that legal drugs are good drugs and illegal drugs are bad drugs, it is evident that some people are not aware of the side effects of the drugs they use. Therefore, a key contention of this philosophical essay is that it explores harm-minimisation policy, discusses whether legal drugs are good drugs and illegal drugs are bad drugs, and explores the relation of drug misuse in a psychiatric nursing s...
Ménétré, S; Weber, M; Socha, M; Le Tacon, S; May, I; Schweitzer, C; Demoré, B
2018-04-01
In hospitals, the nursing staff is often confronted with the problem of the preparation and administration of drugs for their pediatric patients because of the lack of indication, pediatric dosage, and appropriate galenic form. The goal of this study was to give an overview of the nurses' preparation habits in pediatric units and highlight their daily problems. This single-center prospective study was conducted through an observation of the nursing staff during the drug preparation process in medicine, surgery and intensive care units. We included 91 patients (55 boys and 36 girls), with an average age of 6.3 years (youngest child, 10 days old; oldest child, 18 years old). We observed a mean 2.16 drug preparations per patient [1-5]. We collected 197 observation reports regarding 66 injectable drugs and 131 oral drugs (71 liquid forms and 60 solid forms). The majority of these reports concerned central nervous system drugs (63/197), metabolism and digestive system drugs (50/197), and anti-infective drugs (46/197). The study highlights the nurses' difficulties: modification of the solid galenic forms, lack of knowledge on oral liquid form preservation or reconstitution methods, withdrawal of small volumes, and vague and noncompliant labeling. This study led to the creation of a specific working group for pediatrics. This multidisciplinary team meets on a regular basis to work toward improving the current habits to both simplify and secure drug administration to hospitalized children. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Drugs@FDA: FDA Approved Drug Products
Search the Drugs@FDA database by Drug Name, Active Ingredient, or Application Number.
Lam, Stephanie Phuong; Roosta, Natalie; Nielsen, Mikkel Fuhr; Meyer, Maria Holmgaard; Friis, Katrine Birk
2016-01-01
In recent years, students around the world have started to use preparations such as Ritalin and Modafinil, also known as study drugs, to improve their cognitive abilities [1]. Their use is common among students in the United States of America, but it is a new tendency in Denmark. Our main focus is to determine whether study drugs need to be legalized in Denmark or not. To investigate this, our starting point is to understand central ethical arguments in the debate. We have chosen two arguments from Nick Bostrom a...
Liu, Zhanyu
2017-09-01
By analyzing the current hospital use of anti-hepatitis drugs (amounts, dosages, indications, and drug resistance), this article studied drug inventory management and cost optimization. The author used the drug utilization evaluation method, analyzed the amount and kind distribution of anti-hepatitis drugs, and performed dynamic monitoring of inventory. At the same time, the author puts forward an effective scheme of drug classification management, uses the ABC classification method to classify the drugs according to their average daily dose, and implements an automatic replenishment plan. The design of the pharmaceutical-services supply chain includes a drug procurement platform and a warehouse management system, connected to the hospital system through data exchange. Through statistical analysis of drug inventory, countermeasures for drug-logistics optimization are put forward. The results showed that the drug replenishment plan can effectively improve inventory efficiency.
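The ABC step described in this abstract can be sketched as follows (the 80%/95% cumulative-share cut-offs are conventional assumptions; the abstract does not state the thresholds actually used):

```python
def abc_classify(consumption):
    # consumption: {drug: average daily consumption value}. Rank drugs by
    # value, then assign classes by cumulative share (80%/95% cut-offs assumed).
    total = sum(consumption.values())
    classes, cum = {}, 0.0
    for drug, value in sorted(consumption.items(), key=lambda kv: -kv[1]):
        cum += value / total
        classes[drug] = "A" if cum <= 0.80 else ("B" if cum <= 0.95 else "C")
    return classes

print(abc_classify({"drug1": 70, "drug2": 20, "drug3": 6, "drug4": 4}))
```

Class A items (the small set of drugs carrying most of the consumption value) are the natural targets for tight inventory monitoring and automatic replenishment.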
Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan
2017-12-01
Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion-power prediction. This paper aims at providing some of the maneuverability characteristics of a Japanese bulk carrier model, JBC, in calm water using two computational fluid dynamics solvers, SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RANS) method and solve structured grids using the finite volume method (FVM). This paper compares the numerical calm-water test results for the JBC model with available experimental results. The calm-water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for the pressure distribution on the hull surface and the free water surface have also been included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy while utilizing minimum computational resources.
SU-D-201-04: Study On the Impact of Tumor Shape and Size On Drug Delivery to Pancreatic Tumors
International Nuclear Information System (INIS)
Soltani, M; Bazmara, H; Sefidgar, M; Subramaniam, R; Rahmim, A
2015-01-01
Purpose: Drug delivery to solid tumors can be expressed physically using transport phenomena such as convection and diffusion for the drug of interest within extracellular matrices. We aimed to carefully model these phenomena, and to investigate the effect of tumor shape and size on drug delivery to solid tumors in the pancreas. Methods: In this study, multiple tumor geometries as obtained from clinical PET/CT images were considered. An advanced numerical method was used to simultaneously solve fluid flow and solute transport equations. Data from n=45 pancreatic cancer patients with non-resectable locoregional disease were analyzed, and geometrical information from the tumors including size, shape, and aspect ratios was classified. To investigate the effect of tumor shape, tumors with similar size but different shapes were selected and analyzed. Moreover, to investigate the effect of tumor size, tumors with similar shapes but different sizes, ranging from 1 to 77 cm³, were selected and analyzed. A hypothetical tumor similar to one of the analyzed tumors, but scaled to reduce its size below 0.2 cm³, was also analyzed. Results: The results showed relatively similar average drug concentration profiles in tumors with different sizes. Generally, smaller tumors had higher absolute drug concentration. In the hypothetical tumor, with volume less than 0.2 cm³, the average drug concentration was 20% higher in comparison to its counterparts. For the various real tumor geometries, however, the maximum difference between average drug concentrations was 10% for the smallest and largest tumors. Moreover, the results demonstrated that for pancreatic tumors the shape is not significant. The negligible difference of drug concentration in different tumor shapes was due to the minimum effect of convection in pancreatic tumors. Conclusion: In tumors with different sizes, smaller tumors have higher drug delivery; however, the impact of tumor shape in the case of pancreatic tumors is not
SU-D-201-04: Study On the Impact of Tumor Shape and Size On Drug Delivery to Pancreatic Tumors
Energy Technology Data Exchange (ETDEWEB)
Soltani, M [Johns Hopkins University School of Medicine, Baltimore, Maryland, and KNT university, Tehran (Iran, Islamic Republic of); Bazmara, H [KNT university, Tehran (Iran, Islamic Republic of); Sefidgar, M [IKI University, Qazvin (Iran, Islamic Republic of); Subramaniam, R; Rahmim, A [Johns Hopkins University School of Medicine, Baltimore, MD (United States)
2015-06-15
Purpose: Drug delivery to solid tumors can be expressed physically using transport phenomena such as convection and diffusion for the drug of interest within extracellular matrices. We aimed to carefully model these phenomena, and to investigate the effect of tumor shape and size on drug delivery to solid tumors in the pancreas. Methods: In this study, multiple tumor geometries as obtained from clinical PET/CT images were considered. An advanced numerical method was used to simultaneously solve fluid flow and solute transport equations. Data from n=45 pancreatic cancer patients with non-resectable locoregional disease were analyzed, and geometrical information from the tumors including size, shape, and aspect ratios was classified. To investigate the effect of tumor shape, tumors with similar size but different shapes were selected and analyzed. Moreover, to investigate the effect of tumor size, tumors with similar shapes but different sizes, ranging from 1 to 77 cm³, were selected and analyzed. A hypothetical tumor similar to one of the analyzed tumors, but scaled to reduce its size below 0.2 cm³, was also analyzed. Results: The results showed relatively similar average drug concentration profiles in tumors with different sizes. Generally, smaller tumors had higher absolute drug concentration. In the hypothetical tumor, with volume less than 0.2 cm³, the average drug concentration was 20% higher in comparison to its counterparts. For the various real tumor geometries, however, the maximum difference between average drug concentrations was 10% for the smallest and largest tumors. Moreover, the results demonstrated that for pancreatic tumors the shape is not significant. The negligible difference of drug concentration in different tumor shapes was due to the minimum effect of convection in pancreatic tumors. Conclusion: In tumors with different sizes, smaller tumors have higher drug delivery; however, the impact of tumor shape in the case of pancreatic
Multiplexed Dosing Assays by Digitally Definable Hydrogel Volumes
DEFF Research Database (Denmark)
Faralli, Adele; Melander, Fredrik; Larsen, Esben Kjær Unmack
2016-01-01
Stable and low-cost multiplexed drug sensitivity assays using small volumes of cells or tissue are in demand for personalized medicine, including patientspecific combination chemotherapy. Spatially defined projected light photopolymerization of hydrogels with embedded active compounds is introduc...
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
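The Hargreaves model named above estimates atmospheric evaporative demand from temperature and extraterrestrial radiation alone; the water balance is then precipitation minus that demand. A minimal sketch using the standard Hargreaves coefficients (0.0023 and 17.8); the station values fed in are invented for illustration:

```python
import math

def hargreaves_et0(t_max, t_min, ra):
    """Hargreaves reference evapotranspiration (mm/day) from monthly mean
    max/min temperature (deg C) and extraterrestrial radiation ra expressed
    as an equivalent evaporation (mm/day)."""
    t_mean = 0.5 * (t_max + t_min)
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

def monthly_water_balance(precip_mm, t_max, t_min, ra, days=30):
    """Monthly water balance: precipitation minus atmospheric demand."""
    return precip_mm - days * hargreaves_et0(t_max, t_min, ra)

et0 = hargreaves_et0(25.0, 10.0, 12.0)        # illustrative station values
wb = monthly_water_balance(100.0, 25.0, 10.0, 12.0)
```

Applied per 1 km cell, a negative balance marks months where evaporative demand exceeds precipitation.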
High Average Power Fiber Laser for Satellite Communications, Phase I
National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...
A time averaged background compensator for Geiger-Mueller counters
International Nuclear Information System (INIS)
Bhattacharya, R.C.; Ghosh, P.K.
1983-01-01
The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel, providing time-averaged compensation. The method suits portable instruments. (orig.)
Time averaging, ageing and delay analysis of financial time series
Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf
2017-06-01
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
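The time averaged MSD used in this analysis can be sketched in a few lines. The geometric Brownian motion sample path below stands in for an actual index series, and its drift, volatility, and length are illustrative assumptions:

```python
import numpy as np

def ta_msd(x, lag):
    """Time averaged mean squared displacement of series x at a given lag."""
    disp = x[lag:] - x[:-lag]
    return float(np.mean(disp ** 2))

# geometric Brownian motion sample path (stand-in for an index series)
rng = np.random.default_rng(0)
n, dt, sigma = 2000, 1.0, 0.01
increments = -0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
x = np.exp(np.cumsum(increments))     # x(0) ~ 1, zero effective drift
msd = [ta_msd(x, lag) for lag in (1, 10, 100)]
```

For such a path the time averaged MSD grows with the lag, which is the kind of behavior the ageing and delay analyses in the paper probe for real index data.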
Historical Data for Average Processing Time Until Hearing Held
Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...
GIS Tools to Estimate Average Annual Daily Traffic
2012-06-01
This project presents five tools that were created for a geographical information system to estimate Annual Average Daily : Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...
A high speed digital signal averager for pulsed NMR
International Nuclear Information System (INIS)
Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.
1978-01-01
A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 x 12 bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
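The quoted 36 dB maximum S/N improvement is consistent with averaging the maximum 2¹² sweeps, since coherent averaging of N sweeps improves S/N by √N, and 20·log₁₀(√4096) ≈ 36 dB. A quick numerical check on a synthetic signal (the waveform and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_sweeps = 256, 2 ** 12                 # 256 channels, 2^12 sweeps
signal = np.sin(2 * np.pi * np.arange(n_points) / 64.0)      # synthetic signal
sweeps = signal + rng.standard_normal((n_sweeps, n_points))  # unit noise/sweep

avg = sweeps.mean(axis=0)                         # averaged signal
single_noise = np.std(sweeps[0] - signal)         # noise on one sweep
avg_noise = np.std(avg - signal)                  # residual noise after averaging
gain_db = 20.0 * np.log10(single_noise / avg_noise)  # expect about 36 dB
```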
The average-shadowing property and topological ergodicity for flows
International Nuclear Information System (INIS)
Gu Rongbao; Guo Wenjing
2005-01-01
In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied, and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic.
Fluctuations of trading volume in a stock market
Hong, Byoung Hee; Lee, Kyoung Eun; Hwang, Jun Kyung; Lee, Jae Woo
2009-03-01
We consider the probability distribution function of the trading volume and the volume changes in the Korean stock market. The probability distribution function of the trading volume shows double peaks and follows a power law, P(V/⟨V⟩) ∼ (V/⟨V⟩)^-α, at the tail part of the distribution, with α=4.15(4) for the KOSPI (Korea Composite Stock Price Index) and α=4.22(2) for the KOSDAQ (Korea Securities Dealers Automated Quotations), where V is the trading volume and ⟨V⟩ is the monthly average value of the trading volume. The second peaks originate from the increasing trends of the average volume. The probability distribution function of the volume changes also follows a power law, P(V_r) ∼ V_r^-β, where V_r = V(t) - V(t-T) and T is a time lag. The exponents β depend on the time lag T. We observe that the exponents β for the KOSDAQ are larger than those for the KOSPI.
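Tail exponents like the α reported here are often estimated with the Hill estimator; this is one standard choice, not necessarily the authors' method. A sketch on synthetic Pareto-distributed volumes with a known exponent:

```python
import numpy as np

def hill_exponent(samples, x_min):
    """Hill estimator for the density tail exponent alpha of P(x) ~ x^-alpha."""
    tail = samples[samples >= x_min]
    return 1.0 + len(tail) / np.sum(np.log(tail / x_min))

rng = np.random.default_rng(2)
alpha = 4.15                                  # target density exponent
u = 1.0 - rng.random(200_000)                 # uniform on (0, 1]
volumes = u ** (-1.0 / (alpha - 1.0))         # Pareto tail: P(X > x) ~ x^-(alpha-1)
est = hill_exponent(volumes, x_min=1.0)       # should recover ~4.15
```

On real volume data the estimate is sensitive to the choice of x_min, which is why tail fits are usually reported with an uncertainty, as in the α=4.15(4) above.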
Application of Bayesian approach to estimate average level spacing
International Nuclear Information System (INIS)
Huang Zhongfu; Zhao Zhixiang
1991-01-01
A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out
Annual average equivalent dose of workers from the health area
International Nuclear Information System (INIS)
Daltro, T.F.L.; Campos, L.L.
1992-01-01
Personnel monitoring data collected between 1985 and 1991 for personnel working in the health area were studied, providing a general overview of the change in the annual average equivalent dose. Two different aspects were considered: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses for the same sectors in different hospitals. (C.G.C.)
A precise measurement of the average b hadron lifetime
Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; 
Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; 
Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G
1996-01-01
An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.
Bivariate copulas on the exponentially weighted moving average control chart
Directory of Open Access Journals (Sweden)
Sasigarn Kuvattana
2016-10-01
Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA control chart when observations are from an exponential distribution using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
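The EWMA statistic and a Monte Carlo estimate of the Average Run Length (ARL) can be sketched as follows. The smoothing constant, control limit width, and run counts below are illustrative assumptions, not the paper's settings, and the sketch uses independent observations rather than copula-linked ones:

```python
import numpy as np

def ewma_run_length(x, lam=0.1, target=1.0, sigma=1.0, L=2.7):
    """Return the first sample index at which the EWMA statistic
    z_t = lam*x_t + (1-lam)*z_{t-1} leaves the asymptotic control band
    target +/- L*sigma*sqrt(lam/(2-lam)); len(x) if it never signals."""
    z = target
    half_width = L * sigma * np.sqrt(lam / (2.0 - lam))
    for i, xi in enumerate(x):
        z = lam * xi + (1.0 - lam) * z
        if abs(z - target) > half_width:
            return i + 1
    return len(x)

# Monte Carlo in-control ARL for exponential(mean 1) observations
rng = np.random.default_rng(3)
run_lengths = [ewma_run_length(rng.exponential(1.0, 5000)) for _ in range(200)]
arl = float(np.mean(run_lengths))
```

Comparing such ARL estimates across dependence structures (here, across copulas) is exactly how the chart's performance is ranked in the paper.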
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
Directory of Open Access Journals (Sweden)
Tellier Yoann
2018-01-01
Full Text Available The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4 and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien
2018-04-01
The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.
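The averaging bias arises because the IPDA retrieval is nonlinear (logarithmic): averaging noisy signals before taking the log and taking the log shot by shot do not agree. A toy illustration of this Jensen-type effect with invented numbers, not MERLIN's actual signal model:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000                                    # shots in the averaging window
p_on = 1.0 + 0.15 * rng.standard_normal(n)    # noisy on-line return power (a.u.)
p_off = 2.0 + 0.10 * rng.standard_normal(n)   # noisy off-line reference power (a.u.)

true_daod = 0.5 * np.log(2.0 / 1.0)           # noise-free differential optical depth

avg_first = 0.5 * np.log(p_off.mean() / p_on.mean())  # average, then log
log_first = 0.5 * np.mean(np.log(p_off / p_on))       # log, then average (biased)
```

Averaging first suppresses the relative noise before the nonlinearity and lands close to the true value, while the shot-by-shot log picks up a systematic positive bias, the kind of effect the paper's correction algorithms target.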
The average action for scalar fields near phase transitions
International Nuclear Information System (INIS)
Wetterich, C.
1991-08-01
We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)
Wave function collapse implies divergence of average displacement
Marchewka, A.; Schuss, Z.
2005-01-01
We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.
Average geodesic distance of skeleton networks of Sierpinski tetrahedron
Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao
2018-04-01
The average distance is a quantity of interest in the study of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique called finite patterns of integrals of geodesic distance on the self-similar measure for the Sierpinski tetrahedron.
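For a finite graph, the average geodesic distance is simply the mean shortest-path length over all vertex pairs. A minimal BFS-based sketch, using the tetrahedron graph K4 (the starting graph of the Sierpinski tetrahedron construction) and a 3-node path as toy inputs:

```python
from collections import deque

def average_distance(adj):
    """Mean geodesic (shortest-path) distance over all vertex pairs of a
    connected unweighted graph given as an adjacency dict."""
    nodes = list(adj)
    total = pairs = 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:                      # breadth-first search from s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(nodes) - 1       # ordered pairs; ratio equals unordered mean
    return total / pairs

# toy inputs: the tetrahedron graph K4 and a 3-node path
k4 = {i: [j for j in range(4) if j != i] for i in range(4)}
path3 = {0: [1], 1: [0, 2], 2: [1]}
```

In K4 every pair is adjacent, so the average distance is exactly 1; the asymptotic formulas in the paper describe how this quantity scales as the skeleton networks grow.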
International Nuclear Information System (INIS)
Dunn, L.F.; Taylor, M.L.; Kron, T.; Franich, R.
2010-01-01
Full text: Anatomic motion during a radiotherapy treatment is one of the more significant challenges in contemporary radiation therapy. For tumours of the lung, motion due to patient respiration makes both accurate planning and dose delivery difficult. One approach is to use the maximum intensity projection (MIP) obtained from a 4D computed tomography (4D-CT) scan and then use this to determine the treatment volume. The treatment is then planned on a 4D-CT average reconstruction, rather than assuming the entire ITV has a uniform tumour density. This raises the question: how well does planning on a 'blurred' distribution of density with CT values greater than lung density but less than tumour density match the true case of a tumour moving within lung tissue? The aim of this study was to answer this question, determining the dosimetric impact of using a 4D-CT average reconstruction as the basis for a radiotherapy treatment plan. To achieve this, Monte Carlo simulations were undertaken using GEANT4. The geometry consisted of a tumour (diameter 30 mm) moving with a sinusoidal pattern of amplitude = 20 mm. The tumour's excursion occurs within a lung equivalent volume beyond a chest wall interface. Motion was defined parallel to a 6 MV beam. This was then compared to a single oblate tumour of a magnitude determined by the extremes of the tumour motion. The variable density of the 4D-CT average tumour is simulated by a time-weighted average, to achieve the observed density gradient. The generic moving tumour geometry is illustrated in the Figure.
Maddix, Danielle C.; Sampaio, Luiz; Gerritsen, Margot
2018-05-01
The degenerate parabolic Generalized Porous Medium Equation (GPME) poses numerical challenges due to self-sharpening and its sharp corner solutions. For these problems, we show results for two subclasses of the GPME with differentiable k(p) with respect to p, namely the Porous Medium Equation (PME) and the superslow diffusion equation. Spurious temporal oscillations and nonphysical locking and lagging have been reported in the literature. These issues have been attributed to harmonic averaging of the coefficient k(p) for small p, and arithmetic averaging has been suggested as an alternative. We show that harmonic averaging is not solely responsible and that an improved discretization can mitigate these issues. Here, we investigate the causes of these numerical artifacts using modified equation analysis. The modified equation framework can be used for any type of discretization. We show results for the second order finite volume method. The observed problems with harmonic averaging can be traced to two leading error terms in its modified equation. This is also illustrated numerically through a Modified Harmonic Method (MHM) that can locally modify the critical terms to remove the aforementioned numerical artifacts.
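The harmonic versus arithmetic averaging issue concerns how the coefficient k(p) is sampled at finite volume cell faces: when one neighboring cell is nearly degenerate (k ≈ 0), the harmonic mean collapses to zero and can lock the front, while the arithmetic mean stays finite. A minimal sketch with invented coefficient values:

```python
import numpy as np

def interface_coeffs(k, mode="harmonic"):
    """Face coefficients k_{i+1/2} between adjacent cells (assumes k > 0)."""
    kl, kr = k[:-1], k[1:]
    if mode == "harmonic":
        return 2.0 * kl * kr / (kl + kr)
    return 0.5 * (kl + kr)

# one nearly degenerate cell (p -> 0, so k(p) -> 0) between two unit cells
k = np.array([1.0, 1e-12, 1.0])
harm = interface_coeffs(k, "harmonic")     # collapses toward zero: front can lock
arith = interface_coeffs(k, "arithmetic")  # stays O(1): front can lag instead
```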
International Nuclear Information System (INIS)
Sato, Kaoru; Manabe, Kentaro; Endo, Akira
2012-01-01
Average adult Japanese male (JM-103) and female (JF-103) voxel (volume pixel) phantoms newly constructed at the Japan Atomic Energy Agency (JAEA) have average characteristics of body sizes and organ masses in adult Japanese. In JM-103 and JF-103, several organs and tissues were newly modeled for dose assessments based on tissue weighting factors of the 2007 Recommendations of the International Commission on Radiological Protection (ICRP). In this study, SAFs for thyroid, stomach, lungs and lymphatic nodes of the JM-103 and JF-103 phantoms were calculated and compared with those of other adult Japanese phantoms based on individual medical images. In most cases, differences in SAFs between JM-103, JF-103 and other phantoms were about several tens of percent, mainly attributable to mass differences of organs, tissues and contents. Therefore, it was concluded that the SAFs of JM-103 and JF-103 represent those of the average adult Japanese, and that the two phantoms can be applied to dose assessment for the average adult Japanese on the basis of the 2007 Recommendations. (author)
African Journals Online (AJOL)
Angel_D
tests (LFTs) to monitor hepatotoxicity (liver [hepatic] damage) is uncommon in many resource-poor ... cholesterol ester storage disease. ... The problem with many patients is that they are taking several drugs often ... Urine, saliva and other body fluids may be coloured orange-red: this can be very alarming to patients.
Gorter, J.A.; Potschka, H.; Noebels, J.L.; Avoli, M.; Rogawski, M.A.; Olsen, R.W.; Delgado-Escueta, A.V.
2012-01-01
Drug resistance remains one of the major challenges in epilepsy therapy. Identification of factors that contribute to therapeutic failure is crucial for the future development of novel therapeutic strategies for difficult-to-treat epilepsies. Several clinical studies have shown that high seizure
Indian Academy of Sciences (India)
preventing disease in human beings or in animals. In the process ... of requirement. In the process, they may cause toxic side effects. .... the liver to release the physiologically active drug. Similarly ... patients addicted to alcohol. However, it is a ...
Price Sensitivity of Demand for Prescription Drugs
DEFF Research Database (Denmark)
Skipper, Lars; Simonsen, Marianne; Skipper, Niels
This paper investigates price sensitivity of demand for prescription drugs using drug purchase records for a 20% random sample of the Danish population. We identify price responsiveness by exploiting exogenous variation in prices caused by kinked reimbursement schemes and implement a regression ...... education and income are, however, more responsive to the price. Also, essential drugs that prevent deterioration in health and prolong life have lower associated average price sensitivity....
Impact of orphan drugs on Latvian budget.
Logviss, Konstantins; Krievins, Dainis; Purvina, Santa
2016-05-11
Number of orphan medicinal products on the market and number of rare disease patients, taking these usually expensive products, are increasing. As a result, budget impact of orphan drugs is growing. This factor, along with the cost-effectiveness of orphan drugs, is often considered in the reimbursement decisions, directly affecting accessibility of rare disease therapies. The current study aims to assess the budget impact of orphan drugs in Latvia. Our study covered a 5-year period, from 2010 to 2014. Impact of orphan drugs on Latvian budget was estimated from the National Health Service's perspective. It was calculated in absolute values and relative to total pharmaceutical market and total drug reimbursement budget. A literature review was performed for comparison with other European countries. Orphan drug annual expenditure ranged between EUR 2.065 and 3.065 million, with total 5-year expenditure EUR 12.467 million. It constituted, on average, 0.84 % of total pharmaceutical market and 2.14 % of total drug reimbursement budget, respectively. Average annual per patient expenditures varied widely, from EUR 1 534 to EUR 580 952. The most costly treatment was enzyme replacement therapy (Elaprase) for MPS II. Glivec had the highest share (34 %) of the total orphan drug expenditure. Oncological drugs represented more than a half of the total orphan drug expenditure, followed by drugs for metabolic and endocrine conditions and medicines for cardiopulmonary diseases. Three indications: Ph+ CML, MPS II, and PAH accounted for nearly 90 % of the total orphan drug expenditure. Budget impact of orphan drugs in Latvia is very small. It increased slightly over a period of five years, due to the slight increase in the number of patients and the number of orphan drugs reimbursed. Current Latvian drug reimbursement system is not sufficient for most orphan drugs.
Liu, Yao; Galárraga, Omar
2017-03-01
The efficacy of low- and middle-income countries’ (LMIC) national drug policies in managing antiretroviral (ARV) pharmaceutical prices is not well understood. Though ARV drug prices have been declining in LMIC over the past decade, little research has been done on the role of their national drug policies. This study aims to (i) analyse global ARV prices from 2004 to 2013 and (ii) examine the relationship of national drug policies to ARV prices. Analysis of ARV drug prices utilized data from the Global Price Reporting Mechanism from the World Health Organization (WHO). Ten of the most common ARV drugs (first-line and second-line) were selected. National drug policies were also assessed for 12 countries in the South African Development Community (SADC), which self-reported their policies through WHO surveys. The best predictor of ARV drug price was generic status—the generic versions of 8 out of 10 ARV drugs were priced lower than branded versions. However, other factors such as transaction volume, HIV prevalence, national drug policies and PEPFAR/CHAI involvement were either not associated with ARV drug price or were not consistent predictors of price across different ARV drugs. In the context of emerging international trade agreements, which aim to strengthen patent protections internationally and potentially delay the sale of generic drugs in LMIC, this study shines a spotlight on the importance of generic drugs in controlling ARV prices. Further research is needed to understand the impact of national drug policies on ARV prices.
Drug-drug interactions involving lysosomes: mechanisms and potential clinical implications.
Logan, Randall; Funk, Ryan S; Axcell, Erick; Krise, Jeffrey P
2012-08-01
Many commercially available, weakly basic drugs have been shown to be lysosomotropic, meaning they are subject to extensive sequestration in lysosomes through an ion trapping-type mechanism. The extent of lysosomal trapping of a drug is an important therapeutic consideration because it can influence both activity and pharmacokinetic disposition. The administration of certain drugs can alter lysosomes such that their accumulation capacity for co-administered and/or secondarily administered drugs is altered. In this review the authors explore what is known regarding the mechanistic basis for drug-drug interactions involving lysosomes. Specifically, the authors address the influence of drugs on lysosomal pH, volume and lipid processing. Many drugs are known to extensively accumulate in lysosomes and significantly alter their structure and function; however, the therapeutic and toxicological implications of this remain controversial. The authors propose that drug-drug interactions involving lysosomes represent an important potential source of variability in drug activity and pharmacokinetics. Most evaluations of drug-drug interactions involving lysosomes have been performed in cultured cells and isolated tissues. More comprehensive in vivo evaluations are needed to fully explore the impact of this drug-drug interaction pathway on therapeutic outcomes.
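The ion trapping mechanism referred to above is commonly estimated with a Henderson-Hasselbalch partitioning argument. A sketch under the usual simplifying assumptions (monoprotic weak base, only the neutral species crosses membranes, equilibrium reached); the pH and pKa values are illustrative:

```python
import math

def trapping_ratio(pka, ph_lysosome=5.0, ph_cytosol=7.4):
    """Henderson-Hasselbalch estimate of the lysosome/cytosol concentration
    ratio for a monoprotic weak base, assuming only the neutral species is
    membrane-permeant and both compartments are at equilibrium."""
    return (1.0 + 10.0 ** (pka - ph_lysosome)) / (1.0 + 10.0 ** (pka - ph_cytosol))

ratio = trapping_ratio(pka=8.0)   # a typical lysosomotropic weak base: ~200x
```

Raising lysosomal pH, as some co-administered drugs do, sharply reduces this predicted ratio, which is one mechanistic route for the drug-drug interactions the review discusses.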
... use of these drugs is a form of drug abuse. Medicines that are for treating a health problem ... about local resources. Alternative Names Overdose from drugs; Drug abuse first aid References Myck MB. Hallucinogens and drugs ...
Average Soil Water Retention Curves Measured by Neutron Radiography
Energy Technology Data Exchange (ETDEWEB)
Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD
2011-01-01
Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
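The per-pixel conversion step follows Beer-Lambert's law. A minimal sketch on synthetic images; the attenuation coefficient and water thickness are assumed illustrative values, and the beam hardening, geometric, and scattering corrections described above are omitted:

```python
import numpy as np

def water_thickness(i_wet, i_dry, mu_w):
    """Per-pixel water thickness (cm) from transmission images via
    Beer-Lambert's law: I_wet = I_dry * exp(-mu_w * t_w)."""
    return -np.log(i_wet / i_dry) / mu_w

mu_w = 3.5                              # assumed attenuation coefficient, cm^-1
i_dry = np.full((4, 4), 1000.0)         # synthetic dry-column radiograph
i_wet = i_dry * np.exp(-mu_w * 0.2)     # uniform 0.2 cm of water everywhere
t = water_thickness(i_wet, i_dry, mu_w)
avg_saturation = float(t.mean() / 0.2)  # normalize by the saturated thickness
```

Averaging such per-pixel saturations at each imposed matric potential yields one point of the average water retention curve.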
Confinement, average forces, and the Ehrenfest theorem for a one ...
Indian Academy of Sciences (India)
Home; Journals; Pramana – Journal of Physics; Volume 80; Issue 5. Confinement ... A free particle moving on the entire real line, which is then permanently confined to a line segment or `a box' (this situation is achieved by taking the limit V₀ → ∞ in a finite well potential). This case is ....
Average weighted receiving time in recursive weighted Koch networks
Indian Academy of Sciences (India)
Home; Journals; Pramana – Journal of Physics; Volume 86; Issue 6 ... Nonlinear Scientific Research Center, Faculty of Science, Jiangsu University, Zhenjiang, Jiangsu, 212013, People's Republic of China; School of Computer Science and Telecommunication Engineering, Jiangsu University, Zhenjiang, 212013, People's ...
Exotic drugs and English medicine: England’s drug trade, c.1550-c.1800
Patrick Wallis
2010-01-01
What effect did the dramatic expansion in long distance trade in the early modern period have on healthcare in England? This paper presents new evidence on the scale, origins and content of English imports of medical drugs between 1567 and 1774. It shows that the volume of medical drugs imported exploded in the seventeenth century, and continued growing more gradually over the eighteenth century. The variety of drugs imported changed more slowly. Much was re-exported, but estimates of dosages...
Estimating average glandular dose by measuring glandular rate in mammograms
International Nuclear Information System (INIS)
Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru
2003-01-01
The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
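A minimal sketch of the conversion idea, with a plain interpolated calibration curve standing in for the paper's neural network; all phantom values and dose coefficients below are invented for illustration.

```python
import numpy as np

# Hypothetical phantom calibration: mean pixel value measured for
# breast-equivalent phantoms of known glandular rate (illustrative numbers).
cal_pixel = np.array([ 80., 110., 140., 170., 200.])   # phantom pixel values
cal_gland = np.array([  0.,  25.,  50.,  75., 100.])   # glandular rate, %

def glandular_rate(pixels):
    """Convert mammogram pixel values to glandular rate (%) by
    interpolating the phantom calibration curve (a stand-in for the
    paper's neural-network conversion)."""
    return np.interp(pixels, cal_pixel, cal_gland)

def average_glandular_dose(pixels, dose_at_0=2.2, dose_at_100=1.4):
    """Illustrative dosimetry step: interpolate a per-exposure dose
    coefficient (mGy) linearly between 0% and 100% glandular values."""
    g = glandular_rate(pixels).mean() / 100.0
    return (1.0 - g) * dose_at_0 + g * dose_at_100

breast = np.array([[150., 160.], [130., 145.]])
print(glandular_rate(breast).mean())   # average glandular rate, %
print(average_glandular_dose(breast))  # mGy
```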
Accurate phenotyping: Reconciling approaches through Bayesian model averaging.
Directory of Open Access Journals (Sweden)
Carla Chia-Ming Chen
Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder-an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
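One simple way to realize the model-averaging step is to weight each clustering's membership probabilities by approximate posterior model probabilities; the BIC-based weights and the tiny membership matrices below are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC:
    w_k ∝ exp(-BIC_k / 2), shifted by the minimum for stability."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-(b - b.min()) / 2.0)
    return w / w.sum()

# Cluster-membership probabilities (rows: individuals, cols: phenotype
# clusters) from two hypothetical models.
p_lca = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])   # latent class analysis
p_gom = np.array([[0.8, 0.2], [0.1, 0.9], [0.4, 0.6]])   # grade of membership

w = bma_weights([100.0, 102.0])       # model BICs (illustrative)
p_avg = w[0] * p_lca + w[1] * p_gom   # model-averaged memberships
print(w, p_avg)
```

The averaged memberships remain valid probabilities (rows sum to one) because the combination is convex.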
Yearly, seasonal and monthly daily average diffuse sky radiation models
International Nuclear Information System (INIS)
Kassem, A.S.; Mujahid, A.M.; Turner, D.W.
1993-01-01
A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and a standard error of estimate of 0.092. The data were also analyzed for a seasonal dependence, and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficients of determination are 0.93, 0.81, 0.94 and 0.93, whereas the standard errors of estimate are 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed, which has a 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs
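A regression of this general kind can be sketched as follows, fitting diffuse fraction against clearness index by least squares on a synthetic data set (the model form and all numbers are illustrative, not the paper's data):

```python
import numpy as np

# Synthetic daily data: clearness index Kt = H/H0 and diffuse fraction Hd/H.
rng = np.random.default_rng(0)
kt = rng.uniform(0.2, 0.8, 200)                          # clearness index
hd_frac = 1.0 - 1.1 * kt + rng.normal(0, 0.03, kt.size)  # synthetic "truth"

# Linear model hd_frac ≈ a + b * kt, fitted by least squares.
b, a = np.polyfit(kt, hd_frac, 1)      # polyfit returns highest degree first
pred = a + b * kt
resid = hd_frac - pred
r2 = 1.0 - resid.var() / hd_frac.var()            # determination coefficient
see = np.sqrt(np.sum(resid**2) / (kt.size - 2))   # standard error of estimate
print(f"a={a:.3f} b={b:.3f} R^2={r2:.3f} SEE={see:.3f}")
```

The determination coefficient and standard error of estimate computed here are the same two goodness-of-fit figures quoted in the abstract.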
Drug Safety: Managing Multiple Drugs
... This series is produced by Consumers Union and Consumer Reports Best Buy Drugs, a public information project supported by grants from the Engelberg Foundation and the National Library of Medicine of ... Consumer and Prescriber Education Grant Program which is funded ...
Optimal bounds and extremal trajectories for time averages in dynamical systems
Tobasco, Ian; Goluskin, David; Doering, Charles
2017-11-01
For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.
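The objective being bounded above, a long-time average along a trajectory, can be illustrated numerically for the Lorenz system (fixed-step RK4 at the standard parameters σ = 10, ρ = 28, β = 8/3; this computes the time average itself, not the auxiliary-function bound):

```python
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = v
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

def rk4_step(v, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz(v); k2 = lorenz(v + 0.5*dt*k1)
    k3 = lorenz(v + 0.5*dt*k2); k4 = lorenz(v + dt*k3)
    return v + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

v, dt = np.array([1.0, 1.0, 1.0]), 0.01
for _ in range(2000):                 # discard the initial transient
    v = rk4_step(v, dt)
zs = []
for _ in range(20000):                # accumulate the long-time average of z
    v = rk4_step(v, dt)
    zs.append(v[2])
print(np.mean(zs))
```

An auxiliary function certified by semidefinite programming, as in the abstract, would bound this number from above without integrating any trajectory.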
Adolescence in New Zealand. Volume Two: Wider Perspectives.
Stewart, Robert A. C., Ed.
This is the second of a two-volume collection of research-based readings dealing with the New Zealand adolescent. This volume considers the areas of drugs and delinquency, as well as the world of work and Maori-pakeha differences. The following topics are included: marihuana use; vocational aspiration; alcohol and tobacco use; Maori-pakeha…
Average cross sections for the 252Cf neutron spectrum
International Nuclear Information System (INIS)
Dezso, Z.; Csikai, J.
1977-01-01
A number of average cross sections have been measured for 252Cf neutrons in (n,γ), (n,p), (n,2n) and (n,α) reactions by the activation method, and for fission by a fission chamber. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross-section values lie in the interval from 0.3 to 200 mb; as a function of target neutron number the data increase up to about N = 60, with minima near closed shells. The (n,p) values lie between 0.3 mb and 113 mb; these cross sections decrease significantly with increasing threshold energy. The (n,2n) values are below 20 mb, and the (n,α) data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Zsup(4/3)/A are shown. The results obtained are summarized in tables
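The quantity reported here, a spectrum-averaged cross section, can be sketched as a weighted integral over an assumed spectrum shape; the Maxwellian temperature and the toy excitation function below are illustrative stand-ins, not evaluated nuclear data:

```python
import numpy as np

# Approximate the 252Cf fission-neutron spectrum by a Maxwellian,
# chi(E) ∝ sqrt(E) * exp(-E/T) with T ≈ 1.42 MeV (common approximation).
T = 1.42                                   # Maxwellian temperature, MeV
E = np.linspace(1e-3, 20.0, 20000)         # neutron energy grid, MeV
chi = np.sqrt(E) * np.exp(-E / T)          # spectrum shape (unnormalized)

def sigma_threshold(E, E_thr=3.0, plateau=100.0):
    """Toy threshold-reaction excitation function: zero below E_thr,
    rising toward a plateau (mb) above it."""
    return np.where(E > E_thr, plateau * (1.0 - np.exp(-(E - E_thr))), 0.0)

# <sigma> = integral(sigma * chi) / integral(chi); on a uniform grid the
# common step length cancels, so plain sums suffice.
sigma_avg = (sigma_threshold(E) * chi).sum() / chi.sum()
print(f"<sigma> = {sigma_avg:.1f} mb")
```

Because only the high-energy tail of the spectrum lies above the threshold, the average comes out well below the 100 mb plateau, mirroring why threshold-reaction averages in the abstract fall with increasing threshold energy.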
Testing averaged cosmology with type Ia supernovae and BAO data
Energy Technology Data Exchange (ETDEWEB)
Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)
2017-02-01
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
Average contraction and synchronization of complex switched networks
International Nuclear Information System (INIS)
Wang Lei; Wang Qingguo
2012-01-01
This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)
The Health Effects of Income Inequality: Averages and Disparities.
Truesdale, Beth C; Jencks, Christopher
2016-01-01
Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.
Testing averaged cosmology with type Ia supernovae and BAO data
International Nuclear Information System (INIS)
Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani
2017-01-01
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
Perceived Average Orientation Reflects Effective Gist of the Surface.
Cha, Oakyoon; Chong, Sang Chul
2018-03-01
The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.
Object detection by correlation coefficients using azimuthally averaged reference projections.
Nicholson, William V
2004-11-01
A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
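Azimuthal averaging itself can be sketched by binning pixels on integer radius from the image centre (a generic implementation, not the paper's code):

```python
import numpy as np

def azimuthal_average(img):
    """Average a 2D image over azimuth: bin pixels by integer radius
    from the image centre and return the mean intensity per radial bin."""
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - (w - 1) / 2.0, y - (h - 1) / 2.0).astype(int)
    sums = np.bincount(r.ravel(), weights=img.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts

# Sanity check on a radially symmetric test image: the azimuthal average
# should follow the generating radial profile.
h = w = 65
y, x = np.indices((h, w))
r = np.hypot(x - 32, y - 32)
img = np.exp(-r / 10.0)
profile = azimuthal_average(img)
print(profile[:5])
```

Correlating against such a 1D profile instead of a full 2D reference is what makes the azimuthally-averaged approach cheap: the in-plane rotation search disappears.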
Measurement of average radon gas concentration at workplaces
International Nuclear Information System (INIS)
Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.
2003-01-01
In this paper, results of measurements of the average radon gas concentration at workplaces (schools, kindergartens and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailings pond). Consequently, at workplaces where considerable seasonal changes of radon concentration are expected, measurements should be made over 12 months. If that is not possible, the chosen six-month period should contain summer and winter months as well. The average radon concentration during working hours can differ considerably from the average over the whole time in cases of frequent opening of doors and windows or use of artificial ventilation. (authors)
A Martian PFS average spectrum: Comparison with ISO SWS
Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.
2005-08-01
The evaluation of the planetary Fourier spectrometer performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393.]. First, the average conditions of the Mars atmosphere are compared, then the mixing ratios of the major gases are evaluated. Major and minor bands of CO2 are compared from the point of view of feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm-1 for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshooting on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm-1 or better. A large number of narrow features that remain to be identified were discovered.
Size and emotion averaging: costs of dividing attention after all.
Brand, John; Oriet, Chris; Tottenham, Laurie Sykes
2012-03-01
Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.
A virtual pebble game to ensemble average graph rigidity.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2015-01-01
The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
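Maxwell constraint counting as described can be sketched in a few lines; the body-bar convention used here (6 DOF per body, one DOF removed per bar, 6 global rigid-body motions subtracted) is the standard one, and the example networks are invented:

```python
# Maxwell constraint counting (MCC) for a body-bar network: a mean-field
# lower bound on internal degrees of freedom. Each rigid body carries
# 6 DOF and each bar removes at most one, so
#   F = 6 * n_bodies - 6 - n_bars
# (6 subtracted for global rigid-body motions). Negative F suggests the
# network is globally over-constrained; positive F, under-constrained.
def maxwell_count(n_bodies, n_bars):
    return 6 * n_bodies - 6 - n_bars

print(maxwell_count(4, 6))    # sparse network: internal DOF remain
print(maxwell_count(4, 20))   # dense network: over-constrained by this count
```

Unlike the Pebble Game, this count cannot localize rigid clusters or redundant constraints, which is exactly the gap the VPG aims to fill at mean-field cost.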
Exactly averaged equations for flow and transport in random media
International Nuclear Information System (INIS)
Shvidler, Mark; Karasaki, Kenzi
2001-01-01
It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood; for example, the convergence behavior and the accuracy of truncated perturbation series, where the calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do there exist exact, general and sufficiently universal forms of averaged equations? If the answer is positive, there arises the problem of constructing these equations and analyzing them. There are many publications related to these problems, oriented to different applications: hydrodynamics, flow and transport in porous media, theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method for finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of conductivity, porosity and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases: 1. steady-state flow with sources in porous media with random conductivity; 2. transient flow with sources in compressible media with random conductivity and porosity; 3. non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversally isotropic, orthotropic), and we analyze the hypothesis about the structure of the non-local equations in the general case of stochastically homogeneous fields. (author)
Increase in average foveal thickness after internal limiting membrane peeling
Directory of Open Access Journals (Sweden)
Kumagai K
2017-04-01
Full Text Available Kazuyuki Kumagai,1 Mariko Furukawa,1 Tetsuyuki Suetsugu,1 Nobuchika Ogino2 1Department of Ophthalmology, Kami-iida Daiichi General Hospital, 2Department of Ophthalmology, Nishigaki Eye Clinic, Aichi, Japan Purpose: To report the findings in three cases in which the average foveal thickness was increased after a thin epiretinal membrane (ERM) was removed by vitrectomy with internal limiting membrane (ILM) peeling. Methods: The foveal contour was normal preoperatively in all eyes. All cases underwent successful phacovitrectomy with ILM peeling for a thin ERM. The optical coherence tomography (OCT) images were examined before and after the surgery. The changes in the average foveal (1 mm) thickness and the foveal areas within 500 µm from the foveal center were measured. The postoperative changes in the inner and outer retinal areas determined from the cross-sectional OCT images were analyzed. Results: The average foveal thickness and the inner and outer foveal areas increased significantly after the surgery in each of the three cases. The percentage increase in the average foveal thickness relative to the baseline thickness was 26% in Case 1, 29% in Case 2, and 31% in Case 3. The percentage increase in the foveal inner retinal area was 71% in Case 1, 113% in Case 2, and 110% in Case 3, and the percentage increase in foveal outer retinal area was 8% in Case 1, 13% in Case 2, and 18% in Case 3. Conclusion: The increase in the average foveal thickness and the inner and outer foveal areas suggests that a centripetal movement of the inner and outer retinal layers toward the foveal center probably occurred due to the ILM peeling. Keywords: internal limiting membrane, optical coherence tomography, average foveal thickness, epiretinal membrane, vitrectomy
Legal Drugs Are Good Drugs And Illegal Drugs Are Bad Drugs
Directory of Open Access Journals (Sweden)
Dina Indrati
2011-07-01
Full Text Available ABSTRACT: The labelling of drugs is an important issue in modern society. Although it is generally believed that legal drugs are good drugs and illegal drugs are bad drugs, it is evident that some people are not aware of the side effects of the drugs they use. Therefore, this philosophical essay explores harm-minimisation policy, discusses whether legal drugs are good drugs and illegal drugs are bad drugs, and explores the relation of drug misuse to the psychiatric nursing setting and dual diagnosis. Key words: Legal, good drugs, illegal, bad drugs.
Positivity of the spherically averaged atomic one-electron density
DEFF Research Database (Denmark)
Fournais, Søren; Hoffmann-Ostenhof, Maria; Hoffmann-Ostenhof, Thomas
2008-01-01
We investigate the positivity of the spherically averaged atomic one-electron density ρ̃(r). For a ρ̃ which stems from a physical ground state we prove that ρ̃(r) > 0 for r ≥ 0. This article may be reproduced in its entirety for non-commercial purposes.
Research & development and growth: A Bayesian model averaging analysis
Czech Academy of Sciences Publication Activity Database
Horváth, Roman
2011-01-01
Vol. 28, No. 6 (2011), pp. 2669-2673. ISSN 0264-9993. [Society for Nonlinear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf
MAIN STAGES OF SCIENTIFIC AND PRODUCTION MASTERING OF THE AVERAGE URAL TERRITORY
Directory of Open Access Journals (Sweden)
V.S. Bochko
2006-09-01
Full Text Available Questions of the formation of the Average Ural as an industrial territory, on the basis of its scientific study and production mastering, are considered in the article. It is shown that studies of Ural resources and of the particularities of the vital activity of its population were pursued by Russian and foreign scientists in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to systematic organizational-economic study of the productive forces, society and nature of the Average Ural. More attention is now addressed to the new problems of the region and to the need for their scientific solution.
High-Average, High-Peak Current Injector Design
Biedron, S G; Virgo, M
2005-01-01
There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.
Non-self-averaging nucleation rate due to quenched disorder
International Nuclear Information System (INIS)
Sear, Richard P
2012-01-01
We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)
A note on moving average models for Gaussian random fields
DEFF Research Database (Denmark)
Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.
The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
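The kernel-smoothing view can be sketched by convolving discretized Gaussian white noise with a kernel via FFT; a Gaussian kernel is used here purely for illustration in place of the paper's power kernel:

```python
import numpy as np

# Moving-average construction of a Gaussian random field on a grid:
# Z(x) = ∫ k(x - u) W(du), discretized as a circular convolution of
# white noise (a Gaussian Lévy basis) with the kernel k.
rng = np.random.default_rng(1)
n = 128
noise = rng.normal(size=(n, n))          # discretized Gaussian basis

y, x = np.indices((n, n))
r2 = (x - n // 2) ** 2 + (y - n // 2) ** 2
kernel = np.exp(-r2 / (2.0 * 4.0 ** 2))  # Gaussian kernel, scale 4 pixels
kernel /= np.sqrt((kernel ** 2).sum())   # unit L2 norm => unit field variance

# FFT convolution; ifftshift moves the kernel centre to index (0, 0).
field = np.real(np.fft.ifft2(np.fft.fft2(noise)
                             * np.fft.fft2(np.fft.ifftshift(kernel))))
print(field.shape, field.std())          # smooth field, pointwise std ~ 1
```

Swapping `kernel` for a power kernel changes the correlation model and, per the abstract, the roughness (Hausdorff dimension) of the sample paths.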
Disc volume reduction with percutaneous nucleoplasty in an animal model.
Directory of Open Access Journals (Sweden)
Richard Kasch
Full Text Available STUDY DESIGN: We assessed volume following nucleoplasty disc decompression in lower lumbar spines from cadaveric pigs using 7.1Tesla magnetic resonance imaging (MRI. PURPOSE: To investigate coblation-induced volume reductions as a possible mechanism underlying nucleoplasty. METHODS: We assessed volume following nucleoplastic disc decompression in pig spines using 7.1-Tesla MRI. Volumetry was performed in lumbar discs of 21 postmortem pigs. A preoperative image data set was obtained, volume was determined, and either disc decompression or placebo therapy was performed in a randomized manner. Group 1 (nucleoplasty group was treated according to the usual nucleoplasty protocol with coblation current applied to 6 channels for 10 seconds each in an application field of 360°; in group 2 (placebo group the same procedure was performed but without coblation current. After the procedure, a second data set was generated and volumes calculated and matched with the preoperative measurements in a blinded manner. To analyze the effectiveness of nucleoplasty, volumes between treatment and placebo groups were compared. RESULTS: The average preoperative nucleus volume was 0.994 ml (SD: 0.298 ml. In the nucleoplasty group (n = 21 volume was reduced by an average of 0.087 ml (SD: 0.110 ml or 7.14%. In the placebo group (n = 21 volume was increased by an average of 0.075 ml (SD: 0.075 ml or 8.94%. The average nucleoplasty-induced volume reduction was 0.162 ml (SD: 0.124 ml or 16.08%. Volume reduction in lumbar discs was significant in favor of the nucleoplasty group (p<0.0001. CONCLUSIONS: Our study demonstrates that nucleoplasty has a volume-reducing effect on the lumbar nucleus pulposus in an animal model. Furthermore, we show the volume reduction to be a coblation effect of nucleoplasty in porcine discs.
Anderson, Gail D
2006-12-01
Knowledge of pharmacokinetics and the use of a mechanistic-based approach can improve our ability to predict the effects of pregnancy for medications when data are limited. Despite the many physiological changes that occur during pregnancy that could theoretically affect absorption, bioavailability does not appear to be altered. Decreased albumin and alpha(1)-acid glycoprotein concentrations during pregnancy will result in decreased protein binding for highly bound drugs. For drugs metabolised by the liver, this can result in misinterpretation of total plasma concentrations of low extraction ratio drugs and overdosing of high extraction ratio drugs administered by non-oral routes. Renal clearance and the activity of the CYP isozymes, CYP3A4, 2D6 and 2C9, and uridine 5'-diphosphate glucuronosyltransferase are increased during pregnancy. In contrast, CYP1A2 and 2C19 activity is decreased. The dose of a drug an infant receives during breastfeeding is dependent on the amount excreted into the breast milk, the daily volume of milk ingested and the average plasma concentration of the mother. The lipophilicity, protein binding and ionisation properties of a drug will determine how much is excreted into the breast milk. The milk to plasma concentration ratio has large inter- and intrasubject variability and is often not known. In contrast, protein binding is usually known. An extensive literature review was done to identify case reports including infant concentrations from breast-fed infants exposed to maternal drugs. For drugs that were at least 85% protein bound, measurable concentrations of drug in the infant did not occur if there was no placental exposure immediately prior to or during delivery. Knowledge of the protein binding properties of a drug can provide a quick and easy tool to estimate exposure of an infant to medication from breastfeeding.
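The infant-exposure estimate described in the last paragraph can be sketched directly; the 0.15 L/kg/day milk intake is the conventional assumption, and the numeric inputs are illustrative:

```python
# Standard breastfeeding exposure estimate:
#   infant dose (mg/kg/day) = maternal average plasma concentration (mg/L)
#                             * milk-to-plasma (M/P) ratio
#                             * daily milk intake (L/kg/day),
# often reported as a percentage of the weight-adjusted maternal dose.
def infant_dose(c_avg_maternal, mp_ratio, milk_intake=0.15):
    """Estimated infant dose in mg/kg/day; 0.15 L/kg/day is the
    conventional milk-intake assumption."""
    return c_avg_maternal * mp_ratio * milk_intake

def relative_infant_dose(infant, maternal_mg_per_kg_day):
    """Relative infant dose: percent of weight-adjusted maternal dose."""
    return 100.0 * infant / maternal_mg_per_kg_day

d = infant_dose(c_avg_maternal=2.0, mp_ratio=0.5)           # illustrative values
print(d)                                                    # 0.15 mg/kg/day
print(relative_infant_dose(d, maternal_mg_per_kg_day=3.0))  # 5.0 %
```

As the abstract notes, the M/P ratio is often unknown and highly variable, whereas protein binding is usually tabulated, which is why high protein binding serves as the quick screen for negligible infant exposure.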
Evaluation of pharyngeal volume and compliance of OSAHS patients using 3D CT and volume measurement
International Nuclear Information System (INIS)
Lan, Zhijie
2004-01-01
The intrinsic properties of the upper airway, such as baseline caliber and compliance, are thought to be important in the pathogenesis of obstructive sleep apnea hypopnea syndrome (OSAHS). The author attempted to use imaging methods to evaluate both baseline caliber and compliance in normal individuals and OSAHS patients, and to localize the obstructive sites in OSAHS patients. Critical closing pressure (Pcrit) and minimally effective therapeutic pressure (Peff) were measured, and computed tomography (CT) scans of the pharynx were performed during wakefulness and drug-induced sleep, with Pcrit, 0 cmH2O and Peff being applied through a nose-mask system. 7 normal individuals (age, 32.2 ± 6.5 years; body mass index, 23.6 ± 5.4 kg/m²) and 13 OSAHS patients (age, 33.3 ± 6.4 years; body mass index, 25.9 ± 6.0 kg/m²) were studied. 3D images of the pharyngeal airway were reconstructed, and the volume of each subdivision of the pharynx was measured. Volume, average area and compliance of each subdivision were compared between the two groups. On an air-mode view of the 3D image, the outline of the pharynx was shown as a transparent tubal structure, on which narrowing or collapse of the airway at any level or in any direction could be easily identified. The anatomy of the pharynx could be easily understood in the virtual endoscopic mode. During wakefulness, the average area of the upper subdivision (1.20 ± 0.26 cm² vs. 1.57 ± 0.17 cm²) and of the other subdivisions (… vs. 2.58 ± 0.27 cm²; … vs. 1.45 ± 0.18 cm²; … vs. 2.44 ± 0.26 cm²) of the pharynx was smaller in OSAHS patients than in normal individuals. The compliance of the middle part (0.28 ± 0.15 /cmH2O vs. 0.13 ± 0.07 /cmH2O, P < 0.05) of the pharynx was significantly higher in OSAHS patients than in normal individuals. The data suggested that OSAHS patients have a narrower and more collapsible pharynx compared to normal subjects. The method of the present study is valid for evaluating both the morphology and function of the upper airway. (author)
The volume of the human knee joint.
Matziolis, Georg; Roehner, Eric; Windisch, Christoph; Wagner, Andreas
2015-10-01
Despite its clinical relevance, particularly in septic knee surgery, the volume of the human knee joint has not been established to date. Therefore, the objective of this study was to determine knee joint volume and whether or not it is dependent on sex or body height. Sixty-one consecutive patients (joints) who were due to undergo endoprosthetic joint replacement were enrolled in this prospective study. During the operation, the joint volume was determined by injecting saline solution until a pressure of 200 mmHg was achieved in the joint. The average volume of all knee joints was 131 ± 53 (40-290) ml. The volume was not found to be dependent on sex, but it was dependent on the patients' height (R = 0.312, p = 0.014). This enabled an estimation of the joint volume according to V = 1.6 height - 135. The considerable inter-individual variance of the knee joint volume would suggest that it should be determined or at least estimated according to body height if the joint volume has consequences for the diagnostics or therapy of knee disorders.
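The height-based regression reported above can be applied directly. A minimal sketch, assuming height is given in centimetres and volume returned in millilitres (units the abstract leaves implicit):

```python
def estimated_knee_volume_ml(height_cm):
    """Regression from the study: V = 1.6 * height - 135 (height in cm, V in ml assumed)."""
    return 1.6 * height_cm - 135

print(estimated_knee_volume_ml(170))  # → 137.0
```

Given the reported variance (40-290 ml across 61 joints), this is only a rough per-patient estimate, as the authors themselves caution.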
Pricing and reimbursement of drugs in Ireland.
Barry, Michael; Tilson, Lesley; Ryan, Máirín
2004-06-01
Expenditure on healthcare in Ireland, which is mainly derived from taxation, has increased considerably in recent years to an estimated 9.2 billion euro in 2003. Pharmaceuticals account for approximately 10% of total healthcare expenditure. Approximately one-third of patients receive their medications free of charge whilst the remaining two-thirds are subject to a co-payment threshold of 78 euro per month, i.e. 936 euro per year. The price of medications in Ireland is linked to those of five other member states where the price to the wholesaler of any medication will not exceed the lesser of the currency-adjusted wholesale price in the United Kingdom or the average of wholesale prices in Denmark, France, Germany, The Netherlands and the United Kingdom. A price freeze at the introduction price has been in existence since 1993. Despite the price freeze, expenditure on medicines on the community drugs scheme has increased from 201 million euro in 1993 to 898 million euro in 2002. The two main factors contributing to the increased expenditure on medicines include "product mix", the prescribing of new and more expensive medication, and "volume effect" comprising growth in the number of prescription items. Changing demographics and the extension of the General Medical Services (GMS) Scheme to provide free medicines for all those over the age of 70 years have also contributed. Prior to reimbursement under the community drugs schemes, a medicine must be included in the GMS code book or positive list. A demonstration of cost-effectiveness is not a pre-requisite for reimbursement.
Small Bandwidth Asymptotics for Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...
High Average Power UV Free Electron Laser Experiments At JLAB
International Nuclear Information System (INIS)
Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn
2012-01-01
Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.
Average subentropy, coherence and entanglement of random mixed quantum states
Energy Technology Data Exchange (ETDEWEB)
Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)
2017-02-15
Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states by invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
Establishment of Average Body Measurement and the Development ...
African Journals Online (AJOL)
cce
body measurement for height and back-neck to waist for ages 2, 3, 4 and 5 years. … average measurements of the different parts of the body must be established. … and OAU Charter on the Rights of the Child: Lagos: Nigeria Country Office.
Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.
Brezis, Noam; Bronfman, Zohar Z; Usher, Marius
2015-06-04
We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
Determination of the average lifetime of bottom hadrons
Energy Technology Data Exchange (ETDEWEB)
Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W [Technische Hochschule Aachen (Germany, F.R.). Lehrstuhl fuer Experimentalphysik 1A und 1. Physikalisches Inst.; Eisenmann, J; Fischer, H M
1984-12-27
We have determined the average lifetime of hadrons containing b quarks produced in e⁺e⁻ annihilation to be τ_B = 1.83 × 10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes.
Determination of the average lifetime of bottom hadrons
Energy Technology Data Exchange (ETDEWEB)
Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W; Eisenmann, J; Fischer, H M
1984-12-27
We have determined the average lifetime of hadrons containing b quarks produced in e⁺e⁻ annihilation to be τ_B = 1.83 × 10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes. (orig./HSI).
Time Series ARIMA Models of Undergraduate Grade Point Average.
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
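The estimation stage of the Box-Jenkins procedure can be illustrated with the simplest member of the ARIMA family. A minimal sketch, assuming a pure AR(1) process and fitting its coefficient by ordinary least squares rather than the full Box-Jenkins machinery:

```python
import random

def fit_ar1(series):
    """Estimate an AR(1) coefficient by ordinary least squares on the lagged series."""
    x = series[:-1]  # lagged values y_{t-1}
    y = series[1:]   # current values y_t
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Simulate an AR(1) process y_t = phi * y_{t-1} + e_t with phi = 0.6
random.seed(42)
phi = 0.6
y = [0.0]
for _ in range(5000):
    y.append(phi * y[-1] + random.gauss(0, 1))

print(round(fit_ar1(y), 2))
```

With 5000 observations the estimate lands close to the true 0.6; identification and diagnostic checking (ACF/PACF inspection, residual tests) would surround this step in a real Box-Jenkins analysis.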
Crystallographic extraction and averaging of data from small image areas
Perkins, GA; Downing, KH; Glaeser, RM
The accuracy of structure factor phases determined from electron microscope images is determined mainly by the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that
Reducing Noise by Repetition: Introduction to Signal Averaging
Hassan, Umer; Anwar, Muhammad Sabieh
2010-01-01
This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
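The core idea, that averaging N repetitions of a repeatable signal reduces uncorrelated noise by roughly √N, can be sketched as follows (a hypothetical sine waveform and Gaussian noise stand in for a physiological recording):

```python
import math
import random

def signal(t):
    # Underlying noise-free waveform (a simple sine stands in for the true signal)
    return math.sin(2 * math.pi * t)

random.seed(0)
n_points = 100
n_sweeps = 64
t_vals = [i / n_points for i in range(n_points)]

# Record n_sweeps noisy repetitions and average them point by point
sweeps = [[signal(t) + random.gauss(0, 0.5) for t in t_vals] for _ in range(n_sweeps)]
averaged = [sum(s[i] for s in sweeps) / n_sweeps for i in range(n_points)]

# Residual noise after averaging: RMS of (averaged - true), expected ~ 0.5 / sqrt(64)
resid = [a - signal(t) for a, t in zip(averaged, t_vals)]
rms = math.sqrt(sum(r * r for r in resid) / n_points)
print(round(rms, 3))  # roughly 0.5 / sqrt(64) ≈ 0.0625
```

The per-sweep noise standard deviation of 0.5 shrinks to about 0.06 after 64 averages, the √N improvement the paper's experiments demonstrate.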
Environmental stresses can alleviate the average deleterious effect of mutations
Directory of Open Access Journals (Sweden)
Leibler Stanislas
2003-05-01
Background: Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results: We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite, that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions: Our results show a qualitative difference between various environmental stresses, ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.
The background effective average action approach to quantum gravity
DEFF Research Database (Denmark)
D’Odorico, G.; Codello, A.; Pagani, C.
2016-01-01
of a UV-attractive non-Gaussian fixed point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action.
Error estimates in horocycle averages asymptotics: challenges from string theory
Cardella, M.A.
2010-01-01
For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotics to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth.
Moving average rules as a source of market instability
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets
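A minimal sketch of one such rule, assuming a hypothetical price series and the common "price above its moving average" long signal (the paper's stochastic market model is far richer than this toy):

```python
def moving_average(prices, window):
    """Simple moving average; the first window-1 entries have no value yet."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

prices = [100, 101, 103, 102, 105, 107, 106, 108, 110, 109]
ma3 = moving_average(prices, 3)

# Rule: hold the asset (+1) when the price is above its MA, stay out (0) otherwise
signal = [1 if m is not None and p > m else 0 for p, m in zip(prices, ma3)]
print(signal)  # → [0, 0, 1, 0, 1, 1, 0, 1, 1, 0]
```

In the paper's framework, the aggregate demand generated by many traders following rules like this feeds back into the price, which is the source of the instability studied.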
arXiv Averaged Energy Conditions and Bouncing Universes
Giovannini, Massimo
2017-11-16
The dynamics of bouncing universes is characterized by violating certain coordinate-invariant restrictions on the total energy-momentum tensor, customarily referred to as energy conditions. Although there could be epochs in which the null energy condition is locally violated, it may perhaps be enforced in an averaged sense. Explicit examples of this possibility are investigated in different frameworks.
26 CFR 1.1301-1 - Averaging of farm income.
2010-04-01
… January 1, 2003, rental income based on a share of a tenant's production determined under an unwritten … the Collection of Income Tax at Source on Wages (Federal income tax withholding), or the amount of net … 26 Internal Revenue 11 2010-04-01 true Averaging of farm income. 1.1301-1 Section 1 …
Implications of Methodist clergies' average lifespan and missional ...
African Journals Online (AJOL)
2015-06-09
The author of Genesis 5 paid meticulous attention to the lifespan of several people … of Southern Africa (MCSA), and to argue that memories of the … average ages at death were added up and the sum was divided by 12 (which represents the 12 … not explicit in how the departed Methodist ministers were.
Pareto Principle in Datamining: an Above-Average Fencing Algorithm
Directory of Open Access Journals (Sweden)
K. Macek
2008-01-01
This paper formulates a new datamining problem: which subset of input space has the relatively highest output where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.
Average Distance Travelled To School by Primary and Secondary ...
African Journals Online (AJOL)
This study investigated average distance travelled to school by students in primary and secondary schools in Anambra, Enugu, and Ebonyi States and effect on attendance. These are among the top ten densely populated and educationally advantaged States in Nigeria. Research evidences report high dropout rates in ...
Trend of Average Wages as Indicator of Hypothetical Money Illusion
Directory of Open Access Journals (Sweden)
Julian Daszkowski
2010-06-01
Not until 1998 did the definition of wages in Poland include the value of social security contributions. The changed definition creates a higher level of reported wages, but was expected not to influence take-home pay. Nevertheless, after a short period, the trend of average wages returned to its previous line. This effect is explained in terms of money illusion.
Computation of the average energy for LXY electrons
International Nuclear Information System (INIS)
Grau Carles, A.; Grau, A.
1996-01-01
The application of an atomic rearrangement model, in which we only consider the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides requires a fine averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, ¹²⁵I and ¹⁰⁹Cd. (Author) 4 refs
Bounding quantum gate error rate based on reported average fidelity
International Nuclear Information System (INIS)
Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C
2016-01-01
Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
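The reported average fidelities can be translated into process (entanglement) fidelities with Nielsen's standard relation F_avg = (d·F_pro + 1)/(d + 1). This sketch is only that textbook conversion, not the paper's Pauli-distance bound on the error rate:

```python
def process_fidelity_from_average(f_avg, d):
    """Nielsen's relation F_avg = (d * F_pro + 1) / (d + 1), solved for F_pro."""
    return ((d + 1) * f_avg - 1) / d

# Reported average gate fidelities quoted in the abstract
single_qubit = process_fidelity_from_average(0.999, d=2)  # one-qubit gate, d = 2
two_qubit = process_fidelity_from_average(0.99, d=4)      # two-qubit gate, d = 4
print(round(single_qubit, 4), round(two_qubit, 4))  # → 0.9985 0.9875
```

The gap between these two fidelity notions already grows with dimension, which hints at why average fidelity alone cannot certify fault-tolerance thresholds.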
75 FR 78157 - Farmer and Fisherman Income Averaging
2010-12-15
… to the averaging of farm and fishing income in computing income tax liability. The regulations … PART 1, INCOME TAXES. Paragraph 1. The authority citation for part 1 continues to read in part as … section 1 tax would be increased if one-third of elected farm income were allocated to each year. The …
Domain-averaged Fermi-hole Analysis for Solids
Czech Academy of Sciences Publication Activity Database
Baranov, A.; Ponec, Robert; Kohout, M.
2012-01-01
Vol. 137, No. 21 (2012), p. 214109. ISSN 0021-9606. R&D Projects: GA ČR GA203/09/0118. Institutional support: RVO:67985858. Keywords: bonding in solids; domain-averaged Fermi hole; natural orbitals. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 3.164, year: 2012
Characteristics of phase-averaged equations for modulated wave groups
Klopman, G.; Petit, H.A.H.; Battjes, J.A.
2000-01-01
The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).
A depth semi-averaged model for coastal dynamics
Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.
2017-05-01
The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.
An averaged polarizable potential for multiscale modeling in phospholipid membranes
DEFF Research Database (Denmark)
Witzke, Sarah; List, Nanna Holmgaard; Olsen, Jógvan Magnus Haugaard
2017-01-01
A set of average atom-centered charges and polarizabilities has been developed for three types of phospholipids for use in polarizable embedding calculations. The lipids investigated are 1,2-dimyristoyl-sn-glycero-3-phosphocholine, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine, and 1-palmitoyl...
Understanding coastal morphodynamic patterns from depth-averaged sediment concentration
Ribas, F.; Falques, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.
This review highlights the important role of the depth-averaged sediment concentration (DASC) to understand the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand
Post-model selection inference and model averaging
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2011-07-01
Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
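One concrete model-averaging scheme of the kind compared here is Akaike-weight averaging. A minimal sketch with hypothetical AIC scores and per-model predictions (the paper's risk-function comparison is far more general):

```python
import math

def akaike_weights(aics):
    """Convert AIC scores into model-averaging weights (smaller AIC = better model)."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]  # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC scores for three candidate models
aics = [100.0, 102.0, 110.0]
weights = akaike_weights(aics)
print([round(w, 3) for w in weights])  # → [0.727, 0.268, 0.005]

# Model-averaged point prediction from hypothetical per-model predictions
preds = [1.8, 2.1, 2.6]
avg_pred = sum(w * p for w, p in zip(weights, preds))
print(round(avg_pred, 3))  # → 1.884
```

Selecting only the best model corresponds to the degenerate weights (1, 0, 0); the 0-1 random-weight view of PMSEs mentioned in the abstract is exactly this degenerate case with data-dependent weights.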
Determination of average activating thermal neutron flux in bulk samples
International Nuclear Information System (INIS)
Doczi, R.; Csikai, J.; Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.
2004-01-01
A method previously used for the determination of the average neutron flux within bulky samples has been applied to measurements of the hydrogen content of different samples. An analytical function is given to describe the correlation between the activity of Dy foils and the hydrogen concentrations. Results obtained by the activation and thermal neutron reflection methods are compared.
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
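A minimal sketch of the idea, with a hypothetical linear window-growth rule standing in for the calibrated migration-velocity dependence described in the paper:

```python
import random

def adaptive_moving_average(signal, base_window, t0=10):
    """Smooth with a window that grows with index (migration time): late, slow
    analytes produce broader, lower-frequency peaks and tolerate wider averaging."""
    out = []
    n = len(signal)
    for i in range(n):
        # Hypothetical scaling: window grows linearly with elapsed time
        w = max(1, int(base_window * (i + t0) / t0))
        lo = max(0, i - w // 2)
        hi = min(n, i + w // 2 + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# Demo: flat baseline plus Gaussian noise; smoothing should cut the noise power
random.seed(1)
raw = [1.0 + random.gauss(0, 0.2) for _ in range(200)]
smooth = adaptive_moving_average(raw, base_window=5)
noise_raw = sum((x - 1.0) ** 2 for x in raw) / len(raw)
noise_smooth = sum((x - 1.0) ** 2 for x in smooth) / len(smooth)
print(noise_smooth < noise_raw)  # → True
```

The point of adapting the window is to avoid the two failure modes named in the abstract: a fixed narrow window underuses the available smoothing for slow analytes, while a fixed wide window distorts the sharp early peaks.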
Grade Point Average: What's Wrong and What's the Alternative?
Soh, Kay Cheng
2011-01-01
Grade point average (GPA) has been around for more than two centuries. However, it has created a lot of confusion, frustration, and anxiety among GPA producers and users alike, especially when used across nations for different purposes. This paper looks into the reasons for this state of affairs from the perspective of educational measurement. It…
The Effect of Honors Courses on Grade Point Averages
Spisak, Art L.; Squires, Suzanne Carter
2016-01-01
High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…
40 CFR 63.652 - Emissions averaging provisions.
2010-07-01
… emissions more than the reference control technology, but the combination of the pollution prevention … emissions average. This must include any Group 1 emission points to which the reference control technology … agrees has a higher nominal efficiency than the reference control technology. Information on the nominal …
DRUG INTERACTIONS WITH DIAZEPAM
Directory of Open Access Journals (Sweden)
Zoran Bojanić
2011-06-01
Diazepam is a benzodiazepine derivative with anxiolytic, anticonvulsant, hypnotic, sedative, skeletal muscle relaxant, antitremor, and amnestic activity. It is metabolized in the liver by the cytochrome P450 (CYP450) enzyme system. Diazepam is N-demethylated by CYP3A4 and CYP2C19 to the active metabolite N-desmethyldiazepam, and is hydroxylated by CYP3A4 to the active metabolite temazepam. N-desmethyldiazepam and temazepam are both further metabolized to oxazepam. Concomitant intake of inhibitors or inducers of the CYP isozymes involved in the biotransformation of diazepam may alter plasma concentrations of this drug, although this effect is unlikely to be associated with clinically relevant interactions. The goal of this article was to review the current literature on clinically relevant pharmacokinetic drug interactions with diazepam. A search of MEDLINE and EMBASE was conducted for original research and review articles published in English between January 1971 and May 2011. Among the search terms were drug interactions, diazepam, pharmacokinetics, drug metabolism, and cytochrome P450. Only articles published in peer-reviewed journals were included, and meeting abstracts were excluded. The reference lists of relevant articles were hand-searched for additional publications. Diazepam is substantially sorbed by the plastics in flexible containers, volume-control set chambers, and tubings of intravenous administration sets. Manufacturers recommend not mixing it with any other drug or solution in a syringe or solution, although diazepam is compatible in a syringe with cimetidine and ranitidine, and in Y-site with cisatracurium, dobutamine, fentanyl, hydromorphone, methadone, morphine, nafcillin, quinidine gluconate, remifentanil, and sufentanil. Diazepam is compatible with dextrose 5% in water, Ringer's injection, lactated Ringer's injection and sodium chloride 0.9%. Emulsified diazepam is compatible with Intralipid and Nutralipid. Diazepam has low potential
International Nuclear Information System (INIS)
Sutherland, A.A.; Adam, J.A.; Rogers, V.C.; Merrell, G.B.
1984-11-01
Volume 4 establishes pricing levels at new shallow land burial grounds. The following conclusions can be drawn from the analyses described in the preceding chapters: Application of volume reduction techniques by utilities can have a significant impact on the volumes of wastes going to low-level radioactive waste disposal sites. Using the relative waste stream volumes in NRC81 and the maximum volume reduction ratios provided by Burns and Roe, Inc., it was calculated that if all utilities use maximum volume reduction, the rate of waste receipt at disposal sites will be reduced by 40 percent. When a disposal site receives a lower volume of waste, its total cost of operation does not decrease by the same proportion; therefore the average cost for a unit volume of waste received goes up. Whether the disposal site operator knows in advance that he will receive a smaller amount of waste has little influence on the average unit cost ($/ft³) of the waste disposed. For the pricing algorithm postulated, the average disposal cost to utilities that volume reduce is relatively independent of whether all utilities practice volume reduction or only a few volume reduce. The general effect of volume reduction by utilities is to reduce their average disposal site costs by a factor of between 1.5 and 2.5. This factor is generally independent of the size of the disposal site. The largest absolute savings in disposal site costs when utilities volume reduce occurs when small disposal sites are involved. This results from the fact that unit costs are higher at small sites. Including in the pricing algorithm a factor that penalizes waste generators who contribute larger amounts of the mobile nuclides ³H, ¹⁴C, ⁹⁹Tc, and ¹²⁹I, which may be the subject of site inventory limits, lowers unit disposal costs for utility wastes that contain only small amounts of these nuclides and raises unit costs for other utility wastes.
An average salary: approaches to the index determination
Directory of Open Access Journals (Sweden)
T. M. Pozdnyakova
2017-01-01
The article "An average salary: approaches to the index determination" is devoted to studying various methods of calculating this index, both those used by official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, and to propose certain additions that would help to clarify this index. The information base of the research comprises laws and regulations of the Government of the Russian Federation, statistical and analytical materials of the Federal State Statistics Service of Russia for the section "Socio-economic indexes: living standards of the population", and scientific papers describing different approaches to the average salary calculation. Data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. The following methods were used in the research: analytical, statistical, computational-mathematical and graphical. The main result of the research is an option for supplementing the method of calculating the average salary index within enterprises or organizations, as used by Goskomstat of Russia, by introducing a correction factor. Its essence consists in forming separate material indexes for different categories of employees in enterprises or organizations, mainly those engaged in internal secondary jobs. The need for this correction factor comes from the current working conditions in a wide range of organizations, where an employee is often forced to fulfill additional job duties on top of the main position. As a result, the average salary at the enterprise is frequently difficult to assess objectively, because the calculation spreads multiple rates across each staff member. In other words, the average salary of
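The proposed correction can be illustrated with a toy calculation, on the simplifying assumption that the distortion comes from dividing payroll by the number of staff rates rather than by the number of actual persons. All numbers are invented and this is only one possible reading of the article's correction factor:

```python
def average_salary_per_person(payroll, staff_rates, avg_rates_per_person):
    """Average salary corrected for internal secondary jobs: when some
    employees fill more than one rate, dividing payroll by the number of
    rates understates what an actual person earns. (Hypothetical sketch,
    not Goskomstat's formula.)"""
    actual_persons = staff_rates / avg_rates_per_person
    return payroll / actual_persons

# Invented figures: 1.2M payroll, 12 rates, 1.5 rates per person on average.
naive = 1_200_000 / 12                                     # 100000.0 per rate
corrected = average_salary_per_person(1_200_000, 12, 1.5)  # 150000.0 per person
```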
Volume of the domain visited by N spherical Brownian particles
International Nuclear Information System (INIS)
Berezhkovskii, A.M.
1994-01-01
The average value and variance of the volume of the domain visited in time t by N spherical Brownian particles starting initially at the same point are presented as quadratures of the solutions of simple diffusion problems of the survival of a point Brownian particle in the presence of one and two spherical traps. As an illustration, explicit time dependences are obtained for the average volume in one and three dimensions
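A minimal numerical sketch of the quantity studied: in one dimension, the "visited volume" of a single point particle is just the range of the path, whose continuum mean is √(8t/π). The Monte Carlo below (zero particle radius, N = 1) only illustrates the averaging and is not the paper's quadrature method:

```python
import math
import random

def mean_visited_length(t=1.0, n_steps=256, n_paths=5000, seed=1):
    """Monte Carlo estimate of the mean length of the interval visited by
    a 1-D Brownian particle up to time t (toy analogue of the average
    visited volume; discretization slightly underestimates the continuum
    value sqrt(8*t/pi))."""
    rng = random.Random(seed)
    sigma = math.sqrt(t / n_steps)  # per-step standard deviation
    total = 0.0
    for _ in range(n_paths):
        x = lo = hi = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, sigma)
            lo = min(lo, x)
            hi = max(hi, x)
        total += hi - lo  # range of this path = 1-D visited "volume"
    return total / n_paths

# Continuum result for comparison: sqrt(8 * 1.0 / pi) ≈ 1.596
```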
High-average-power diode-pumped Yb:YAG lasers
International Nuclear Information System (INIS)
Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B
1999-01-01
A scalable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod-configured gain element. Previously, this rod-configured approach achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High-beam-quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual-rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual-rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling, near-diffraction-limited modes; (2) compound laser rods with flanged nonabsorbing endcaps fabricated by diffusion bonding; and (3) techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods
High average power diode pumped solid state lasers for CALIOPE
International Nuclear Information System (INIS)
Comaskey, B.; Halpin, J.; Moran, B.
1994-07-01
Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers
Construction of average adult Japanese voxel phantoms for dose assessment
International Nuclear Information System (INIS)
Sato, Kaoru; Takahashi, Fumiaki; Satoh, Daiki; Endo, Akira
2011-12-01
The International Commission on Radiological Protection (ICRP) adopted the adult reference voxel phantoms, based on the physiological and anatomical reference data of Caucasians, in October 2007. The organs and tissues of these phantoms were segmented on the basis of ICRP Publication 103. In the future, the dose coefficients for internal dose and the dose conversion coefficients for external dose calculated using the adult reference voxel phantoms will be widely used in the radiation protection field. On the other hand, the body sizes and organ masses of adult Japanese are generally smaller than those of adult Caucasians. In addition, there are cases in which anatomical characteristics such as body size, organ masses and the posture of subjects influence the organ doses in dose assessments for medical treatments and radiation accidents. It is therefore necessary to use human phantoms with the average anatomical characteristics of Japanese. The authors constructed averaged adult Japanese male and female voxel phantoms by modifying the previously developed high-resolution adult male (JM) and female (JF) voxel phantoms in the following three aspects: (1) the heights and weights were made to agree with the Japanese averages; (2) the masses of organs and tissues were adjusted to the Japanese averages within 10%; (3) the organs and tissues newly added for evaluation of the effective dose in ICRP Publication 103 were modeled. In this study, the organ masses, distances between organs, specific absorbed fractions (SAFs) and dose conversion coefficients of these phantoms were compared with those evaluated using the ICRP adult reference voxel phantoms. This report provides valuable information on the anatomical and dosimetric characteristics of the averaged adult Japanese male and female voxel phantoms developed as reference phantoms of adult Japanese. (author)
Erfanian, A.; Fomenko, L.; Wang, G.
2016-12-01
The multi-model ensemble (MME) average is widely considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for the conclusions of major coordinated studies such as the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially inhibiting for regional climate modeling, where model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to its reduced computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: multi-model ensemble, ensemble analysis, ERF, regional climate modeling
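The core of the ERF idea, one averaged IBC set driving a single RCM run instead of one run per GCM, reduces schematically to averaging like fields across models. The 2x2 fields below are invented, and real IBCs would first need regridding to a common grid:

```python
def ensemble_reconstructed_forcing(ibc_fields):
    """Average the corresponding IBC field (here a 2-D grid as nested
    lists) from several GCMs into the single field used to drive one RCM
    run. (Schematic ERF; real IBCs are 3-D, multi-variable and
    time-dependent.)"""
    n = len(ibc_fields)
    rows, cols = len(ibc_fields[0]), len(ibc_fields[0][0])
    return [[sum(f[i][j] for f in ibc_fields) / n for j in range(cols)]
            for i in range(rows)]

# Three toy 2x2 surface-temperature boundary fields (K) from three GCMs:
gcm_fields = [[[v, v], [v, v]] for v in (288.0, 290.0, 292.0)]
erf_field = ensemble_reconstructed_forcing(gcm_fields)  # every cell is 290.0
```

The RCM then solves its equations once with `erf_field` as forcing, which is why the output remains a single physically consistent model solution rather than an average of several solutions.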
Energy Technology Data Exchange (ETDEWEB)
Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)
2011-04-07
The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak (Ū_P), the average (Ū), the effective (U_eff) or the maximum peak (U_P) tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average (Ū) or the average peak (Ū_P) voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak (k_PPV,kVp) and the average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, such as 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
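In schematic form, the conversion chain is a multiplication of the meter reading by a calibration coefficient and a ripple-dependent conversion factor. The numbers below are invented and are not values from the paper's regression tables:

```python
def ppv_from_reading(reading_kv, calibration_coeff, k_ppv):
    """PPV estimated from a kV-meter reading of the average (or average
    peak) tube voltage: correct the reading with the meter's calibration
    coefficient, then apply the conversion factor for the given tube
    voltage and ripple. (Hypothetical numbers; the real factors come from
    the paper's regression equations.)"""
    return reading_kv * calibration_coeff * k_ppv

# A meter reading 78.4 kV (average peak), calibration 1.01, k_PPV = 1.02:
ppv = ppv_from_reading(78.4, 1.01, 1.02)  # ≈ 80.77 kV
```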
DEFF Research Database (Denmark)
Mogensen, O.; Sørensen, Flemming Brandt; Bichel, P.
1999-01-01
We evaluated the following nine parameters with respect to their prognostic value in females with endometrial cancer: four stereologic parameters [mean nuclear volume (MNV), nuclear volume fraction, nuclear index and mitotic index], the immunohistochemical expression of cancer antigen (CA125...
On the construction of a time base and the elimination of averaging errors in proxy records
Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.
2009-04-01
Proxies are sources of climate information stored in natural archives (e.g. ice cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems. Problem 1: natural archives are sampled at an equidistant grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, because the accretion rate is not constant. Problem 2: a typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled are not infinitesimally small. Consequently, samples are taken not at a point in distance but over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it is averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest in the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic, a reasonable assumption because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to act in one direction only, i.e. the direction of the axis along which the measurements were performed. The
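For a harmonic proxy, the amplitude loss caused by averaging over a finite sample width has a closed form: the sinc factor of a boxcar average. The one-dimensional sketch below, with invented sample and wavelength sizes, shows the effect that the identification algorithm must undo:

```python
import math

def amplitude_attenuation(sample_width, wavelength):
    """Fraction of a sinusoid's amplitude surviving a boxcar average over
    `sample_width`: sin(pi*w/L) / (pi*w/L). A 1-D simplification of the
    volume-averaging effect of finite-sized samples."""
    r = math.pi * sample_width / wavelength
    return math.sin(r) / r if r != 0.0 else 1.0

# A drill hole half a seasonal wavelength wide keeps only ~64% of the signal:
factor = amplitude_attenuation(0.5, 1.0)  # ≈ 0.6366 (= 2/pi)
```

As the sample width shrinks toward a point measurement, the factor tends to 1 and no amplitude is lost, which is why the averaging error matters chiefly when the drill dimensions approach the seasonal wavelength.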
SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging
International Nuclear Information System (INIS)
Diamant, A; Ybarra, N; Seuntjens, J; El Naqa, I
2016-01-01
Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual's biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient's characteristics and generate a robust model capable of predicting that patient's treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was 7 (an event rate of 22%). The serum concentrations of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) were measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes' rule. Results: The optimal Bayesian model generated throughout this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and the prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance than competing methods in the literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible
SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging
Energy Technology Data Exchange (ETDEWEB)
Diamant, A; Ybarra, N; Seuntjens, J [McGill University, Montreal, Quebec (Canada); El Naqa, I [University of Michigan, Ann Arbor, MI (United States)
2016-06-15
Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual's biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient's characteristics and generate a robust model capable of predicting that patient's treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was 7 (an event rate of 22%). The serum concentrations of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) were measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes' rule. Results: The optimal Bayesian model generated throughout this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and the prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance than competing methods in the literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible
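The model-averaging step described above, estimating from the top-ranked graphs rather than a single best structure, can be sketched as a posterior-weighted mean of per-graph predictions. The probabilities below are toy numbers, not output of the study's 21-variable model:

```python
def model_averaged_probability(per_graph_p_failure, graph_posteriors):
    """Bayesian model averaging over candidate network structures: weight
    each graph's predicted failure probability by that graph's normalized
    posterior probability. (Schematic of averaging the top-ranked graphs.)"""
    total = sum(graph_posteriors)
    return sum(p * w for p, w in zip(per_graph_p_failure, graph_posteriors)) / total

# Three toy graphs and their posterior weights for one patient:
p_failure = model_averaged_probability([0.10, 0.30, 0.20], [0.5, 0.3, 0.2])  # 0.18
```

Averaging over structures in this way hedges against committing to a single graph that the limited cohort (32 patients) cannot firmly distinguish from its competitors.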
International Nuclear Information System (INIS)
Neau, E.L.
1994-01-01
Short-pulse accelerator technology developed from the early 1960s through the late 1980s is being extended to high-average-power systems capable of use in industrial and environmental applications. Processes requiring high dose levels and/or high volume throughput will require systems with beam power levels from several hundred kilowatts to megawatts. Beam accelerating potentials can range from less than 1 MeV to as much as 10 MeV, depending on the type of beam, the depth of penetration required, and the density of the product being treated. This paper addresses the present status of a family of high-average-power systems, with output beam power levels up to 200 kW, now in operation, that use saturable core switches to achieve output pulse widths of 50 to 80 nanoseconds. Inductive adders and field emission cathodes are used to generate beams of electrons or x-rays at up to 2.5 MeV over areas of 1000 cm². Similar high-average-power technology is being used at ≤ 1 MeV to drive repetitive ion beam sources for the treatment of material surfaces over hundreds of cm²
International Nuclear Information System (INIS)
Lewis, S.M.; Yin, J.A.L.
1986-01-01
The use of dilution analysis with such radioisotopes as ⁵¹Cr, ³²P, ⁹⁹ᵐTc and ¹¹³ᵐIn for measuring red cell volume is reviewed briefly. The use of ¹²⁵I and ¹³¹I for plasma volume studies is also considered and the subsequent determination of total blood volume discussed, together with the role of the splenic red cell volume. Substantial bibliography. (UK)
THE PREDICTION OF VOID VOLUME IN SUBCOOLED NUCLEATE POOL BOILING
Energy Technology Data Exchange (ETDEWEB)
Duke, E. E. [General Dynamics, San Diego, CA (United States)
1963-11-15
A three-step equation was developed that adequately describes the average volume of vapor occurring on a horizontal surface due to nucleate pool boiling of subcooled water. Since extensive bubble frequency data are lacking, the data of others were combined with experimental observations to make predictions of void volume at ambient pressure with various degrees of subcooling. (auth)
Using Mobile Device Samples to Estimate Traffic Volumes
2017-12-01
In this project, TTI worked with StreetLight Data to evaluate a beta version of its traffic volume estimates derived from global positioning system (GPS)-based mobile devices. TTI evaluated the accuracy of average annual daily traffic (AADT) volume :...
The objective of this study was to determine the association of differentially expressed genes (DEG) in the jejunum of steers with average DMI and high or low ADG. Feed intake and growth were measured in a cohort of 144 commercial Angus steers consuming a finishing diet containing (on a DM basis) 67...
van Wee, B.; Rietveld, P.; Meurs, H.
2006-01-01
Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably