WorldWideScience

Sample records for accurately estimate excess

  1. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose...... the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihoods---both for routes traveled as well as for sub-routes thereof. InTraTime...... allows users to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input, the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...

  2. Software Estimation: Developing an Accurate, Reliable Method

    Science.gov (United States)

    2011-08-01

    ...based and size-based estimates is able to accurately plan, launch, and execute on schedule. Bob Sinclair, NAWCWD; Chris Rickets, NAWCWD; Brad Hodgins... Office by Carnegie Mellon University. SMPSP and SMTSP are service marks of Carnegie Mellon University. 1. Rickets, Chris A, "A TSP Software Maintenance... Life Cycle", CrossTalk, March 2005. 2. Koch, Alan S, "TSP Can Be the Building Blocks for CMMI", CrossTalk, March 2005. 3. Hodgins, Brad; Rickets...

  3. Accurate hydrocarbon estimates attained with radioactive isotope

    International Nuclear Information System (INIS)

    Hubbard, G.

    1983-01-01

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample

  4. Multiscale estimation of excess mass from gravity data

    Science.gov (United States)

    Castaldo, Raffaele; Fedi, Maurizio; Florio, Giovanni

    2014-06-01

    We describe a multiscale method to estimate the excess mass of gravity anomaly sources, based on the theory of source moments. Using a multipole expansion of the potential field and considering only the data along the vertical direction, a system of linear equations is obtained. The choice of inverting data along a vertical profile can help us to reduce the interference effects due to nearby anomalies and allows a local estimate of the source parameters. A criterion is established for selecting the optimal highest altitude of the vertical profile data and the truncation order of the series expansion. The inversion provides an estimate of the total anomalous mass and of the depth to the centre of mass. The method has several advantages with respect to classical methods, such as Gauss' method: (i) we need just a 1-D inversion to obtain our estimates, since the inverted data are sampled along a single vertical profile; (ii) the resolution may be straightforwardly enhanced by using vertical derivatives; (iii) the centre of mass is estimated in addition to the excess mass; (iv) the method is very robust with respect to noise; (v) the profile may be chosen in such a way as to minimize the effects of interfering anomalies or side effects due to a limited areal extent. The multiscale estimation of excess mass method can be successfully used in various fields of application. Here, we analyse the gravity anomaly generated by a sulphide body in the Skelleftea ore district, North Sweden, obtaining source mass and volume estimates in agreement with the known information. We also show that these estimates are substantially improved with respect to those obtained with the classical approach.
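
    The moment-based inversion described above can be illustrated with a small sketch. This is not the authors' exact algorithm, only the underlying idea under simplifying assumptions (a compact source reduced to its first two moments, noise-free synthetic data): truncating the multipole expansion of the vertical gravity field g(h) = GM/(h + z0)^2 gives a model that is linear in the moments a0 = M and a1 = M*z0, which a least-squares fit can recover from data along a single vertical profile.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def estimate_mass_and_depth(h, g):
    """Least-squares fit of the first two source moments from
    vertical-gravity data g sampled at heights h above the source.
    Rearranged model: g*h^2/G ~= a0 - 2*a1/h, with a0 = M, a1 = M*z0."""
    h = np.asarray(h, dtype=float)
    g = np.asarray(g, dtype=float)
    y = g * h**2 / G
    A = np.column_stack([np.ones_like(h), -2.0 / h])
    (a0, a1), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a0, a1 / a0  # excess mass, depth to centre of mass

# Synthetic test: a 5e9 kg point source 150 m below the profile origin
true_M, true_z0 = 5e9, 150.0
h = np.linspace(1000.0, 10000.0, 30)   # observation heights (m)
g = G * true_M / (h + true_z0)**2      # exact point-source field
M_est, z0_est = estimate_mass_and_depth(h, g)
print(M_est, z0_est)  # mass within a few per cent; depth biased somewhat low
```

    The depth estimate is biased low by the series truncation; the farther the profile lies from the source relative to its depth, the smaller the bias.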

  5. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate method for estimating its characteristic exponents is presented. Finally, we give some examples to verify the feasibility of our result.

  6. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  7. Zadoff-Chu coded ultrasonic signal for accurate range estimation

    KAUST Repository

    AlSharif, Mohammed H.

    2017-11-02

    This paper presents a new adaptation of Zadoff-Chu sequences for the purpose of range estimation and movement tracking. The proposed method uses Zadoff-Chu sequences with a wideband ultrasonic signal to estimate the range between two devices with very high accuracy and a high update rate. The range estimation is based on time-of-flight (TOF) estimation using cyclic cross-correlation. The system was experimentally evaluated under different noise levels and multi-user interference scenarios. For a single user, the results show less than 7 mm error for 90% of range estimates in a typical indoor environment. Under interference from three other users, the 90% error was less than 25 mm. The system provides a high estimation update rate, allowing accurate tracking of objects moving at high speed.
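
    The correlation step can be sketched as follows. This is a minimal illustration of the principle, not the paper's system: the channel is idealized as a pure cyclic shift, and the sample rate (96 kHz) and signal parameters below are illustrative assumptions rather than values from the source.

```python
import numpy as np

def zadoff_chu(u, N):
    """Root-u Zadoff-Chu sequence of odd length N (u coprime with N)."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

def estimate_delay(tx, rx):
    """Cyclic cross-correlation via FFT; returns the lag of the peak."""
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(tx)))
    return int(np.argmax(np.abs(corr)))

N = 1023
tx = zadoff_chu(7, N)
true_delay = 137                       # samples
rng = np.random.default_rng(1)
rx = np.roll(tx, true_delay)           # idealized channel: pure cyclic shift
rx = rx + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

delay = estimate_delay(tx, rx)
fs, c = 96_000, 343.0                  # assumed sample rate (Hz), speed of sound (m/s)
range_m = delay / fs * c
print(delay, range_m)                  # 137 samples, about 0.49 m
```

    Zadoff-Chu sequences have an ideal periodic autocorrelation (a single sharp peak), which is why the correlation peak localizes the delay so cleanly even in noise.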

  8. Zadoff-Chu coded ultrasonic signal for accurate range estimation

    KAUST Repository

    AlSharif, Mohammed H.; Saad, Mohamed; Siala, Mohamed; Ballal, Tarig; Boujemaa, Hatem; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper presents a new adaptation of Zadoff-Chu sequences for the purpose of range estimation and movement tracking. The proposed method uses Zadoff-Chu sequences with a wideband ultrasonic signal to estimate the range between two devices with very high accuracy and a high update rate. The range estimation is based on time-of-flight (TOF) estimation using cyclic cross-correlation. The system was experimentally evaluated under different noise levels and multi-user interference scenarios. For a single user, the results show less than 7 mm error for 90% of range estimates in a typical indoor environment. Under interference from three other users, the 90% error was less than 25 mm. The system provides a high estimation update rate, allowing accurate tracking of objects moving at high speed.

  9. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  10. Toward accurate and precise estimates of lion density.

    Science.gov (United States)

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km², and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and

  11. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.

  12. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    Full Text Available One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of storing data and requires an accurate estimate of the target's location under energy constraints. We do not have any mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the readings are fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on a prediction and adaptation algorithm that reduces the number of sensor nodes used through an accurate estimate of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.

  13. Estimating the burden of disease attributable to excess body weight ...

    African Journals Online (AJOL)

    Monte Carlo simulation-modelling techniques were used for the uncertainty analysis. ... Deaths and disability-adjusted life years (DALYs) from ischaemic heart disease, ... lasting change in the determinants and impact of excess body weight.

  14. Analytical estimation of control rod shadowing effect for excess reactivity measurement of HTTR

    International Nuclear Information System (INIS)

    Nakano, Masaaki; Fujimoto, Nozomu; Yamashita, Kiyonobu

    1999-01-01

    The fuel addition method is generally used for the excess reactivity measurement of the initial core. The control rod shadowing effect on the excess reactivity measurement has been estimated analytically for the High Temperature Engineering Test Reactor (HTTR). 3-dimensional whole core analyses were carried out, and the movements of control rods during the measurements were simulated in the calculation. It was made clear that the value of excess reactivity strongly depends on the combination of measuring control rods and compensating control rods. The differences in excess reactivity between combinations come from the control rod shadowing effect. The shadowing effect is reduced by using multiple measuring and compensating control rods to prevent their deep insertion into the core. The excess reactivity measured in the experiments is, however, smaller than the value estimated with the shadowing effect. (author)

  15. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.

  16. Different techniques of excess 210Pb for sedimentation rate estimation in the Sarawak and Sabah coastal waters

    International Nuclear Information System (INIS)

    Zal Uyun Wan Mahmood; Zaharudin Ahmad; Abdul Kadir Ishak; Che Abdul Rahim Mohamed

    2010-01-01

    Sediment core samples were collected at eight stations in the Sarawak and Sabah coastal waters using a gravity box corer to estimate sedimentation rates based on the activity of excess 210Pb. The sedimentation rates derived from four mathematical models, CIC, Shukla-CIC, CRS and ADE, were generally in good agreement, with similar or comparable values at all stations. However, statistical analysis using an independent-sample t-test indicated that the Shukla-CIC model was the most accurate, reliable and suitable technique for determining the sedimentation rate in the study area. (author)
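
    Of the four models named above, the CIC (constant initial concentration) approach has a particularly compact form, sketched below under idealized assumptions (a synthetic, noise-free profile; real cores require supported-210Pb correction first): excess 210Pb activity decays with depth as A(z) = A0·exp(-λz/s), so a linear fit of ln A against depth z yields the sedimentation rate s = -λ/slope.

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3   # 210Pb decay constant, 1/yr (half-life 22.3 yr)

def cic_sedimentation_rate(depth_cm, activity):
    """CIC model: slope of ln(activity) vs depth gives -lambda/s."""
    slope, _ = np.polyfit(depth_cm, np.log(activity), 1)
    return -LAMBDA_PB210 / slope   # cm/yr

# Synthetic excess-210Pb profile with s = 0.5 cm/yr and A0 = 120 Bq/kg
depth = np.arange(0.0, 30.0, 2.0)
activity = 120.0 * np.exp(-LAMBDA_PB210 * depth / 0.5)
rate = cic_sedimentation_rate(depth, activity)
print(rate)  # 0.5 cm/yr recovered exactly on noise-free data
```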

  17. [Estimation of the excess of lung cancer mortality risk associated to environmental tobacco smoke exposure of hospitality workers].

    Science.gov (United States)

    López, M José; Nebot, Manel; Juárez, Olga; Ariza, Carles; Salles, Joan; Serrahima, Eulàlia

    2006-01-14

    To estimate the excess lung cancer mortality risk associated with environmental tobacco smoke (ETS) exposure among hospitality workers. The estimation was done using objective measures in several hospitality settings in Barcelona. Vapour-phase nicotine was measured in several hospitality settings. These measurements were used to estimate the excess lung cancer mortality risk associated with ETS exposure over a 40-year working life, using the formula developed by Repace and Lowrey. Excess lung cancer mortality risk associated with ETS exposure was higher than 145 deaths per 100,000 workers in all places studied, except for cafeterias in hospitals, where the excess lung cancer mortality risk was 22 per 100,000. In discotheques, for comparison, the excess lung cancer mortality risk was 1,733 deaths per 100,000 workers. Hospitality workers are exposed to ETS levels associated with a very high excess lung cancer mortality risk. These data confirm that ETS control measures are needed to protect hospitality workers.

  18. Parental and Child Factors Associated with Under-Estimation of Children with Excess Weight in Spain.

    Science.gov (United States)

    de Ruiter, Ingrid; Olmedo-Requena, Rocío; Jiménez-Moleón, José Juan

    2017-11-01

    Objective Understanding obesity misperception and associated factors can improve strategies to increase obesity identification and intervention. We investigate underestimation of child excess weight with a broader perspective, incorporating perceptions, views, and psychosocial aspects associated with obesity. Methods This study used cross-sectional data from the Spanish National Health Survey in 2011-2012 for children aged 2-14 years who are overweight or obese. Percentages of parental misperceived excess weight were calculated. Crude and adjusted analyses were performed for both child and parental factors, analyzing associations with underestimation. Results Two- to five-year-olds had the highest prevalence of misperceived overweight or obesity, around 90%. In the 10-14 year old age group, approximately 63% of overweight teens were misperceived as normal weight, as were 35.7% and 40% of obese males and females, respectively. Child gender did not affect underestimation, whereas a younger age did. Aspects of child social and mental health were associated with underestimation, as was short sleep duration. Exercise, weekend TV and videogames, and food habits had no effect on underestimation. Fathers were more likely to misperceive their child's weight status; however, the parents' age had no effect. Smokers and parents with excess weight were less likely to misperceive their child's weight status. Parents being on a diet also decreased the odds of underestimation. Conclusions for practice This study identifies some characteristics of both parents and children which are associated with underestimation of child excess weight. These characteristics can be used for consideration in primary care, prevention strategies and for further research.

  19. Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2013-01-01

    In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason...... for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators...
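
    For context, a baseline harmonic-summation pitch estimator (not the exact estimators the paper derives) can be sketched in a few lines: pick the candidate fundamental that maximizes the summed spectral magnitude at its first few harmonics. All signal parameters below are illustrative; for real signals with low f0, adjacent harmonics leak into one another, which is exactly the regime where such approximate methods degrade.

```python
import numpy as np

def harmonic_summation_f0(x, fs, f0_grid, n_harm=5):
    """Score each candidate f0 by summing windowed spectral magnitude
    at its first n_harm harmonics; return the best-scoring candidate."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    scores = []
    for f0 in f0_grid:
        bins = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harm + 1)]
        scores.append(spec[bins].sum())
    return f0_grid[int(np.argmax(scores))]

# Synthetic harmonic signal: f0 = 200 Hz with 5 decaying harmonics
fs, f0 = 8000.0, 200.0
t = np.arange(0, 0.5, 1.0 / fs)
x = sum(np.cos(2 * np.pi * k * f0 * t) / k for k in range(1, 6))
grid = np.arange(50.0, 400.0, 1.0)
f0_hat = harmonic_summation_f0(x, fs, grid)
print(f0_hat)  # 200.0
```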

  20. Excess cases of prostate cancer and estimated overdiagnosis associated with PSA testing in East Anglia

    Science.gov (United States)

    Pashayan, N; Powles, J; Brown, C; Duffy, S W

    2006-01-01

    This study aimed to estimate the extent of 'overdiagnosis' of prostate cancer attributable to prostate-specific antigen (PSA) testing in the Cambridge area between 1996 and 2002. Overdiagnosis was defined conceptually as detection of prostate cancer through PSA testing that otherwise would not have been diagnosed within the patient's lifetime. Records of PSA tests in Addenbrookes Hospital were linked to prostate cancer registrations by NHS number. Differences in prostate cancer registration rates between those receiving and not receiving prediagnosis PSA tests were calculated. The proportion of men aged 40 years or over with a prediagnosis PSA test increased from 1.4 to 5.2% from 1996 to 2002. The rate of diagnosis of prostate cancer was 45% higher (rate ratio (RR) = 1.45, 95% confidence interval (CI) 1.02–2.07) in men with a history of prediagnosis PSA testing. Assuming average lead times of 5 to 10 years, 40–64% of the PSA-detected cases were estimated to be overdiagnosed. In East Anglia, from 1996 to 2000, a 1.6% excess of cases was associated with PSA testing (around a quarter of the 5.3% excess incidence observed in East Anglia from 1996 to 2000). Further quantification of the overdiagnosis will result from continued surveillance and from linkage of incidence to testing in other hospitals. PMID:16832417

  1. The description of a method for accurately estimating creatinine clearance in acute kidney injury.

    Science.gov (United States)

    Mellas, John

    2016-05-01

    Acute kidney injury (AKI) is a common and serious condition encountered in hospitalized patients. The severity of kidney injury is defined by the RIFLE, AKIN, and KDIGO criteria, which attempt to establish the degree of renal impairment. The KDIGO guidelines state that the creatinine clearance should be measured whenever possible in AKI and that the serum creatinine concentration and creatinine clearance remain the best clinical indicators of renal function. Neither the RIFLE, AKIN, nor KDIGO criteria estimate actual creatinine clearance. Furthermore, there are no accepted methods for accurately estimating creatinine clearance (K) in AKI. The present study describes a unique method for estimating K in AKI using urine creatinine excretion over an established time interval (E), an estimate of creatinine production over the same time interval (P), and the estimated static glomerular filtration rate (sGFR) at time zero, utilizing the CKD-EPI formula. Using these variables, estimated creatinine clearance (Ke) = E/P * sGFR. The method was tested for validity using simulated patients, where actual creatinine clearance (Ka) was compared to Ke in several patients, both male and female, and of various ages, body weights, and degrees of renal impairment. These measurements were made at several serum creatinine concentrations in an attempt to determine the accuracy of this method in the non-steady state. In addition, E/P and Ke were calculated in hospitalized patients with AKI seen in nephrology consultation by the author. In these patients, the accuracy of the method was determined by looking at the following metrics: E/P>1, E/P1 and 0.907 (0.841, 0.973) for 0.95 ml/min accurately predicted the ability to terminate renal replacement therapy in AKI. Limitations include the need to measure urine volume accurately. Furthermore, the precision of the method requires accurate estimates of sGFR, while a reasonable measure of P is crucial to estimating Ke. The present study provides the
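
    The central relation of the abstract, Ke = (E/P) * sGFR, is simple enough to state directly. The sketch below uses hypothetical numbers; the CKD-EPI step is replaced by a supplied sGFR value, and the function is an illustration of the formula, not the author's validated tool.

```python
def estimated_clearance(E_mg, P_mg, sgfr_ml_min):
    """Ke = (E/P) * sGFR.
    E: measured urine creatinine excretion over the interval (mg);
    P: estimated creatinine production over the same interval (mg);
    sGFR: static GFR estimate at time zero (mL/min), e.g. from CKD-EPI."""
    return (E_mg / P_mg) * sgfr_ml_min

# Hypothetical patient: excreting half of estimated production (E/P = 0.5),
# so the estimated clearance falls below the static GFR estimate.
ke = estimated_clearance(E_mg=600.0, P_mg=1200.0, sgfr_ml_min=80.0)
print(ke)  # 40.0 mL/min
```

    Intuitively, E/P < 1 indicates clearance below the static estimate (worsening kidney function in the non-steady state), while E/P > 1 indicates recovery above it.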

  2. An Accurate FFPA-PSR Estimator Algorithm and Tool for Software Effort Estimation

    Directory of Open Access Journals (Sweden)

    Senthil Kumar Murugesan

    2015-01-01

    Full Text Available Software companies are now keen to provide secure software with respect to accuracy and reliability of their products especially related to the software effort estimation. Therefore, there is a need to develop a hybrid tool which provides all the necessary features. This paper attempts to propose a hybrid estimator algorithm and model which incorporates quality metrics, reliability factor, and the security factor with a fuzzy-based function point analysis. Initially, this method utilizes a fuzzy-based estimate to control the uncertainty in the software size with the help of a triangular fuzzy set at the early development stage. Secondly, the function point analysis is extended by the security and reliability factors in the calculation. Finally, the performance metrics are added with the effort estimation for accuracy. The experimentation is done with different project data sets on the hybrid tool, and the results are compared with the existing models. It shows that the proposed method not only improves the accuracy but also increases the reliability, as well as the security, of the product.
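
    One ingredient mentioned above, representing the uncertain early size estimate as a triangular fuzzy number, can be illustrated in isolation. This is a generic sketch, not the paper's FFPA-PSR tool: the fuzzy size is defuzzified by the centroid method and fed into a deliberately simple effort relation, and every number below is illustrative.

```python
def defuzzify_triangular(low, mode, high):
    """Centroid of a triangular fuzzy number (low <= mode <= high)."""
    return (low + mode + high) / 3.0

def effort_person_months(size_fp, productivity_fp_pm=10.0):
    """Toy effort model: function points divided by an assumed productivity."""
    return size_fp / productivity_fp_pm

# Uncertain early size estimate: at least 240 FP, most likely 300, at most 420
size = defuzzify_triangular(240.0, 300.0, 420.0)
print(size, effort_person_months(size))  # 320.0 FP -> 32.0 person-months
```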

  3. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Science.gov (United States)

    Wicke, Jason; Dumas, Geneviève A

    2014-06-03

    Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. Copyright © 2014. Published by Elsevier Ltd.
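
    The stacked-slice idea can be sketched numerically. This is not the authors' full model (no photograph processing, no sectioned ellipses, no sex-specific density function); it only shows how mass, centre of mass and a frontal-plane moment of inertia follow from summing thin elliptical slices, with all axis lengths and the density below chosen as illustrative assumptions.

```python
import numpy as np

def segment_properties(a, b, dz, rho):
    """a, b: per-slice semi-axes (m); dz: slice thickness (m);
    rho: per-slice density (kg/m^3). Returns mass, centre-of-mass
    height, and frontal-plane moment of inertia about the COM."""
    a, b, rho = map(np.asarray, (a, b, rho))
    z = (np.arange(len(a)) + 0.5) * dz        # slice-centre heights
    m = rho * np.pi * a * b * dz              # slice masses (elliptical area)
    mass = m.sum()
    com = (m * z).sum() / mass
    # thin elliptical lamina about an in-plane axis: m*a^2/4,
    # plus parallel-axis transfer of each slice to the segment COM
    inertia = (m * a**2 / 4 + m * (z - com) ** 2).sum()
    return mass, com, inertia

# Illustrative forearm-like taper: 30 slices over 0.26 m, uniform density
n = 30
a = np.linspace(0.045, 0.03, n)   # frontal semi-axis (m)
b = np.linspace(0.04, 0.025, n)   # sagittal semi-axis (m)
mass, com, inertia = segment_properties(a, b, dz=0.26 / n, rho=np.full(n, 1050.0))
print(mass, com, inertia)  # roughly 1 kg, COM below the segment midpoint
```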

  4. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    Science.gov (United States)

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted, but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller, more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees, and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of

  5. Accurate Fuel Estimates using CAN Bus Data and 3D Maps

    DEFF Research Database (Denmark)

    Andersen, Ove; Torp, Kristian

    2018-01-01

    The focus on reducing CO2 emissions from the transport sector is larger than ever. Increasingly stricter reductions on fuel consumption and emissions are being introduced by the EU, e.g., to reduce the air pollution in many larger cities. Large sets of high-frequent GPS data from vehicles already...... exist. However, fuel consumption data is still rarely collected even though it is possible to measure the fuel consumption with high accuracy, e.g., using an OBD-II device and a smartphone. This paper presents a method for comparing fuel-consumption estimates using the SIDRA TRIP model with real fuel...... the accuracy of fuel consumption estimates by up to 40% on hilly roads. There is only very little improvement of the high-precision (H3D) map over the simple 3D map. The fuel consumption estimates are most accurate on flat terrain, with average fuel estimates of up to 99% accuracy. The fuel estimates are most......

  6. Eddy covariance observations of methane and nitrous oxide emissions. Towards more accurate estimates from ecosystems

    International Nuclear Information System (INIS)

    Kroon, P.S.

    2010-09-01

    About 30% of the increased greenhouse gas (GHG) emissions of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) are related to land use changes and agricultural activities. In order to select effective measures, knowledge is required about GHG emissions from these ecosystems and how these emissions are influenced by management and meteorological conditions. Accurate emission values are therefore needed for all three GHGs to compile the full GHG balance. However, the current annual estimates of CH4 and N2O emissions from ecosystems have significant uncertainties, even larger than 50%. The present study showed that an advanced technique, the micrometeorological eddy covariance flux technique, can obtain more accurate estimates, with uncertainties even smaller than 10%. The current regional and global trace gas flux estimates of CH4 and N2O are possibly seriously underestimated due to incorrect measurement procedures. Accurate measurements of both gases are important since together they can contribute more than two-thirds of the total GHG emission. For example, the total GHG emission of a dairy farm site was estimated at 16 × 10³ kg ha⁻¹ yr⁻¹ in CO2-equivalents, of which 25% and 45% were contributed by CH4 and N2O, respectively. About 60% of the CH4 emission was emitted by ditches and their bordering edges. These emissions are not yet included in the national inventory reports. We recommend including these emissions in coming reports.
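The CO2-equivalent bookkeeping behind such a GHG balance can be sketched as follows. The GWP factors (25 for CH4, 298 for N2O; 100-year IPCC AR4 values) and the per-gas emission numbers are illustrative assumptions, chosen only to roughly reproduce the 25%/45% CH4/N2O split quoted in the abstract.

```python
# Sketch of a CO2-equivalent GHG balance. GWP factors and emission
# figures below are illustrative assumptions, not the study's data.
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}  # 100-yr GWP, IPCC AR4

def co2_equivalents(emissions_kg_ha_yr):
    """Convert per-gas emissions (kg/ha/yr) to CO2-eq and per-gas shares."""
    co2eq = {gas: mass * GWP[gas] for gas, mass in emissions_kg_ha_yr.items()}
    total = sum(co2eq.values())
    shares = {gas: value / total for gas, value in co2eq.items()}
    return total, shares

# Hypothetical dairy-farm fluxes picked to land near 16e3 kg CO2-eq/ha/yr
total, shares = co2_equivalents({"CO2": 4800.0, "CH4": 160.0, "N2O": 24.0})
```

With these assumed fluxes the total comes to about 16 × 10³ kg CO2-eq ha⁻¹ yr⁻¹, with CH4 and N2O contributing roughly 25% and 45%, mirroring the proportions reported for the dairy farm site.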

  7. A method to accurately estimate the muscular torques of human wearing exoskeletons by torque sensors.

    Science.gov (United States)

    Hwang, Beomsoo; Jeon, Doyoung

    2015-04-09

    In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions.

  8. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    Directory of Open Access Journals (Sweden)

    Beomsoo Hwang

    2015-04-01

    Full Text Available In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions.

  9. Serial fusion of Eulerian and Lagrangian approaches for accurate heart-rate estimation using face videos.

    Science.gov (United States)

    Gupta, Puneet; Bhowmick, Brojeshwar; Pal, Arpan

    2017-07-01

    Camera-equipped devices are ubiquitous and proliferating in day-to-day life. Accurate heart rate (HR) estimation from face videos acquired by low-cost cameras in a non-contact manner can be used in many real-world scenarios and hence requires rigorous exploration. This paper presents an accurate and near real-time HR estimation system using such face videos. It is based on the phenomenon that the color and motion variations in the face video are closely related to the heart beat. The variations also contain noise due to facial expressions, respiration, eye blinking and environmental factors, which is handled by the proposed system. Neither Eulerian nor Lagrangian temporal signals can provide accurate HR in all cases. The cases where Eulerian temporal signals perform spuriously are determined using a novel poorness measure, and then both the Eulerian and Lagrangian temporal signals are employed for better HR estimation. Such a fusion is referred to as serial fusion. Experimental results reveal that the error introduced in the proposed algorithm is 1.8±3.6, which is significantly lower than that of existing well-known systems.
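A minimal sketch of the Eulerian side of such a system is to locate the dominant frequency of a pulse-related temporal signal within the plausible heart-rate band. This is not the paper's fused Eulerian/Lagrangian pipeline; the band limits and the synthetic input below are assumptions for illustration.

```python
import numpy as np

# Eulerian-style HR reader (illustrative sketch, not the paper's
# serial-fusion pipeline): FFT peak search inside the HR band.
def estimate_hr_bpm(signal, fps, band_hz=(0.7, 4.0)):
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # remove the DC component
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    peak_hz = freqs[in_band][np.argmax(spec[in_band])]
    return 60.0 * peak_hz                  # Hz -> beats per minute

fps, secs = 30.0, 20.0
t = np.arange(int(fps * secs)) / fps
pulse = np.sin(2 * np.pi * 1.2 * t)        # synthetic 72 bpm pulse
resp = 0.1 * np.cos(2 * np.pi * 0.25 * t)  # respiration-like nuisance
hr = estimate_hr_bpm(pulse + resp, fps)
```

Band-limiting the search rejects the slow respiration component; handling expression and illumination noise, as the paper does, requires considerably more machinery than this sketch.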

  10. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    Science.gov (United States)

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.

  11. Estimation and evaluation of COSMIC radio occultation excess phase using undifferenced measurements

    Science.gov (United States)

    Xia, Pengfei; Ye, Shirong; Jiang, Kecai; Chen, Dezhong

    2017-05-01

    In the GPS radio occultation technique, the atmospheric excess phase (AEP) can be used to derive the refractivity, which is an important quantity in numerical weather prediction. The AEP is conventionally estimated based on GPS double-difference or single-difference techniques. These two techniques, however, rely on the reference data in the data processing, increasing the complexity of computation. In this study, an undifferenced (ND) processing strategy is proposed to estimate the AEP. To begin with, we use PANDA (Positioning and Navigation Data Analyst) software to perform the precise orbit determination (POD) for the purpose of acquiring the position and velocity of the mass centre of the COSMIC (The Constellation Observing System for Meteorology, Ionosphere and Climate) satellites and the corresponding receiver clock offset. The bending angles, refractivity and dry temperature profiles are derived from the estimated AEP using Radio Occultation Processing Package (ROPP) software. The ND method is validated by the COSMIC products in typical rising and setting occultation events. Results indicate that rms (root mean square) errors of relative refractivity differences between undifferenced and atmospheric profiles (atmPrf) provided by UCAR/CDAAC (University Corporation for Atmospheric Research/COSMIC Data Analysis and Archive Centre) are better than 4 and 3 % in rising and setting occultation events respectively. In addition, we also compare the relative refractivity bias between ND-derived methods and atmPrf profiles of globally distributed 200 COSMIC occultation events on 12 December 2013. The statistical results indicate that the average rms relative refractivity deviation between ND-derived and COSMIC profiles is better than 2 % in the rising occultation event and better than 1.7 % in the setting occultation event. Moreover, the observed COSMIC refractivity profiles from ND processing strategy are further validated using European Centre for Medium

  12. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single-tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and dc component are obtained by minimizing the least squares (LS) fitting error of a three-parameter sine fit. By setting reasonable stop conditions or a number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated with respect to the unbiased Cramer-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimation curve is consistent with the tendency of the CRLB as SNR increases, even for a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
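The coarse-FFT-then-golden-section idea can be sketched as below. This is an illustrative reimplementation, not the authors' code: the one-bin bracket around the FFT peak and the fixed iteration count are assumptions, and the three fitted parameters here are the quadrature amplitudes and dc offset.

```python
import numpy as np

# Sketch of the DGSSA idea: an FFT peak brackets the frequency, then a
# golden-section search refines it by minimizing the residual of a
# three-parameter least-squares sine fit at each trial frequency.
def sine_fit_residual(f, t, x):
    """LS-fit x ~ A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C; return residual norm."""
    M = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(M, x, rcond=None)
    return np.linalg.norm(x - M @ coef)

def estimate_freq(x, fs, iters=60):
    t = np.arange(x.size) / fs
    spec = np.abs(np.fft.rfft(x - x.mean()))
    k = int(np.argmax(spec[1:])) + 1          # coarse FFT peak bin (skip DC)
    df = fs / x.size
    a, b = (k - 1) * df, (k + 1) * df         # bracket around the peak
    gr = (np.sqrt(5) - 1) / 2                 # golden-section ratio
    c, d = b - gr * (b - a), a + gr * (b - a)
    for _ in range(iters):                    # shrink the bracket
        if sine_fit_residual(c, t, x) < sine_fit_residual(d, t, x):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return 0.5 * (a + b)

fs = 512.0
t = np.arange(512) / fs
f_hat = estimate_freq(np.sin(2 * np.pi * 60.3 * t + 0.4) + 1.0, fs)
```

On this noiseless off-bin tone (60.3 Hz with a 1 Hz bin width), the refinement recovers the frequency far inside the FFT's native resolution.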

  13. Fast and accurate spectral estimation for online detection of partial broken bar in induction motors

    Science.gov (United States)

    Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti

    2018-01-01

    In this paper, an online and real-time system is presented for detecting a partial broken rotor bar (BRB) of inverter-fed squirrel cage induction motors under light load conditions. This system, with minor modifications, can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components has been improved by removing the high-amplitude fundamental frequency using an extended-Kalman-based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and cost of sensors are minimal, as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7-based embedded target ported through Simulink Real-Time. Evaluation of the threshold and the detectability of faults under different conditions of load and fault severity is carried out with the empirical cumulative distribution function.

  14. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    International Nuclear Information System (INIS)

    Song, N; Frey, E C; He, B; Wahl, R L

    2011-01-01

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric-mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  15. Rapid and accurate species tree estimation for phylogeographic investigations using replicated subsampling.

    Science.gov (United States)

    Hird, Sarah; Kubatko, Laura; Carstens, Bryan

    2010-11-01

    We describe a method for estimating species trees that relies on replicated subsampling of large data matrices. One application of this method is phylogeographic research, which has long depended on large datasets that sample intensively from the geographic range of the focal species; these datasets allow systematicists to identify cryptic diversity and understand how contemporary and historical landscape forces influence genetic diversity. However, analyzing any large dataset can be computationally difficult, particularly when newly developed methods for species tree estimation are used. Here we explore the use of replicated subsampling, a potential solution to the problem posed by large datasets, with both a simulation study and an empirical analysis. In the simulations, we sample different numbers of alleles and loci, estimate species trees using STEM, and compare the estimated to the actual species tree. Our results indicate that subsampling three alleles per species for eight loci nearly always results in an accurate species tree topology, even in cases where the species tree was characterized by extremely rapid divergence. Even more modest subsampling effort, for example one allele per species and two loci, was more likely than not (>50%) to identify the correct species tree topology, indicating that in nearly all cases, computing the majority-rule consensus tree from replicated subsampling provides a good estimate of topology. These results were supported by estimating the correct species tree topology and reasonable branch lengths for an empirical 10-locus great ape dataset. Copyright © 2010 Elsevier Inc. All rights reserved.

  16. Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods

    International Nuclear Information System (INIS)

    Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris

    2016-01-01

    Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates of both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case study of a 2nd-order equivalent circuit model shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurements by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.

  17. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. So the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  18. Estimating dust distances to Type Ia supernovae from colour excess time evolution

    Science.gov (United States)

    Bulla, M.; Goobar, A.; Amanullah, R.; Feindt, U.; Ferretti, R.

    2018-01-01

    We present a new technique to infer dust locations towards reddened Type Ia supernovae and to help discriminate between an interstellar and a circumstellar origin for the observed extinction. Using Monte Carlo simulations, we show that the time evolution of the light-curve shape and especially of the colour excess E(B - V) places strong constraints on the distance between dust and the supernova. We apply our approach to two highly reddened Type Ia supernovae for which dust distance estimates are available in the literature: SN 2006X and SN 2014J. For the former, we obtain a time-variable E(B - V) and from this derive a distance of 27.5^{+9.0}_{-4.9} or 22.1^{+6.0}_{-3.8} pc depending on whether dust properties typical of the Large Magellanic Cloud (LMC) or the Milky Way (MW) are used. For the latter, instead, we obtain a constant E(B - V) consistent with dust at distances larger than ∼50 and 38 pc for LMC- and MW-type dust, respectively. Values thus extracted are in excellent agreement with previous estimates for the two supernovae. Our findings suggest that dust responsible for the extinction towards these supernovae is likely to be located within interstellar clouds. We also discuss how other properties of reddened Type Ia supernovae - such as their peculiar extinction and polarization behaviour and the detection of variable, blue-shifted sodium features in some of these events - might be compatible with dust and gas at interstellar-scale distances.

  19. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece

    2012-11-28

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: or raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  20. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece; Rashid, Mamoon; Pain, Arnab

    2012-01-01

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: or raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  1. Shear-wave elastography contributes to accurate tumour size estimation when assessing small breast cancers

    International Nuclear Information System (INIS)

    Mullen, R.; Thompson, J.M.; Moussa, O.; Vinnicombe, S.; Evans, A.

    2014-01-01

    Aim: To assess whether the size of peritumoural stiffness (PTS) on shear-wave elastography (SWE) for small primary breast cancers (≤15 mm) was associated with size discrepancies between grey-scale ultrasound (GSUS) and final histological size and whether the addition of PTS size to GSUS size might result in more accurate tumour size estimation when compared to final histological size. Materials and methods: A retrospective analysis of 86 consecutive patients between August 2011 and February 2013 who underwent breast-conserving surgery for tumours of size ≤15 mm at ultrasound was carried out. The size of PTS stiffness was compared to mean GSUS size, mean histological size, and the extent of size discrepancy between GSUS and histology. PTS size and GSUS were combined and compared to the final histological size. Results: PTS of >3 mm was associated with a larger mean final histological size (16 versus 11.3 mm, p < 0.001). PTS size of >3 mm was associated with a higher frequency of underestimation of final histological size by GSUS of >5 mm (63% versus 18%, p < 0.001). The combination of PTS and GSUS size led to accurate estimation of the final histological size (p = 0.03). The size of PTS was not associated with margin involvement (p = 0.27). Conclusion: PTS extending beyond 3 mm from the grey-scale abnormality is significantly associated with underestimation of tumour size of >5 mm for small invasive breast cancers. Taking into account the size of PTS also led to accurate estimation of the final histological size. Further studies are required to assess the relationship of the extent of SWE stiffness and margin status. - Highlights: • Peritumoural stiffness of greater than 3 mm was associated with larger tumour size. • Underestimation of tumour size by ultrasound was associated with peri-tumoural stiffness size. • Combining peri-tumoural stiffness size to ultrasound produced accurate tumour size estimation

  2. Estimate of lifetime excess lung cancer risk due to indoor exposure to natural radon-222 daughters in Korea

    International Nuclear Information System (INIS)

    Si-Young Chang; Jeong-Ho Lee; Chung-Woo Ha

    1993-01-01

    Lifetime excess lung cancer risk due to indoor ²²²Rn daughter exposure in Korea was quantitatively estimated by a modified relative risk projection model proposed by the U.S. National Academy of Sciences and recent Korean life table data. The lifetime excess risk of lung cancer death attributable to constant annual exposure to Korean indoor radon daughters was estimated to be about 230/10⁶ per WLM, which lies near the median of the range of 150-450/10⁶ per WLM reported by the UNSCEAR in 1988. (1 fig., 2 tabs.)

  3. Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model

    Science.gov (United States)

    Ahlgren, K.; Li, X.

    2017-12-01

    Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Secondly, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. The accuracy of each geoid model

  4. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    International Nuclear Information System (INIS)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee; Lee, Minuk; Choi, Jong-su; Hong, Sup

    2015-01-01

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF
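An AIC-guided threshold choice for a GPD tail model can be sketched with SciPy's `genpareto`. The candidate quantiles and the per-exceedance normalization of the AIC (so that thresholds with different exceedance counts stay comparable) are simplifying assumptions of this sketch, not necessarily the paper's exact criterion.

```python
import numpy as np
from scipy.stats import genpareto

# Sketch of AIC-based threshold selection for a GPD tail model.
def gpd_aic(exceedances):
    """Fit a GPD to threshold exceedances (loc fixed at 0); return its AIC."""
    c, loc, scale = genpareto.fit(exceedances, floc=0.0)
    loglik = genpareto.logpdf(exceedances, c, loc=loc, scale=scale).sum()
    return 2 * 2 - 2 * loglik           # two free parameters: shape, scale

def select_threshold(sample, quantiles=(0.80, 0.85, 0.90, 0.95)):
    """Pick the candidate threshold with the best per-exceedance AIC
    (crude normalization -- an assumption of this sketch)."""
    best_u, best_score = None, np.inf
    for q in quantiles:
        u = np.quantile(sample, q)
        exc = sample[sample > u] - u     # shifted tail data above u
        score = gpd_aic(exc) / exc.size
        if score < best_score:
            best_u, best_score = u, score
    return best_u

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=5000)   # synthetic tail sample
u_star = select_threshold(data)
```

For exponential data every candidate tail is itself GPD-shaped (shape ≈ 0), so the fits should all be reasonable; the interesting behaviour of the criterion appears on data whose bulk and tail differ.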

  5. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Minuk; Choi, Jong-su; Hong, Sup [Korea Research Insitute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-02-15

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.

  6. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    Science.gov (United States)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
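
The estimator described above can be sketched as follows; this is a simplified, hypothetical variant (one-year pairs, MAD-based outlier trimming, a second median), not the published MIDAS code:

```python
import numpy as np

def midas_like_trend(t, x, pair_span=1.0, tol=0.05):
    """Median slope over data pairs separated by ~pair_span, followed by
    MAD-based outlier rejection and a second median (simplified sketch)."""
    t = np.asarray(t)
    x = np.asarray(x)
    slopes = []
    for i in range(t.size):
        j = np.searchsorted(t, t[i] + pair_span - tol)
        while j < t.size and t[j] <= t[i] + pair_span + tol:
            slopes.append((x[j] - x[i]) / (t[j] - t[i]))
            j += 1
    slopes = np.asarray(slopes)
    med = np.median(slopes)
    mad = 1.4826 * np.median(np.abs(slopes - med))   # robust scatter estimate
    kept = slopes[np.abs(slopes - med) < 2.0 * mad]  # trim one-sided outliers
    return np.median(kept) if kept.size else med

rng = np.random.default_rng(1)
t = np.arange(0, 6, 1 / 52)                          # six years, weekly epochs
x = 3.0 * t + 2.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)
x[t > 3] += 10.0                                     # an undetected step
rate = midas_like_trend(t, x)                        # true rate is 3.0
```

Pairing epochs one year apart cancels the seasonal term, and the trimmed second median suppresses the one-sided slopes produced by the step.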

  7. GPS Water Vapor Tomography Based on Accurate Estimations of the GPS Tropospheric Parameters

    Science.gov (United States)

    Champollion, C.; Masson, F.; Bock, O.; Bouin, M.; Walpersdorf, A.; Doerflinger, E.; van Baelen, J.; Brenot, H.

    2003-12-01

    The Global Positioning System (GPS) is now a common technique for the retrieval of zenithal integrated water vapor (IWV). Further applications in meteorology also need slant integrated water vapor (SIWV), which allows the high variability of tropospheric water vapor to be characterized precisely at different temporal and spatial scales. Only precise estimations of IWV and horizontal gradients allow the estimation of accurate SIWV. We present studies developed to improve the estimation of tropospheric water vapor from GPS data. Results are obtained from several field experiments (MAP, ESCOMPTE, OHM-CV, IHOP, …). First, IWV is estimated using different GPS processing strategies, and the results are compared to radiosondes. The role of the reference frame and of the a priori constraints on the coordinates of the fiducial and local stations is generally underestimated; it appears to be of first order in the estimation of the IWV. Second, we validate the estimated horizontal gradients by comparing zenith delay gradients and single-site gradients. IWV, gradients and post-fit residuals are used to construct slant integrated water delays. Validation of the SIWV is in progress, comparing GPS SIWV, lidar measurements and high-resolution meteorological models (Meso-NH). A careful analysis of the post-fit residuals is needed to separate the tropospheric signal from multipath. The slant tropospheric delays are used to study the 3D heterogeneity of the troposphere. We have developed tomographic software to model the three-dimensional distribution of tropospheric water vapor from GPS data. The software is applied to the ESCOMPTE field experiment, a dense network of 17 dual-frequency GPS receivers operated in southern France. Three inversions have been successfully compared to three successive radiosonde launches. Good resolution is obtained up to heights of 3000 m.

  8. Can administrative health utilisation data provide an accurate diabetes prevalence estimate for a geographical region?

    Science.gov (United States)

    Chan, Wing Cheuk; Papaconstantinou, Dean; Lee, Mildred; Telfer, Kendra; Jo, Emmanuel; Drury, Paul L; Tobias, Martin

    2018-05-01

    To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory results and to develop an improved algorithm for estimating diabetes prevalence at a population level. The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory results stored in the Auckland regional laboratory result repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive predictive value of each of the VDR algorithm rules, individually and in combination. The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive predictive value of 76% and negative predictive value of 98%. The modified VDR algorithm improved the positive predictive value by 6.1% and the specificity by 1.4%, with modest reductions in sensitivity of 2.2% and negative predictive value of 0.3%. At an aggregated level, the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe. The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more accurate diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long-term-condition register constructed from both laboratory results and administrative data.
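
The rule-by-rule evaluation relies on the standard confusion-matrix metrics quoted above, which can be computed as in this illustrative helper (not code from the study):

```python
def rule_performance(pred, truth):
    """Sensitivity, specificity, PPV and NPV of a binary register rule
    against a reference diabetes status (illustrative helper)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# toy check: 8 of 10 true cases flagged, 1 false positive among 10 controls
pred = [1] * 8 + [0] * 2 + [1] + [0] * 9
truth = [1] * 10 + [0] * 10
metrics = rule_performance(pred, truth)
```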

  9. Accurate relative location estimates for the North Korean nuclear tests using empirical slowness corrections

    Science.gov (United States)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2017-01-01

    velocity gradients reduce the residuals, the relative location uncertainties and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.

  10. Estimates of radiation doses in tissue and organs and risk of excess cancer in the single-course radiotherapy patients treated for ankylosing spondylitis in England and Wales

    International Nuclear Information System (INIS)

    Fabrikant, J.I.; Lyman, J.T.

    1982-02-01

    The estimates of absorbed doses of x rays and excess risk of cancer in bone marrow and heavily irradiated sites are extremely crude and are based on very limited data and on a number of assumptions. Some of these assumptions may later prove to be incorrect, but it is probable that they are correct to within a factor of 2. The excess cancer risk estimates calculated compare well with the most reliable epidemiological surveys studied thus far. This is particularly important for cancers of heavily irradiated sites with long latent periods. The mean follow-up period for the patients was 16.2 y, and an increase in cancers of heavily irradiated sites may appear in these patients in the 1970s in tissues and organs with long latent periods for the induction of cancer. The accuracy of these estimates is severely limited by the inadequacy of information on the doses absorbed by the tissues at risk in the irradiated patients. This information on absorbed dose is essential for an accurate assessment of dose-cancer incidence. Furthermore, in this valuable series of irradiated patients, the information on radiation dosimetry in the radiotherapy charts is central to any reliable determination of the somatic risks of radiation with regard to carcinogenesis in man. The work necessary to obtain these data is under way; only when they are available can more precise estimates of the risk of cancer induction by radiation in man be obtained.

  11. Magnetic dipole moment estimation and compensation for an accurate attitude control in nano-satellite missions

    Science.gov (United States)

    Inamori, Takaya; Sako, Nobutada; Nakasuka, Shinichi

    2011-06-01

    Nano-satellites provide space access to a broader range of satellite developers and attract interest as an application of space development. Several new nano-satellite missions are now being proposed with sophisticated objectives such as remote sensing and the observation of astronomical objects. In these advanced missions, some nano-satellites must meet strict attitude requirements for obtaining scientific data or images. For LEO nano-satellites, the magnetic attitude disturbance dominates over other environmental disturbances because of their small moments of inertia, and this effect should be cancelled for precise attitude control. This research focuses on how to cancel the magnetic disturbance in orbit. This paper presents a unique method to estimate and compensate for the residual magnetic moment, which interacts with the geomagnetic field and causes the magnetic disturbance. An extended Kalman filter is used to estimate the magnetic disturbance. For a more practical study of magnetic disturbance compensation, the method has been examined on PRISM (Pico-satellite for Remote-sensing and Innovative Space Missions). It will also be used for a nano-astrometry satellite mission. This paper concludes that magnetic disturbance estimation and compensation are useful for nano-satellite missions that require highly accurate attitude control.

  12. Accurate estimation of the RMS emittance from single current amplifier data

    International Nuclear Information System (INIS)

    Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.

    2002-01-01

    This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H⁻ ion source.
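
The plateau idea can be illustrated with a simplified sketch: track the average density outside a growing elliptical boundary, subtract it as a uniform background, and compute the inside rms emittance. This is schematic only, not the published SCUBEEx implementation, and all names are ours:

```python
import numpy as np

def rms_emittance(x, xp, w):
    """Weighted rms emittance; negative (background-subtracted) weights
    are clipped to zero."""
    w = np.clip(w, 0.0, None)
    W = w.sum()
    mx, mxp = (w * x).sum() / W, (w * xp).sum() / W
    vx = (w * (x - mx) ** 2).sum() / W
    vxp = (w * (xp - mxp) ** 2).sum() / W
    cov = (w * (x - mx) * (xp - mxp)).sum() / W
    return np.sqrt(vx * vxp - cov ** 2)

def scubeex_like(x, xp, density, scales, radii):
    """For each elliptical exclusion boundary, subtract the mean outside
    density as a uniform background and return the inside rms emittance;
    a plateau in the result marks acceptable boundaries."""
    sx, sxp = scales
    r2 = (x / sx) ** 2 + (xp / sxp) ** 2
    curve = []
    for r in radii:
        inside = r2 <= r ** 2
        bg = density[~inside].mean()
        eps = rms_emittance(x[inside], xp[inside], density[inside] - bg)
        curve.append((r, bg, eps))
    return curve

# synthetic measurement: uncorrelated Gaussian beam plus uniform background
g = np.linspace(-5, 5, 201)
X, XP = np.meshgrid(g, g, indexing="ij")
beam = np.exp(-X ** 2 / 2.0 - XP ** 2 / (2.0 * 0.25))   # sigma_x=1, sigma_xp=0.5
dens = beam + 0.05                                       # constant background
curve = scubeex_like(X, XP, dens, scales=(1.0, 0.5), radii=np.linspace(2, 4, 5))
eps_plateau = curve[-1][2]                               # true emittance is 0.5
```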

  13. Accurate estimation of motion blur parameters in noisy remote sensing image

    Science.gov (United States)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite's sensor and the objects it images is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, and identifying the motion blur direction and length accurately is crucial for obtaining the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the serious noise present in actual remote sensing images often renders the stripes indistinct, making the parameters difficult to calculate and the resulting error relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of the algorithm.
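
The spectral signature exploited here can be illustrated in one dimension: a box PSF of length L imprints spectral zeros at multiples of N/L, so the blur length can be read off the first zero. This is a simplified stand-in for the paper's 2-D Radon-based method, with all names ours:

```python
import numpy as np

def estimate_blur_length(blurred):
    """Blur length from the first null of the image spectrum: a box PSF of
    length L has |H(k)| = 0 at multiples of N/L."""
    n = blurred.size
    spec = np.abs(np.fft.rfft(blurred))
    spec /= spec.max()
    zeros = np.flatnonzero(spec < 1e-6)   # spectral nulls imprinted by the PSF
    return round(n / zeros[0])

rng = np.random.default_rng(3)
n, L = 1024, 16
scene = rng.normal(size=n)                               # toy 1-D scene
psf = np.zeros(n)
psf[:L] = 1.0 / L                                        # horizontal box blur
blurred = np.fft.irfft(np.fft.rfft(scene) * np.fft.rfft(psf), n)
L_hat = estimate_blur_length(blurred)
```

On real, noisy 2-D imagery the nulls become shallow dips, which is exactly the regime the paper's GrabCut-plus-Radon pipeline is designed to handle.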

  14. Characterization of a signal recording system for accurate velocity estimation using a VISAR

    Science.gov (United States)

    Rav, Amit; Joshi, K. D.; Singh, Kulbhushan; Kaushik, T. C.

    2018-02-01

    The linearity of a signal recording system (SRS) in time as well as in amplitude is important for accurate estimation of the free-surface velocity history of a moving target during shock loading and unloading when measured using optical interferometers such as a velocity interferometer system for any reflector (VISAR). Signal recording being the first step in a long sequence of signal processes, the incorporation of errors due to nonlinearity and low signal-to-noise ratio (SNR) affects the overall accuracy and precision of the estimation of velocity history. In shock experiments, the short duration (a few µs) of loading/unloading, the reflectivity of the moving target surface, and the properties of the optical components control the amount of light input to the SRS of a VISAR, and this in turn affects the linearity and SNR of the overall measurement. These factors make it essential to develop in situ procedures for (i) minimizing the effect of signal-induced noise and (ii) determining the linear region of operation of the SRS. Here we report a procedure for optimizing SRS parameters such as photodetector gain, optical power, and aperture so as to achieve a linear region of operation with a high SNR. The linear region of operation so determined has been used successfully to estimate the temporal history of the free-surface velocity of the moving target in shock experiments.

  15. An Accurate Estimate of the Free Energy and Phase Diagram of All-DNA Bulk Fluids

    Directory of Open Access Journals (Sweden)

    Emanuele Locatelli

    2018-04-01

    Full Text Available We present a numerical study in which large-scale bulk simulations of self-assembled DNA constructs have been carried out with a realistic coarse-grained model. The investigation aims at obtaining a precise, albeit numerically demanding, estimate of the free energy for such systems. We then, in turn, use these accurate results to validate a recently proposed theoretical approach that builds on a liquid-state theory, the Wertheim theory, to compute the phase diagram of all-DNA fluids. This hybrid theoretical/numerical approach, based on the lowest-order virial expansion and on a nearest-neighbor DNA model, can provide, in an undemanding way, a parameter-free thermodynamic description of DNA associating fluids that is in semi-quantitative agreement with experiments. We show that the predictions of the scheme are as accurate as those obtained with more sophisticated methods. We also demonstrate the flexibility of the approach by incorporating non-trivial additional contributions that go beyond the nearest-neighbor model to compute the DNA hybridization free energy.

  16. SpotCaliper: fast wavelet-based spot detection with accurate size estimation.

    Science.gov (United States)

    Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael

    2016-04-15

    SpotCaliper is a novel wavelet-based image-analysis software package providing a fast automatic detection scheme for circular patterns (spots), combined with precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user is allowed to edit the results by modifying the measurements (in a semi-automated way) and to extract data for further analysis. The fine tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. Availability: http://bigwww.epfl.ch/algorithms/spotcaliper/ Contact: zsuzsanna.puspoki@epfl.ch Supplementary data are available at Bioinformatics online.

  17. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
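
The singleton-ignoring strategy can be sketched for the Watterson estimator: dropping singletons removes the i = 1 term from the harmonic sum, so the denominator becomes a_n − 1. This is our reading of the abstract; see the cited papers for the exact estimators:

```python
def harmonic(n):
    """a_n = sum_{i=1}^{n-1} 1/i for a sample of n sequences."""
    return sum(1.0 / i for i in range(1, n))

def watterson_theta(num_segregating, n):
    """Classic Watterson estimator theta_W = S / a_n."""
    return num_segregating / harmonic(n)

def watterson_theta_no_singletons(num_segregating, num_singletons, n):
    """Singleton-excluding variant: under the infinite sites model
    E[xi_i] = theta / i, so dropping singletons (i = 1) leaves
    E[S - S_1] = theta * (a_n - 1)."""
    return (num_segregating - num_singletons) / (harmonic(n) - 1.0)

# toy data: 10 sequences, 25 segregating sites of which 7 are singletons
theta_naive = watterson_theta(25, 10)
theta_robust = watterson_theta_no_singletons(25, 7, 10)
```

With error-free data the two estimators agree in expectation; with random sequencing errors only the singleton-excluding version stays unbiased, which is the point of the strategy above.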

  18. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Science.gov (United States)

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
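
The sphere-cylinder relationship underlying the method can be sketched as follows; the formula shown (two-thirds of silhouette area times diameter, scaled by an unellipticity coefficient) is our illustrative reading of the abstract, not necessarily the paper's exact expression:

```python
import math

def biovolume(silhouette_area, diameter, unellipticity=1.0):
    """Archimedes-style biovolume: a sphere fills 2/3 of its circumscribed
    cylinder, suggesting V ~ (2/3) * A * d for a convex, rotationally
    symmetric cell with silhouette area A and diameter d, scaled by an
    'unellipticity' coefficient (defaulting to 1)."""
    return (2.0 / 3.0) * silhouette_area * diameter * unellipticity

# sanity check with a spherical cell of radius r: recovers (4/3)*pi*r^3
r = 3.0
v_sphere = biovolume(math.pi * r ** 2, 2.0 * r)
```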

  19. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Directory of Open Access Journals (Sweden)

    Alessandro Saccà

    Full Text Available Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.

  20. Does bioelectrical impedance analysis accurately estimate the condition of threatened and endangered desert fish species?

    Science.gov (United States)

    Dibble, Kimberly L.; Yard, Micheal D.; Ward, David L.; Yackulic, Charles B.

    2017-01-01

    Bioelectrical impedance analysis (BIA) is a nonlethal tool with which to estimate the physiological condition of animals that has potential value in research on endangered species. However, the effectiveness of BIA varies by species, the methodology continues to be refined, and incidental mortality rates are unknown. Under laboratory conditions we tested the value of using BIA in addition to morphological measurements such as total length and wet mass to estimate proximate composition (lipid, protein, ash, water, dry mass, energy density) in the endangered Humpback Chub Gila cypha and Bonytail G. elegans and the species of concern Roundtail Chub G. robusta and conducted separate trials to estimate the mortality rates of these sensitive species. Although Humpback and Roundtail Chub exhibited no or low mortality in response to taking BIA measurements versus handling for length and wet-mass measurements, Bonytails exhibited 14% and 47% mortality in the BIA and handling experiments, respectively, indicating that survival following stress is species specific. Derived BIA measurements were included in the best models for most proximate components; however, the added value of BIA as a predictor was marginal except in the absence of accurate wet-mass data. Bioelectrical impedance analysis improved the R2 of the best percentage-based models by no more than 4% relative to models based on morphology. Simulated field conditions indicated that BIA models became increasingly better than morphometric models at estimating proximate composition as the observation error around wet-mass measurements increased. However, since the overall proportion of variance explained by percentage-based models was low and BIA was mostly a redundant predictor, we caution against the use of BIA in field applications for these sensitive fish species.

  1. Spot urine sodium measurements do not accurately estimate dietary sodium intake in chronic kidney disease

    Science.gov (United States)

    Dougher, Carly E; Rifkin, Dena E; Anderson, Cheryl AM; Smits, Gerard; Persky, Martha S; Block, Geoffrey A; Ix, Joachim H

    2016-01-01

    Background: Sodium intake influences blood pressure and proteinuria, yet the impact on long-term outcomes is uncertain in chronic kidney disease (CKD). Accurate assessment is essential for clinical and public policy recommendations, but few large-scale studies use 24-h urine collections. Recent studies that used spot urine sodium and associated estimating equations suggest that they may provide a suitable alternative, but their accuracy in patients with CKD is unknown. Objective: We compared the accuracy of 4 equations [the Nerbass, INTERSALT (International Cooperative Study on Salt, Other Factors, and Blood Pressure), Tanaka, and Kawasaki equations] that use spot urine sodium to estimate 24-h sodium excretion in patients with moderate to advanced CKD. Design: We evaluated the accuracy of spot urine sodium to predict mean 24-h urine sodium excretion over 9 mo in 129 participants with stage 3–4 CKD. Spot morning urine sodium was used in 4 estimating equations. Bias, precision, and accuracy were assessed and compared across each equation. Results: The mean age of the participants was 67 y, 52% were female, and the mean estimated glomerular filtration rate was 31 ± 9 mL · min⁻¹ · 1.73 m⁻². The mean ± SD number of 24-h urine collections was 3.5 ± 0.8/participant, and the mean 24-h sodium excretion was 168.2 ± 67.5 mmol/d. Although the Tanaka equation demonstrated the least bias (mean: −8.2 mmol/d), all 4 equations had poor precision and accuracy. The INTERSALT equation demonstrated the highest accuracy but derived an estimate within 30% of mean measured sodium excretion in only 57% of observations. Bland-Altman plots revealed systematic bias with the Nerbass, INTERSALT, and Tanaka equations, underestimating sodium excretion when intake was high. Conclusion: These findings do not support the use of spot urine specimens to estimate dietary sodium intake in patients with CKD and research studies enriched with patients with CKD. The parent data for this

  2. Single-cell entropy for accurate estimation of differentiation potency from a cell's transcriptome

    Science.gov (United States)

    Teschendorff, Andrew E.; Enver, Tariq

    2017-01-01

    The ability to quantify differentiation potential of single cells is a task of critical importance. Here we demonstrate, using over 7,000 single-cell RNA-Seq profiles, that differentiation potency of a single cell can be approximated by computing the signalling promiscuity, or entropy, of a cell's transcriptome in the context of an interaction network, without the need for feature selection. We show that signalling entropy provides a more accurate and robust potency estimate than other entropy-based measures, driven in part by a subtle positive correlation between the transcriptome and connectome. Signalling entropy identifies known cell subpopulations of varying potency and drug resistant cancer stem-cell phenotypes, including those derived from circulating tumour cells. It further reveals that expression heterogeneity within single-cell populations is regulated. In summary, signalling entropy allows in silico estimation of the differentiation potency and plasticity of single cells and bulk samples, providing a means to identify normal and cancer stem-cell phenotypes. PMID:28569836
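
One common formalization of signalling entropy, assumed here purely for illustration, is the entropy rate of a random walk on the interaction network with transition probabilities weighted by expression; the paper's exact normalization may differ:

```python
import numpy as np

def signalling_entropy(adj, expr):
    """Entropy rate of a random walk on the interaction network whose
    transition probabilities are weighted by the expression of the target
    gene: p_ij proportional to A_ij * x_j."""
    w = adj * expr[None, :]
    p = w / w.sum(axis=1, keepdims=True)          # row-stochastic transitions
    vals, vecs = np.linalg.eig(p.T)               # stationary distribution
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = np.abs(pi) / np.abs(pi).sum()
    logp = np.log(np.where(p > 0, p, 1.0))        # 0 * log 0 -> 0
    local = -(p * logp).sum(axis=1)               # per-node signalling entropy
    return float(pi @ local)

# toy 4-node interaction network
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
uniform = signalling_entropy(adj, np.ones(4))                       # promiscuous
biased = signalling_entropy(adj, np.array([1.0, 0.05, 0.05, 1.0]))  # committed
```

In this toy setting the uniform (promiscuous) transcriptome yields a higher entropy than the biased (committed) one, matching the intuition that potency tracks signalling promiscuity.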

  3. Fast and Accurate Video PQoS Estimation over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Emanuele Viterbo

    2008-06-01

    Full Text Available This paper proposes a curve fitting technique for fast and accurate estimation of the perceived quality of streaming media contents, delivered within a wireless network. The model accounts for the effects of various network parameters such as congestion, radio link power, and video transmission bit rate. The evaluation of the perceived quality of service (PQoS is based on the well-known VQM objective metric, a powerful technique which is highly correlated to the more expensive and time consuming subjective metrics. Currently, PQoS is used only for offline analysis after delivery of the entire video content. Thanks to the proposed simple model, we can estimate in real time the video PQoS and we can rapidly adapt the content transmission through scalable video coding and bit rates in order to offer the best perceived quality to the end users. The designed model has been validated through many different measurements in realistic wireless environments using an ad hoc WiFi test bed.

  4. Excess Molar Volumes of (Propiophenone + Toluene) and Estimated Density of Liquid Propiophenone below Its Melting Temperature

    Czech Academy of Sciences Publication Activity Database

    Morávková, Lenka; Linek, Jan

    2006-01-01

    Vol. 38, No. 10 (2006), pp. 1240-1244 ISSN 0021-9614 R&D Projects: GA ČR(CZ) GA203/02/1098 Institutional research plan: CEZ:AV0Z40720504 Keywords: density * excess volume * temperature dependence Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.842, year: 2006

  5. Accurate halo-galaxy mocks from automatic bias estimation and particle mesh gravity solvers

    Science.gov (United States)

    Vakili, Mohammadjavad; Kitaura, Francisco-Shu; Feng, Yu; Yepes, Gustavo; Zhao, Cheng; Chuang, Chia-Hsun; Hahn, ChangHoon

    2017-12-01

    Reliable extraction of cosmological information from clustering measurements of galaxy surveys requires estimation of the error covariance matrices of observables. The accuracy of covariance matrices is limited by our ability to generate a sufficiently large number of independent mock catalogues that can describe the physics of galaxy clustering across a wide range of scales. Furthermore, galaxy mock catalogues are required to study systematics in galaxy surveys and to test analysis tools. In this investigation, we present a fast and accurate approach for the generation of mock catalogues for upcoming galaxy surveys. Our method relies on low-resolution approximate gravity solvers to simulate the large-scale dark matter field, which we then populate with haloes according to a flexible non-linear and stochastic bias model. In particular, we extend the PATCHY code with an efficient particle mesh algorithm to simulate the dark matter field (the FASTPM code), and with a robust MCMC method relying on the EMCEE code for constraining the parameters of the bias model. Using the haloes in the BigMultiDark high-resolution N-body simulation as a reference catalogue, we demonstrate that our technique can model the bivariate probability distribution function (counts-in-cells), power spectrum and bispectrum of haloes in the reference catalogue. Specifically, we show that the new ingredients permit us to reach per-cent-level accuracy in the power spectrum up to k ∼ 0.4 h Mpc⁻¹ (within 5 per cent up to k ∼ 0.6 h Mpc⁻¹), with accurate bispectra improving previous results based on Lagrangian perturbation theory.
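
The power-spectrum comparison against the reference catalogue rests on a standard measurement that can be sketched as follows (minimal version, without shot-noise or mass-assignment corrections; all names are ours):

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=16):
    """Spherically averaged P(k) of an overdensity grid (no shot-noise or
    mass-assignment corrections)."""
    n = delta.shape[0]
    dk = np.fft.fftn(delta) * (box_size / n) ** 3        # FFT with cell volume
    pk3d = np.abs(dk) ** 2 / box_size ** 3               # |delta_k|^2 / V
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max() / 2, n_bins + 1)
    idx = np.digitize(kmag, bins)
    pk = np.array([pk3d.ravel()[idx == i].mean() for i in range(1, n_bins + 1)])
    kc = 0.5 * (bins[1:] + bins[:-1])
    return kc, pk

# white-noise field: P(k) should be flat at sigma^2 * V_cell
rng = np.random.default_rng(4)
n, box = 32, 100.0
delta = rng.normal(0.0, 1.0, (n, n, n))
kc, pk = power_spectrum(delta, box)
```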

  6. Modeling Site Heterogeneity with Posterior Mean Site Frequency Profiles Accelerates Accurate Phylogenomic Estimation.

    Science.gov (United States)

    Wang, Huai-Chun; Minh, Bui Quang; Susko, Edward; Roger, Andrew J

    2018-03-01

    Proteins have distinct structural and functional constraints at different sites that lead to site-specific preferences for particular amino acid residues as the sequences evolve. Heterogeneity in the amino acid substitution process between sites is not modeled by commonly used empirical amino acid exchange matrices. Such model misspecification can lead to artefacts in phylogenetic estimation such as long-branch attraction. Although sophisticated site-heterogeneous mixture models have been developed to address this problem in both Bayesian and maximum likelihood (ML) frameworks, their formidable computational time and memory usage severely limits their use in large phylogenomic analyses. Here we propose a posterior mean site frequency (PMSF) method as a rapid and efficient approximation to full empirical profile mixture models for ML analysis. The PMSF approach assigns a conditional mean amino acid frequency profile to each site calculated based on a mixture model fitted to the data using a preliminary guide tree. These PMSF profiles can then be used for in-depth tree-searching in place of the full mixture model. Compared with widely used empirical mixture models with $k$ classes, our implementation of PMSF in IQ-TREE (http://www.iqtree.org) speeds up the computation by approximately $k$/1.5-fold and requires a small fraction of the RAM. Furthermore, this speedup allows, for the first time, full nonparametric bootstrap analyses to be conducted under complex site-heterogeneous models on large concatenated data matrices. Our simulations and empirical data analyses demonstrate that PMSF can effectively ameliorate long-branch attraction artefacts. In some empirical and simulation settings PMSF provided more accurate estimates of phylogenies than the mixture models from which they derive.
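The central PMSF step, assigning each site the posterior-weighted mean of the mixture-class frequency profiles, can be sketched as follows (toy class likelihoods and 4-state profiles; in practice the likelihoods come from fitting the mixture model on a preliminary guide tree):

```python
def pmsf_profile(site_class_likelihoods, class_weights, class_profiles):
    """Posterior mean site frequency profile for one alignment site.

    site_class_likelihoods[c] -- likelihood of the site under class c
                                 (computed on the guide tree in practice)
    class_weights[c]          -- mixture weight of class c
    class_profiles[c]         -- amino-acid frequency vector of class c
    """
    post = [w * l for w, l in zip(class_weights, site_class_likelihoods)]
    total = sum(post)
    post = [p / total for p in post]  # posterior class probabilities
    n_states = len(class_profiles[0])
    return [sum(p * prof[a] for p, prof in zip(post, class_profiles))
            for a in range(n_states)]

# Two toy classes over a 4-state alphabet:
profiles = [[0.7, 0.1, 0.1, 0.1], [0.1, 0.1, 0.1, 0.7]]
site_profile = pmsf_profile([0.9, 0.1], [0.5, 0.5], profiles)
```

The resulting per-site profiles replace the full mixture during tree search, which is where the reported k/1.5-fold speedup comes from.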

  7. A Trace Data-Based Approach for an Accurate Estimation of Precise Utilization Maps in LTE

    Directory of Open Access Journals (Sweden)

    Almudena Sánchez

    2017-01-01

    For network planning and optimization purposes, mobile operators make use of Key Performance Indicators (KPIs), computed from Performance Measurements (PMs), to determine whether network performance needs to be improved. In current networks, PMs, and therefore KPIs, suffer from a lack of precision due to insufficient temporal and/or spatial granularity. In this work, an automatic method, based on data traces, is proposed to improve the accuracy of radio network utilization measurements collected in a Long-Term Evolution (LTE) network. The method's output is an accurate estimate of the spatial and temporal distribution of the cell utilization ratio that can be extended to other indicators. The method can be used to improve automatic network planning and optimization algorithms in a centralized Self-Organizing Network (SON) entity, since potential issues can be more precisely detected and located inside a cell thanks to the temporal and spatial precision. The proposed method is tested with real connection traces gathered in a large geographical area of a live LTE network and considers overload problems due to trace file size limitations, which is a key consideration when analysing a large network. Results show how these distributions provide very detailed information on network utilization, compared to cell-based statistics.
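A minimal sketch of the kind of aggregation such a method performs, binning per-connection trace samples into a spatial-temporal utilization grid, is shown below; the trace fields, bin sizes, and PRB-counter names are illustrative, not the paper's actual trace format:

```python
from collections import defaultdict

def utilization_map(traces, cell_m=50.0, slot_s=900.0):
    """Aggregate trace samples (x, y, t, used_prbs, total_prbs) into a
    spatial-temporal utilization grid keyed by (x_bin, y_bin, time_slot).
    Field names are illustrative stand-ins for real LTE radio counters."""
    used = defaultdict(float)
    avail = defaultdict(float)
    for x, y, t, u, tot in traces:
        key = (int(x // cell_m), int(y // cell_m), int(t // slot_s))
        used[key] += u
        avail[key] += tot
    return {k: used[k] / avail[k] for k in avail if avail[k] > 0}

traces = [(10.0, 10.0, 0.0, 50.0, 100.0),
          (20.0, 20.0, 10.0, 30.0, 100.0),
          (400.0, 10.0, 0.0, 10.0, 100.0)]
grid = utilization_map(traces)
```

Each grid value is a utilization ratio for one 50 m x 50 m bin in one 15-minute slot, which is exactly the finer-than-cell granularity the abstract argues for.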

  8. How accurately can we estimate energetic costs in a marine top predator, the king penguin?

    Science.gov (United States)

    Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J

    2007-01-01

    King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate-rate of oxygen consumption (fH-VO2) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present fH-VO2 equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the fH-VO2 technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of VO2 from published, field fH data. The major conclusions from the present study are: (1) in contrast to that for walking, the fH-VO2 relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(VO2) = -0.279 + 1.24·log(fH) + 0.0237·t - 0.0157·log(fH)·t, derived in a previous study, is the most suitable equation presently available for estimating VO2 in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an fH-VO2 relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of fH-VO2 prediction equations, is explained.
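Prediction equation (1) can be applied directly. The sketch below assumes base-10 logarithms and the units and definition of the covariate t used in the original study:

```python
import math

def predict_vo2(f_h, t):
    """Rate of oxygen consumption from heart rate via equation (1):
        log(VO2) = -0.279 + 1.24*log(fH) + 0.0237*t - 0.0157*log(fH)*t
    Base-10 logarithms and the original study's units are assumed."""
    lf = math.log10(f_h)
    return 10.0 ** (-0.279 + 1.24 * lf + 0.0237 * t - 0.0157 * lf * t)

vo2_estimate = predict_vo2(200.0, 8.0)
```

Note the interaction term: the slope of log(VO2) against log(fH) shrinks as t grows, so the two covariates cannot be applied independently.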

  9. Effectiveness of prediction equations in estimating energy expenditure sample of Brazilian and Spanish women with excess body weight

    OpenAIRE

    Lopes Rosado, Eliane; Santiago de Brito, Roberta; Bressan, Josefina; Martínez Hernández, José Alfredo

    2014-01-01

    Objective: To assess the adequacy of predictive equations for estimation of energy expenditure (EE), compared with EE measured by indirect calorimetry, in a sample of Brazilian and Spanish women with excess body weight. Methods: It is a cross-sectional study with 92 obese adult women [26 Brazilian -G1- and 66 Spanish -G2- (aged 20-50)]. Weight and height were evaluated during fasting for the calculation of body mass index and predictive equations. EE was evaluated using the open-circuit indirect...

  10. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  11. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer

    2016-01-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using...

  12. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    Directory of Open Access Journals (Sweden)

    Manuel Gil

    2014-09-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  13. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    Science.gov (United States)

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.
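The conjecture relates the covariance of two distance estimates to the length of the tree path they share. A sketch of that shared-path-length computation on a toy rooted tree follows; the parent-map encoding is illustrative, and the full estimator adds model-dependent scaling not shown here:

```python
def path_edges(parent, a, b):
    """Edges (identified by their child node) on the leaf-to-leaf path
    between a and b, where parent maps node -> (parent_node, branch_len)."""
    def ancestors(n):
        chain = [n]
        while n in parent:
            n = parent[n][0]
            chain.append(n)
        return chain
    anc_a, anc_b = ancestors(a), ancestors(b)
    set_b = set(anc_b)
    mrca = next(n for n in anc_a if n in set_b)  # most recent common ancestor
    edges = set()
    for node, anc in ((a, anc_a), (b, anc_b)):
        for n in anc:
            if n == mrca:
                break
            edges.add(n)
    return edges

def shared_path_length(parent, pair1, pair2):
    """Total branch length common to the two leaf-to-leaf paths."""
    common = path_edges(parent, *pair1) & path_edges(parent, *pair2)
    return sum(parent[n][1] for n in common)

# Toy rooted tree: R -> U (0.5), U -> A (1.0), U -> B (2.0), R -> C (3.0)
parent = {'U': ('R', 0.5), 'A': ('U', 1.0), 'B': ('U', 2.0), 'C': ('R', 3.0)}
```

Under the conjecture, a longer shared segment means more shared mutation events and therefore a larger covariance between the two distance estimates.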

  14. Development of Star Tracker System for Accurate Estimation of Spacecraft Attitude

    Science.gov (United States)

    2009-12-01

    For a high-cost spacecraft with accurate pointing requirements, the use of a star tracker is the preferred method for attitude determination. The...solutions, however there are certain costs with using this algorithm. There are significantly more features a triangle can provide when compared to an...to the other. The non-rotating geocentric equatorial frame provides an inertial frame for the two-body problem of a satellite in orbit. In this

  15. Improved Patient Size Estimates for Accurate Dose Calculations in Abdomen Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang-Lae [Yonsei University, Wonju (Korea, Republic of)

    2017-07-15

    The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient doses for different human body sizes because it relies on a cylinder-shaped head (diameter : 16 cm) and body (diameter : 32 cm) phantom. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with the results of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to the effective diameter of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, but the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on the patient sizes. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
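An attenuation-based effective diameter can be sketched as a water-equivalent diameter derived from a slice's CT numbers; the (HU/1000 + 1) scaling below is the standard water-equivalent-area convention and is an assumption about the paper's exact calibration:

```python
import math

def effective_diameter_from_ct(hu_image, pixel_area_cm2):
    """Water-equivalent diameter (cm) of one axial CT slice.

    Each pixel contributes a water-equivalent area scaled by its CT
    number: A_w = sum(HU/1000 + 1) * pixel_area; the effective diameter
    is that of a water circle with the same area."""
    a_w = sum((hu / 1000.0 + 1.0) * pixel_area_cm2
              for row in hu_image for hu in row)
    return 2.0 * math.sqrt(a_w / math.pi)

water = [[0] * 10 for _ in range(10)]      # 100 water pixels of 1 cm^2
air = [[-1000] * 10 for _ in range(10)]    # air contributes nothing
d_water = effective_diameter_from_ct(water, 1.0)
d_air = effective_diameter_from_ct(air, 1.0)
```

Because air pixels (-1000 HU) contribute zero area, this measure tracks the attenuating tissue rather than the geometric outline, which is why it agrees better with the reconstructed-image-based result than the purely geometric diameter.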

  16. An accurate estimation and optimization of bottom hole back pressure in managed pressure drilling

    Directory of Open Access Journals (Sweden)

    Boniface Aleruchi ORIJI

    2017-06-01

    Managed Pressure Drilling (MPD) utilizes a method of applying back pressure to compensate for wellbore pressure losses during drilling. Using a single rheological (Annular Frictional Pressure Losses, AFPL) model to estimate the backpressure in MPD operations for all sections of the well may not yield the best result. Each section of the hole was therefore treated independently in this study as data from a case study well were used. As the backpressure is a function of hydrostatic pressure, pore pressure and AFPL, three AFPL models (Bingham plastic, Power law and Herschel-Bulkley) were utilized in estimating the backpressure. The estimated backpressure values were compared to the actual field backpressure values in order to obtain the optimum backpressure at the various well depths. The backpressure values estimated by utilizing the Power law AFPL model gave the best result for the 12 1/4" hole section (average error of 1.855%) while the backpressures estimated by utilizing the Herschel-Bulkley AFPL model gave the best result for the 8 1/2" hole section (average error of 12.3%). The study showed that for hole sections with turbulent annular flow, the Power law AFPL model fits best for estimating the required backpressure, while for hole sections with laminar annular flow, the Herschel-Bulkley AFPL model fits best.
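As an illustration of one of the AFPL models, a common API field-unit form of the laminar power-law annular pressure-loss equation is sketched below; the coefficients are the textbook formulation and should be checked against whichever formulation the study actually used:

```python
def powerlaw_annular_dp(v_ftmin, d2_in, d1_in, n, k, length_ft):
    """Laminar annular frictional pressure loss (psi) for a power-law
    fluid in the common API field-unit form:
        dP = ((2.4*v/(d2-d1)) * ((2n+1)/(3n)))**n * (K*L / (300*(d2-d1)))
    with v in ft/min, diameters in inches, K in lbf*s^n/100ft^2.
    Treat the coefficients as an assumption to verify against the
    study's AFPL model."""
    gap = d2_in - d1_in
    shear_term = (2.4 * v_ftmin / gap) * ((2.0 * n + 1.0) / (3.0 * n))
    return (shear_term ** n) * (k * length_ft) / (300.0 * gap)

dp_slow = powerlaw_annular_dp(150.0, 12.25, 5.0, 0.6, 1.2, 10000.0)
dp_fast = powerlaw_annular_dp(300.0, 12.25, 5.0, 0.6, 1.2, 10000.0)
```

In MPD the required surface backpressure then follows from the targeted bottom-hole pressure minus the hydrostatic head and the AFPL term, which is why the choice of AFPL model drives the backpressure estimate.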

  17. Distortion of online reputation by excess reciprocity: quantification and estimation of unbiased reputation

    Science.gov (United States)

    Aste, Tomaso; Livan, Giacomo; Caccioli, Fabio

    The peer-to-peer (P2P) economy relies on establishing trust in distributed networked systems, where the reliability of a user is assessed through digital peer-review processes that aggregate ratings into reputation scores. Here we present evidence of a network effect which biases digital reputations, revealing that P2P networks display exceedingly high levels of reciprocity. In fact, these are so large that they are close to the highest levels structurally compatible with the networks' reputation landscape. This indicates that the crowdsourcing process underpinning digital reputation is significantly distorted by the attempt of users to mutually boost reputation, or to retaliate, through the exchange of ratings. We uncover that the least active users are predominantly responsible for such reciprocity-induced bias, and that this fact can be exploited to obtain more reliable reputation estimates. Our findings are robust across different P2P platforms, including both cases where ratings are used to vote on the content produced by users and to vote on user profiles.

  18. Are rapid population estimates accurate? A field trial of two different assessment methods.

    Science.gov (United States)

    Grais, Rebecca F; Coulombier, Denis; Ampuero, Julia; Lucas, Marcelino E S; Barretto, Avertino T; Jacquier, Guy; Diaz, Francisco; Balandine, Serge; Mahoudeau, Claude; Brown, Vincent

    2006-09-01

    Emergencies resulting in large-scale displacement often lead to populations resettling in areas where basic health services and sanitation are unavailable. To plan relief-related activities quickly, rapid population size estimates are needed. The currently recommended Quadrat method estimates total population by extrapolating the average population size living in square blocks of known area to the total site surface. An alternative approach, the T-Square, provides a population estimate based on analysis of the spatial distribution of housing units taken throughout a site. We field tested both methods and validated the results against a census in Esturro Bairro, Beira, Mozambique. Compared to the census (population: 9,479), the T-Square yielded a better population estimate (9,523) than the Quadrat method (7,681; 95% confidence interval: 6,160-9,201), but was more difficult for field survey teams to implement. Although applicable only to similar sites, several general conclusions can be drawn for emergency planning.
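The T-Square population estimate rests on a distance-based density estimator. The sketch below uses Byth's commonly cited form of the T-square density estimator; the constant is an assumption to verify against the survey protocol before any field use:

```python
import math

def tsquare_density(x_dists, z_dists):
    """Byth's T-square density estimator (assumed form):
        D = n**2 / (2 * sum(x) * sqrt(2) * sum(z))
    where x_i is the distance from random point i to the nearest housing
    unit and z_i is the T-square nearest-neighbour distance measured
    from that unit away from the random point."""
    n = len(x_dists)
    return n * n / (2.0 * sum(x_dists) * math.sqrt(2.0) * sum(z_dists))

x_obs = [3.0, 4.0, 5.0]   # point-to-nearest-house distances (m)
z_obs = [2.0, 3.0, 4.0]   # T-square house-to-house distances (m)
density = tsquare_density(x_obs, z_obs)
```

The total population estimate then multiplies this housing-unit density by the site area and the mean number of occupants per unit, both of which must be measured separately.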

  19. Fast, accurate, and robust frequency offset estimation based on modified adaptive Kalman filter in coherent optical communication system

    Science.gov (United States)

    Yang, Yanfu; Xiang, Qian; Zhang, Qun; Zhou, Zhongqing; Jiang, Wen; He, Qianwen; Yao, Yong

    2017-09-01

    We propose a joint estimation scheme for fast, accurate, and robust frequency offset (FO) estimation along with phase estimation based on a modified adaptive Kalman filter (MAKF). The scheme consists of three key modules: an extended Kalman filter (EKF), a lock detector, and FO cycle slip recovery. The EKF module estimates the time-varying phase induced by both FO and laser phase noise. The lock detector module makes the decision between acquisition mode and tracking mode and consequently sets the EKF tuning parameter in an adaptive manner. The third module can detect possible cycle slips in the case of large FO and make proper corrections. Based on the simulation and experimental results, the proposed MAKF has shown excellent estimation performance featuring high accuracy, fast convergence, as well as the capability of cycle slip recovery.
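A stripped-down version of the Kalman phase-tracking idea (without the lock detector and cycle-slip modules of the full MAKF) can be sketched for unit-amplitude pilots; the process and measurement noise settings are illustrative:

```python
import cmath

def track_phase(samples, q=1e-3, r=0.1):
    """Minimal scalar Kalman phase tracker for unit-amplitude pilots
    z_k = exp(j*theta_k) + noise, with a random-walk phase model.
    A simplified stand-in for the paper's MAKF."""
    theta, p = 0.0, 1.0
    estimates = []
    for z in samples:
        p += q                                        # predict covariance
        innov = cmath.phase(z * cmath.exp(-1j * theta))  # wrapped residual
        gain = p / (p + r)                            # Kalman gain
        theta += gain * innov                         # update state
        p *= (1.0 - gain)                             # update covariance
        estimates.append(theta)
    return estimates

pilots = [cmath.exp(1j * 1.0)] * 300   # constant 1-rad phase, noise-free
est = track_phase(pilots)
```

Wrapping the innovation to (-pi, pi] is what makes the tracker immune to small phase excursions but also what makes cycle slips possible for a large FO, motivating the dedicated recovery module in the paper.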

  20. Accurate and robust phylogeny estimation based on profile distances: a study of the Chlorophyceae (Chlorophyta)

    Directory of Open Access Journals (Sweden)

    Rahmann Sven

    2004-06-01

    Background: In phylogenetic analysis we face the problem that several subclade topologies are known or easily inferred and well supported by bootstrap analysis, but basal branching patterns cannot be unambiguously estimated by the usual methods (maximum parsimony (MP), neighbor-joining (NJ), or maximum likelihood (ML)), nor are they well supported. We represent each subclade by a sequence profile and estimate evolutionary distances between profiles to obtain a matrix of distances between subclades. Results: Our estimator of profile distances generalizes the maximum likelihood estimator of sequence distances. The basal branching pattern can be estimated by any distance-based method, such as neighbor-joining. Our method (profile neighbor-joining, PNJ) then inherits the accuracy and robustness of profiles and the time efficiency of neighbor-joining. Conclusions: Phylogenetic analysis of Chlorophyceae with traditional methods (MP, NJ, ML and MrBayes) reveals seven well supported subclades, but the methods disagree on the basal branching pattern. The tree reconstructed by our method is better supported and can be confirmed by known morphological characters. Moreover the accuracy is significantly improved, as shown by parametric bootstrap.

  1. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    Science.gov (United States)

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  2. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre

  3. How to efficiently obtain accurate estimates of flower visitation rates by pollinators

    NARCIS (Netherlands)

    Fijen, Thijs P.M.; Kleijn, David

    2017-01-01

    Regional declines in insect pollinators have raised concerns about crop pollination. Many pollinator studies use visitation rate (pollinators/time) as a proxy for the quality of crop pollination. Visitation rate estimates are based on observation durations that vary significantly between studies.

  4. Accurate estimation of influenza epidemics using Google search data via ARGO.

    Science.gov (United States)

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
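ARGO combines autoregressive lags of flu activity with exogenous search-volume regressors (with L1 regularization in the paper). A deliberately tiny stand-in, one lag plus one search term fitted by ordinary least squares, sketches the structure; all data below are synthetic:

```python
def solve(a, b):
    """Gauss-Jordan elimination with partial pivoting for small systems."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(n):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_argo_like(y, x):
    """OLS fit of y_t ~ 1 + y_{t-1} + x_t: a two-regressor stand-in for
    ARGO's 52 autoregressive lags plus many search terms (which the
    paper fits with L1 regularization, not plain OLS)."""
    rows = [[1.0, y[t - 1], x[t]] for t in range(1, len(y))]
    targets = y[1:]
    p = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)]
           for i in range(p)]
    xty = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(p)]
    return solve(xtx, xty)

# Synthetic flu series driven by a weekly search-volume cycle:
x = [float((t % 7) + 1) for t in range(60)]
y = [1.0]
for t in range(1, 60):
    y.append(0.5 * y[-1] + 2.0 * x[t])
b0, b1, b2 = fit_argo_like(y, x)
```

On this noise-free series the fit recovers the generating coefficients exactly; with real, noisy search data the regularized version is what keeps the many-regressor model stable.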

  5. A Time--Independent Born--Oppenheimer Approximation with Exponentially Accurate Error Estimates

    CERN Document Server

    Hagedorn, G A

    2004-01-01

    We consider a simple molecular-type quantum system in which the nuclei have one degree of freedom and the electrons have two levels. The Hamiltonian has the form \[ H(\epsilon) = -\frac{\epsilon^4}{2}\,\frac{\partial^2}{\partial y^2} + h(y), \] where $h(y)$ is a $2\times 2$ real symmetric matrix. Near a local minimum of an electron level ${\cal E}(y)$ that is not at a level crossing, we construct quasimodes that are exponentially accurate in the square of the Born-Oppenheimer parameter $\epsilon$ by optimal truncation of the Rayleigh-Schrödinger series. That is, we construct $E_\epsilon$ and $\Psi_\epsilon$, such that $\|\Psi_\epsilon\| = O(1)$ and $\|(H(\epsilon) - E_\epsilon)\,\Psi_\epsilon\| \le \Lambda\, e^{-\Gamma/\epsilon^2}$ for some $\Gamma > 0$.

  6. Accurate estimation of dose distributions inside an eye irradiated with {sup 106}Ru plaques

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, L.; Sauerwein, W. [Universitaetsklinikum Essen (Germany). NCTeam, Strahlenklinik; Sempau, J.; Zaragoza, F.J. [Universitat Politecnica de Catalunya, Barcelona (Spain). Inst. de Tecniques Energetiques; Wittig, A. [Marburg Univ. (Germany). Klinik fuer Strahlentherapie und Radioonkologie

    2013-01-15

    Background: Irradiation of intraocular tumors requires dedicated techniques, such as brachytherapy with {sup 106}Ru plaques. The currently available treatment planning system relies on the assumption that the eye is a homogeneous water sphere and on simplified radiation transport physics. However, accurate dose distributions and their assessment demand better models for both the eye and the physics. Methods: The Monte Carlo code PENELOPE, conveniently adapted to simulate the beta decay of {sup 106}Ru over {sup 106}Rh into {sup 106}Pd, was used to simulate radiation transport based on a computerized tomography scan of a patient's eye. A detailed geometrical description of two plaques (models CCA and CCB) from the manufacturer BEBIG was embedded in the computerized tomography scan. Results: The simulations were firstly validated by comparison with experimental results in a water phantom. Dose maps were computed for three plaque locations on the eyeball. From these maps, isodose curves and cumulative dose-volume histograms in the eye and for the structures at risk were assessed. For example, it was observed that a 4-mm anterior displacement with respect to a posterior placement of a CCA plaque for treating a posterior tumor would reduce from 40 to 0% the volume of the optic disc receiving more than 80 Gy. Such a small difference in anatomical position leads to a change in the dose that is crucial for side effects, especially with respect to visual acuity. The radiation oncologist has to bring these large changes in absorbed dose in the structures at risk to the attention of the surgeon, especially when the plaque has to be positioned close to relevant tissues. Conclusion: The detailed geometry of an eye plaque in computerized and segmented tomography of a realistic patient phantom was simulated accurately. Dose-volume histograms for relevant anatomical structures of the eye and the orbit were obtained with unprecedented accuracy. This represents an important step

  7. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    Science.gov (United States)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, make it vulnerable to spoilage at all stages of production and storage even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study, the age was predicted with a mean error of ~ 1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology has resulted in a prediction of the sample age far more accurately than any report in the literature.

  8. CUFID-query: accurate network querying through random walk based network flow estimation.

    Science.gov (United States)

    Jeong, Hyundoo; Qian, Xiaoning; Yoon, Byung-Jun

    2017-12-28

    Functional modules in biological networks consist of numerous biomolecules and their complicated interactions. Recent studies have shown that biomolecules in a functional module tend to have similar interaction patterns and that such modules are often conserved across biological networks of different species. As a result, such conserved functional modules can be identified through comparative analysis of biological networks. In this work, we propose a novel network querying algorithm based on the CUFID (Comparative network analysis Using the steady-state network Flow to IDentify orthologous proteins) framework combined with an efficient seed-and-extension approach. The proposed algorithm, CUFID-query, can accurately detect conserved functional modules as small subnetworks in the target network that are expected to perform similar functions to the given query functional module. The CUFID framework was recently developed for probabilistic pairwise global comparison of biological networks, and it has been applied to pairwise global network alignment, where the framework was shown to yield accurate network alignment results. In the proposed CUFID-query algorithm, we adopt the CUFID framework and extend it for local network alignment, specifically to solve network querying problems. First, in the seed selection phase, the proposed method utilizes the CUFID framework to compare the query and the target networks and to predict the probabilistic node-to-node correspondence between the networks. Next, the algorithm selects and greedily extends the seed in the target network by iteratively adding nodes that have frequent interactions with other nodes in the seed network, in a way that the conductance of the extended network is maximally reduced. Finally, CUFID-query removes irrelevant nodes from the querying results based on the personalized PageRank vector for the induced network that includes the fully extended network and its neighboring nodes. Through extensive

  9. Accurate estimation of dose distributions inside an eye irradiated with 106Ru plaques

    International Nuclear Information System (INIS)

    Brualla, L.; Sauerwein, W.; Sempau, J.; Zaragoza, F.J.; Wittig, A.

    2013-01-01

    Background: Irradiation of intraocular tumors requires dedicated techniques, such as brachytherapy with 106Ru plaques. The currently available treatment planning system relies on the assumption that the eye is a homogeneous water sphere and on simplified radiation transport physics. However, accurate dose distributions and their assessment demand better models for both the eye and the physics. Methods: The Monte Carlo code PENELOPE, conveniently adapted to simulate the beta decay of 106Ru over 106Rh into 106Pd, was used to simulate radiation transport based on a computerized tomography scan of a patient's eye. A detailed geometrical description of two plaques (models CCA and CCB) from the manufacturer BEBIG was embedded in the computerized tomography scan. Results: The simulations were firstly validated by comparison with experimental results in a water phantom. Dose maps were computed for three plaque locations on the eyeball. From these maps, isodose curves and cumulative dose-volume histograms in the eye and for the structures at risk were assessed. For example, it was observed that a 4-mm anterior displacement with respect to a posterior placement of a CCA plaque for treating a posterior tumor would reduce from 40 to 0% the volume of the optic disc receiving more than 80 Gy. Such a small difference in anatomical position leads to a change in the dose that is crucial for side effects, especially with respect to visual acuity. The radiation oncologist has to bring these large changes in absorbed dose in the structures at risk to the attention of the surgeon, especially when the plaque has to be positioned close to relevant tissues. Conclusion: The detailed geometry of an eye plaque in computerized and segmented tomography of a realistic patient phantom was simulated accurately. Dose-volume histograms for relevant anatomical structures of the eye and the orbit were obtained with unprecedented accuracy. This represents an important step toward an optimized

  10. Plant DNA barcodes can accurately estimate species richness in poorly known floras.

    Directory of Open Access Journals (Sweden)

    Craig Costion

    Full Text Available BACKGROUND: Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. METHODOLOGY/PRINCIPAL FINDINGS: Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation, observed in some angiosperm families, that occurs as an inversion that obscures the monophyly of species. CONCLUSIONS/SIGNIFICANCE: We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways.

  11. Optimization of Photospheric Electric Field Estimates for Accurate Retrieval of Total Magnetic Energy Injection

    Science.gov (United States)

    Lumme, E.; Pomoell, J.; Kilpua, E. K. J.

    2017-12-01

    Estimates of the photospheric magnetic, electric, and plasma velocity fields are essential for studying the dynamics of the solar atmosphere, for example through the derivative quantities of Poynting and relative helicity flux and using the fields to obtain the lower boundary condition for data-driven coronal simulations. In this paper we study the performance of a data processing and electric field inversion approach that requires only high-resolution and high-cadence line-of-sight or vector magnetograms, which we obtain from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). The approach does not require any photospheric velocity estimates, and the missing velocity information is compensated for using ad hoc assumptions. We show that the free parameters of these assumptions can be optimized to reproduce the time evolution of the total magnetic energy injection through the photosphere in NOAA AR 11158, when compared to recent state-of-the-art estimates for this active region. However, we find that the relative magnetic helicity injection is reproduced poorly, reaching at best a modest underestimation. We also discuss the effect of some of the data processing details on the results, including the masking of the noise-dominated pixels and the tracking method of the active region, neither of which has received much attention in the literature so far. In most cases the effect of these details is small, but when the optimization of the free parameters of the ad hoc assumptions is considered, a consistent use of the noise mask is required. The results found in this paper imply that the data processing and electric field inversion approach that uses only the photospheric magnetic field information offers a flexible and straightforward way to obtain photospheric magnetic and electric field estimates suitable for practical applications such as coronal modeling studies.

  12. Accurate estimation of camera shot noise in the real-time

    Science.gov (United States)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and videocameras is the noise of photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise comprises the random component, while spatial noise comprises the pattern component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. The most widely used approaches for measuring camera noise characteristics are standards (for example, EMVA Standard 1288). These allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of temporal noise of photo- and videocameras. It is based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated shot and dark temporal noises of cameras consistently in real time. The modified ASNT method is used. Estimation was performed for the cameras: consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. Time of registering and processing of frames used for temporal noise estimation was measured. Using a standard computer, frames were registered and processed during a fraction of a second to several seconds only. Also the
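The two-frame idea behind this kind of measurement can be illustrated with a simple simulation: subtracting two registered frames of the same static scene cancels the fixed pattern noise, leaving twice the temporal-noise variance, and for Poisson shot noise that variance scales with the signal. A hedged numpy sketch with assumed signal and dark-noise levels (not the authors' ASNT algorithm or measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two frames of the same static scene: Poisson shot noise on the
# signal plus Gaussian dark temporal noise (illustrative parameters).
signal = 200.0          # mean signal, electrons
dark_sigma = 3.0        # dark temporal noise, electrons
n_px = 100_000
f1 = rng.poisson(signal, n_px) + rng.normal(0.0, dark_sigma, n_px)
f2 = rng.poisson(signal, n_px) + rng.normal(0.0, dark_sigma, n_px)

# The frame difference cancels pattern (spatial) noise; its variance is
# twice the temporal-noise variance, hence the division by 2.
temporal_var = np.var(f1 - f2) / 2.0
# For Poisson shot noise, variance ~ signal, so subtracting the dark term
# recovers the shot-noise variance (~200 here).
shot_var_est = temporal_var - dark_sigma**2
```

Repeating this at several signal levels gives the variance-versus-signal line whose slope characterizes the shot noise.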

  13. Plant DNA barcodes can accurately estimate species richness in poorly known floras.

    Science.gov (United States)

    Costion, Craig; Ford, Andrew; Cross, Hugh; Crayn, Darren; Harrington, Mark; Lowe, Andrew

    2011-01-01

    Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation, observed in some angiosperm families, that occurs as an inversion that obscures the monophyly of species. We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways.

  14. The Remote Food Photography Method accurately estimates dry powdered foods—the source of calories for many infants

    Science.gov (United States)

    Duhé, Abby F.; Gilmore, L. Anne; Burton, Jeffrey H.; Martin, Corby K.; Redman, Leanne M.

    2016-01-01

    Background Infant formula is a major source of nutrition for infants, with over half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used prior to reconstitution. Objective To assess the use of the Remote Food Photography Method (RFPM) to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. Methods For each serving size (1-scoop, 2-scoop, 3-scoop, and 4-scoop), a set of seven test bottles and photographs was prepared, including the recommended gram weight of powdered formula for the respective serving size by the manufacturer, three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended, and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard RFPM analysis procedures. The ratio estimates and the United States Department of Agriculture (USDA) data tables were used to generate food and nutrient information to provide the RFPM estimates. Statistical Analyses Performed Equivalence testing using the two one-sided t-tests (TOST) approach was used to determine equivalence between the actual gram weights and the RFPM estimated weights for all samples, within each serving size, and within under-prepared and over-prepared bottles. Results For all bottles, the gram weights estimated by the RFPM were within 5% equivalence bounds with a slight under-estimation of 0.05 g (90% CI [−0.49, 0.40]; p<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. Conclusion The maximum observed mean error was an overestimation of 1.58% of powdered formula by the RFPM under

  15. A practical way to estimate retail tobacco sales violation rates more accurately.

    Science.gov (United States)

    Levinson, Arnold H; Patnaik, Jennifer L

    2013-11-01

    U.S. states annually estimate retailer propensity to sell adolescents cigarettes, which is a violation of law, by staging a single purchase attempt among a random sample of tobacco businesses. The accuracy of single-visit estimates is unknown. We examined this question using a novel test-retest protocol. Supervised minors attempted to purchase cigarettes at all retail tobacco businesses located in 3 Colorado counties. The attempts followed federal standards: Minors were aged 15-16 years, were nonsmokers, were free of visible tattoos and piercings, and were allowed to enter stores alone or in pairs, to purchase a small item while asking for cigarettes, and to show or not show genuine identification (ID, e.g., driver's license). Unlike federal standards, stores received a second purchase attempt within a few days unless minors were firmly told not to return. Separate violation rates were calculated for first visits, second visits, and either visit. Eleven minors attempted to purchase cigarettes 1,079 times from 671 retail businesses. One sixth of first visits (16.8%) resulted in a violation; the rate was similar for second visits (15.7%). Considering either visit, 25.3% of businesses failed the test. Factors predictive of violation were whether clerks asked for ID, whether the clerks closely examined IDs, and whether minors included snacks or soft drinks in cigarette purchase attempts. A test-retest protocol for estimating underage cigarette sales detected half again as many businesses in violation as the federally approved one-test protocol. Federal policy makers should consider using the test-retest protocol to increase accuracy and awareness of widespread adolescent access to cigarettes through retail businesses.
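The three rates compared above (first visit, second visit, either visit) follow directly from per-business visit outcomes, and the "either visit" rate is why the test-retest protocol detects more violators. A minimal sketch with made-up outcomes:

```python
# Each business is a (first_visit_violation, second_visit_violation) pair;
# the outcomes below are invented to illustrate the arithmetic.
visits = [(True, False), (False, True), (False, False), (True, True),
          (False, False), (False, False), (False, False), (False, False)]

first_rate  = sum(a for a, _ in visits) / len(visits)   # 0.25
second_rate = sum(b for _, b in visits) / len(visits)   # 0.25
either_rate = sum(a or b for a, b in visits) / len(visits)  # 0.375
```

As in the study, the either-visit rate exceeds each single-visit rate because stores that fail on only one of the two visits are still counted as violators.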

  16. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion. Flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
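The test-and-correct workflow described above can be sketched without a statistics package: fit a Poisson rate model by iteratively reweighted least squares (IRLS), compute the Pearson dispersion statistic, and, in the quasi-likelihood spirit, scale standard errors by its square root when it exceeds 1. Everything below is simulated for illustration; it is not the authors' score test or dataset:

```python
import numpy as np

def poisson_irls(X, y, offset, n_iter=50):
    """Poisson regression with log link and a log person-time offset (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta + offset)
        W = mu                                   # Poisson working weights
        z = X @ beta + (y - mu) / mu             # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta, np.exp(X @ beta + offset)

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
offset = np.log(rng.uniform(50, 150, n))         # person-time at risk
# Overdispersed counts: gamma-mixed Poisson (i.e. negative binomial)
mu_true = np.exp(X @ np.array([-4.0, 0.3]) + offset)
y = rng.poisson(mu_true * rng.gamma(2.0, 0.5, n))

beta, mu = poisson_irls(X, y, offset)
# Pearson dispersion: values well above 1 signal overdispersion; the
# quasi-likelihood correction simply inflates standard errors by sqrt(phi).
phi = np.sum((y - mu) ** 2 / mu) / (n - X.shape[1])
```

Here `phi` comes out well above 1 because the counts were generated with extra-Poisson variability, while the rate coefficient itself is still recovered consistently.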

  17. A statistical regression model for the estimation of acrylamide concentrations in French fries for excess lifetime cancer risk assessment.

    Science.gov (United States)

    Chen, Ming-Jen; Hsu, Hui-Tsung; Lin, Cheng-Li; Ju, Wei-Yuan

    2012-10-01

    Human exposure to acrylamide (AA) through consumption of French fries and other foods has been recognized as a potential health concern. Here, we used a statistical non-linear regression model, based on the two most influential factors, cooking temperature and time, to estimate AA concentrations in French fries. The R² of the predictive model is 0.83, suggesting the developed model was significant and valid. Based on French fry intake survey data collected in this study and eight frying temperature-time schemes that can produce tasty and visually appealing French fries, the Monte Carlo simulation results showed that if the AA concentration is higher than 168 ppb, the estimated cancer risk for adolescents aged 13-18 years in Taichung City would already be higher than the target excess lifetime cancer risk (ELCR), even taking into account only this limited life span. In order to reduce the cancer risk associated with AA intake, the AA levels in French fries might have to be reduced even further if the epidemiological observations are valid. Our mathematical model can serve as a basis for further investigations of ELCR across different life stages, behaviors, and population groups. Copyright © 2012 Elsevier Ltd. All rights reserved.
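A Monte Carlo ELCR calculation of the kind described can be sketched as follows. Every distribution and parameter here (concentration, intake, body weight, exposure window, slope factor) is an assumed placeholder, not the paper's fitted model or survey data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Assumed input distributions (illustrative only):
conc = rng.lognormal(mean=np.log(168), sigma=0.4, size=n)   # AA, ppb = ng/g
intake = rng.lognormal(mean=np.log(40), sigma=0.5, size=n)  # g French fries/day
bw = rng.normal(55, 8, n).clip(35, None)                    # body weight, kg

# Lifetime average daily dose over an assumed 6-year adolescent exposure
# window averaged across a 70-year lifetime, in mg/kg/day.
ladd = conc * 1e-6 * intake / bw * (6.0 / 70.0)
sf = 0.51           # oral slope factor, (mg/kg/day)^-1 -- assumed value
elcr = ladd * sf

# Fraction of the simulated population above a 1e-6 target risk:
exceed = float((elcr > 1e-6).mean())
```

The simulation propagates the input variability into a risk distribution, which is then compared against the target ELCR rather than a single point estimate.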

  18. Accurate estimate of the relic density and the kinetic decoupling in nonthermal dark matter models

    International Nuclear Information System (INIS)

    Arcadi, Giorgio; Ullio, Piero

    2011-01-01

    Nonthermal dark matter generation is an appealing alternative to the standard paradigm of thermal WIMP dark matter. We reconsider nonthermal production mechanisms in a systematic way, and develop a numerical code for accurate computations of the dark matter relic density. We discuss, in particular, scenarios with long-lived massive states decaying into dark matter particles, appearing naturally in several beyond the standard model theories, such as supergravity and superstring frameworks. Since nonthermal production favors dark matter candidates with large pair annihilation rates, we analyze the possible connection with the anomalies detected in the lepton cosmic-ray flux by Pamela and Fermi. Concentrating on supersymmetric models, we consider the effect of these nonstandard cosmologies in selecting a preferred mass scale for the lightest supersymmetric particle as a dark matter candidate, and the consequent impact on the interpretation of new physics discovered or excluded at the LHC. Finally, we examine a rather predictive model, the G2-MSSM, investigating some of the standard assumptions usually implemented in the solution of the Boltzmann equation for the dark matter component, including coannihilations. We question the hypothesis that kinetic equilibrium holds along the whole phase of dark matter generation, and the validity of the factorization usually implemented to rewrite the system of coupled Boltzmann equations, one for each coannihilating species, as a single equation for the sum of all the number densities. As a byproduct we develop here a formalism to compute the kinetic decoupling temperature in the case of coannihilating particles, which can also be applied to other particle physics frameworks, and also to standard thermal relics within a standard cosmology.
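The factorization questioned in the abstract is the standard step from the coupled per-species equations to a single equation for the summed number density. Schematically (conversion terms between coannihilating species omitted, and the equilibrium-ratio weighting assumed, as in the usual treatment):

```latex
% Coupled Boltzmann equations for coannihilating species i, and the
% factorized single equation for n = \sum_i n_i:
\frac{dn_i}{dt} + 3Hn_i
  = -\sum_{j}\langle\sigma_{ij}v\rangle
    \left(n_i n_j - n_i^{\mathrm{eq}} n_j^{\mathrm{eq}}\right) + \dots
\quad\Longrightarrow\quad
\frac{dn}{dt} + 3Hn
  = -\langle\sigma_{\mathrm{eff}}v\rangle\left(n^{2} - n_{\mathrm{eq}}^{2}\right),
\qquad
\langle\sigma_{\mathrm{eff}}v\rangle
  \equiv \sum_{ij}\langle\sigma_{ij}v\rangle
    \frac{n_i^{\mathrm{eq}}\, n_j^{\mathrm{eq}}}{n_{\mathrm{eq}}^{2}}.
```

The factorization is valid only if each \(n_i/n\) tracks its equilibrium ratio, which is exactly the assumption the authors scrutinize.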

  19. Accurate estimation of short read mapping quality for next-generation genome sequencing

    Science.gov (United States)

    Ruffalo, Matthew; Koyutürk, Mehmet; Ray, Soumya; LaFramboise, Thomas

    2012-01-01

    Motivation: Several software tools specialize in the alignment of short next-generation sequencing reads to a reference sequence. Some of these tools report a mapping quality score for each alignment—in principle, this quality score tells researchers the likelihood that the alignment is correct. However, the reported mapping quality often correlates weakly with actual accuracy and the qualities of many mappings are underestimated, encouraging researchers to discard correct mappings. Further, these low-quality mappings tend to correlate with variations in the genome (both single nucleotide and structural), and such mappings are important in accurately identifying genomic variants. Approach: We develop a machine learning tool, LoQuM (LOgistic regression tool for calibrating the Quality of short read Mappings), to assign reliable mapping quality scores to mappings of Illumina reads returned by any alignment tool. LoQuM uses statistics on the read (base quality scores reported by the sequencer) and the alignment (number of matches, mismatches and deletions, mapping quality score returned by the alignment tool, if available, and number of mappings) as features for classification and uses simulated reads to learn a logistic regression model that relates these features to actual mapping quality. Results: We test the predictions of LoQuM on an independent dataset generated by the ART short read simulation software and observe that LoQuM can ‘resurrect’ many mappings that are assigned zero quality scores by the alignment tools and are therefore likely to be discarded by researchers. We also observe that the recalibration of mapping quality scores greatly enhances the precision of called single nucleotide polymorphisms. Availability: LoQuM is available as open source at http://compbio.case.edu/loqum/. Contact: matthew.ruffalo@case.edu. PMID:22962451
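The core idea, a logistic model mapping alignment features to the probability that a mapping is correct, then recalibrated onto the Phred scale, can be sketched on synthetic data. The features, coefficients, and training loop below are invented for illustration and are not LoQuM's actual model:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
# Assumed per-mapping features: mean base quality, mismatches,
# aligner-reported quality, number of candidate mappings.
X = np.column_stack([
    rng.normal(30, 5, n),       # mean base quality
    rng.poisson(2, n),          # mismatches
    rng.uniform(0, 60, n),      # aligner mapping quality
    rng.integers(1, 10, n),     # number of mappings
]).astype(float)
# Synthetic 'ground truth': correctness driven by a linear score.
logit = 0.05 * X[:, 0] - 0.8 * X[:, 1] + 0.06 * X[:, 2] - 0.3 * X[:, 3]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Plain gradient-descent logistic regression (stand-in for the real model)
Xs = np.column_stack([np.ones(n), (X - X.mean(0)) / X.std(0)])
w = np.zeros(Xs.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xs @ w)))
    w -= 0.1 * Xs.T @ (p - y) / n

# Recalibrated quality on the Phred scale: Q = -10 log10 P(incorrect)
recalibrated_q = -10 * np.log10(np.clip(1 - p, 1e-10, None))
```

In the real tool the labels come from simulated reads with known origin; the recalibration step at the end is what turns the fitted probabilities back into quality scores comparable to the aligner's.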

  20. The Remote Food Photography Method Accurately Estimates Dry Powdered Foods-The Source of Calories for Many Infants.

    Science.gov (United States)

    Duhé, Abby F; Gilmore, L Anne; Burton, Jeffrey H; Martin, Corby K; Redman, Leanne M

    2016-07-01

    Infant formula is a major source of nutrition for infants, with more than half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used before reconstitution. Our aim was to assess the use of the Remote Food Photography Method to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. For each serving size (1 scoop, 2 scoops, 3 scoops, and 4 scoops), a set of seven test bottles and photographs was prepared as follows: the recommended gram weight of powdered formula for the respective serving size by the manufacturer; three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended; and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard Remote Food Photography Method analysis procedures. The ratio estimates and the US Department of Agriculture data tables were used to generate food and nutrient information to provide the Remote Food Photography Method estimates. Equivalence testing using the two one-sided t tests approach was used to determine equivalence between the actual gram weights and the Remote Food Photography Method estimated weights for all samples, within each serving size, and within underprepared and overprepared bottles. For all bottles, the gram weights estimated by the Remote Food Photography Method were within 5% equivalence bounds with a slight underestimation of 0.05 g (90% CI -0.49 to 0.40; P<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. The maximum observed mean error was an overestimation of 1.58% of powdered formula by the Remote
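The two one-sided tests (TOST) procedure used in this study can be sketched in pure Python. For brevity this uses a normal approximation rather than the t distribution the paper would use, and the error values and equivalence bounds below are illustrative, not the study's data:

```python
from statistics import NormalDist, mean, stdev

def tost_equivalence(diffs, low, high, alpha=0.05):
    """Two one-sided tests: is mean(diffs) inside (low, high)?

    Normal approximation of the one-sample t-test, for brevity.
    Returns (mean, TOST p-value, equivalent?).
    """
    n = len(diffs)
    m, se = mean(diffs), stdev(diffs) / n ** 0.5
    z_low = (m - low) / se        # H0: mean <= low  (reject if z_low large)
    z_high = (m - high) / se      # H0: mean >= high (reject if z_high very negative)
    nd = NormalDist()
    p = max(1 - nd.cdf(z_low), nd.cdf(z_high))
    return m, p, p < alpha        # equivalent only if BOTH one-sided tests reject

# Invented errors (estimated - actual grams), with ±5% bounds of a ~8.7 g scoop
errors = [-0.3, 0.1, -0.2, 0.0, 0.2, -0.1, 0.05, -0.15, 0.1, -0.05]
m, p, equivalent = tost_equivalence(errors, low=-0.435, high=0.435)
```

Note the logic is inverted relative to an ordinary t-test: rejecting both one-sided null hypotheses is what establishes equivalence within the bounds.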

  1. [Estimating and projecting the acute effect of cold spells on excess mortality under climate change in Guangzhou].

    Science.gov (United States)

    Sun, Q H; Wang, W T; Wang, Y W; Li, T T

    2018-04-06

    Objective: To estimate future excess mortality attributable to cold spells in Guangzhou, China. Methods: We collected mortality data and meteorological data for 2009-2013 in Guangzhou to calculate the association between cold spell days and non-accidental mortality with a GLM model. Then we projected future daily average temperatures (2020-2039 (2020s), 2050-2069 (2050s), 2080-2099 (2080s)) with 5 GCM models and 2 RCPs (RCP4.5 and RCP8.5) to identify cold spell days. The baseline period was the 1980s (1980-1999). Finally, we calculated the yearly cold-spell-related excess deaths of the 1980s, 2020s, 2050s, and 2080s from the average daily death count of non-cold spell days, the exposure-response relationship, and the yearly number of cold spell days. Results: The average daily non-accidental mortality in Guangzhou from 2009 to 2013 was 96, and the average daily mean temperature was 22.0 ℃. Cold spell days were associated with a 3.3% (95% CI: 0.4%-6.2%) increase in non-accidental mortality. In the 1980s, yearly cold-spell-related deaths were 34 (95% CI: 4-64). Under the RCP4.5 scenario, the number will increase by 0-10 in the 2020s, by 1-9 in the 2050s, and by 1-9 in the 2080s. Under the RCP8.5 scenario, the number will increase by 0-9 in the 2020s, by 1-6 in the 2050s, and by 0-11 in the 2080s. Conclusion: Cold-spell-related non-accidental deaths in Guangzhou will increase in the future under climate change.
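The excess-death arithmetic implied by the Methods is simple: baseline daily deaths, times the relative increase on cold-spell days, times the yearly number of cold-spell days. With the reported 96 daily deaths and 3.3% increase, roughly 11 cold-spell days per year reproduces the reported 34 yearly deaths; the 11-day figure is inferred for illustration, not stated in the abstract:

```python
def yearly_excess_deaths(baseline_daily_deaths, relative_increase, cold_spell_days):
    """Excess deaths = baseline deaths on cold-spell days x relative increase."""
    return baseline_daily_deaths * relative_increase * cold_spell_days

# 96 deaths/day, 3.3% increase, ~11 cold-spell days/year (assumed)
excess = yearly_excess_deaths(96, 0.033, 11)   # ≈ 34.8
```

Future-scenario estimates then follow by swapping in the projected yearly count of cold-spell days under each RCP while holding the exposure-response relationship fixed.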

  2. Is 10-second electrocardiogram recording enough for accurately estimating heart rate in atrial fibrillation.

    Science.gov (United States)

    Shuai, Wei; Wang, Xi-Xing; Hong, Kui; Peng, Qiang; Li, Ju-Xiang; Li, Ping; Chen, Jing; Cheng, Xiao-Shu; Su, Hai

    2016-07-15

    At present, the estimation of rest heart rate (HR) in atrial fibrillation (AF) is obtained by apical auscultation for 1 min or on the surface electrocardiogram (ECG) by multiplying the number of RR intervals on the 10-second recording by six. But the adequacy of a 10-second ECG recording is controversial. ECG was continuously recorded at rest for 60 s to calculate the real rest HR (HR60s). Meanwhile, the first 10 s and 30 s ECG recordings were used for calculating HR10s (sixfold) and HR30s (twofold). The differences of HR10s or HR30s with the HR60s were compared. The patients were divided into three subgroups based on the HR60s. No significant difference among the mean HR10s, HR30s and HR60s was found. A positive correlation existed between HR10s and HR60s or HR30s and HR60s. Bland-Altman plots showed that the 95% reference limits were as high as -11.0 to 16.0 bpm for HR10s, but for HR30s these values were only -4.5 to 5.2 bpm. Among the three subgroups, the 95% reference limits with HR60s were -8.9 to 10.6, -10.5 to 14.0 and -11.3 to 21.7 bpm for HR10s, but these values were -3.9 to 4.3, -4.1 to 4.6 and -5.3 to 6.7 bpm for HR30s. As a 10-s ECG recording could not provide a clinically acceptable HR estimate, the ECG should be recorded for at least 30 s in patients with AF. It is better to record the ECG for 60 s when the HR is rapid. Copyright © 2016. Published by Elsevier Ireland Ltd.
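The comparison of 10-s and 30-s extrapolations against the 60-s reference can be reproduced on simulated irregular rhythms. The uniform RR-interval model below is a crude stand-in for AF, chosen only to show why the 30-s Bland-Altman limits of agreement come out narrower than the 10-s ones:

```python
import random

random.seed(3)

def simulate_minute():
    """Beat times over one minute with irregular (AF-like) RR intervals."""
    beats, t = [], 0.0
    while True:
        t += random.uniform(0.4, 1.4)   # crude irregular ventricular response
        if t >= 60.0:
            return beats
        beats.append(t)

d10, d30 = [], []
for _ in range(200):                    # 200 simulated one-minute recordings
    b = simulate_minute()
    hr60 = len(b)                       # reference HR (beats in 60 s)
    d10.append(6 * sum(t < 10 for t in b) - hr60)   # HR10s error
    d30.append(2 * sum(t < 30 for t in b) - hr60)   # HR30s error

def loa_width(d):
    """Width of the Bland-Altman 95% limits of agreement."""
    m = sum(d) / len(d)
    sd = (sum((x - m) ** 2 for x in d) / (len(d) - 1)) ** 0.5
    return 2 * 1.96 * sd
```

Because the 30-s window averages over three times as many intervals, `loa_width(d30)` is markedly smaller than `loa_width(d10)`, mirroring the -4.5 to 5.2 bpm versus -11.0 to 16.0 bpm limits reported in the study.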

  3. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    Science.gov (United States)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of dental implant-supported prosthesis is directly linked to the accuracy obtained during implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67 ± 34 μm and 108 μm, and angular misfits of 0.15 ± 0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implants' pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
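The final step, extracting the implant's pose from the optimal rigid transformation, amounts to applying the rotation and translation to the implant's reference position and axis. A toy numpy sketch (the transform and fiducial coordinates are made up):

```python
import numpy as np

def pose_from_transform(T, axis_point, axis_dir):
    """Implant position and axis after applying a 4x4 rigid transform."""
    R, t = T[:3, :3], T[:3, 3]
    pos = R @ axis_point + t
    direction = R @ axis_dir
    return pos, direction / np.linalg.norm(direction)

# Toy transform: 90-degree rotation about z plus a translation
T = np.array([[0.0, -1.0, 0.0, 1.0],
              [1.0,  0.0, 0.0, 2.0],
              [0.0,  0.0, 1.0, 3.0],
              [0.0,  0.0, 0.0, 1.0]])
pos, ax = pose_from_transform(T, np.zeros(3), np.array([1.0, 0.0, 0.0]))
# pos -> [1, 2, 3]; ax -> [0, 1, 0]

# Angular misfit between the estimated axis and a reference axis, degrees:
angle = np.degrees(np.arccos(np.clip(ax @ np.array([0.0, 1.0, 0.0]), -1, 1)))
```

The positional error and angular misfit reported in the abstract are exactly such point distances and axis angles, computed against ground-truth poses.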

  4. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    Science.gov (United States)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and that thus enables its application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, and the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character. Finally, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
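The Green-Kubo formalism referred to above relates κ to the equilibrium heat-flux autocorrelation function sampled during the MD run; in its standard tensor form:

```latex
% Green-Kubo relation: lattice thermal conductivity from the equilibrium
% heat-flux autocorrelation (J = heat flux, V = cell volume, T = temperature)
\kappa_{\alpha\beta}
  = \frac{V}{k_{\mathrm{B}} T^{2}}
    \int_{0}^{\infty}
      \left\langle J_{\alpha}(t)\, J_{\beta}(0) \right\rangle \, dt
```

The convergence problem the authors address stems from this integral: the autocorrelation must be sampled long enough, and in a large enough cell, for its tail to decay below the statistical noise.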

  5. An automated A-value measurement tool for accurate cochlear duct length estimation.

    Science.gov (United States)

    Iyaniwura, John E; Elfarnawany, Mai; Ladak, Hanif M; Agrawal, Sumit K

    2018-01-22

    There has been renewed interest in the cochlear duct length (CDL) for preoperative cochlear implant electrode selection and postoperative generation of patient-specific frequency maps. The CDL can be estimated by measuring the A-value, which is defined as the length between the round window and the furthest point on the basal turn. Unfortunately, there is significant intra- and inter-observer variability when these measurements are made clinically. The objective of this study was to develop an automated A-value measurement algorithm to improve accuracy and eliminate observer variability. Clinical and micro-CT images of 20 cadaveric cochleae specimens were acquired. The micro-CT of one sample was chosen as the atlas, and A-value fiducials were placed onto that image. Image registration (rigid affine and non-rigid B-spline) was applied between the atlas and the 19 remaining clinical CT images. The registration transform was applied to the A-value fiducials, and the A-value was then automatically calculated for each specimen. High resolution micro-CT images of the same 19 specimens were used to measure the gold standard A-values for comparison against the manual and automated methods. The registration algorithm had excellent qualitative overlap between the atlas and target images. The automated method eliminated the observer variability and the systematic underestimation by experts. Manual measurement of the A-value on clinical CT had a mean error of 9.5 ± 4.3% compared to micro-CT, and this improved to an error of 2.7 ± 2.1% using the automated algorithm. Both the automated and manual methods correlated significantly with the gold standard micro-CT A-values (r = 0.70). An automated A-value measurement tool using atlas-based registration methods was successfully developed and validated. The automated method eliminated the observer variability and improved accuracy as compared to manual measurements by experts. This open-source tool has the potential to benefit
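Since the A-value is the distance between two registered fiducials, the last step of the pipeline reduces to mapping the atlas fiducials through the registration transform and measuring their separation. A sketch with invented coordinates (the real pipeline uses rigid affine plus B-spline registration to obtain the transform):

```python
import numpy as np

def a_value(round_window, basal_far_point, transform=None):
    """A-value: distance from the round window to the furthest point on the
    basal turn, after optionally mapping atlas fiducials into patient space."""
    p = np.asarray(round_window, float)
    q = np.asarray(basal_far_point, float)
    if transform is not None:               # 4x4 homogeneous matrix
        p = (transform @ np.append(p, 1.0))[:3]
        q = (transform @ np.append(q, 1.0))[:3]
    return np.linalg.norm(q - p)

# Made-up atlas fiducials (mm) and a pure translation: the A-value is
# invariant under rigid motion, as expected.
T = np.eye(4)
T[:3, 3] = [5.0, -2.0, 1.0]
a0 = a_value([0.0, 0.0, 0.0], [9.3, 0.0, 0.0])      # 9.3 mm
a1 = a_value([0.0, 0.0, 0.0], [9.3, 0.0, 0.0], T)   # still 9.3 mm
```

With a non-rigid (B-spline) transform the distance does change, which is precisely how the atlas measurement adapts to each patient's anatomy.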

  6. Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.

    Science.gov (United States)

    Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke

    2013-04-01

    temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules

  7. Estimation of outdoor and indoor effective dose and excess lifetime cancer risk from Gamma dose rates in Gonabad, Iran

    Energy Technology Data Exchange (ETDEWEB)

    Jafari, R.; Zarghani, H.; Mohammadi, A., E-mail: rvzreza@gmail.com [Paramedical faculty, Birjand University of Medical Sciences, Birjand (Iran, Islamic Republic of)

    2017-07-01

    Background gamma irradiation in indoor and outdoor environments is a major concern worldwide. The study area was Gonabad city. Three stations and three buildings were randomly selected for outdoor and indoor background radiation measurement, and a Geiger-Muller detector (X5C plus) was used. All dose rates shown on the survey meter display were recorded, and the mean of the data for each station or building was taken as its measured dose rate. The average background dose rates were 84.2 nSv/h outdoors and 108.6 nSv/h indoors; the maximum and minimum dose rates were 88.9 nSv/h and 77.7 nSv/h for outdoor measurements and 125.4 nSv/h and 94.1 nSv/h for indoor measurements, respectively. Results show that the annual effective dose is 0.64 mSv, which is high compared with the global average annual effective dose of 0.48 mSv. The estimated excess lifetime cancer risk was 2.24×10⁻³, which is large compared with the world average value of 0.25×10⁻³. (author)
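
    The reported risk figure follows from standard dose-to-risk arithmetic, sketched below. The duration of life (70 y) and risk factor (0.05 per Sv) are the usual UNSCEAR values and reproduce the abstract's 2.24×10⁻³ exactly; the occupancy factors (0.2 outdoor, 0.8 indoor) are common defaults assumed here and may differ from the paper's.

```python
def annual_effective_dose_msv(dose_rate_nsv_h, occupancy):
    """Annual effective dose (mSv/y) from an ambient dose rate in nSv/h."""
    return dose_rate_nsv_h * 8760.0 * occupancy * 1e-6  # h/y; nSv -> mSv

def excess_lifetime_cancer_risk(aede_msv, dl_years=70.0, rf_per_sv=0.05):
    """ELCR = AEDE x duration of life x risk factor (dimensionless)."""
    return aede_msv * 1e-3 * dl_years * rf_per_sv  # mSv -> Sv

aede_outdoor = annual_effective_dose_msv(84.2, occupancy=0.2)
risk = excess_lifetime_cancer_risk(0.64)  # the reported 0.64 mSv/y
```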

  8. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    Science.gov (United States)

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. 
In addition, to assess the accuracy of each method in estimating
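
    The patient attenuation metric referenced via AAPM Report 220 is the water-equivalent diameter. Below is a small sketch of its computation from an axial CT region of interest; the mapping from attenuation to the prescribed tube current itself is manufacturer-specific and not shown.

```python
import math

def water_equivalent_diameter(mean_hu, roi_area_cm2):
    """Water-equivalent diameter (cm) per AAPM Report 220.

    The patient cross-section (ROI) is converted to an equivalent area of
    water using its mean CT number, then to the diameter of a circle of
    that area.
    """
    water_equivalent_area = (mean_hu / 1000.0 + 1.0) * roi_area_cm2
    return 2.0 * math.sqrt(water_equivalent_area / math.pi)

# A water-density cross-section (mean HU = 0) keeps its geometric size:
d_w = water_equivalent_diameter(0.0, 500.0)
```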

  9. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate

    Science.gov (United States)

    Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul

    2015-01-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
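
    The mark-re-sight (transect) coverage estimate is simply the proportion of re-sighted dogs carrying the mark (collar). A minimal sketch with hypothetical counts and an illustrative Wald interval, not the study's exact procedure:

```python
def transect_coverage(marked_seen, total_seen):
    """Point estimate of vaccination coverage from a mark-re-sight transect:
    the proportion of dogs seen that carry the mark (collar)."""
    return marked_seen / total_seen

def wald_ci(p, n, z=1.96):
    """Simple 95% Wald interval for a proportion (illustrative only)."""
    half = z * (p * (1 - p) / n) ** 0.5
    return max(0.0, p - half), min(1.0, p + half)

cov = transect_coverage(140, 200)   # hypothetical transect counts
lo, hi = wald_ci(cov, 200)
```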

  10. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    Directory of Open Access Journals (Sweden)

    Abel B Minyoo

    2015-12-01

    Full Text Available In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere.

  11. An Accurate Computational Tool for Performance Estimation of FSO Communication Links over Weak to Strong Atmospheric Turbulent Channels

    Directory of Open Access Journals (Sweden)

    Theodore D. Katsilieris

    2017-03-01

    Full Text Available The terrestrial optical wireless communication links have attracted significant research and commercial worldwide interest over the last few years due to the fact that they offer very high and secure data rate transmission with relatively low installation and operational costs, and without need of licensing. However, since the propagation path of the information signal, i.e., the laser beam, is the atmosphere, their effectiveness is strongly affected by the atmospheric conditions in the specific area. Thus, system performance depends significantly on rain, fog, hail, atmospheric turbulence, etc. Due to the influence of these effects, it is necessary to study such a communication system very carefully, theoretically and numerically, before its installation. In this work, we present exact and accurate approximate mathematical expressions for the estimation of the average capacity and the outage probability performance metrics, as functions of the link's parameters, the transmitted power, the attenuation due to fog, the ambient noise and the atmospheric turbulence phenomenon. The latter causes the scintillation effect, which results in random and fast fluctuations of the irradiance at the receiver's end. These fluctuations can be studied accurately with statistical methods. Thus, in this work, we use either the lognormal or the gamma-gamma distribution for weak or moderate to strong turbulence conditions, respectively. Moreover, using the derived mathematical expressions, we design, accomplish and present a computational tool for the estimation of these systems' performance, while also taking into account the parameters of the link and the atmospheric conditions. Furthermore, in order to increase the accuracy of the presented tool, for the cases where the obtained analytical mathematical expressions are complex, the performance results are verified with the numerical estimation of the appropriate integrals. Finally, using
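
    For the weak-turbulence (lognormal) case, the outage probability has a closed form via the Gaussian error function. A sketch assuming log-irradiance variance sigma2 and unit mean irradiance (an assumption of this sketch; the gamma-gamma case requires special functions and is omitted):

```python
import math

def outage_probability_lognormal(i_threshold, sigma2):
    """P(I < i_threshold) for lognormally distributed irradiance with
    E[I] = 1; sigma2 is the log-irradiance variance. The normal CDF is
    evaluated with math.erf."""
    mu = -sigma2 / 2.0  # keeps the mean irradiance at 1
    z = (math.log(i_threshold) - mu) / math.sqrt(sigma2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At the median irradiance exp(mu) the outage probability is exactly 1/2:
p_med = outage_probability_lognormal(math.exp(-0.25 / 2.0), 0.25)
```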

  12. [Estimation of the excess death associated with influenza pandemics and epidemics in Japan after world war II: relation with pandemics and the vaccination system].

    Science.gov (United States)

    Ohmi, Kenichi; Marui, Eiji

    2011-10-01

    To estimate the excess death associated with influenza pandemics and epidemics in Japan after World War II, and to reexamine the relationship between excess death and the vaccination system in Japan. Using the Japanese national vital statistics data for 1952-2009, we specified months with influenza epidemics, monthly mortality rates and the seasonal index for 1952-74 and for 1975-2009. Then we calculated the excess deaths of each month from the observed number of deaths and the 95% range of expected deaths. Lastly we calculated age-adjusted excess death rates using the 1985 model population of Japan. The total number of excess deaths for 1952-2009 was 687,279 (95% range, 384,149-970,468), or 12,058 (95% range, 6,739-17,026) per year. The total number of excess deaths in the 6 pandemic years of 1957-58, 1958-59, 1968-69, 1969-70, 1977-78 and 1978-79 was 95,904, while that in the 51 'non-pandemic' years was 591,376, 6.17-fold larger than in the pandemic years. The average number of excess deaths for pandemic years was 23,976, nearly equal to that for 'non-pandemic' years, 23,655. At the beginning of the pandemics (1957-58, 1968-69, 1969-70), the proportion of excess deaths among those aged under 65 was higher than in 'non-pandemic' years. In the 1970s and 1980s, when the vaccination program for schoolchildren was mandatory in Japan on the basis of the "Fukumi thesis", age-adjusted average excess mortality rates were relatively low, with an average of 6.17 per hundred thousand. In the 1990s, when group vaccination was discontinued, age-adjusted excess mortality rose to 9.42, only to drop again to 2.04 when influenza vaccination was made available to the elderly in the 2000s, suggesting that the vaccination of Japanese children prevented excess deaths from influenza pandemics and epidemics. Moreover, in the age group under 65, average excess mortality rates were lower in the 1970s and 1980s than in the 2000s, which shows that the "Social Defensive" schoolchildren vaccination program in the 1970s and 1980s was more effective than the
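
    The observed-minus-expected logic can be sketched as follows. Counting excess only in months where the observation exceeds the upper bound of the expected 95% range is a common convention assumed here, not necessarily the paper's exact rule, and the monthly counts are hypothetical.

```python
def excess_deaths(observed, expected, upper95):
    """Sum observed-minus-expected deaths over months where the observation
    exceeds the upper bound of the expected 95% range."""
    total = 0
    for obs, exp_, up in zip(observed, expected, upper95):
        if obs > up:
            total += obs - exp_
    return total

# Hypothetical winter months: baseline 90,000 deaths, upper bound 95,000
obs = [101_000, 97_000, 94_000]
exc = excess_deaths(obs, [90_000] * 3, [95_000] * 3)  # 11k + 7k + 0
```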

  13. Fast and accurate phylogenetic reconstruction from high-resolution whole-genome data and a novel robustness estimator.

    Science.gov (United States)

    Lin, Y; Rajan, V; Moret, B M E

    2011-09-01

    The rapid accumulation of whole-genome data has renewed interest in the study of genomic rearrangements. Comparative genomics, evolutionary biology, and cancer research all require models and algorithms to elucidate the mechanisms, history, and consequences of these rearrangements. However, even simple models lead to NP-hard problems, particularly in the area of phylogenetic analysis. Current approaches are limited to small collections of genomes and low-resolution data (typically a few hundred syntenic blocks). Moreover, whereas phylogenetic analyses from sequence data are deemed incomplete unless bootstrapping scores (a measure of confidence) are given for each tree edge, no equivalent to bootstrapping exists for rearrangement-based phylogenetic analysis. We describe a fast and accurate algorithm for rearrangement analysis that scales up, in both time and accuracy, to modern high-resolution genomic data. We also describe a novel approach to estimate the robustness of results, an equivalent to the bootstrapping analysis used in sequence-based phylogenetic reconstruction. We present the results of extensive testing on both simulated and real data showing that our algorithm returns very accurate results, while scaling linearly with the size of the genomes and cubically with their number. We also present extensive experimental results showing that our approach to robustness testing provides excellent estimates of confidence, which, moreover, can be tuned to trade off thresholds between false positives and false negatives. Together, these two novel approaches enable us to attack heretofore intractable problems, such as phylogenetic inference for high-resolution vertebrate genomes, as we demonstrate on a set of six vertebrate genomes with 8,380 syntenic blocks. A copy of the software is available on demand.

  14. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    Science.gov (United States)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites. Within a thicker-bedded succession, such beds can go unnoticed by conventional analysis and so negatively impact on reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
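
    Hydrocarbon pore thickness is the summed product of bed thickness, porosity and hydrocarbon saturation. A sketch with hypothetical beds, showing how thin beds add to the tally that a thick-bed-only analysis would miss:

```python
def hydrocarbon_pore_thickness(beds):
    """HPT = sum over beds of thickness x porosity x hydrocarbon saturation,
    i.e. thickness x phi x (1 - Sw). Thickness in ft; phi, Sw as fractions."""
    return sum(h * phi * (1.0 - sw) for h, phi, sw in beds)

# Hypothetical interval: one thick bed, then the same plus two thin beds
thick_only = hydrocarbon_pore_thickness([(100.0, 0.20, 0.30)])
with_thins = hydrocarbon_pore_thickness([(100.0, 0.20, 0.30),
                                         (2.0, 0.18, 0.40),
                                         (1.5, 0.15, 0.45)])
```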

  15. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Directory of Open Access Journals (Sweden)

    Tweya Hannock

    2012-07-01

    Full Text Available Abstract Background Routine monitoring of patients on antiretroviral therapy (ART is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records as compared to estimates of retention based on standardized paper- or electronic-based cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter with intent to estimate the number of patients retained on ART. Information for time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among
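
    The pharmacy-stock approach reduces to stock-card arithmetic. The assumption of three continuation bottles per patient per quarter (monthly refills) is illustrative only, not taken from the study; actual dispensing intervals vary.

```python
def patients_retained(opening_stock, received, closing_stock,
                      bottles_per_patient_quarter=3):
    """Estimate patients retained on ART in a quarter from stock-card counts:
    bottles dispensed = opening + received - closing, then divided by the
    assumed number of bottles each retained patient collects per quarter."""
    dispensed = opening_stock + received - closing_stock
    return dispensed // bottles_per_patient_quarter

# Hypothetical stock-card counts for one facility and one quarter
est = patients_retained(opening_stock=1200, received=5400, closing_stock=900)
```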

  16. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    Directory of Open Access Journals (Sweden)

    Corina J. Logan

    2015-06-01

    Full Text Available There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex.

  17. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    Science.gov (United States)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse in space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
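
    Standard nudging, the baseline the paper improves on, can be illustrated on the Lorenz-63 system: coupling the estimate to the truth through the observed x variable alone is enough to synchronize the full state. This sketch uses simple Euler integration; the time-delay extension of Rey et al. is not shown.

```python
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz-63 right-hand side."""
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def step(s, ds, dt):
    """One forward-Euler step."""
    return tuple(si + dsi * dt for si, dsi in zip(s, ds))

dt, k = 0.005, 20.0          # k: nudging (coupling) strength on x only
truth = (1.0, 1.0, 1.0)
est = (8.0, -5.0, 30.0)      # deliberately poor initial estimate

err0 = sum((a - b) ** 2 for a, b in zip(truth, est)) ** 0.5
for _ in range(4000):        # 20 time units
    dtruth = lorenz(truth)
    dest = lorenz(est)
    # nudge the estimate's x equation toward the observed x of the truth
    dest = (dest[0] + k * (truth[0] - est[0]), dest[1], dest[2])
    truth, est = step(truth, dtruth, dt), step(est, dest, dt)

err1 = sum((a - b) ** 2 for a, b in zip(truth, est)) ** 0.5
```

    The y and z errors contract on their own once x is synchronized (the driven (y, z) subsystem is contracting), which is why observing a single variable suffices here.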

  18. Mixture models reveal multiple positional bias types in RNA-Seq data and lead to accurate transcript concentration estimates.

    Directory of Open Access Journals (Sweden)

    Andreas Tuerk

    2017-05-01

    Full Text Available Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (rd. "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq; state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress.
    We further observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.
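
    Mix2 itself models fragment-position distributions; as a generic illustration of training mixture parameters by Expectation Maximization, here is a two-component Gaussian mixture EM on synthetic 1-D data (a stand-in example, not the Mix2 model):

```python
import math, random

random.seed(0)
# Synthetic 1-D data from two well-separated components
data = [random.gauss(0.0, 1.0) for _ in range(300)] + \
       [random.gauss(10.0, 1.0) for _ in range(300)]

mu = [1.0, 8.0]          # initial component means
pi = [0.5, 0.5]          # mixing weights
for _ in range(50):      # EM iterations (unit variances fixed for brevity)
    # E-step: responsibility of each component for each point
    resp = []
    for x in data:
        w = [p * math.exp(-0.5 * (x - m) ** 2) for p, m in zip(pi, mu)]
        s = sum(w)
        resp.append([wi / s for wi in w])
    # M-step: update mixing weights and means
    n = [sum(r[k] for r in resp) for k in (0, 1)]
    pi = [nk / len(data) for nk in n]
    mu = [sum(r[k] * x for r, x in zip(resp, data)) / n[k] for k in (0, 1)]
```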

  19. Estimates of excess medically attended acute respiratory infections in periods of seasonal and pandemic influenza in Germany from 2001/02 to 2010/11.

    Directory of Open Access Journals (Sweden)

    Matthias An der Heiden

    Full Text Available BACKGROUND: The number of patients seeking health care is a central indicator that may serve several different purposes: (1) as a proxy for the impact on the burden of the primary care system; (2) as a starting point to estimate the number of persons ill with influenza; (3) as the denominator data for the calculation of case fatality rate and the proportion hospitalized (severity indicators); (4) for economic calculations. In addition, reliable estimates of burden of disease and on the health care system are essential to communicate the impact of influenza to health care professionals, public health professionals and to the public. METHODOLOGY/PRINCIPAL FINDINGS: Using German syndromic surveillance data, we have developed a novel approach to describe the seasonal variation of medically attended acute respiratory infections (MAARI) and estimate the excess MAARI attributable to influenza. The weekly excess inside a period of influenza circulation is estimated as the difference between the actual MAARI and a MAARI baseline, which is established using a cyclic regression model for counts. As a result, we estimated the highest ARI burden within the last 10 years for the influenza season 2004/05 with an excess of 7.5 million outpatient visits (CI95% 6.8-8.0). In contrast, the pandemic wave 2009 accounted for one third of this burden with an excess of 2.4 million (CI95% 1.9-2.8). Estimates can be produced for different age groups, different geographic regions in Germany and also in real time during the influenza waves.
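
    The baseline-and-excess idea can be sketched with a harmonic ("cyclic") regression fitted to non-epidemic weeks only, then subtracted from the observations inside the epidemic period. Ordinary least squares on synthetic data is used here as a stand-in for the paper's cyclic regression model for counts.

```python
import math, random

random.seed(1)
T = 520  # ten years of weekly counts
weeks = list(range(T))
true = [20000 + 2.0 * t + 3000 * math.sin(2 * math.pi * t / 52.0) for t in weeks]
observed = [v + random.gauss(0, 200) for v in true]
for t in range(100, 111):          # inject an influenza excess
    observed[t] += 8000.0

def design(t):
    """Regressors: level, linear trend, one annual harmonic."""
    w = 2 * math.pi * t / 52.0
    return [1.0, float(t), math.sin(w), math.cos(w)]

# Accumulate the normal equations X'X b = X'y over non-epidemic weeks
epi, p = set(range(100, 111)), 4
A = [[0.0] * p for _ in range(p)]
b = [0.0] * p
for t in weeks:
    if t in epi:
        continue
    x = design(t)
    for i in range(p):
        b[i] += x[i] * observed[t]
        for j in range(p):
            A[i][j] += x[i] * x[j]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    out = [0.0] * n
    for r in range(n - 1, -1, -1):
        out[r] = (M[r][n] - sum(M[r][k] * out[k] for k in range(r + 1, n))) / M[r][r]
    return out

coef = solve(A, b)
baseline = [sum(c * xi for c, xi in zip(coef, design(t))) for t in weeks]
excess = sum(observed[t] - baseline[t] for t in range(100, 111))
```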

  20. Challenges associated with drunk driving measurement: combining police and self-reported data to estimate an accurate prevalence in Brazil.

    Science.gov (United States)

    Sousa, Tanara; Lunnen, Jeffrey C; Gonçalves, Veralice; Schmitz, Aurinez; Pasa, Graciela; Bastos, Tamires; Sripad, Pooja; Chandran, Aruna; Pechansky, Flavio

    2013-12-01

    Drunk driving is an important risk factor for road traffic crashes, injuries and deaths. After June 2008, all drivers in Brazil were subject to a "Zero Tolerance Law" with a breath alcohol concentration limit of 0.1 mg/L of air. However, a loophole in this law enabled drivers to refuse breath or blood alcohol testing as it may self-incriminate. The reported prevalence of drunk driving is therefore likely a gross underestimate in many cities. To compare the prevalence of drunk driving gathered from police reports to the prevalence gathered from self-reported questionnaires administered at police sobriety roadblocks in two Brazilian capital cities, and to estimate a more accurate prevalence of drunk driving utilizing three correction techniques based upon information from those questionnaires. In August 2011 and January-February 2012, researchers from the Centre for Drug and Alcohol Research at the Universidade Federal do Rio Grande do Sul administered a roadside interview on drunk driving practices to 805 voluntary participants in the Brazilian capital cities of Palmas and Teresina. Three techniques which include measures such as the number of persons reporting alcohol consumption in the last six hours but who had refused breath testing were used to estimate the prevalence of drunk driving. The prevalence of persons testing positive for alcohol on their breath was 8.8% and 5.0% in Palmas and Teresina respectively. Utilizing a correction technique we calculated that a more accurate prevalence in these sites may be as high as 28.2% and 28.7%. In both cities, about 60% of drivers who self-reported having drunk within six hours of being stopped by the police refused to perform breathalyser testing, fled the sobriety roadblock, or were not offered the test, compared to about 30% of drivers who said they had not been drinking.
Despite the reduction of the legal limit for drunk driving stipulated by the "Zero Tolerance Law," loopholes in the legislation permit many
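
    The correction logic, counting untested recent drinkers as likely positives, is simple arithmetic. The counts below are hypothetical, chosen only to echo the reported 8.8% to 28.2% shift; they are not the study's data.

```python
def corrected_prevalence(tested_positive, untested_recent_drinkers, total):
    """Drivers who reported drinking in the previous six hours but were not
    breath-tested (refused, fled, or not offered) are counted as likely
    positives alongside those who tested positive."""
    return (tested_positive + untested_recent_drinkers) / total

raw = corrected_prevalence(35, 0, 400)    # breathalyser-only estimate
adj = corrected_prevalence(35, 78, 400)   # untested recent drinkers added
```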

  1. Existing equations to estimate lean body mass are not accurate in the critically ill: Results of a multicenter observational study.

    Science.gov (United States)

    Moisey, Lesley L; Mourtzakis, Marina; Kozar, Rosemary A; Compher, Charlene; Heyland, Daren K

    2017-12-01

    Lean body mass (LBM), quantified using computed tomography (CT), is a significant predictor of clinical outcomes in the critically ill. While CT analysis is precise and accurate in measuring body composition, it may not be practical or readily accessible to all patients in the intensive care unit (ICU). Here, we assessed the agreement between LBM measured by CT and four previously developed equations that predict LBM using variables (i.e. age, sex, weight, height) commonly recorded in the ICU. LBM was calculated in 327 critically ill adults using CT scans, taken at ICU admission, and 4 predictive equations (E1-4) that were derived from non-critically ill adults, since there are no ICU-specific equations. Agreement was assessed using paired t-tests, Pearson's correlation coefficients and Bland-Altman plots. Median LBM calculated by CT was 45 kg (IQR 37-53 kg) and was significantly different from the LBM given by each predictive equation, which overestimated LBM (error ranged from 7.5 to 9.9 kg) compared with LBM calculated by CT, suggesting insufficient agreement. Our data indicate that a large bias is present between the calculation of LBM by CT imaging and the predictive equations that have been compared here. This underscores the need for future research toward the development of ICU-specific equations that reliably estimate LBM in a practical and cost-effective manner. Copyright © 2016 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
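
    The Bland-Altman agreement statistics used in the study are the bias (mean difference) and its 95% limits of agreement (bias ± 1.96 SD of the differences). A sketch with hypothetical LBM values, not the study's data:

```python
def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) of agreement between two
    paired measurement series."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical LBM values (kg): CT-derived vs. a predictive equation
ct = [45.0, 52.0, 38.0, 47.0, 55.0]
eq = [53.0, 60.0, 47.0, 56.0, 62.0]
bias, lo, hi = bland_altman(ct, eq)  # negative bias: equation overestimates
```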

  2. A deep learning approach to estimate stress distribution: a fast and accurate surrogate of finite-element analysis.

    Science.gov (United States)

    Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei

    2018-01-01

    Structural finite-element analysis (FEA) has been widely used to study the biomechanics of human tissues and organs, as well as tissue-medical device interactions, and treatment strategies. However, patient-specific FEA models usually require complex procedures to set up and long computing times to obtain final simulation results, preventing prompt feedback to clinicians in time-sensitive clinical applications. In this study, by using machine learning techniques, we developed a deep learning (DL) model to directly estimate the stress distributions of the aorta. The DL model was designed and trained to take the input of FEA and directly output the aortic wall stress distributions, bypassing the FEA calculation process. The trained DL model is capable of predicting the stress distributions with average errors of 0.492% and 0.891% in the Von Mises stress distribution and peak Von Mises stress, respectively. This is, to our knowledge, the first study to demonstrate the feasibility and great potential of using the DL technique as a fast and accurate surrogate of FEA for stress analysis. © 2018 The Author(s).

  3. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    Science.gov (United States)

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis which belongs to the family of temporally weighted linear prediction (WLP) methods uses the conventional forward type of sample prediction. This may not be the best choice especially in computing WLP models with a hard-limiting weighting function. A sample selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
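
    The weighted-linear-prediction idea can be sketched as weighted normal equations over past samples; with all weights equal this reduces to ordinary forward linear prediction. This shows plain forward WLP only (order 2 for brevity), not the proposed forward-backward variant.

```python
def wlp_coefficients(x, w):
    """Order-2 temporally weighted linear prediction: minimize
    sum_n w[n] * (x[n] - a1*x[n-1] - a2*x[n-2])^2 via the weighted
    normal equations (solved here by Cramer's rule for the 2x2 case)."""
    p = 2
    R = [[0.0] * p for _ in range(p)]
    r = [0.0] * p
    for n in range(p, len(x)):
        past = [x[n - 1], x[n - 2]]
        for i in range(p):
            r[i] += w[n] * past[i] * x[n]
            for j in range(p):
                R[i][j] += w[n] * past[i] * past[j]
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    return [(r[0] * R[1][1] - R[0][1] * r[1]) / det,
            (R[0][0] * r[1] - R[1][0] * r[0]) / det]

# Noiseless AR(2) signal x[n] = 1.3 x[n-1] - 0.4 x[n-2]: coefficients are
# recovered exactly for any positive weighting
x = [1.0, 1.0]
for _ in range(60):
    x.append(1.3 * x[-1] - 0.4 * x[-2])
a = wlp_coefficients(x, [1.0] * len(x))
```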

  4. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation

    International Nuclear Information System (INIS)

    Subramanian, Swetha; Mast, T Douglas

    2015-01-01

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. (note)
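A minimal sketch of UKF-based parameter recovery, with a made-up closed-form "ablation area" standing in for the finite-element forward model; the parameter values, noise levels, and forward model below are all invented for illustration:

```python
import numpy as np

def ukf_param_update(theta, P, z, h, R, Q, kappa=0.0):
    """One UKF step treating the parameter vector as a random-walk state.

    theta: (n,) parameter mean, P: (n,n) covariance, z: scalar measurement,
    h: forward model mapping theta -> predicted scalar, R: measurement
    variance, Q: process covariance.
    """
    n = len(theta)
    P = P + Q                                   # random-walk prediction
    c = n + kappa
    S = np.linalg.cholesky(c * P)               # sigma-point spread
    sigmas = np.vstack([theta, theta + S.T, theta - S.T])   # (2n+1, n)
    Wm = np.full(2 * n + 1, 1.0 / (2 * c)); Wm[0] = kappa / c
    Z = np.array([h(s) for s in sigmas])        # propagate through model
    z_mean = Wm @ Z
    Pzz = Wm @ (Z - z_mean) ** 2 + R
    Pxz = (sigmas - theta).T @ (Wm * (Z - z_mean))
    K = Pxz / Pzz                               # Kalman gain, shape (n,)
    return theta + K * (z - z_mean), P - np.outer(K, K) * Pzz

# Toy forward model standing in for the FEA: ablated area as a smooth
# function of (specific heat, thermal conductivity, electrical conductivity).
true = np.array([3.6, 0.5, 0.3])
h = lambda t: t[0] * 0.1 + 2.0 * t[1] + 5.0 * t[2] ** 2

rng = np.random.default_rng(2)
theta = np.array([3.0, 0.4, 0.4])               # initial parameter guess
P = np.diag([0.5, 0.05, 0.05])
for _ in range(50):
    z = h(true) + 0.01 * rng.standard_normal()  # noisy "measured" area
    theta, P = ukf_param_update(theta, P, z, h, R=1e-4, Q=1e-6 * np.eye(3))
print(theta, h(theta))
```

With only a scalar area measurement the three parameters are not individually identifiable; as in the note, they are optimized so the simulated ablation matches the measured one.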

  5. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation.

    Science.gov (United States)

    Subramanian, Swetha; Mast, T Douglas

    2015-10-07

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.

  6. A method for estimating peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area

    Science.gov (United States)

    Asquith, William H.; Cleveland, Theodore G.; Roussel, Meghan C.

    2011-01-01

    Estimates of peak and time of peak streamflow for small watersheds (less than about 640 acres) in a suburban to urban, low-slope setting are needed for drainage design that is cost-effective and risk-mitigated. During 2007-10, the U.S. Geological Survey (USGS), in cooperation with the Harris County Flood Control District and the Texas Department of Transportation, developed a method to estimate peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area. To develop the method, 24 watersheds in the study area with drainage areas less than about 3.5 square miles (2,240 acres) and with concomitant rainfall and runoff data were selected. The method is based on conjunctive analysis of rainfall and runoff data in the context of the unit hydrograph method and the rational method. For the unit hydrograph analysis, a gamma distribution model of unit hydrograph shape (a gamma unit hydrograph) was chosen and parameters estimated through matching of modeled peak and time of peak streamflow to observed values on a storm-by-storm basis. Watershed mean or watershed-specific values of peak and time to peak ("time to peak" is a parameter of the gamma unit hydrograph and is distinct from "time of peak") of the gamma unit hydrograph were computed. Two regression equations to estimate peak and time to peak of the gamma unit hydrograph that are based on watershed characteristics of drainage area and basin-development factor (BDF) were developed. For the rational method analysis, a lag time (time-R), volumetric runoff coefficient, and runoff coefficient were computed on a storm-by-storm basis. Watershed-specific values of these three metrics were computed. A regression equation to estimate time-R based on drainage area and BDF was developed. Overall arithmetic means of volumetric runoff coefficient (0.41 dimensionless) and runoff coefficient (0.25 dimensionless) for the 24 watersheds were used to express the rational
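A common closed form for the gamma unit hydrograph is q(t) = qp (t/Tp)^K exp(K(1 - t/Tp)), which attains its peak qp at t = Tp. The sketch below uses invented parameter values (not the report's regression equations, which predict the parameters from drainage area and BDF) and convolves the UH with an excess-rainfall pulse to produce a direct-runoff hydrograph:

```python
import numpy as np

def gamma_uh(t, qp, tp, K):
    """Gamma unit hydrograph: q(tp) = qp; the shape parameter K controls
    how sharply the hydrograph rises and recedes."""
    t = np.asarray(t, dtype=float)
    q = np.zeros_like(t)
    pos = t > 0
    q[pos] = qp * (t[pos] / tp) ** K * np.exp(K * (1.0 - t[pos] / tp))
    return q

dt = 0.05                        # hours
t = np.arange(0, 24, dt)
uh = gamma_uh(t, qp=120.0, tp=1.5, K=3.0)   # illustrative values only

# Direct runoff = convolution of the UH with an excess-rainfall hyetograph.
excess = np.zeros_like(t)
excess[: int(1 / dt)] = 0.5      # excess rainfall over the first hour
runoff = np.convolve(excess, uh)[: len(t)] * dt

print(t[np.argmax(uh)], runoff.max())
```

The runoff peak necessarily lags the unit-hydrograph time to peak, which is why the report distinguishes "time to peak" (a UH parameter) from "time of peak" of the simulated streamflow.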

  7. How accurate are adolescents in portion-size estimation using the computer tool young adolescents' nutrition assessment on computer (YANA-C)?

    OpenAIRE

    Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea

    2010-01-01

    Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amou...

  8. Cystatin C-Based Equation Does Not Accurately Estimate the Glomerular Filtration in Japanese Living Kidney Donors.

    Science.gov (United States)

    Tsujimura, Kazuma; Ota, Morihito; Chinen, Kiyoshi; Adachi, Takayuki; Nagayama, Kiyomitsu; Oroku, Masato; Nishihira, Morikuni; Shiohira, Yoshiki; Iseki, Kunitoshi; Ishida, Hideki; Tanabe, Kazunari

    2017-06-23

    BACKGROUND Precise evaluation of a living donor's renal function is necessary to ensure adequate residual kidney function after donor nephrectomy. Our aim was to evaluate the feasibility of estimating glomerular filtration rate (GFR) using serum cystatin C prior to kidney transplantation. MATERIAL AND METHODS Using the equations of the Japanese Society of Nephrology, we calculated the GFR using serum creatinine (eGFRcre) and cystatin C levels (eGFRcys) for 83 living kidney donors evaluated between March 2010 and March 2016. We compared eGFRcys and eGFRcre values against the creatinine clearance rate (CCr). RESULTS The study population included 27 males and 56 females. The mean eGFRcys, eGFRcre, and CCr were 91.4±16.3 mL/min/1.73 m² (range, 59.9-128.9 mL/min/1.73 m²), 81.5±14.2 mL/min/1.73 m² (range, 55.4-117.5 mL/min/1.73 m²) and 108.4±21.6 mL/min/1.73 m² (range, 63.7-168.7 mL/min/1.73 m²), respectively. eGFRcys was significantly lower than CCr (p<0.001). The correlation coefficient between eGFRcys and CCr values was 0.466, and the mean difference between the two values was -17.0 (15.7%), with a root mean square error of 19.2. Similarly, eGFRcre was significantly lower than CCr (p<0.001). The correlation coefficient between eGFRcre and CCr values was 0.445, and the mean difference between the two values was -26.9 (24.8%), with a root mean square error of 19.5. CONCLUSIONS Although eGFRcys provided a better estimation of GFR than eGFRcre, eGFRcys still did not provide an accurate measure of kidney function in Japanese living kidney donors.
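For reference, the JSN equations mentioned in the abstract are commonly quoted in the form below; the coefficients are reproduced from memory and should be checked against the original JSN publications before any real use. The agreement function computes the kind of statistics (bias, RMSE, correlation) reported above.

```python
def egfr_cre(scr_mg_dl, age, female):
    """JSN creatinine-based eGFR (mL/min/1.73 m^2), coefficients as
    commonly cited: 194 * Scr^-1.094 * Age^-0.287 (x 0.739 if female)."""
    egfr = 194.0 * scr_mg_dl ** -1.094 * age ** -0.287
    return egfr * 0.739 if female else egfr

def egfr_cys(cys_mg_l, age, female):
    """JSN cystatin C-based eGFR, coefficients as commonly cited."""
    egfr = 104.0 * cys_mg_l ** -1.019 * 0.996 ** age
    if female:
        egfr *= 0.929
    return egfr - 8.0

import numpy as np

def agreement(estimate, reference):
    """Bias (mean difference), RMSE, and correlation versus a reference,
    mirroring the statistics reported for eGFR against CCr."""
    d = np.asarray(estimate, dtype=float) - np.asarray(reference, dtype=float)
    return d.mean(), np.sqrt((d ** 2).mean()), np.corrcoef(estimate, reference)[0, 1]

print(egfr_cre(0.7, 45, True), egfr_cys(0.8, 45, True))
```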

  9. Use of excess 210Pb and 228Th to estimate rates of sediment accumulation and bioturbation in Port Phillip Bay, Australia

    International Nuclear Information System (INIS)

    Hancock, G.J.; Hunter, J.R.

    1999-01-01

    Rates of sediment accumulation, sediment mixing and depositional particle fluxes were estimated by use of excess 210Pb and 228Th. In central Port Phillip Bay, there was a rapidly mixed surface layer and two layers of different mixing rates at 2-20 cm and 21-45 cm depths. When the sediment profiles of excess 210Pb and 228Th were combined and diffusive mixing was assumed, the sediment accumulation rate in the 2-20 cm layer was constrained to be -1. The mixing coefficient in the 2-20 cm layer was 5.0 ± 0.1 cm² year⁻¹. Hence, mixing rather than sedimentation governs the distribution of 210Pb and 228Th in the surficial 20 cm. Below 20 cm, the different mixing regime may be due to the dominance of deposit-feeders at these depths. Evidence for bioturbation to a depth of 50 cm was obtained from profiles of excess 210Pb and 228Ra deficiency. The mean residence time of particles in the central bay water column was 10 ± 2 days (a normalized depositional particle flux of 0.16 ± 0.02 g cm⁻² year⁻¹). This flux is three times the upper estimate of the sediment accumulation rate, indicating that most of the suspended particulate matter in the water column is resuspended bottom sediment. Copyright (1997) CSIRO Publishing
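When mixing dominates sedimentation, the steady-state excess-210Pb profile reduces to C(z) = C0 exp(-z sqrt(lambda/D)), so the biodiffusion coefficient D can be read off the slope of ln C versus depth. A sketch under exactly that assumption (noise-free synthetic profile; the real analysis combines 210Pb with 228Th and treats sedimentation explicitly):

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3          # 210Pb decay constant, 1/yr

def mixing_coefficient(depth_cm, excess_pb210):
    """Infer a biodiffusion coefficient D (cm^2/yr) from an excess-210Pb
    profile, assuming steady state, negligible sedimentation, and
    C(z) = C0 * exp(-z * sqrt(lambda / D))."""
    slope, _ = np.polyfit(depth_cm, np.log(excess_pb210), 1)
    return LAMBDA_PB210 / slope ** 2

# Synthetic profile generated with D = 5 cm^2/yr (the value estimated
# for the 2-20 cm layer in this study)
z = np.linspace(2, 20, 10)
D_true = 5.0
prof = 12.0 * np.exp(-z * np.sqrt(LAMBDA_PB210 / D_true))
print(mixing_coefficient(z, prof))   # recovers ~5 cm^2/yr
```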

  10. Excessive Daytime Sleepiness

    Directory of Open Access Journals (Sweden)

    Yavuz Selvi

    2016-06-01

    Full Text Available Excessive daytime sleepiness is one of the most common sleep-related patient symptoms, with prevalence in the community estimated to be as high as 18%. Patients with excessive daytime sleepiness may experience life-threatening road and work accidents, social maladjustment, decreased academic and occupational performance, and poorer health than comparable adults. Thus, excessive daytime sleepiness is a serious condition that requires investigation, diagnosis and treatment. As with most medical conditions, evaluation of excessive daytime sleepiness begins with a precise history, and various objective and subjective tools have also been developed to assess excessive daytime sleepiness. The most common causes of excessive daytime sleepiness are insufficient sleep hygiene, chronic sleep deprivation, medical and psychiatric conditions, medications, and sleep disorders such as obstructive sleep apnea and narcolepsy. Treatment options should address underlying contributors and promote sleep quantity by ensuring good sleep hygiene. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2016; 8(2): 114-132]

  11. Parameter sensitivity analysis of the mixed Green-Ampt/Curve-Number method for rainfall excess estimation in small ungauged catchments

    Science.gov (United States)

    Romano, N.; Petroselli, A.; Grimaldi, S.

    2012-04-01

    With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and the Green-Ampt (GA) infiltration model, we have developed a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the total net rainfall amount computed by the SCS-CN method is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events, with encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions, so the GA soil hydraulic parameters are expected to be insensitive to the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetographs and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the low parameter sensitivity makes the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.
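For reference, the SCS-CN side of the procedure rests on the standard curve-number runoff equation Q = (P - Ia)^2 / (P - Ia + S), with potential retention S = 25400/CN - 254 in millimetres and initial abstraction Ia = 0.2S:

```python
def scs_cn_runoff(P_mm, CN, ia_ratio=0.2):
    """SCS-CN direct-runoff (net rainfall) depth in mm for a storm of
    depth P_mm; returns 0 until rainfall exceeds the initial abstraction."""
    S = 25400.0 / CN - 254.0          # potential maximum retention, mm
    Ia = ia_ratio * S                 # initial abstraction
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

print(scs_cn_runoff(80.0, 75))
```

CN4GA then takes this event total and distributes it in time through the calibrated Green-Ampt model.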

  12. On the Specification of the Gravity Model of Trade: Zeros, Excess Zeros and Zero-Inflated Estimation

    NARCIS (Netherlands)

    M.J. Burger (Martijn); F.G. van Oort (Frank); G.J.M. Linders (Gert-Jan)

    2009-01-01

    Conventional studies of bilateral trade patterns specify a log-normal gravity equation for empirical estimation. However, the log-normal gravity equation suffers from three problems: the bias created by the logarithmic transformation, the failure of the homoscedasticity assumption, and

  13. Estimating the decline in excess risk of cerebrovascular disease following quitting smoking--a systematic review based on the negative exponential model.

    Science.gov (United States)

    Lee, Peter N; Fry, John S; Thornton, Alison J

    2014-02-01

    We attempted to quantify the decline in stroke risk following quitting using the negative exponential model, with methodology previously employed for IHD. We identified 22 blocks of RRs (from 13 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We tried to estimate the half-life (H, time since quit when the excess risk becomes half that for a continuing smoker) for each block. The method failed to converge or produced very variable estimates of H in nine blocks with a current smoker RR <1.40. Rejecting these, and combining blocks by amount smoked in one study where problems arose in model-fitting, the final analyses used 11 blocks. Goodness-of-fit was adequate for each block, the combined estimate of H being 4.78(95%CI 2.17-10.50) years. However, considerable heterogeneity existed, unexplained by any factor studied, with the random-effects estimate 3.08(1.32-7.16). Sensitivity analyses allowing for reverse causation or differing assumed times for the final quitting period gave similar results. The estimates of H are similar for stroke and IHD, and the individual estimates similarly heterogeneous. Fitting the model is harder for stroke, due to its weaker association with smoking. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
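The negative exponential model can be written as RR(t) = 1 + (RRcurrent - 1) * 2^(-t/H), where t is time since quitting and H is the half-life. The paper fits pseudo-numbers of cases and controls; the sketch below instead uses a plain log-linear least-squares fit of the normalized excess risk, which is enough to illustrate how H is recovered from former-smoker RRs:

```python
import numpy as np

def fit_half_life(times_quit, rr_former, rr_current):
    """Fit H in RR(t) = 1 + (RR_current - 1) * 2**(-t/H) by a log-linear
    least-squares fit to the former smokers' normalized excess risk."""
    excess = (np.asarray(rr_former) - 1.0) / (rr_current - 1.0)
    slope, _ = np.polyfit(times_quit, np.log(excess), 1)
    return -np.log(2) / slope

# Synthetic quit-time blocks with a true half-life near the pooled value
H_true, rr_cur = 4.8, 1.8
t = np.array([1.0, 3.0, 7.5, 15.0, 25.0])     # years since quitting
rr_former = 1.0 + (rr_cur - 1.0) * 2.0 ** (-t / H_true)
print(fit_half_life(t, rr_former, rr_cur))    # ~4.8
```

The abstract's convergence failures for blocks with current-smoker RR < 1.40 correspond to the excess (RRcurrent - 1) being too small for the log-linear signal to be separable from noise.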

  14. Estimation of the excess lifetime cancer risk from radon exposure in some buildings of Kufa Technical Institute, Iraq

    Directory of Open Access Journals (Sweden)

    Ali Abid Abojassim

    2017-12-01

    Full Text Available A number of international health organizations consider exposure to residential radon the second main cause of lung cancer after cigarette smoking. No database on radon concentrations for the Kufa Technical Institute buildings was found in the literature, which triggers a special need for radon measurement in some Kufa Technical Institute buildings. This study investigates the indoor radon levels inside the Kufa Technical Institute buildings for the first time using different radon measurement methods, both active (RAD-7) and passive (LR-115 Type II). Seventy-eight Solid-State Nuclear Track Detectors (SSNTDs, LR-115 Type II) were distributed at four buildings within the study area and exposed for a three-month period. In parallel, seventy-two active measurements were conducted using RAD-7 in the same buildings to investigate the correlation between the two kinds of measurements (i.e., passive and active). The results demonstrate that the radon concentrations were generally low, ranging from 38.4 to 77.2 Bq/m3, with a mean value of 50 Bq/m3. The mean equilibrium equivalent radon concentration and annual effective dose were assessed to be 19.9 Bq/m3 and 1.2 mSv/y, respectively; the excess lifetime lung cancer risk was approximately 11.6 per million persons. A high correlation was found between the two methods of measurement (LR-115 Type II and RAD-7), R² = 0.99, which is significant at P < 0.001. The results of this work revealed that the radon concentration was below the action level of 148 Bq/m3 set by the United States Environmental Protection Agency, indicating that no radiological health hazard exists. However, the relatively high concentrations in some classrooms can be addressed by natural ventilation or by supplying the classrooms with suction fans.
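Annual dose and lifetime risk figures of this kind are normally computed with UNSCEAR/ICRP-style formulas of the shape below. The equilibrium factor, occupancy, dose conversion factor, and risk factor are generic textbook values, not necessarily those used in this study (whose per-million ELCR convention clearly differs), so the numbers are illustrative of the formulas only:

```python
def radon_annual_dose_msv(c_rn_bq_m3, eq_factor=0.4, occupancy=0.8,
                          dcf_msv_per_bq_h_m3=9e-6):
    """UNSCEAR-style indoor-radon annual effective dose:
    dose = C * F * occupancy * 8760 h * dose-conversion factor."""
    return c_rn_bq_m3 * eq_factor * occupancy * 8760.0 * dcf_msv_per_bq_h_m3

def excess_lifetime_cancer_risk(annual_dose_msv, duration_yr=70.0,
                                risk_factor_per_msv=5.5e-5):
    """ELCR = annual dose * exposure duration * fatal risk factor
    (ICRP-style detriment coefficient, quoted per mSv)."""
    return annual_dose_msv * duration_yr * risk_factor_per_msv

aed = radon_annual_dose_msv(50.0)     # the study's mean concentration
print(aed, excess_lifetime_cancer_risk(aed))
```

With these generic parameters, 50 Bq/m3 gives an annual dose of roughly the same order as the 1.2 mSv/y reported above.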

  15. Estimating the decline in excess risk of chronic obstructive pulmonary disease following quitting smoking - a systematic review based on the negative exponential model.

    Science.gov (United States)

    Lee, Peter N; Fry, John S; Forey, Barbara A

    2014-03-01

    We quantified the decline in COPD risk following quitting using the negative exponential model, as previously carried out for other smoking-related diseases. We identified 14 blocks of RRs (from 11 studies) comparing current smokers, former smokers (by time quit) and never smokers, some studies providing sex-specific blocks. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We estimated the half-life (H, time since quit when the excess risk becomes half that for a continuing smoker) for each block, except for one where no decline with quitting was evident, and H was not estimable. For the remaining 13 blocks, goodness-of-fit to the model was generally adequate, the combined estimate of H being 13.32 (95% CI 11.86-14.96) years. There was no heterogeneity in H, overall or by various studied sources. Sensitivity analyses allowing for reverse causation or different assumed times for the final quitting period little affected the results. The model summarizes quitting data well. The estimate of 13.32 years is substantially larger than recent estimates of 4.40 years for ischaemic heart disease and 4.78 years for stroke, and also larger than the 9.93 years for lung cancer. Heterogeneity was unimportant for COPD, unlike for the other three diseases. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Estimation of excess mortality due to long-term exposure to PM2.5 in Japan using a high-resolution model for present and future scenarios

    Science.gov (United States)

    Goto, Daisuke; Ueda, Kayo; Ng, Chris Fook Sheng; Takami, Akinori; Ariga, Toshinori; Matsuhashi, Keisuke; Nakajima, Teruyuki

    2016-09-01

    Particulate matter with a diameter of less than 2.5 μm, known as PM2.5, can affect human health, especially in elderly people. Because of the imminent aging of society in most developed countries, the human health impacts of PM2.5 must be evaluated. In this study, we used a global-to-regional atmospheric transport model to simulate PM2.5 in Japan with a high-resolution stretched grid system (∼10 km for the high-resolution model, HRM) for the present (2000) and the future (2030, as projected by Representative Concentration Pathway 4.5, RCP4.5). We also used the same model with a low-resolution uniform grid system (∼100 km for the low-resolution model, LRM). These calculations were conducted by nudging meteorological fields obtained from an atmosphere-ocean coupled model and providing emission inventories used in the coupled model. After correcting for bias, we calculated the excess mortality due to long-term exposure to PM2.5 among the elderly (over 65 years old) based on different minimum PM2.5 concentration (MINPM) levels to account for uncertainty, using the simulated PM2.5 distributions in a concentration-response function. As a result, we estimated the excess mortality for all of Japan to be 31,300 (95% confidence interval: 20,700 to 42,600) people in 2000 and 28,600 (95% confidence interval: 19,000 to 38,700) people in 2030 using the HRM with a MINPM of 5.8 μg/m3. In contrast, the LRM resulted in underestimates of approximately 30% (PM2.5 concentrations in 2000 and 2030), approximately 60% (excess mortality in 2000) and approximately 90% (excess mortality in 2030) compared to the HRM results. We also found that the uncertainty in the MINPM value, especially for low PM2.5 concentrations in the future (2030), can cause large variability in the estimates, ranging from 0 (MINPM of 15 μg/m3 in both HRM and LRM) to 95,000 (MINPM of 0 μg/m3 in HRM) people.
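Excess-mortality estimates of this kind combine a baseline mortality rate, an exposed population, and a log-linear concentration-response function with a cutoff at the MINPM level. The grid values and the beta coefficient below (about ln(1.15)/10 per μg/m3) are illustrative assumptions, not the study's inputs:

```python
import numpy as np

def excess_mortality(conc, min_conc, baseline_rate, population, beta=0.014):
    """Excess deaths from long-term PM2.5 exposure using a log-linear
    concentration-response function: the attributable fraction in each
    grid cell is AF = 1 - exp(-beta * max(C - C_min, 0))."""
    dc = np.maximum(np.asarray(conc, dtype=float) - min_conc, 0.0)
    af = 1.0 - np.exp(-beta * dc)
    return float(np.sum(baseline_rate * population * af))

# Hypothetical grid of PM2.5 (ug/m^3) and elderly population per cell
conc = np.array([8.0, 12.0, 15.0, 20.0])
pop = np.array([2.0e5, 5.0e5, 3.0e5, 1.0e5])
deaths = excess_mortality(conc, min_conc=5.8, baseline_rate=0.01, population=pop)
print(deaths)
```

Raising MINPM drives the attributable fraction in cleaner cells to zero, which is exactly the sensitivity to the assumed threshold that the abstract describes.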

  17. ModFOLD6: an accurate web server for the global and local quality estimation of 3D protein models.

    Science.gov (United States)

    Maghrabi, Ali H A; McGuffin, Liam J

    2017-07-03

    Methods that reliably estimate the likely similarity between the predicted and native structures of proteins have become essential for driving the acceptance and adoption of three-dimensional protein models by life scientists. ModFOLD6 is the latest version of our leading resource for Estimates of Model Accuracy (EMA), which uses a pioneering hybrid quasi-single model approach. The ModFOLD6 server integrates scores from three pure-single model methods and three quasi-single model methods using a neural network to estimate local quality scores. Additionally, the server provides three options for producing global score estimates, depending on the requirements of the user: (i) ModFOLD6_rank, which is optimized for ranking/selection, (ii) ModFOLD6_cor, which is optimized for correlations of predicted and observed scores and (iii) ModFOLD6 global for balanced performance. The ModFOLD6 methods rank among the top few for EMA, according to independent blind testing by the CASP12 assessors. The ModFOLD6 server is also continuously automatically evaluated as part of the CAMEO project, where significant performance gains have been observed compared to our previous server and other publicly available servers. The ModFOLD6 server is freely available at: http://www.reading.ac.uk/bioinf/ModFOLD/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    Science.gov (United States)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  19. Excessive growth.

    Science.gov (United States)

    Narayanaswamy, Vasudha; Rettig, Kenneth R; Bhowmick, Samar K

    2008-09-01

    Tall stature and excessive growth syndrome are a relatively rare concern in pediatric practice. Nevertheless, it is important to identify abnormal accelerated growth patterns in children, which may be the clue in the diagnosis of an underlying disorder. We present a case of pituitary gigantism in a 2 1/2-year-old child and discuss the signs, symptoms, laboratory findings, and the treatment. Brief discussions on the differential diagnosis of excessive growth/tall stature have been outlined. Pituitary gigantism is very rare in the pediatrics age group; however, it is extremely rare in a child that is less than 3 years of age. The nature of pituitary adenoma and treatment options in children with this condition have also been discussed.

  20. Genomic instability related to zinc deficiency and excess in an in vitro model: is the upper estimate of the physiological requirements recommended for children safe?

    Science.gov (United States)

    Padula, Gisel; Ponzinibbio, María Virginia; Gambaro, Rocío Celeste; Seoane, Analía Isabel

    2017-08-01

    Micronutrients are important for the prevention of degenerative diseases due to their role in maintaining genomic stability. Therefore, there is international concern about the need to redefine the optimal mineral and vitamin requirements to prevent DNA damage. We analyzed the cytostatic, cytotoxic, and genotoxic effect of in vitro zinc supplementation to determine the effects of zinc deficiency and excess and whether the upper estimate of the physiological requirement recommended for children is safe. To achieve zinc deficiency, DMEM/Ham's F12 medium (HF12) was chelated (HF12Q). Lymphocytes were isolated from healthy female donors (age range, 5-10 yr) and cultured for 7 d as follows: negative control (HF12, 60 μg/dl ZnSO₄); deficient (HF12Q, 12 μg/dl ZnSO₄); lower level (HF12Q + 80 μg/dl ZnSO₄); average level (HF12Q + 180 μg/dl ZnSO₄); upper limit (HF12Q + 280 μg/dl ZnSO₄); and excess (HF12Q + 380 μg/dl ZnSO₄). The comet (quantitative analysis) and cytokinesis-block micronucleus cytome assays were used. Differences were evaluated with Kruskal-Wallis and ANOVA (p < 0.05). Olive tail moment, tail length, micronuclei frequency, and apoptotic and necrotic percentages were significantly higher in the deficient, upper limit, and excess cultures compared with the negative control, lower, and average limit ones. In vitro zinc supplementation at the lower and average limits (80 and 180 μg/dl ZnSO₄) of the physiological requirement recommended for children proved to be the most beneficial in avoiding genomic instability, whereas the deficient, upper limit, and excess (12, 280, and 380 μg/dl) cultures increased DNA and chromosomal damage and apoptotic and necrotic frequencies.

  1. Mixing the Green-Ampt model and Curve Number method as an empirical tool for rainfall excess estimation in small ungauged catchments.

    Science.gov (United States)

    Grimaldi, S.; Petroselli, A.; Romano, N.

    2012-04-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model that is widely used to estimate direct runoff from small and ungauged basins. The SCS-CN method is a simple and valuable approach for estimating the total stream-flow volume generated by a storm rainfall, but it was developed to be used with daily rainfall data. To overcome this drawback, we propose to include the Green-Ampt (GA) infiltration model in a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), aiming to distribute in time the information provided by the SCS-CN method and so provide estimates of sub-daily incremental rainfall excess. For a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model. The proposed procedure was evaluated by analyzing 100 rainfall-runoff events observed in four small catchments of varying size. CN4GA appears to be an encouraging tool for predicting the net rainfall peak and duration values and has shown, at least for the test cases considered in this study, better agreement with observed hydrographs than the classic SCS-CN method.
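The central CN4GA step, calibrating the Green-Ampt conductivity Ks so that the event's total infiltration matches the SCS-CN loss (rainfall minus net rainfall), can be sketched with an explicit Green-Ampt capacity update f = Ks(1 + ψΔθ/F) and a bisection search on Ks. The suction-moisture product and the storm below are invented values:

```python
import numpy as np

def green_ampt_infiltration(rain_mm_h, dt_h, Ks, psi_dtheta=50.0):
    """Explicit Green-Ampt stepping: infiltration capacity
    f = Ks * (1 + psi*dtheta / F), supply-limited by the rainfall rate.
    Returns cumulative infiltration (mm) over the storm."""
    F = 1e-6                            # tiny seed to avoid division by zero
    for r in rain_mm_h:
        f_cap = Ks * (1.0 + psi_dtheta / F)
        F += min(r, f_cap) * dt_h
    return F

def calibrate_ks(rain_mm_h, dt_h, target_loss_mm, lo=0.01, hi=100.0):
    """Bisect on Ks so that total GA infiltration equals the SCS-CN loss,
    the core calibration idea of CN4GA (infiltration grows with Ks)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if green_ampt_infiltration(rain_mm_h, dt_h, mid) > target_loss_mm:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rain = np.full(24, 5.0)                 # 24 steps of 5 mm/h, 0.25 h each
Ks = calibrate_ks(rain, 0.25, target_loss_mm=18.0)
print(Ks, green_ampt_infiltration(rain, 0.25, Ks))
```

Once Ks is fixed, rainfall minus the stepwise infiltration gives the sub-daily incremental rainfall excess that the SCS-CN method alone cannot resolve.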

  2. Estimating Accurate Target Coordinates with Magnetic Resonance Images by Using Multiple Phase-Encoding Directions during Acquisition.

    Science.gov (United States)

    Kim, Minsoo; Jung, Na Young; Park, Chang Kyu; Chang, Won Seok; Jung, Hyun Ho; Chang, Jin Woo

    2018-06-01

    Stereotactic procedures are image guided, often using magnetic resonance (MR) images limited by image distortion, which may influence targets for stereotactic procedures. The aim of this work was to assess methods of identifying target coordinates for stereotactic procedures with MR in multiple phase-encoding directions. In 30 patients undergoing deep brain stimulation, we acquired 5 image sets: stereotactic brain computed tomography (CT), T2-weighted images (T2WI), and T1WI in both right-to-left (RL) and anterior-to-posterior (AP) phase-encoding directions. Using CT coordinates as a reference, we analyzed anterior commissure and posterior commissure coordinates to identify any distortion relating to phase-encoding direction. Compared with CT coordinates, RL-directed images had more positive x-axis values (0.51 mm in T1WI, 0.58 mm in T2WI). AP-directed images had more negative y-axis values (0.44 mm in T1WI, 0.59 mm in T2WI). We adopted 2 methods to predict CT coordinates with MR image sets: parallel translation and selective choice of axes according to phase-encoding direction. Both were equally effective at predicting CT coordinates using only MR; however, the latter may be easier to use in clinical settings. Acquiring MR in multiple phase-encoding directions and selecting axes according to the phase-encoding direction allows identification of more accurate coordinates for stereotactic procedures. © 2018 S. Karger AG, Basel.
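The "selective choice of axes" method can be sketched directly: geometric distortion concentrates along the phase-encoding axis, so each in-plane coordinate is taken from the acquisition that does not phase-encode along it. This is a reading of the abstract, not the authors' published code:

```python
def fused_coordinates(rl_xyz, ap_xyz):
    """Combine two MR localizations of the same target, keeping each axis
    from the acquisition whose phase-encoding direction does not distort it:
    the RL-encoded scan is distorted in x, the AP-encoded scan in y, so
    x comes from the AP scan, y from the RL scan; z is averaged."""
    x = ap_xyz[0]                      # AP encoding leaves x clean
    y = rl_xyz[1]                      # RL encoding leaves y clean
    z = 0.5 * (rl_xyz[2] + ap_xyz[2])
    return (x, y, z)

# Example with the mean T2WI shifts reported in the abstract: the RL scan
# is off by +0.58 mm in x and the AP scan by -0.59 mm in y, relative to CT.
ct = (12.0, -4.0, 1.5)
rl = (12.58, -4.0, 1.5)
ap = (12.0, -4.59, 1.5)
print(fused_coordinates(rl, ap))   # -> (12.0, -4.0, 1.5), matching CT
```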

  3. Development of Deep Learning Based Data Fusion Approach for Accurate Rainfall Estimation Using Ground Radar and Satellite Precipitation Products

    Science.gov (United States)

    Chen, H.; Chandra, C. V.; Tan, H.; Cifelli, R.; Xie, P.

    2016-12-01

    Rainfall estimation based on onboard satellite measurements has been an important topic in satellite meteorology for decades. A number of precipitation products at multiple time and space scales have been developed based upon satellite observations. For example, the NOAA Climate Prediction Center has developed a morphing technique (i.e., CMORPH) to produce global precipitation products by combining existing space-based rainfall estimates. The CMORPH products are essentially derived from geostationary satellite IR brightness temperature information and retrievals from passive microwave measurements (Joyce et al. 2004). Although the space-based precipitation products provide an excellent tool for regional and global hydrologic and climate studies as well as improved situational awareness for operational forecasts, their accuracy is limited due to sampling limitations, particularly for extreme events such as very light and/or heavy rain. On the other hand, ground-based radar is a more mature science for quantitative precipitation estimation (QPE), especially after the implementation of the dual-polarization technique and further enhanced by urban-scale radar networks. Therefore, ground radars are often critical for providing local-scale rainfall estimation and a "heads-up" for operational forecasters to issue watches and warnings, as well as validation of various space measurements and products. The CASA DFW QPE system, which is based on dual-polarization X-band CASA radars and a local S-band WSR-88DP radar, has demonstrated excellent performance during several years of operation in a variety of precipitation regimes. The real-time CASA DFW QPE products are used extensively for localized hydrometeorological applications such as urban flash flood forecasting. In this paper, a neural network based data fusion mechanism is introduced to improve the satellite-based CMORPH precipitation product by taking into account the ground radar measurements.
A deep learning system is

  4. HIV Excess Cancers JNCI

    Science.gov (United States)

    In 2010, an estimated 7,760 new cancers were diagnosed among the nearly 900,000 Americans known to be living with HIV infection. According to the first comprehensive study in the United States, approximately half of these cancers were in excess of what wo

  5. Albuminuria and neck circumference are determinate factors of successful accurate estimation of glomerular filtration rate in high cardiovascular risk patients.

    Directory of Open Access Journals (Sweden)

    Po-Jen Hsiao

    Full Text Available Estimated glomerular filtration rate (eGFR) is used for the diagnosis of chronic kidney disease (CKD). eGFR models based on serum creatinine or cystatin C are widely used in clinical practice. Albuminuria and neck circumference are associated with CKD and may correlate with eGFR. We explored the correlations and modelling formulas among various indicators such as serum creatinine, cystatin C, albuminuria, and neck circumference for eGFR. Cross-sectional study. We reviewed the records of patients with high cardiovascular risk from 2010 to 2011 in Taiwan. 24-hour urine creatinine clearance was used as the standard. We utilized a decision tree to select variables and adopted a stepwise regression method to generate five models. Model 1 was based on serum creatinine alone and was adjusted for age and gender. Model 2 added serum cystatin C, and models 3 and 4 added albuminuria and neck circumference, respectively. Model 5 simultaneously added both albuminuria and neck circumference. A total of 177 patients were recruited in this study. In model 1, the bias was 2.01 and its precision was 14.04. In model 2, the bias was reduced to 1.86 with a precision of 13.48. The bias of model 3 was 1.49 with a precision of 12.89, and the bias of model 4 was 1.74 with a precision of 12.97. In model 5, the bias was further reduced to 1.40 with a precision of 12.53. In this study, the predictive ability of eGFR was improved by adding serum cystatin C compared to serum creatinine alone. The bias was more significantly reduced by incorporating albuminuria. Furthermore, the model combining albuminuria and neck circumference provided the best eGFR predictions among the five eGFR models. Neck circumference could be investigated further in future studies.
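The bias and precision figures quoted for the five models are simply the mean and standard deviation of the model-minus-reference differences, with 24-hour creatinine clearance as the reference:

```python
import numpy as np

def bias_precision(egfr_model, ccr_reference):
    """Bias = mean(model - reference); precision = SD of the differences,
    the agreement metrics used to rank the five eGFR models."""
    d = np.asarray(egfr_model, dtype=float) - np.asarray(ccr_reference, dtype=float)
    return float(d.mean()), float(d.std(ddof=1))

# Synthetic cohort of 177 "patients" with a model whose errors have
# bias ~2 and SD ~13, mimicking the scale of the reported model 1
rng = np.random.default_rng(3)
ccr = rng.normal(100, 20, 177)                 # 24-h creatinine clearance
model = ccr + rng.normal(2.0, 13.0, 177)
print(bias_precision(model, ccr))
```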

  6. Adaptive estimation of the electromotive force of the lithium-ion battery after current interruption for an accurate state-of-charge and capacity determination

    International Nuclear Information System (INIS)

    Waag, Wladislaw; Sauer, Dirk Uwe

    2013-01-01

Highlights: • New adaptive approach for the EMF estimation. • The EMF is estimated by observing the voltage change after the current interruption. • The approach enables an accurate SoC and capacity determination. • Real-time capable algorithm. - Abstract: The online estimation of battery states and parameters is one of the challenging tasks when a battery is used as part of a pure electric or hybrid energy system. Determining the available energy stored in the battery requires knowledge of its present state-of-charge (SOC) and capacity. SOC and capacity determination often employ an estimate of the battery electromotive force (EMF). The electromotive force can be measured as the open circuit voltage (OCV) of the battery once a significant time has elapsed since the current interruption. This time may take up to some hours for lithium-ion batteries and is needed to eliminate the influence of the diffusion overvoltages. This paper proposes a new approach that estimates the EMF from the OCV relaxation process within only the first few minutes after the current interruption. The approach is based on an online fitting of an OCV relaxation model to the measured OCV relaxation curve. The model is an equivalent circuit consisting of a voltage source (representing the EMF) in series with the parallel connection of a resistance and a constant phase element (CPE). From this fit the model parameters are determined and the EMF is estimated. The application of the method is demonstrated for the state-of-charge and capacity estimation of a lithium-ion battery in an electric vehicle. In the presented example the battery capacity is determined with a maximum inaccuracy of 2% using the EMF estimated at two different levels of state-of-charge. The real-time capability of the proposed algorithm is proven by its implementation on a low-cost 16-bit microcontroller (Infineon XC2287).
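The core of the approach is extrapolating the EMF from only the first minutes of relaxation by fitting a parametric model to the measured OCV curve. A minimal sketch (synthetic data; a stretched exponential is used here as an assumed stand-in for the R‖CPE relaxation response, not the paper's exact model):

```python
import numpy as np
from scipy.optimize import curve_fit

def ocv_model(t, emf, dv, tau, beta):
    """OCV relaxing toward the EMF. Stretched exponential assumed as a
    closed-form proxy for the R||CPE response (illustrative only)."""
    return emf - dv * np.exp(-(t / tau) ** beta)

# Synthetic measurement: first 5 minutes after current interruption
t = np.linspace(1, 300, 150)                       # seconds
true_params = (3.70, 0.08, 120.0, 0.6)             # EMF [V], overvoltage, tau, beta
rng = np.random.default_rng(1)
v = ocv_model(t, *true_params) + rng.normal(0, 1e-4, t.size)

# Fit the model and read the EMF off the voltage-source parameter
popt, _ = curve_fit(ocv_model, t, v, p0=(3.6, 0.1, 60.0, 0.8))
emf_est = popt[0]
```

Even though the curve is still far from fully relaxed at 300 s, the fitted voltage-source term recovers the asymptotic EMF, which is the point of the method.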

  7. How accurate are adolescents in portion-size estimation using the computer tool Young Adolescents' Nutrition Assessment on Computer (YANA-C)?

    Science.gov (United States)

    Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea

    2010-06-01

    Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amounts of ten commonly consumed foods (breakfast cereals, French fries, pasta, rice, apple sauce, carrots and peas, crisps, creamy velouté, red cabbage, and peas). Two procedures were followed: (1) short-term recall: adolescents (n 73) self-served their usual portions of the ten foods and estimated the amounts later the same day; (2) real-time perception: adolescents (n 128) estimated two sets (different portions) of pre-weighed portions displayed near the computer. Self-served portions were, on average, 8 % underestimated; significant underestimates were found for breakfast cereals, French fries, peas, and carrots and peas. Spearman's correlations between the self-served and estimated weights varied between 0.51 and 0.84, with an average of 0.72. The kappa statistics were moderate (>0.4) for all but one item. Pre-weighed portions were, on average, 15 % underestimated, with significant underestimates for fourteen of the twenty portions. Photographs of food items can serve as a good aid in ranking subjects; however, to assess the actual intake at a group level, underestimation must be considered.

  8. Subdwarf ultraviolet excesses and metal abundances

    International Nuclear Information System (INIS)

    Carney, B.W.

    1979-01-01

The relation between stellar ultraviolet excesses and abundances is reexamined with the aid of new data, and an investigation is made of the accuracy of previous abundance analyses. A high-resolution echellogram of the subdwarf HD 201891 is analyzed to illustrate some of the problems. Generally, the earliest and latest analytical techniques yield consistent results for dwarfs. New UBV data yield normalized ultraviolet excesses, δ(U−B)_0.6, which are compared to abundances to produce a graphical relation that may be used to estimate [Fe/H] to ±0.2 dex, given UBV colors accurate to ±0.01 mag. The relation suggests a possible discontinuity between the halo and old-disk stars.

  9. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Energy Technology Data Exchange (ETDEWEB)

    Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)

    2007-10-15

Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications including heart disease, stroke, blindness, kidney disease and amputations make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach enabling the accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from complex biological media.

  10. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Science.gov (United States)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications including heart disease, stroke, blindness, kidney disease and amputations make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach enabling the accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from complex biological media.

  11. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    International Nuclear Information System (INIS)

    Rybynok, V O; Kyriacou, P A

    2007-01-01

Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications including heart disease, stroke, blindness, kidney disease and amputations make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach enabling the accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from complex biological media.

  12. A Generic Simulation Approach for the Fast and Accurate Estimation of the Outage Probability of Single Hop and Multihop FSO Links Subject to Generalized Pointing Errors

    KAUST Repository

    Ben Issaid, Chaouki; Park, Kihong; Alouini, Mohamed-Slim

    2017-01-01

When assessing the performance of free space optical (FSO) communication systems, the outage probability encountered is generally very small, and thereby the use of naive Monte Carlo simulations becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach based on the exponential twisting technique to offer fast and accurate results. In fact, we consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naive Monte Carlo.

  13. A Generic Simulation Approach for the Fast and Accurate Estimation of the Outage Probability of Single Hop and Multihop FSO Links Subject to Generalized Pointing Errors

    KAUST Repository

    Ben Issaid, Chaouki

    2017-07-28

When assessing the performance of free space optical (FSO) communication systems, the outage probability encountered is generally very small, and thereby the use of naive Monte Carlo simulations becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach based on the exponential twisting technique to offer fast and accurate results. In fact, we consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naive Monte Carlo.
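The exponential-twisting idea can be illustrated on a toy rare-event problem (a sum of unit-mean exponentials, assumed here purely as a stand-in for the paper's Beckmann pointing-error model): tilt the sampling density toward the rare region, then undo the tilt with likelihood-ratio weights.

```python
import numpy as np

def tail_naive(n, gamma, samples, rng):
    """Naive MC estimate of P(X1 + ... + Xn > gamma), Xi ~ Exp(1) i.i.d."""
    s = rng.exponential(1.0, size=(samples, n)).sum(axis=1)
    return (s > gamma).mean()

def tail_twisted(n, gamma, samples, rng):
    """Importance sampling via exponential twisting: draw Xi ~ Exp(1 - theta)
    and reweight each sample by exp(-theta*S) * M(theta)^n, where
    M(theta) = 1/(1 - theta) is the MGF of Exp(1). The tilt theta is chosen
    so the twisted mean n/(1 - theta) sits exactly at the threshold gamma."""
    theta = 1.0 - n / gamma
    s = rng.exponential(1.0 / (1.0 - theta), size=(samples, n)).sum(axis=1)
    w = np.exp(-theta * s) / (1.0 - theta) ** n
    return ((s > gamma) * w).mean()
```

With the twisted mean placed at the threshold, a few hundred thousand samples pin down a probability on the order of 10^-10 that naive Monte Carlo would essentially never observe.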

  14. Determining Optimal New Generation Satellite Derived Metrics for Accurate C3 and C4 Grass Species Aboveground Biomass Estimation in South Africa

    Directory of Open Access Journals (Sweden)

    Cletah Shoko

    2018-04-01

Full Text Available While satellite data have proved to be a powerful tool in estimating C3 and C4 grass species Aboveground Biomass (AGB), finding an appropriate sensor that can accurately characterize the inherent variations remains a challenge. This limitation has hampered the remote sensing community from continuously and precisely monitoring their productivity. This study assessed the potential of the Sentinel 2 MultiSpectral Instrument, Landsat 8 Operational Land Imager, and WorldView-2 sensors, with improved earth imaging characteristics, in estimating C3 and C4 grass AGB in the Cathedral Peak area, South Africa. Overall, all sensors showed considerable potential in estimating species AGB, with different combinations of the derived spectral bands and vegetation indices producing better accuracies. However, WorldView-2 derived variables yielded the best predictive accuracies (R2 ranging between 0.71 and 0.83; RMSEs between 6.92% and 9.84%), followed by Sentinel 2 (R2 between 0.60 and 0.79; RMSEs between 7.66% and 14.66%). Comparatively, Landsat 8 yielded weaker estimates, with R2 ranging between 0.52 and 0.71 and high RMSEs ranging between 9.07% and 19.88%. In addition, spectral bands located within the red edge (e.g., centered at 0.705 and 0.745 µm for Sentinel 2), SWIR, and NIR, as well as the derived indices, were found to be very important in predicting C3 and C4 AGB from the three sensors. The competence of these bands, especially of the freely available Landsat 8 and Sentinel 2 datasets, was also confirmed from the fusion of the datasets. Most importantly, the three sensors managed to capture and show the spatial variations in AGB for the target C3 and C4 grassland area. This work therefore provides a new horizon and a fundamental step towards C3 and C4 grass productivity monitoring for carbon accounting, forage mapping, and modelling the influence of environmental changes on their productivity.

  15. Accurate estimation of global and regional cardiac function by retrospectively gated multidetector row computed tomography. Comparison with cine magnetic resonance imaging

    International Nuclear Information System (INIS)

    Belge, Benedicte; Pasquet, Agnes; Vanoverschelde, Jean-Louis J.; Coche, Emmanuel; Gerber, Bernhard L.

    2006-01-01

    Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134±51 and 67±56 ml) were similar to those by MR (137±57 and 70±60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55±21 vs. 56±21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3±1.8 vs. 8.8±1.9 mm and 12.7±3.4 vs. 13.3±3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54±30 vs. 51±31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR. (orig.)

  16. Excess Entropy and Diffusivity

    Indian Academy of Sciences (India)

Excess entropy scaling of diffusivity (Rosenfeld, 1977). Analogous relationships also exist for viscosity and thermal conductivity.

  17. Structured settlement annuities, part 2: mortality experience 1967--95 and the estimation of life expectancy in the presence of excess mortality.

    Science.gov (United States)

    Singer, R B; Schmidt, C J

    2000-01-01

The mortality experience for structured settlement (SS) annuitants issued as both standard (Std) and substandard (SStd) risks has been reported twice previously by the Society of Actuaries (SOA), but the 1995 mortality described here has not previously been published. We describe the 1995 SS mortality in detail, and we also discuss the methodology of calculating life expectancy (e), contrasting three different life-table models. With SOA permission, we present in four tables the unpublished results of its 1995 SS mortality experience by Std and SStd issue, sex, and a combination of 8 age and 6 duration groups. Overall results, with expected mortality from the 1983a Individual Annuity Table, showed a mortality ratio (MR) of about 140% for Std cases and about 650% for all SStd cases. Life expectancy in a group with excess mortality may be computed either by adding the decimal excess death rate (EDR) to q' for each year of attained age up to age 109 or by multiplying q' by the decimal MR for each year up to age 109. An example is given for men aged 60 with localized prostate cancer; annual EDRs from a large published cancer study are used at durations 0-24 years, and the last EDR is assumed constant to age 109. This value of e is compared with e from constant initial values of EDR or MR after the first year. Interrelations of age, sex, e, EDR, and MR are discussed and illustrated with tabular data. It is shown that a constant MR in the life-table calculation of e consistently overestimates projected annual mortality at older attained ages and underestimates e. The EDR method, approved for reserve calculations, is also recommended for use in underwriting conversion tables.
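The two adjustment methods can be contrasted numerically. A minimal sketch with a toy Gompertz standard table (hypothetical rates, not the 1983a table): when the MR is matched to the same first-year excess as a constant EDR, the multiplicative method projects heavier mortality at older attained ages and therefore a shorter life expectancy, as the abstract notes.

```python
import numpy as np

def life_expectancy(q):
    """Curtate life expectancy from a vector of annual death rates q(x):
    sum of the probabilities of surviving to each successive year end."""
    p = np.cumprod(1.0 - np.asarray(q))
    return p.sum()

ages = np.arange(60, 110)                                      # to age 109
q_std = np.minimum(0.0001 * np.exp(0.09 * (ages - 20)), 1.0)   # toy Gompertz table

edr = 0.02                               # constant excess death rate, added to q'
q_edr = np.minimum(q_std + edr, 1.0)

mr = (q_std[0] + edr) / q_std[0]         # MR matched to the same first-year mortality
q_mr = np.minimum(q_std * mr, 1.0)       # constant MR, multiplying q'

e_std, e_edr, e_mr = map(life_expectancy, (q_std, q_edr, q_mr))
```

Because q' rises steeply with age, the constant-MR table inflates mortality most exactly where the EDR table adds least, so e_mr comes out below e_edr for the same first-year excess.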

  18. Devices used by automated milking systems are similarly accurate in estimating milk yield and in collecting a representative milk sample compared with devices used by farms with conventional milk recording

    NARCIS (Netherlands)

    Kamphuis, Claudia; Dela Rue, B.; Turner, S.A.; Petch, S.

    2015-01-01

    Information on accuracy of milk-sampling devices used on farms with automated milking systems (AMS) is essential for development of milk recording protocols. The hypotheses of this study were (1) devices used by AMS units are similarly accurate in estimating milk yield and in collecting

  19. Abnormal excess heat observed during Mizuno-type experiments

    International Nuclear Information System (INIS)

    Fauvarque, Jean-Francois; Clauzon, Pierre Paul; Lalleve, Gerard Jean-Michel

    2006-01-01

A simple calorimeter has been designed that works at a constant temperature: that of boiling water. Heat losses can be estimated accurately with an ohmic heater. As expected, the losses are independent of the electric power input to the heater, and the amount of evaporated water is linearly dependent on the power input. The device has been used to determine the heating power of a plasma electrolysis (the Ohmori-Mizuno experiment). We confirm that in this experiment, the heat output from electrolysis is greater than the electrical power input. The excess energy increases as the electrolysis voltage is increased from 200 up to 350 V (400 V input). The excess power may be as high as 120 W. (author)

  20. Measuring excess capital capacity in agricultural production

    NARCIS (Netherlands)

    Zhengfei, G.; Kumbhakar, S.C.; Myers, R.J.; Oude Lansink, A.G.J.M.

    2009-01-01

    We introduce the concept "excess capital capacity" and employ a stochastic input requirement frontier to measure excess capital capacity in agricultural production. We also propose a two-step estimation method that allows endogenous regressors in stochastic frontier models. The first step uses

  1. Publication Bias Currently Makes an Accurate Estimate of the Benefits of Enrichment Programs Difficult: A Postmortem of Two Meta-Analyses Using Statistical Power Analysis

    Science.gov (United States)

    Warne, Russell T.

    2016-01-01

    Recently Kim (2016) published a meta-analysis on the effects of enrichment programs for gifted students. She found that these programs produced substantial effects for academic achievement (g = 0.96) and socioemotional outcomes (g = 0.55). However, given current theory and empirical research these estimates of the benefits of enrichment programs…

  2. A lake classification concept for a more accurate global estimate of the dissolved inorganic carbon export from terrestrial ecosystems to inland waters

    Science.gov (United States)

    Engel, Fabian; Farrell, Kaitlin J.; McCullough, Ian M.; Scordo, Facundo; Denfeld, Blaize A.; Dugan, Hilary A.; de Eyto, Elvira; Hanson, Paul C.; McClure, Ryan P.; Nõges, Peeter; Nõges, Tiina; Ryder, Elizabeth; Weathers, Kathleen C.; Weyhenmeyer, Gesa A.

    2018-04-01

The magnitude of lateral dissolved inorganic carbon (DIC) export from terrestrial ecosystems to inland waters strongly influences the estimate of the global terrestrial carbon dioxide (CO2) sink. At present, no reliable number for this export is available, and the few studies estimating the lateral DIC export assume that all lakes on Earth function similarly. However, lakes can function along a continuum from passive carbon transporters (passive open channels) to highly active carbon transformers with efficient in-lake CO2 production and loss. We developed and applied a conceptual model to demonstrate how the assumed function of lakes in carbon cycling can affect calculations of the global lateral DIC export from terrestrial ecosystems to inland waters. Using global data on in-lake CO2 production by mineralization as well as CO2 loss by emission, primary production, and carbonate precipitation in lakes, we estimated that the global lateral DIC export can lie within the range of 0.70 (+0.27/−0.31) to 1.52 (+1.09/−0.90) Pg C yr−1, depending on the assumed function of lakes. Thus, the considered lake function has a large effect on the calculated lateral DIC export from terrestrial ecosystems to inland waters. We conclude that more robust estimates of CO2 sinks and sources will require the classification of lakes into their predominant function. This functional lake classification concept becomes particularly important for the estimation of future CO2 sinks and sources, since in-lake carbon transformation is predicted to be altered with climate change.

  3. On the concept of critical surface excess of micellization.

    Science.gov (United States)

    Talens-Alesson, Federico I

    2010-11-16

    The critical surface excess of micellization (CSEM) should be regarded as the critical condition for micellization of ionic surfactants instead of the critical micelle concentration (CMC). There is a correspondence between the surface excesses Γ of anionic, cationic, and zwitterionic surfactants at their CMCs, which would be the CSEM values, and the critical association distance for ionic pair association calculated using Bjerrum's correlation. Further support to this concept is given by an accurate method for the prediction of the relative binding of alkali cations onto dodecylsulfate (NaDS) micelles. This method uses a relative binding strength parameter calculated from the values of surface excess Γ at the CMC of the alkali dodecylsulfates. This links both the binding of a given cation onto micelles and the onset for micellization of its surfactant salt. The CSEM concept implies that micelles form at the air-water interface unless another surface with greater affinity for micelles exists. The process would start when surfactant monomers are close enough to each other for ionic pairing with counterions and the subsequent assembly of these pairs becomes unavoidable. This would explain why the surface excess Γ values of different surfactants are more similar than their CMCs: the latter are just the bulk phase concentrations in equilibrium with chemicals with different hydrophobicity. An intriguing implication is that CSEM values may be used to calculate the actual critical distances of ionic pair formation for different cations, replacing Bjerrum's estimates, which only discriminate by the magnitude of the charge.
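Bjerrum's estimate referred to above has a simple closed form: ionic pairing becomes favorable inside half the Bjerrum length, the separation at which the Coulomb energy of two unit charges equals kT. A sketch (standard SI constants; water at 25 °C assumed):

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19      # elementary charge, C
K_B      = 1.380649e-23         # Boltzmann constant, J/K
EPS0     = 8.8541878128e-12     # vacuum permittivity, F/m

def bjerrum_length(eps_r, temp_k):
    """Separation at which the Coulomb energy of two unit charges in a
    medium of relative permittivity eps_r equals the thermal energy kT."""
    return E_CHARGE**2 / (4 * math.pi * EPS0 * eps_r * K_B * temp_k)

lb = bjerrum_length(78.4, 298.15)   # water at 25 degC: about 0.71 nm
r_crit = lb / 2                     # Bjerrum critical association distance, 1:1 pairing
```

The CSEM argument in the abstract amounts to replacing these bulk-dielectric estimates with distances inferred from the measured surface excess values.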

  4. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    Science.gov (United States)

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of polydimethylsiloxane (PDMS), values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, indicating the anticipated better performance of the pp-LFER model over COSMO-RS. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris (4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. Copyright © 2016. Published by Elsevier Ltd.

  5. Estimating the four-factor product (ε p Pfnl Ptnl) for the accurate calculation of xenon and samarium reactivities in the Syrian Miniature Neutron Source Reactor

    International Nuclear Information System (INIS)

    Khattab, K.

    2007-01-01

The modified 135Xe equilibrium reactivity in the Syrian Miniature Neutron Source Reactor (MNSR) was calculated first by using the WIMSD4 and CITATION codes to estimate the four-factor product (ε p Pfnl Ptnl). Then, precise calculations of the 135Xe and 149Sm concentrations and reactivities were carried out and compared during the reactor operation time and after shutdown. It was found that the 135Xe and 149Sm reactivities did not reach their equilibrium values during the daily operating time of the reactor. The 149Sm reactivities could be neglected compared to the 135Xe reactivities during the reactor operating time and after shutdown. (author)

  6. Estimating the four-factor product (ε p Pfnl Ptnl) for the accurate calculation of xenon and samarium reactivities in the Syrian Miniature Neutron Source Reactor

    International Nuclear Information System (INIS)

    Khattab, K.

    2007-01-01

The modified 135Xe equilibrium reactivity in the Syrian Miniature Neutron Source Reactor (MNSR) was calculated first by using the WIMSD4 and CITATION codes to estimate the four-factor product (ε p Pfnl Ptnl). Then, precise calculations of the 135Xe and 149Sm concentrations and reactivities were carried out and compared during the reactor operation time and after shutdown. It was found that the 135Xe and 149Sm reactivities did not reach their equilibrium values during the daily operating time of the reactor. The 149Sm reactivities could be neglected compared to the 135Xe reactivities during the reactor operating time and after shutdown. (author)

  7. Two-compartment, two-sample technique for accurate estimation of effective renal plasma flow: Theoretical development and comparison with other methods

    International Nuclear Information System (INIS)

    Lear, J.L.; Feyerabend, A.; Gregory, C.

    1989-01-01

    Discordance between effective renal plasma flow (ERPF) measurements from radionuclide techniques that use single versus multiple plasma samples was investigated. In particular, the authors determined whether effects of variations in distribution volume (Vd) of iodine-131 iodohippurate on measurement of ERPF could be ignored, an assumption implicit in the single-sample technique. The influence of Vd on ERPF was found to be significant, a factor indicating an important and previously unappreciated source of error in the single-sample technique. Therefore, a new two-compartment, two-plasma-sample technique was developed on the basis of the observations that while variations in Vd occur from patient to patient, the relationship between intravascular and extravascular components of Vd and the rate of iodohippurate exchange between the components are stable throughout a wide range of physiologic and pathologic conditions. The new technique was applied in a series of 30 studies in 19 patients. Results were compared with those achieved with the reference, single-sample, and slope-intercept techniques. The new two-compartment, two-sample technique yielded estimates of ERPF that more closely agreed with the reference multiple-sample method than either the single-sample or slope-intercept techniques

  8. An accurate density functional theory based estimation of pK(a) values of polar residues combined with experimental data: from amino acids to minimal proteins.

    Science.gov (United States)

    Matsui, Toru; Baba, Takeshi; Kamiya, Katsumasa; Shigeta, Yasuteru

    2012-03-28

    We report a scheme for estimating the acid dissociation constant (pK(a)) based on quantum-chemical calculations combined with a polarizable continuum model, where a parameter is determined for small reference molecules. We calculated the pK(a) values of variously sized molecules ranging from an amino acid to a protein consisting of 300 atoms. This scheme enabled us to derive a semiquantitative pK(a) value of specific chemical groups and discuss the influence of the surroundings on the pK(a) values. As applications, we have derived the pK(a) value of the side chain of an amino acid and almost reproduced the experimental value. By using our computing schemes, we showed the influence of hydrogen bonds on the pK(a) values in the case of tripeptides, which decreases the pK(a) value by 3.0 units for serine in comparison with those of the corresponding monopeptides. Finally, with some assumptions, we derived the pK(a) values of tyrosines and serines in chignolin and a tryptophan cage. We obtained quite different pK(a) values of adjacent serines in the tryptophan cage; the pK(a) value of the OH group of Ser13 exposed to bulk water is 14.69, whereas that of Ser14 not exposed to bulk water is 20.80 because of the internal hydrogen bonds.

  9. Hyperhidrosis (Excessive Sweating)

    Science.gov (United States)

... who have this type are otherwise healthy. In medical terminology, the word "primary" means that the cause is not another medical condition. Secondary hyperhidrosis: in medical terminology, "secondary" means that the excessive sweating (hyperhidrosis) has ...

  10. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided and that one can extrapolate expectation values, rather than the wavefunctions, to obtain highly accurate values. We discuss the eigenvalues and the error growth in repeated Richardson extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
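    The idea can be sketched on a scalar example (illustrative only, not the authors' code): one Richardson step combines central-difference estimates at step h and h/2 to cancel the O(h²) error term, leaving O(h⁴) accuracy.

```python
import math

def second_derivative(f, x, h):
    # Central difference for f''(x); leading error term is O(h^2).
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def richardson_step(f, x, h):
    # Combine estimates at h and h/2 to cancel the O(h^2) error term.
    return (4.0 * second_derivative(f, x, h / 2) - second_derivative(f, x, h)) / 3.0

exact = -math.sin(1.0)                     # true value of (sin)''(1)
crude = second_derivative(math.sin, 1.0, 0.1)
refined = richardson_step(math.sin, 1.0, 0.1)
print(abs(crude - exact), abs(refined - exact))  # the refined error is far smaller
```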

  11. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    International Nuclear Information System (INIS)

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-01-01

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear–quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18–30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8–30.9 Gy) and 22.0 Gy (range, 20.2–26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models traditionally used to estimate spinal cord NTCP
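    For readers unfamiliar with the LKB machinery: the model reduces a dose distribution to a generalized equivalent uniform dose (gEUD) and maps it through a probit function. A minimal sketch, using the often-quoted Burman spinal cord parameters (TD50 = 66.5 Gy, m = 0.175, n = 0.05) as illustrative inputs rather than this study's fitted values:

```python
import math

def geud(dose_bins_gy, volume_fractions, n):
    """Generalized EUD: (sum_i v_i * D_i**(1/n))**n; small n approaches the max dose."""
    return sum(v * d ** (1.0 / n)
               for d, v in zip(dose_bins_gy, volume_fractions)) ** n

def lkb_ntcp(eud_gy, td50_gy, m):
    """Lyman model: NTCP = Phi(t), with t = (EUD - TD50) / (m * TD50)."""
    t = (eud_gy - td50_gy) / (m * td50_gy)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical two-bin cord dose distribution: 10% of volume at 20 Gy, 90% at 10 Gy.
eud = geud([20.0, 10.0], [0.1, 0.9], n=0.05)
print(eud, lkb_ntcp(eud, 66.5, 0.175))  # EUD near the hot spot; NTCP far below 50%
```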

  12. Superconductors with excess quasiparticles

    International Nuclear Information System (INIS)

    Elesin, V.F.; Kopaev, Y.V.

    1981-01-01

    This review presents a systematic kinetic theory of nonequilibrium phenomena in superconductors with excess quasiparticles created by electromagnetic or tunnel injection. The energy distributions of excess quasiparticles and of nonequilibrium phonons, dependence of the order parameter on the power and frequency (or intensity) of the electromagnetic field, magnetic properties of nonequilibrium superconductors, I-V curves of superconductor-insulator-superconductor junctions, and other properties are described in detail. The stability of superconducting states far from thermodynamic equilibrium is investigated and it is shown that characteristic instabilities leading to the formation of nonuniform states of a new type or phase transitions of the first kind are inherent to superconductors with excess quasiparticles. The results are compared with experimental data

  13. Excess wind power

    DEFF Research Database (Denmark)

    Østergaard, Poul Alberg

    2005-01-01

    Expansion of wind power is an important element in Danish climate change abatement policy. Starting from a high penetration of approx. 20%, however, momentary excess production will become an important issue in the future. Through energy systems analyses using the EnergyPLAN model and economic...... analyses it is analysed how excess production can be better utilised: through conversion into hydrogen or through expansion of export connections, thereby enabling sales. The results demonstrate that hydrogen production in particular is unviable under current costs, but transmission expansion could......

  14. The High Price of Excessive Alcohol Consumption

    Centers for Disease Control (CDC) Podcasts

    This podcast is based on the October 2011 release of a report estimating the economic cost of excessive drinking. Excessive alcohol consumption cost the U. S. $223.5 billion in 2006, or about $1.90 per drink. Over three-quarters (76%) of these costs were due to binge drinking, defined as consuming 4 or more alcoholic beverages per occasion for women or 5 or more drinks per occasion for men.

  15. Symptom profiles of subsyndromal depression in disease clusters of diabetes, excess weight, and progressive cerebrovascular conditions: a promising new type of finding from a reliable innovation to estimate exhaustively specified multiple indicators–multiple causes (MIMIC) models

    Directory of Open Access Journals (Sweden)

    Francoeur RB

    2016-12-01

    Full Text Available Richard B Francoeur School of Social Work, Adelphi University, Garden City, NY, USA Abstract: Addressing subsyndromal depression in cerebrovascular conditions, diabetes, and obesity reduces morbidity and risk of major depression. However, depression may be masked because self-reported symptoms may not reveal dysphoric (sad) mood. In this study, the first wave (2,812 elders) from the New Haven Epidemiological Study of the Elderly (EPESE) was used. These population-weighted data combined a stratified, systematic, clustered random sample from independent residences and a census of senior housing. Physical conditions included progressive cerebrovascular disease (CVD: hypertension, silent CVD, stroke, and vascular cognitive impairment [VCI]) and co-occurring excess weight and/or diabetes. These conditions and interactions (clusters) simultaneously predicted 20 depression items and a latent trait of depression in participants with subsyndromal (including subthreshold) depression (11 ≤ Center for Epidemiologic Studies Depression Scale [CES-D] score ≤ 27). The option for maximum likelihood estimation with standard errors that are robust to non-normality and non-independence in complex random samples (MLR) in Mplus and an innovation created by the author were used for estimating unbiased effects from latent trait models with exhaustive specification. Symptom profiles reveal masked depression in (1) older males, related to the metabolic syndrome (hypertension–overweight–diabetes; silent CVD–overweight; and silent CVD–diabetes) and (2) older females or the full sample, related to several diabetes and/or overweight clusters that involve stroke or VCI. Several other disease clusters are equivocal regarding masked depression; a couple do emphasize dysphoric mood. Replicating findings could identify subgroups for cost-effective screening of subsyndromal depression. Keywords: depression, diabetes, overweight, cerebrovascular disease, hypertension, metabolic

  16. Disposition of excess material

    International Nuclear Information System (INIS)

    Hall, J.C.

    1978-01-01

    This paper reviews briefly the means available to an enrichment customer to dispose of excess material scheduled for delivery under a fixed-commitment contract, other than through termination of the related separative work. The methods are as follows: (1) sales; (2) use in facilities covered by other DOE contracts; and (3) assignment

  17. Excessive crying in infants

    Directory of Open Access Journals (Sweden)

    Ricardo Halpern

    2016-05-01

    Conclusion: Excessive crying in the early months is a prevalent symptom; the pediatrician's attention is necessary to understand and adequately manage the problem and offer support to exhausted parents. The prescription of drugs of questionable action and with potential side effects is not a recommended treatment, except in extreme situations. The effectiveness of dietary treatments and use of probiotics still require confirmation. There is incomplete evidence regarding alternative treatments such as manipulation techniques, acupuncture, use of herbal supplements, and behavioral interventions.

  18. ACTIVATION PARAMETERS AND EXCESS THERMODYNAMIC ...

    African Journals Online (AJOL)

    Applying these data, viscosity B-coefficients, activation parameters (Δμ10≠ and Δμ20≠), and excess thermodynamic functions, viz., excess molar volume (VE), excess viscosity (ηE), and excess molar free energy of activation of flow (GE), were calculated. The value of the interaction parameter, d, of Grunberg and Nissan ...

  19. Limiting law excess sum rule for polyelectrolytes.

    Science.gov (United States)

    Landy, Jonathan; Lee, YongJin; Jho, YongSeok

    2013-11-01

    We revisit the mean-field limiting law screening excess sum rule that holds for rodlike polyelectrolytes. We present an efficient derivation of this law that clarifies its region of applicability: The law holds in the limit of small polymer radius, measured relative to the Debye screening length. From the limiting law, we determine the individual ion excess values for single-salt electrolytes. We also consider the mean-field excess sum away from the limiting region, and we relate this quantity to the osmotic pressure of a dilute polyelectrolyte solution. Finally, we consider numerical simulations of many-body polymer-electrolyte solutions. We conclude that the limiting law often accurately describes the screening of physical charged polymers of interest, such as extended DNA.

  20. CASPER: Embedding Power Estimation and Hardware-Controlled Power Management in a Cycle-Accurate Micro-Architecture Simulation Platform for Many-Core Multi-Threading Heterogeneous Processors

    Directory of Open Access Journals (Sweden)

    Arun Ravindran

    2012-02-01

    Full Text Available Despite the promising performance improvement observed in emerging many-core architectures in high performance processors, high power consumption prohibitively affects their use and marketability in the low-energy sectors, such as embedded processors, network processors and application specific instruction processors (ASIPs). While most chip architects design power-efficient processors by finding an optimal power-performance balance in their design, some use sophisticated on-chip autonomous power management units, which dynamically reduce the voltage or frequencies of idle cores and hence extend battery life and reduce operating costs. For large scale designs of many-core processors, a holistic approach integrating both these techniques at different levels of abstraction can potentially achieve maximal power savings. In this paper we present CASPER, a robust instruction trace driven cycle-accurate many-core multi-threading micro-architecture simulation platform where we have incorporated power estimation models of a wide variety of tunable many-core micro-architectural design parameters, thus enabling processor architects to explore a sufficiently large design space and achieve power-efficient designs. Additionally, CASPER is designed to accommodate cycle-accurate models of hardware controlled power management units, enabling architects to experiment with and evaluate different autonomous power-saving mechanisms to study the run-time power-performance trade-offs in embedded many-core processors. We have implemented two such techniques in CASPER–Chipwide Dynamic Voltage and Frequency Scaling, and Performance Aware Core-Specific Frequency Scaling, which show average power savings of 35.9% and 26.2% on a baseline 4-core SPARC based architecture respectively. This power saving data accounts for the power consumption of the power management units themselves. The CASPER simulation platform also provides users with complete support of SPARCV9
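    The chipwide DVFS mechanism mentioned above exploits the standard CMOS dynamic-power relation P ≈ C_eff · V² · f, so lowering voltage and frequency together saves power superlinearly. A toy calculation (the numbers are hypothetical, not CASPER's calibrated models):

```python
def dynamic_power(c_eff, v_volts, f_hz):
    # Classic CMOS dynamic-power model: P = C_eff * V^2 * f.
    return c_eff * v_volts * v_volts * f_hz

baseline = dynamic_power(1.0, 1.0, 2.0e9)   # nominal voltage, 2 GHz clock
scaled = dynamic_power(1.0, 0.85, 1.4e9)    # DVFS: 15% lower V, 30% lower f
saving = 1.0 - scaled / baseline
print(saving)  # about 49% dynamic-power saving for a 30% frequency drop
```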

  1. Excessive crying in infants

    Directory of Open Access Journals (Sweden)

    Ricardo Halpern

    2016-05-01

    Full Text Available Objective: Review the literature on excessive crying in young infants, also known as infantile colic, and its effects on family dynamics, its pathophysiology, and new treatment interventions. Data source: The literature review was carried out in the Medline, PsycINFO, LILACS, SciELO, and Cochrane Library databases, using the terms “excessive crying” and “infantile colic,” as well as technical books and technical reports on child development, selecting the most relevant articles on the subject, with emphasis on recent literature published in the last five years. Summary of the findings: Excessive crying is a common symptom in the first 3 months of life and leads to approximately 20% of pediatric consultations. Different prevalence rates of excessive crying have been reported, ranging from 14% to approximately 30% in infants up to 3 months of age. There is evidence linking excessive crying early in life with adaptive problems in the preschool period, as well as with early weaning, maternal anxiety and depression, attention deficit hyperactivity disorder, and other behavioral problems. Several pathophysiological mechanisms can explain these symptoms, such as circadian rhythm alterations, central nervous system immaturity, and alterations in the intestinal microbiota. Several treatment alternatives have been described, including behavioral measures, manipulation techniques, use of medication, and acupuncture, with controversial results and effectiveness. Conclusion: Excessive crying in the early months is a prevalent symptom; the pediatrician's attention is necessary to understand and adequately manage the problem and offer support to exhausted parents. The prescription of drugs of questionable action and with potential side effects is not a recommended treatment, except in extreme situations. The effectiveness of dietary treatments and use of probiotics still require confirmation. There is incomplete evidence regarding alternative treatments

  2. The Virtual Diphoton Excess

    CERN Document Server

    Stolarski, Daniel

    2016-01-01

    Interpreting the excesses around 750 GeV in the diphoton spectra to be the signal of a new heavy scalar decaying to photons, we point out the possibility of looking for correlated signals with virtual photons. In particular, we emphasize that the effective operator that generates the diphoton decay will also generate decays to two leptons and a photon, as well as to four leptons, independently of the new resonance couplings to $Z\\gamma$ and $ZZ$. Depending on the relative sizes of these effective couplings, we show that the virtual diphoton component can make up a sizable, and sometimes dominant, contribution to the total $2\\ell \\gamma$ and $4\\ell$ partial widths. We also discuss modifications to current experimental cuts in order to maximize the sensitivity to these virtual photon effects. Finally, we briefly comment on prospects for channels involving other Standard Model fermions as well as more exotic decay possibilities of the putative resonance.

  3. Abundance, Excess, Waste

    Directory of Open Access Journals (Sweden)

    Rox De Luca

    2016-02-01

    Her recent work focuses on the concepts of abundance, excess and waste. These concerns translate directly into vibrant and colourful garlands that she constructs from discarded plastics collected on Bondi Beach where she lives. The process of collecting is fastidious, as is the process of sorting and grading the plastics by colour and size. This initial gathering and sorting process is followed by threading the components onto strings of wire. When completed, these assemblages stand in stark contrast to the ease of disposability associated with the materials that arrive on the shoreline as evidence of our collective human neglect and destruction of the environment around us. The contrast is heightened by the fact that the constructed garlands embody the paradoxical beauty of our plastic waste byproducts, while also evoking the ways by which those byproducts similarly accumulate in randomly assorted patterns across the oceans and beaches of the planet.

  4. Topiramate Induced Excessive Sialorrhea

    Directory of Open Access Journals (Sweden)

    Ersel Dag

    2015-11-01

    Full Text Available It is well-known that drugs such as clozapine and lithium can cause sialorrhea. On the other hand, topiramate has not been reported to induce sialorrhea. We report a case of a patient aged 26 who was given antiepileptic and antipsychotic drugs due to severe mental retardation and intractable epilepsy and developed a complaint of excessive sialorrhea after topiramate was added for seizure control. His complaints continued for 1.5 years and resolved after topiramate was discontinued. We present this case because topiramate-induced sialorrhea is rare. Clinicians should be aware of the possibility of sialorrhea, which causes serious hygiene and social problems, when prescribing topiramate to patients on multiple drugs.

  5. The High Price of Excessive Alcohol Consumption

    Centers for Disease Control (CDC) Podcasts

    2011-10-17

    This podcast is based on the October 2011 release of a report estimating the economic cost of excessive drinking. Excessive alcohol consumption cost the U. S. $223.5 billion in 2006, or about $1.90 per drink. Over three-quarters (76%) of these costs were due to binge drinking, defined as consuming 4 or more alcoholic beverages per occasion for women or 5 or more drinks per occasion for men.  Created: 10/17/2011 by National Center for Chronic Disease Prevention and Health Promotion.   Date Released: 10/17/2011.

  6. On the excess energy of nonequilibrium plasma

    International Nuclear Information System (INIS)

    Timofeev, A. V.

    2012-01-01

    The energy that can be released in plasma due to the onset of instability (the excess plasma energy) is estimated. Three potentially unstable plasma states are considered, namely, plasma with an anisotropic Maxwellian velocity distribution of plasma particles, plasma with a two-beam velocity distribution, and an inhomogeneous plasma in a magnetic field with a local Maxwellian velocity distribution. The excess energy can serve as a measure of the degree to which plasma is nonequilibrium. In particular, this quantity can be used to compare plasmas in different nonequilibrium states.

  7. When Is Network Lasso Accurate?

    Directory of Open Access Journals (Sweden)

    Alexander Jung

    2018-01-01

    Full Text Available The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method allows learning graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee the network Lasso for a particular loss function to deliver an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.

  8. Multidetector row computed tomography may accurately estimate plaque vulnerability. Does MDCT accurately estimate plaque vulnerability? (Pro)

    International Nuclear Information System (INIS)

    Komatsu, Sei; Imai, Atsuko; Kodama, Kazuhisa

    2011-01-01

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time course. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. (author)

  9. Spectrally accurate contour dynamics

    International Nuclear Information System (INIS)

    Van Buskirk, R.D.; Marcus, P.S.

    1994-01-01

    We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise-constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use.

  10. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  11. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
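    The quantity being computed is the directivity D = 4π·U_max / ∮ U dΩ. As a sanity check of the sampling idea (a plain numerical quadrature, not the paper's spherical-wave-expansion formula), one can integrate a sampled power pattern on a θ/φ grid; for a Hertzian dipole, U ∝ sin²θ, the exact directivity is 1.5:

```python
import math

def directivity(u, n_theta=181, n_phi=360):
    """Estimate D = 4*pi*U_max / integral(U dOmega) from grid samples of u(theta, phi)."""
    d_theta = math.pi / (n_theta - 1)
    d_phi = 2.0 * math.pi / n_phi
    total, u_max = 0.0, 0.0
    for i in range(n_theta):
        theta = i * d_theta
        w = 0.5 if i in (0, n_theta - 1) else 1.0   # trapezoid weights in theta
        for j in range(n_phi):
            val = u(theta, j * d_phi)
            u_max = max(u_max, val)
            total += w * val * math.sin(theta) * d_theta * d_phi
    return 4.0 * math.pi * u_max / total

# Hertzian dipole pattern: the estimate is close to the analytic value D = 1.5.
print(directivity(lambda th, ph: math.sin(th) ** 2))
```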

  12. Inland excess water mapping using hyperspectral imagery

    Directory of Open Access Journals (Sweden)

    Csendes Bálint

    2016-01-01

    Full Text Available Hyperspectral imaging combined with the potential of airborne scanning is a powerful tool to monitor environmental processes. The aim of this research was to use high resolution remotely sensed data to map the spatial extent of inland excess water patches in a Hungarian study area that is known for its oil and gas production facilities. Periodic floodings show high spatial and temporal variability; nevertheless, previous studies have proven that the affected soil surfaces can be accurately identified. Besides separability measurements, we performed spectral angle classification, which gave 85% overall accuracy, and we also compared the generated land cover map with LIDAR elevation data.

  13. Excess Molar Volumes and Viscosities of Binary Mixture of Diethyl Carbonate+Ethanol at Different Temperatures

    Institute of Scientific and Technical Information of China (English)

    MA Peisheng; LI Nannan

    2005-01-01

    The purpose of this work was to report excess molar volumes and dynamic viscosities of the binary mixture of diethyl carbonate (DEC)+ethanol. Densities and viscosities of the binary mixture of DEC+ethanol at temperatures 293.15 K–343.15 K and atmospheric pressure were determined over the entire composition range. Densities of the binary mixture of DEC+ethanol were measured by using a vibrating U-shaped sample tube densimeter. Viscosities were determined by using an Ubbelohde suspended-level viscometer. Densities are accurate to 1.0×10⁻⁵ g·cm⁻³, and viscosities are reproducible within ±0.003 mPa·s. From these data, excess molar volumes and deviations in viscosity were calculated. Positive excess molar volumes and negative deviations in viscosity for the DEC+ethanol system are due to the strong specific interactions. All excess molar volumes and deviations in viscosity fit the Redlich-Kister polynomial equation. The fitting parameters are presented, and the average deviations and standard deviations were also calculated. The correlation errors are very small, which shows that the correlated equations are valuable for estimating densities and viscosities of the binary mixture.
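    The Redlich-Kister fit mentioned at the end has the form V^E = x₁x₂ Σₖ Aₖ(x₁ − x₂)ᵏ and reduces to ordinary linear least squares. A self-contained sketch on synthetic data (the coefficients below are illustrative, not the paper's fitted values):

```python
import numpy as np

def rk_design(x1, order):
    # Redlich-Kister basis: V_E = x1*x2 * sum_k A_k * (x1 - x2)**k, with x2 = 1 - x1.
    x1 = np.asarray(x1, dtype=float)
    x2 = 1.0 - x1
    return np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order + 1)])

a_true = np.array([0.85, -0.20, 0.05])   # assumed coefficients, cm^3/mol
x1 = np.linspace(0.05, 0.95, 19)         # mole fractions of component 1
ve = rk_design(x1, 2) @ a_true           # synthetic "measured" excess volumes

a_fit, *_ = np.linalg.lstsq(rk_design(x1, 2), ve, rcond=None)
print(a_fit)                             # recovers the assumed coefficients
```

    Note that the basis vanishes at x₁ = 0 and x₁ = 1 by construction, matching the physical requirement that excess properties are zero for the pure components.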

  14. Does excessive pronation cause pain?

    DEFF Research Database (Denmark)

    Olesen, C G; Nielsen, Rasmus Gottschalk N; Rathleff, Michael Skovdal

    2008-01-01

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes, or inadequate foot training. When the muscles and ligaments of the foot are insufficient, it can cause an excessive pronation of the foot. The current treatment consists of ...

  15. Integrating uncertainty propagation in GNSS radio occultation retrieval: from excess phase to atmospheric bending angle profiles

    Directory of Open Access Journals (Sweden)

    J. Schwarz

    2018-05-01

    Full Text Available Global Navigation Satellite System (GNSS) radio occultation (RO) observations are highly accurate, long-term stable data sets and are globally available as a continuous record from 2001. Essential climate variables for the thermodynamic state of the free atmosphere – such as pressure, temperature, and tropospheric water vapor profiles (involving background information) – can be derived from these records, which therefore have the potential to serve as climate benchmark data. However, to exploit this potential, atmospheric profile retrievals need to be very accurate and the remaining uncertainties quantified and traced throughout the retrieval chain from raw observations to essential climate variables. The new Reference Occultation Processing System (rOPS) at the Wegener Center aims to deliver such an accurate RO retrieval chain with integrated uncertainty propagation. Here we introduce and demonstrate the algorithms implemented in the rOPS for uncertainty propagation from excess phase to atmospheric bending angle profiles, for estimated systematic and random uncertainties, including vertical error correlations and resolution estimates. We estimated systematic uncertainty profiles with the same operators as used for the basic state profiles retrieval. The random uncertainty is traced through covariance propagation and validated using Monte Carlo ensemble methods. The algorithm performance is demonstrated using test day ensembles of simulated data as well as real RO event data from the satellite missions CHAllenging Minisatellite Payload (CHAMP); Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC); and Meteorological Operational Satellite A (MetOp). The results of the Monte Carlo validation show that our covariance propagation delivers correct uncertainty quantification from excess phase to bending angle profiles. The results from the real RO event ensembles demonstrate that the new uncertainty estimation chain performs
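    For any linear(ized) retrieval operator A, the covariance-propagation step described here is simply C_out = A·C_in·Aᵀ, and the Monte Carlo check pushes a perturbation ensemble through the same operator. A toy sketch with a made-up 4→3 differencing operator (not the rOPS operators):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear retrieval step mapping 4 excess-phase values to 3
# bending-angle values (a simple differencing operator, illustration only).
A = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0, 0.0],
              [0.0, 0.0, 1.0, -1.0]])
C_in = np.diag([0.5, 0.4, 0.4, 0.5])   # input covariance (uncorrelated inputs)

C_out = A @ C_in @ A.T                 # analytic covariance propagation

# Monte Carlo validation: push a large perturbation ensemble through A.
samples = rng.multivariate_normal(np.zeros(4), C_in, size=200_000) @ A.T
C_mc = np.cov(samples, rowvar=False)

print(np.max(np.abs(C_mc - C_out)))    # small residual: the two approaches agree
```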

  16. Integrating uncertainty propagation in GNSS radio occultation retrieval: from excess phase to atmospheric bending angle profiles

    Science.gov (United States)

    Schwarz, Jakob; Kirchengast, Gottfried; Schwaerz, Marc

    2018-05-01

    Global Navigation Satellite System (GNSS) radio occultation (RO) observations are highly accurate, long-term stable data sets and are globally available as a continuous record from 2001. Essential climate variables for the thermodynamic state of the free atmosphere - such as pressure, temperature, and tropospheric water vapor profiles (involving background information) - can be derived from these records, which therefore have the potential to serve as climate benchmark data. However, to exploit this potential, atmospheric profile retrievals need to be very accurate and the remaining uncertainties quantified and traced throughout the retrieval chain from raw observations to essential climate variables. The new Reference Occultation Processing System (rOPS) at the Wegener Center aims to deliver such an accurate RO retrieval chain with integrated uncertainty propagation. Here we introduce and demonstrate the algorithms implemented in the rOPS for uncertainty propagation from excess phase to atmospheric bending angle profiles, for estimated systematic and random uncertainties, including vertical error correlations and resolution estimates. We estimated systematic uncertainty profiles with the same operators as used for the basic state profiles retrieval. The random uncertainty is traced through covariance propagation and validated using Monte Carlo ensemble methods. The algorithm performance is demonstrated using test day ensembles of simulated data as well as real RO event data from the satellite missions CHAllenging Minisatellite Payload (CHAMP); Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC); and Meteorological Operational Satellite A (MetOp). The results of the Monte Carlo validation show that our covariance propagation delivers correct uncertainty quantification from excess phase to bending angle profiles. The results from the real RO event ensembles demonstrate that the new uncertainty estimation chain performs robustly. 
Together
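
The covariance propagation and Monte Carlo validation described in this abstract can be sketched numerically. The linear operator `J`, grid size, and noise level below are illustrative assumptions, not the actual rOPS retrieval chain:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 60  # number of profile levels (illustrative)

# Hypothetical linearized retrieval operator J mapping excess-phase
# perturbations to bending-angle perturbations (a simple finite
# difference stands in for the real retrieval operators).
J = np.eye(n, k=1) - np.eye(n)

# Assumed input covariance: uncorrelated excess-phase noise.
sigma_in = 0.01
C_in = sigma_in**2 * np.eye(n)

# Covariance propagation: C_out = J C_in J^T.
C_out = J @ C_in @ J.T
sigma_prop = np.sqrt(np.diag(C_out))

# Monte Carlo validation: propagate a noise ensemble through J and
# compare the empirical spread with the propagated uncertainty.
ensemble = rng.multivariate_normal(np.zeros(n), C_in, size=20000)
sigma_mc = (ensemble @ J.T).std(axis=0)

max_rel_err = float(np.max(np.abs(sigma_mc - sigma_prop) / sigma_prop))
```

For a linear operator the two estimates agree to within the Monte Carlo sampling error, which is the consistency check the abstract describes.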

  17. Does Excessive Pronation Cause Pain?

    DEFF Research Database (Denmark)

    Mølgaard, Carsten Møller; Olesen Gammelgaard, Christian; Nielsen, R. G.

    Excessive pronation can be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes, or inadequate foot training. When the muscles and ligaments of the foot are insufficient, excessive pronation of the foot can result. The current treatment consists...... of antipronation shoes or insoles, most recently studied by Kulce DG, et al. (2007). So far, the effect of this treatment has not been documented in randomized controlled studies. Therefore the authors measure the effect of treatments with insoles. Some of the excessive...

  18. Accurate quantum chemical calculations

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  19. Understanding Excess Emissions from Industrial Facilities: Evidence from Texas.

    Science.gov (United States)

    Zirogiannis, Nikolaos; Hollingsworth, Alex J; Konisky, David M

    2018-03-06

    We analyze excess emissions from industrial facilities in Texas using data from the Texas Commission on Environmental Quality. Emissions are characterized as excess if they are beyond a facility's permitted levels and if they occur during startups, shutdowns, or malfunctions. We provide summary data on both the pollutants most often emitted as excess emissions and the industrial sectors and facilities responsible for those emissions. Excess emissions often represent a substantial share of a facility's routine (or permitted) emissions. We find that while excess emissions events are frequent, the majority of excess emissions are emitted by the largest events. That is, the sum of emissions in the 96-100th percentile is often several orders of magnitude larger than the remaining excess emissions (i.e., the sum of emissions below the 95th percentile). Thus, the majority of events emit a small amount of pollution relative to the total amount emitted. In addition, a small group of high emitting facilities in the most polluting industrial sectors are responsible for the vast majority of excess emissions. Using an integrated assessment model, we estimate that the health damages in Texas from excess emissions are approximately $150 million annually.
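
The skew described above, where events above the 95th percentile dominate total emissions, can be illustrated with a simple percentile decomposition. The event data below are synthetic, not TCEQ records:

```python
import numpy as np

# Synthetic excess-emission events (lb of pollutant): many small
# releases plus a few very large ones, mimicking a heavy tail.
events = np.array([5.0] * 95 + [10_000.0, 20_000.0, 50_000.0, 120_000.0, 300_000.0])

cutoff = np.percentile(events, 95)      # 95th-percentile event size
top = events[events > cutoff].sum()     # emissions from the largest events
rest = events[events <= cutoff].sum()   # emissions from all other events

share_top = top / events.sum()
print(f"top 5% of events: {share_top:.1%} of total excess emissions")
```

With this synthetic tail the top 5% of events account for over 99% of the mass, i.e., several orders of magnitude more than the remaining events combined.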

  20. 24 CFR 236.60 - Excess Income.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Excess Income. 236.60 Section 236... § 236.60 Excess Income. (a) Definition. Excess Income consists of cash collected as rent from the... Rent. The unit-by-unit requirement necessitates that, if a unit has Excess Income, the Excess Income...

  1. Excessive or unwanted hair in women

    Science.gov (United States)

    Hypertrichosis; Hirsutism; Hair - excessive (women); Excessive hair in women; Hair - women - excessive or unwanted ... Women normally produce low levels of male hormones (androgens). If your body makes too much of this ...

  2. Excessive masturbation after epilepsy surgery.

    Science.gov (United States)

    Ozmen, Mine; Erdogan, Ayten; Duvenci, Sirin; Ozyurt, Emin; Ozkara, Cigdem

    2004-02-01

    Sexual behavior changes as well as depression, anxiety, and organic mood/personality disorders have been reported in temporal lobe epilepsy (TLE) patients before and after epilepsy surgery. The authors describe a 14-year-old girl with symptoms of excessive masturbation in inappropriate places, social withdrawal, irritability, aggressive behavior, and crying spells after selective amygdalohippocampectomy for medically intractable TLE with hippocampal sclerosis. Since the family members felt extremely embarrassed, they were upset and angry with the patient which, in turn, increased her depressive symptoms. Both her excessive masturbation behavior and depressive symptoms remitted within 2 months of psychoeducative intervention and treatment with citalopram 20mg/day. Excessive masturbation is proposed to be related to the psychosocial changes due to seizure-free status after surgery as well as other possible mechanisms such as Kluver-Bucy syndrome features and neurophysiologic changes associated with the cessation of epileptic discharges. This case demonstrates that psychiatric problems and sexual changes encountered after epilepsy surgery are possibly multifactorial and in adolescence hypersexuality may be manifested as excessive masturbation behavior.

  3. Production of ethanol from excess ethylene

    DEFF Research Database (Denmark)

    Kadhim, Adam S.; Carlsen, Kim B.; Bisgaard, Thomas

    2012-01-01

    will focus on the synthetic method, which employs direct hydration of ethylene. A conceptual process design of an ethyl-alcohol-producing plant is performed in an MSc-level course on Process Design at the Department of Chemical and Biochemical Engineering at DTU. In the designed process, 190-proof ethyl...... alcohol (azeotropic mixture) is produced from excess ethylene containing propylene and methane as impurities. The design work is based on a systematic approach consisting of 12 tasks performed in a specified hierarchy. According to this 12-task design procedure, information about the product and process...... of the designed process. The resulting design utilizes 75 million kg/year of ethylene feed to obtain an ethyl alcohol production of 90.5 million kg/year. The total capital investment has been estimated at 43 million USD and the total product cost without depreciation at 58.5 million USD...
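
The feed and production figures quoted above can be checked against the hydration stoichiometry C2H4 + H2O → C2H5OH. The calculation below uses the abstract's figures and standard molar masses; since the feed contains propylene and methane impurities, the implied overall yield is a lower bound on the yield per mole of pure ethylene:

```python
M_C2H4 = 28.05   # g/mol, ethylene
M_ETOH = 46.07   # g/mol, ethanol

feed_kg_yr = 75e6     # ethylene feed from the abstract
prod_kg_yr = 90.5e6   # ethanol production from the abstract

# Theoretical ethanol if every mole of feed were hydrated.
theoretical_kg_yr = feed_kg_yr / M_C2H4 * M_ETOH

overall_yield = prod_kg_yr / theoretical_kg_yr
print(f"theoretical: {theoretical_kg_yr/1e6:.1f} Mkg/yr, "
      f"overall yield: {overall_yield:.1%}")
```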

  4. The Selvester QRS Score is more accurate than Q waves and fragmented QRS complexes using the Mason-Likar configuration in estimating infarct volume in patients with ischemic cardiomyopathy.

    Science.gov (United States)

    Carey, Mary G; Luisi, Andrew J; Baldwa, Sunil; Al-Zaiti, Salah; Veneziano, Marc J; deKemp, Robert A; Canty, John M; Fallavollita, James A

    2010-01-01

    Infarct volume independently predicts cardiovascular events. Fragmented QRS complexes (fQRS) may complement Q waves for identifying infarction; however, their utility in advanced coronary disease is unknown. We tested whether fQRS could improve the electrocardiographic prediction of infarct volume by positron emission tomography in 138 patients with ischemic cardiomyopathy (ejection fraction, 0.27 +/- 0.09). Indices of infarction (pathologic Q waves, fQRS, and Selvester QRS Score) were analyzed by blinded observers. In patients with QRS duration less than 120 milliseconds, number of leads with pathologic Q waves (mean, 1.6 +/- 1.7) correlated weakly with infarct volume (r = 0.30, P wave prediction of infarct volume; but Selvester Score was more accurate. Published by Elsevier Inc.

  5. Excess Early Mortality in Schizophrenia

    DEFF Research Database (Denmark)

    Laursen, Thomas Munk; Nordentoft, Merete; Mortensen, Preben Bo

    2014-01-01

    Schizophrenia is often referred to as one of the most severe mental disorders, primarily because of the very high mortality rates of those with the disorder. This article reviews the literature on excess early mortality in persons with schizophrenia and suggests reasons for the high mortality...... as well as possible ways to reduce it. Persons with schizophrenia have an exceptionally short life expectancy. High mortality is found in all age groups, resulting in a life expectancy of approximately 20 years below that of the general population. Evidence suggests that persons with schizophrenia may...... not have seen the same improvement in life expectancy as the general population during the past decades. Thus, the mortality gap not only persists but may actually have increased. The most urgent research agenda concerns primary candidates for modifiable risk factors contributing to this excess mortality...

  6. Severe rhabdomyolysis after excessive bodybuilding.

    Science.gov (United States)

    Finsterer, J; Zuntner, G; Fuchs, M; Weinberger, A

    2007-12-01

    A 46-year-old male subject performed excessive physical exertion, 4-6 h per day over 5 days, in a studio for bodybuilders. He had not practiced sport prior to this training and denied the use of any aiding substances. Despite muscle aching already after the first day, he continued the exercises. After the last day, he noticed tiredness and cessation of urine production. Two days after discontinuation of the training, a Herpes simplex infection occurred. Because of acute renal failure, he required hemodialysis. Tendon reflexes were absent, and creatine kinase (CK) values reached up to 208 274 U/L (normal: <170 U/L). After 2 weeks, CK had almost normalized and, after 4 weeks, hemodialysis was discontinued. Excessive muscle training may result in severe, hemodialysis-dependent rhabdomyolysis. Triggering factors may be a prior low fitness level, viral infection, or subclinical metabolic myopathy.

  7. Verification of excess defense material

    International Nuclear Information System (INIS)

    Fearey, B.L.; Pilat, J.F.; Eccleston, G.W.; Nicholas, N.J.; Tape, J.W.

    1997-01-01

    The international community in the post-Cold War period has expressed an interest in the International Atomic Energy Agency (IAEA) using its expertise in support of the arms control and disarmament process in unprecedented ways. The pledges of the US and Russian presidents to place excess defense materials under some type of international inspections raises the prospect of using IAEA safeguards approaches for monitoring excess materials, which include both classified and unclassified materials. Although the IAEA has suggested the need to address inspections of both types of materials, the most troublesome and potentially difficult problems involve approaches to the inspection of classified materials. The key issue for placing classified nuclear components and materials under IAEA safeguards is the conflict between these traditional IAEA materials accounting procedures and the US classification laws and nonproliferation policy designed to prevent the disclosure of critical weapon-design information. Possible verification approaches to classified excess defense materials could be based on item accountancy, attributes measurements, and containment and surveillance. Such approaches are not wholly new; in fact, they are quite well established for certain unclassified materials. Such concepts may be applicable to classified items, but the precise approaches have yet to be identified, fully tested, or evaluated for technical and political feasibility, or for their possible acceptability in an international inspection regime. Substantial work remains in these areas. This paper examines many of the challenges presented by international inspections of classified materials

  8. Excess water dynamics in hydrotalcite: QENS study

    Indian Academy of Sciences (India)

    dynamics of excess water in hydrotalcite samples with varied content of excess water are reported. Translational motion of excess water can be best described by a random translational jump-diffusion model. The observed increase in translational diffusivity with increase in the amount of excess water is attributed to the ...
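
In the random translational jump-diffusion model mentioned above, the quasielastic line broadening grows as DQ² at small Q and saturates at the inverse residence time at large Q. A minimal sketch of that Q-dependence, with illustrative parameter values rather than the fitted hydrotalcite results:

```python
import numpy as np

def jump_diffusion_hwhm(q, diffusivity, residence_time):
    """HWHM of the quasielastic line in the jump-diffusion model:
    Gamma(Q) = D*Q^2 / (1 + D*Q^2*tau)."""
    dq2 = diffusivity * q**2
    return dq2 / (1.0 + dq2 * residence_time)

# Illustrative values (not fitted to the hydrotalcite data):
D = 0.1     # diffusivity, e.g. Angstrom^2/ps
tau = 2.0   # residence time, ps

q = np.linspace(0.1, 3.0, 50)   # momentum transfer, 1/Angstrom
gamma = jump_diffusion_hwhm(q, D, tau)
```

At small Q the broadening reproduces simple diffusion (Γ ≈ DQ²), while at large Q it approaches 1/τ, which is how QENS data distinguish jump diffusion from continuous diffusion.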

  9. 34 CFR 300.16 - Excess costs.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false Excess costs. 300.16 Section 300.16 Education... DISABILITIES General Definitions Used in This Part § 300.16 Excess costs. Excess costs means those costs that... for an example of how excess costs must be calculated.) (Authority: 20 U.S.C. 1401(8)) ...

  10. Study of accurate volume measurement system for plutonium nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, through which air is purged by a compressor. One differential pressure corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The differential pressure measurement is subject to many sources of error, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the tips of the dip-tubes, non-uniformity of temperature and density of the solution, pressure drop in the dip-tubes, and so on. The various excess pressures affecting the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system was developed with a quartz-oscillation-type transducer, which converts a differential pressure to a digital signal. The developed system is used for inspections by the government and the IAEA. (M. Suetake)
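
The two differential pressures described above determine density and liquid level hydrostatically; with a calibrated tank cross-section the volume follows. A minimal sketch, assuming a uniform cylindrical tank and ignoring the correction terms (bubbles, back-pressure fluctuation, pressure drop) that the paper addresses:

```python
G = 9.80665  # standard gravity, m/s^2

def solution_volume(dp_density, dp_level, tube_separation, tank_area):
    """Estimate density, level, and volume from two dip-tube
    differential pressures (Pa).

    dp_density : between two tubes a known vertical distance apart
    dp_level   : between the lowest tube and the tank gas space
    """
    density = dp_density / (G * tube_separation)  # kg/m^3
    level = dp_level / (G * density)              # m above lowest tube
    volume = tank_area * level                    # m^3 (uniform tank assumed)
    return density, level, volume

# Illustrative numbers: a 1.30 g/cm^3 nitrate solution, dip-tubes
# 0.10 m apart, 0.5 m of solution above the lowest tube.
rho, h, v = solution_volume(dp_density=1274.86, dp_level=6374.32,
                            tube_separation=0.10, tank_area=0.8)
```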

  11. Does the Spectrum model accurately predict trends in adult mortality? Evaluation of model estimates using empirical data from a rural HIV community cohort study in north-western Tanzania

    Directory of Open Access Journals (Sweden)

    Denna Michael

    2014-01-01

    Full Text Available Introduction: Spectrum epidemiological models are used by UNAIDS to provide global, regional and national HIV estimates and projections, which are then used for evidence-based health planning for HIV services. However, there are no validations of the Spectrum model against empirical serological and mortality data from populations in sub-Saharan Africa. Methods: Serologic, demographic and verbal autopsy data have been regularly collected among over 30,000 residents in north-western Tanzania since 1994. Five-year age-specific mortality rates (ASMRs) per 1,000 person-years and the probability of dying between 15 and 60 years of age (45Q15) were calculated and compared with the Spectrum model outputs. Mortality trends by HIV status are shown for periods before the introduction of antiretroviral therapy (1994–1999 and 2000–2004) and the first 5 years afterwards (2005–2009). Results: Among 30–34 year olds of both sexes, observed ASMRs per 1,000 person-years were 13.33 (95% CI: 10.75–16.52) in the period 1994–1999, 11.03 (95% CI: 8.84–13.77) in 2000–2004, and 6.22 (95% CI: 4.75–8.15) in 2005–2009. Among the same age group, the ASMRs estimated by the Spectrum model were 10.55, 11.13 and 8.15 for the periods 1994–1999, 2000–2004 and 2005–2009, respectively. The cohort data, for both sexes combined, showed that the 45Q15 declined from 39% (95% CI: 27–55%) in 1994 to 22% (95% CI: 17–29%) in 2009, whereas the Spectrum model predicted a decline from 43% in 1994 to 37% in 2009. Conclusion: From 1994 to 2009, the observed decrease in ASMRs was steeper in younger age groups than that predicted by the Spectrum model, perhaps because the Spectrum model under-estimated the ASMRs in 30–34 year olds in 1994–99. However, the Spectrum model predicted a smaller decrease in 45Q15 mortality than that observed in the cohort, although the reasons for this over-estimate are unclear.
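
The 45Q15 summary measure compared above can be computed from five-year ASMRs by assuming a constant hazard within each age band. A simplified sketch (the cohort and Spectrum calculations may differ in detail):

```python
import math

def q45_15(asmr_per_1000):
    """Probability of dying between exact ages 15 and 60, given
    age-specific mortality rates (per 1,000 person-years) for the
    nine 5-year bands 15-19, ..., 55-59, assuming a constant
    hazard within each band."""
    assert len(asmr_per_1000) == 9
    survival = 1.0
    for m in asmr_per_1000:
        survival *= math.exp(-5.0 * m / 1000.0)
    return 1.0 - survival

# Illustrative rates rising with age (per 1,000 person-years),
# not the Tanzanian cohort values:
rates = [2, 4, 8, 13, 13, 11, 10, 12, 15]
print(f"45Q15 = {q45_15(rates):.1%}")
```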

  12. Accurate interpretation of point estimates, confidence intervals, and hypothesis tests in public health

    Directory of Open Access Journals (Sweden)

    Manuel G Scotto

    2003-12-01

    Full Text Available This essay clarifies some statistical concepts commonly used in public health research that are frequently interpreted incorrectly: point estimates, confidence intervals, and hypothesis tests. By drawing a parallel between these three concepts, from both the classical and the Bayesian perspectives, their most important differences in interpretation become clear.

  13. [Disability attributable to excess weight in Spain].

    Science.gov (United States)

    Martín-Ramiro, José Javier; Alvarez-Martín, Elena; Gil-Prieto, Ruth

    2014-08-19

    To estimate the disability attributable to higher-than-optimal body mass index in the Spanish population in 2006. Excess body weight prevalence data were obtained from the 2006 National Health Survey (NHS), while the prevalence of associated morbidities was extracted from the 2006 NHS and from a national hospital database. Population attributable fractions were applied, and attributable disability was expressed as years lived with disability (YLD). In 2006, in the Spanish population aged 35-79 years, 791,650 YLD were lost due to higher-than-optimal body mass index (46.7% in males and 53.3% in females). Overweight (body mass index 25-29.9) accounted for 45.7% of total YLD. Male YLD exceeded female YLD under age 60: in the 35-39 quinquennial group the difference in favor of males was 16.6%, while in the 74-79 group the difference in favor of women was 23.8%. Osteoarthritis and chronic back pain accounted for 60% of YLD, while hypertensive disease and type 2 diabetes mellitus were responsible for 37%. Excess body weight is a health risk related to the development of various diseases, with an important associated disability burden and social and economic cost. YLD analysis is a useful monitoring tool for disease control interventions. Copyright © 2013 Elsevier España, S.L. All rights reserved.
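
The population attributable fractions applied in this study follow Levin's formula, PAF = p(RR − 1) / (p(RR − 1) + 1), where p is the exposure prevalence and RR the relative risk; the attributable YLD is then the PAF times the total YLD for each condition. A sketch with hypothetical numbers, not the study's inputs:

```python
def levin_paf(prevalence, relative_risk):
    """Levin's population attributable fraction."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

# Hypothetical example: 37% prevalence of excess weight, RR = 2.2
# for type 2 diabetes, 100,000 total YLD from that condition.
paf = levin_paf(0.37, 2.2)
attributable_yld = paf * 100_000
print(f"PAF = {paf:.1%}, attributable YLD = {attributable_yld:.0f}")
```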

  14. Excess electron transport in cryoobjects

    International Nuclear Information System (INIS)

    Eshchenko, D.G.; Storchak, V.G.; Brewer, J.H.; Cottrell, S.P.; Cox, S.F.J.

    2003-01-01

    Experimental results on excess electron transport in the solid and liquid phases of Ne, Ar, and a solid N₂-Ar mixture are presented and compared with those for He. The muon spin relaxation technique in frequently switching electric fields was used to study the phenomenon of delayed muonium formation: excess electrons liberated in the μ⁺ ionization track converge upon the positive muons and form Mu (μ⁺e⁻) atoms. This process is shown to be crucially dependent upon the electron's interaction with its environment (i.e., whether it occupies the conduction band or becomes localized in a bubble of tens of angstroms in radius) and upon its mobility in these states. The characteristic lengths involved are 10⁻⁶-10⁻⁴ cm, and the characteristic times range from nanoseconds to tens of microseconds. Such a microscopic length scale sometimes enables the electron to spend its entire free lifetime in a state which may not be detected by conventional macroscopic techniques. The electron transport processes are compared in: liquid and solid helium (where the electron is localized in a bubble); liquid and solid neon (where electrons are delocalized in the solid, and the coexistence of localized and delocalized electron states was recently found in the liquid); liquid and solid argon (where electrons are delocalized in both phases); and orientational glass systems (solid N₂-Ar mixtures), where our results suggest that electrons are localized in the orientational glass. This scaling from light to heavy rare gases enables us to reveal new features of excess electron localization on a microscopic scale. Analysis of the experimental data makes it possible to formulate the following tendency of the muon end-of-track structure in condensed rare gases: the muon-self-track interaction changes from an isolated pair (the muon plus the nearest track electron) in helium to multi-pair (the muon in the vicinity of tens of track electrons and positive ions) in argon

  15. The importance of accurate glacier albedo for estimates of surface mass balance on Vatnajökull: evaluating the surface energy budget in a regional climate model with automatic weather station observations

    Science.gov (United States)

    Steffensen Schmidt, Louise; Aðalgeirsdóttir, Guðfinna; Guðmundsson, Sverrir; Langen, Peter L.; Pálsson, Finnur; Mottram, Ruth; Gascoin, Simon; Björnsson, Helgi

    2017-07-01

    A simulation of the surface climate of Vatnajökull ice cap, Iceland, carried out with the regional climate model HIRHAM5 for the period 1980-2014, is used to estimate the evolution of the glacier surface mass balance (SMB). This simulation uses a new snow albedo parameterization that allows albedo to exponentially decay with time and is surface temperature dependent. The albedo scheme utilizes a new background map of the ice albedo created from observed MODIS data. The simulation is evaluated against observed daily values of weather parameters from five automatic weather stations (AWSs) from the period 2001-2014, as well as in situ SMB measurements from the period 1995-2014. The model agrees well with observations at the AWS sites, albeit with a general underestimation of the net radiation. This is due to an underestimation of the incoming radiation and a general overestimation of the albedo. The average modelled albedo is overestimated in the ablation zone, which we attribute to an overestimation of the thickness of the snow layer and not taking the surface darkening from dirt and volcanic ash deposition during dust storms and volcanic eruptions into account. A comparison with the specific summer, winter, and net mass balance for the whole of Vatnajökull (1995-2014) shows a good overall fit during the summer, with a small mass balance underestimation of 0.04 m w.e. on average, whereas the winter mass balance is overestimated by on average 0.5 m w.e. due to too large precipitation at the highest areas of the ice cap. A simple correction of the accumulation at the highest points of the glacier reduces this to 0.15 m w.e. Here, we use HIRHAM5 to simulate the evolution of the SMB of Vatnajökull for the period 1981-2014 and show that the model provides a reasonable representation of the SMB for this period. However, a major source of uncertainty in the representation of the SMB is the representation of the albedo, and processes currently not accounted for in RCMs

  16. The importance of accurate glacier albedo for estimates of surface mass balance on Vatnajökull: evaluating the surface energy budget in a regional climate model with automatic weather station observations

    Directory of Open Access Journals (Sweden)

    L. S. Schmidt

    2017-07-01

    Full Text Available A simulation of the surface climate of Vatnajökull ice cap, Iceland, carried out with the regional climate model HIRHAM5 for the period 1980–2014, is used to estimate the evolution of the glacier surface mass balance (SMB). This simulation uses a new snow albedo parameterization that allows albedo to exponentially decay with time and is surface temperature dependent. The albedo scheme utilizes a new background map of the ice albedo created from observed MODIS data. The simulation is evaluated against observed daily values of weather parameters from five automatic weather stations (AWSs) from the period 2001–2014, as well as in situ SMB measurements from the period 1995–2014. The model agrees well with observations at the AWS sites, albeit with a general underestimation of the net radiation. This is due to an underestimation of the incoming radiation and a general overestimation of the albedo. The average modelled albedo is overestimated in the ablation zone, which we attribute to an overestimation of the thickness of the snow layer and not taking the surface darkening from dirt and volcanic ash deposition during dust storms and volcanic eruptions into account. A comparison with the specific summer, winter, and net mass balance for the whole of Vatnajökull (1995–2014) shows a good overall fit during the summer, with a small mass balance underestimation of 0.04 m w.e. on average, whereas the winter mass balance is overestimated by on average 0.5 m w.e. due to too large precipitation at the highest areas of the ice cap. A simple correction of the accumulation at the highest points of the glacier reduces this to 0.15 m w.e. Here, we use HIRHAM5 to simulate the evolution of the SMB of Vatnajökull for the period 1981–2014 and show that the model provides a reasonable representation of the SMB for this period. However, a major source of uncertainty in the representation of the SMB is the representation of the albedo, and processes

  17. Excess electron transport in cryoobjects

    CERN Document Server

    Eshchenko, D G; Brewer, J H; Cottrell, S P; Cox, S F J

    2003-01-01

    Experimental results on excess electron transport in solid and liquid phases of Ne, Ar, and solid N₂-Ar mixture are presented and compared with those for He. Muon spin relaxation technique in frequently switching electric fields was used to study the phenomenon of delayed muonium formation: excess electrons liberated in the μ⁺ ionization track converge upon the positive muons and form Mu (μ⁺e⁻) atoms. This process is shown to be crucially dependent upon the electron's interaction with its environment (i.e., whether it occupies the conduction band or becomes localized in a bubble of tens of angstroms in radius) and upon its mobility in these states. The characteristic lengths involved are 10⁻⁶-10⁻⁴ cm, the characteristic times range from nanoseconds to tens of microseconds. Such a microscopic length scale sometimes enables the electron to spend its entire free lifetime in a state which may not be detected by conventional macroscopic techniques. The electron transport proc...

  18. Teleseismic Lg of Semipalatinsk and Novaya Zemlya Nuclear Explosions Recorded by the GRF (Gräfenberg) Array: Comparison with Regional Lg (BRV) and their Potential for Accurate Yield Estimation

    Science.gov (United States)

    Schlittenhardt, J.

    A comparison of regional and teleseismic log rms (root-mean-square) Lg amplitude measurements has been made for 14 underground nuclear explosions from the East Kazakh test site recorded both by the BRV (Borovoye) station in Kazakhstan and the GRF (Gräfenberg) array in Germany. The log rms Lg amplitudes observed at the BRV regional station at a distance of 690 km and at the teleseismic GRF array at a distance exceeding 4700 km show very similar relative values (standard deviation 0.048 magnitude units) for underground explosions of different sizes at the Shagan River test site. This result, as well as the comparison of BRV rms Lg magnitudes (which were calculated from the log rms amplitudes using an appropriate calibration) with magnitude determinations for P waves of global seismic networks (standard deviation 0.054 magnitude units), points to a high precision in estimating the relative source sizes of explosions from Lg-based single-station data. Similar results were also obtained by other investigators (Patton, 1988; Ringdal et al., 1992) using Lg data from different stations at different distances. Additionally, GRF log rms Lg and P-coda amplitude measurements were made for a larger data set from Novaya Zemlya and East Kazakh explosions, which were supplemented with mb(Lg) amplitude measurements using a modified version of Nuttli's (1973, 1986a) method. From this test of the relative performance of the three different magnitude scales, it was found that the Lg- and P-coda-based magnitudes performed equally well, whereas the modified Nuttli mb(Lg) magnitudes show greater scatter when compared to the worldwide mb reference magnitudes. Whether this result indicates that the rms amplitude measurements are superior to the zero-to-peak amplitude measurement of a single cycle used for the modified Nuttli method, however, cannot be finally assessed, since the calculated mb(Lg) magnitudes are only preliminary until appropriate attenuation corrections are available for the
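
The log rms amplitude measurement underlying these magnitude comparisons isolates the Lg wave train with a group-velocity window and takes the root-mean-square amplitude on a log10 scale. A schematic sketch; the window velocities and the synthetic trace are illustrative assumptions, not the BRV/GRF processing:

```python
import numpy as np

def log_rms_amplitude(trace, dt, distance_km, v_fast=3.6, v_slow=3.0):
    """log10 rms amplitude in the Lg group-velocity window
    (arrivals between distance/v_fast and distance/v_slow seconds)."""
    t = np.arange(len(trace)) * dt
    window = (t >= distance_km / v_fast) & (t <= distance_km / v_slow)
    rms = np.sqrt(np.mean(trace[window] ** 2))
    return np.log10(rms)

# Synthetic trace: background noise plus a stronger wave train
# inside the Lg group-velocity window for a BRV-like 690 km path.
rng = np.random.default_rng(1)
dt, dist = 0.05, 690.0                   # sample interval (s), distance (km)
t = np.arange(0, 300, dt)
trace = 0.01 * rng.standard_normal(t.size)
in_lg = (t >= dist / 3.6) & (t <= dist / 3.0)
trace[in_lg] += 0.2 * np.sin(2 * np.pi * 1.0 * t[in_lg])

val = log_rms_amplitude(trace, dt, dist)
```

Because the measure is logarithmic, doubling the source amplitude raises it by log10(2) ≈ 0.30, which is why differences in log rms Lg track relative source size directly.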

  19. Energy potential of the modified excess sludge

    Directory of Open Access Journals (Sweden)

    Zawieja Iwona

    2017-01-01

    Full Text Available On the basis of the SCOD value of excess sludge it is possible to estimate the amount of energy potentially obtainable during the methane fermentation process. Based on a literature review, it has been estimated that 3.48 kWh of energy can be obtained from 1 kg of SCOD. Taking into account the above methane-to-energy ratio (i.e., 10 kWh/1 Nm3 CH4), it is possible to determine the volume of methane obtained from the tested sludge. Determination of the potential energy of sludge is necessary for using biogas as a fuel for cogeneration power generators and for ensuring the stability of this type of system. Therefore, the aim of the study was to determine the energy potential of excess sludge subjected to thermal and chemical disintegration. In the case of thermal disintegration, the test was conducted at a low temperature of 80°C. The reagent used for the chemical modification was peracetic acid, which has strong oxidizing properties in an aqueous medium. The time of chemical modification was 6 hours. The applied dose of the reagent was 1.0 ml CH3COOOH/L of sludge. Subjecting the sludge to disintegration by the tested methods achieved an increase in the SCOD value of the modified sludge, indicating improved biodegradability along with a concomitant increase in its energy potential. The experimental production of biogas from disintegrated sludge confirmed that the potential intensity of its production can be estimated. The SCOD value of 2576 mg O2/L, in the case of chemical disintegration, was obtained for a dose of 1.0 ml CH3COOOH/L. For this dose the pH value was 6.85. In the case of thermal disintegration, the maximum SCOD value was 2246 mg O2/L, obtained at 80°C and a preparation time of 6 h. It was estimated that for thermal and for chemical disintegration with the selected parameters, the potential energy for a model digester with an active volume of 5 L was 0.193 and 0.118 kWh, respectively.

  20. Energy potential of the modified excess sludge

    Science.gov (United States)

    Zawieja, Iwona

    2017-11-01

    On the basis of the SCOD value of excess sludge it is possible to estimate the amount of energy potentially obtainable during the methane fermentation process. Based on a literature review, it has been estimated that 3.48 kWh of energy can be obtained from 1 kg of SCOD. Taking into account the above methane-to-energy ratio (i.e. 10 kWh/1 Nm3 CH4), it is possible to determine the volume of methane obtained from the tested sludge. Determining the potential energy of sludge is necessary for using biogas as a power source for cogeneration generators and for ensuring the stability of this type of system. Therefore, the aim of the study was to determine the energy potential of excess sludge subjected to thermal and chemical disintegration. Thermal disintegration was conducted at the low temperature of 80°C. The reagent used for the chemical modification was peracetic acid, which in an aqueous medium has strong oxidizing properties. The chemical modification time was 6 hours, and the applied reagent dose was 1.0 ml CH3COOOH/L of sludge. Subjecting the sludge to disintegration by the tested methods increased the SCOD value of the modified sludge, indicating improved biodegradability along with a concomitant increase in its energy potential. The experimental biogas production from disintegrated sludge confirmed that the potential intensity of its production can be estimated. In the case of chemical disintegration, an SCOD value of 2576 mg O2/L was obtained for a dose of 1.0 ml CH3COOOH/L; for this dose the pH value was 6.85. In the case of thermal disintegration, the maximum SCOD value of 2246 mg O2/L was obtained at 80°C with a preparation time of 6 h. It was estimated that for thermal and chemical disintegration at the selected parameters, the potential energy for a model digester with an active volume of 5 L was 0.193 and 0.118 kWh, respectively.
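
    The conversion arithmetic described in the abstract can be sketched directly (the 3.48 kWh/kg SCOD and 10 kWh/Nm3 CH4 factors are from the text; function names are illustrative, and the study's reported digester energies involve further process details, so this raw arithmetic is not expected to reproduce them exactly):

```python
# Conversion arithmetic from the abstract: 3.48 kWh per kg SCOD and
# 10 kWh per Nm^3 CH4. Function names are illustrative; the study's
# reported digester values depend on additional process details.

def energy_potential_kwh(scod_mg_per_l, volume_l, kwh_per_kg_scod=3.48):
    """Energy potentially recoverable from the SCOD load of a sludge volume."""
    scod_kg = scod_mg_per_l * volume_l / 1e6  # mg/L * L -> kg
    return scod_kg * kwh_per_kg_scod

def methane_volume_nm3(energy_kwh, kwh_per_nm3_ch4=10.0):
    """Methane volume corresponding to a given energy content."""
    return energy_kwh / kwh_per_nm3_ch4

# Chemically disintegrated sludge (SCOD = 2576 mg O2/L) in a 5 L digester:
e = energy_potential_kwh(2576, 5.0)
print(round(e, 3), round(methane_volume_nm3(e), 4))  # prints 0.045 0.0045
```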

  1. THE IMPORTANCE OF THE STANDARD SAMPLE FOR ACCURATE ESTIMATION OF THE CONCENTRATION OF NET ENERGY FOR LACTATION IN FEEDS ON THE BASIS OF GAS PRODUCED DURING THE INCUBATION OF SAMPLES WITH RUMEN LIQUOR

    Directory of Open Access Journals (Sweden)

    T ŽNIDARŠIČ

    2003-10-01

    Full Text Available The aim of this work was to examine the necessity of using a standard sample in the Hohenheim gas test. Over a three-year period, 24 runs of forage samples were incubated with rumen liquor in vitro. Besides the forage samples, the standard hay sample provided by the Hohenheim University (HFT-99) was included in the experiment. Half of the runs were incubated with rumen liquor of cattle and half with rumen liquor of sheep. Gas produced during the 24 h incubation of the standard sample was measured and compared to the declared value of sample HFT-99. Besides HFT-99, 25 test samples with known digestibility coefficients determined in vivo were included in the experiment. Based on the gas production of HFT-99, it was found that the donor animal (cattle or sheep) did not significantly affect the activity of rumen liquor (41.4 vs. 42.2 ml of gas per 200 mg dry matter, P>0.1). Neither were the differences between years (41.9, 41.2 and 42.3 ml of gas per 200 mg dry matter, P>0.1) significant. However, a variability of about 10% (from 38.9 to 43.7 ml of gas per 200 mg dry matter) was observed between runs. In the present experiment, the gas production of HFT-99 was about 6% lower than the value obtained by the Hohenheim University (41.8 vs. 44.43 ml per 200 mg dry matter). This indicates a systematic error between the laboratories. In the case of the twenty-five test samples, correction on the basis of the standard sample reduced the average deviation of the in vitro estimates of net energy for lactation (NEL) from the in vivo determined values. It was concluded that, due to variation between runs and systematic differences in rumen liquor activity between the two laboratories, the results of the Hohenheim gas test have to be corrected on the basis of the standard sample.
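
    The correction against the standard sample that the abstract concludes with can be sketched as a simple ratio adjustment (the ratio form is an assumption about the correction; the declared and measured HFT-99 values are taken from the abstract):

```python
# Ratio correction of Hohenheim gas test readings against the HFT-99
# standard. The ratio form is an assumed, common way to apply such a
# correction; the two standard values below come from the abstract.

DECLARED_HFT99 = 44.43  # ml gas / 200 mg DM, declared by Hohenheim University
MEASURED_HFT99 = 41.8   # ml gas / 200 mg DM, measured in this laboratory

def correct_gas_production(measured_ml,
                           measured_std=MEASURED_HFT99,
                           declared_std=DECLARED_HFT99):
    """Scale a test-sample reading by the declared/measured standard ratio."""
    return measured_ml * declared_std / measured_std

print(round(correct_gas_production(40.0), 2))  # prints 42.52
```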

  2. Excess costs of social anxiety disorder in Germany.

    Science.gov (United States)

    Dams, Judith; König, Hans-Helmut; Bleibler, Florian; Hoyer, Jürgen; Wiltink, Jörg; Beutel, Manfred E; Salzer, Simone; Herpertz, Stephan; Willutzki, Ulrike; Strauß, Bernhard; Leibing, Eric; Leichsenring, Falk; Konnopka, Alexander

    2017-04-15

    Social anxiety disorder is one of the most frequent mental disorders. It is often associated with mental comorbidities and causes a high economic burden. The aim of our analysis was to estimate the excess costs of patients with social anxiety disorder compared to persons without anxiety disorder in Germany. Excess costs of social anxiety disorder were determined by comparing two data sets. Patient data came from the SOPHO-NET study A1 (n=495), whereas data on persons without anxiety disorder originated from a representative phone survey (n=3213) of the general German population. Missing data were handled by "Multiple Imputation by Chained Equations". Both data sets were matched using "Entropy Balancing". Excess costs were calculated from a societal perspective for the year 2014 using general linear regression with a gamma distribution and log-link function. Analyses considered direct costs (in- and outpatient treatment, rehabilitation, and professional and informal care) and indirect costs due to absenteeism from work. Total six-month excess costs amounted to 451€ (95% CI: 199€-703€). Excess costs were mainly caused by indirect excess costs due to absenteeism from work of 317€ (95% CI: 172€-461€), whereas direct excess costs amounted to 134€ (95% CI: 110€-159€). Costs for medication, unemployment and disability pension were not evaluated. Social anxiety disorder was associated with statistically significant excess costs, in particular due to indirect costs. As patients in general are often unaware of their disorder or its severity, awareness should be strengthened. Prevention and early treatment might reduce long-term indirect costs. Copyright © 2017 Elsevier B.V. All rights reserved.
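
    As a rough illustration of the case-control excess-cost comparison (the study itself used entropy balancing plus a gamma/log-link regression; the bootstrap difference-of-means below is a simplified stand-in on synthetic data, not the study's method or data):

```python
# Illustrative stand-in for the excess-cost comparison: difference in mean
# costs between cases and controls, with a percentile-bootstrap 95% CI.
# The study used entropy balancing plus a gamma/log-link GLM; all numbers
# here are synthetic.
import numpy as np

def excess_cost(cases, controls, n_boot=2000, seed=0):
    """Point estimate and 95% bootstrap CI for mean(cases) - mean(controls)."""
    rng = np.random.default_rng(seed)
    point = cases.mean() - controls.mean()
    boots = [
        rng.choice(cases, cases.size, replace=True).mean()
        - rng.choice(controls, controls.size, replace=True).mean()
        for _ in range(n_boot)
    ]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, (lo, hi)

rng = np.random.default_rng(1)
cases = rng.gamma(shape=2.0, scale=600.0, size=495)      # skewed six-month costs
controls = rng.gamma(shape=2.0, scale=400.0, size=3213)
est, (lo, hi) = excess_cost(cases, controls)
```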

  3. Excessive Additive Effect On Engine Oil Viscosity

    Directory of Open Access Journals (Sweden)

    Vojtěch Kumbár

    2014-01-01

    Full Text Available The main goal of this paper is to assess the effect of excessive additive (oil filling additive) on engine oil dynamic viscosity. The research focused on a commercially distributed automotive engine oil of viscosity class 15W–40 designed for vans. Blends of new and used engine oil, without and with the oil additive in the specific ratio recommended by the manufacturer, were prepared. The dynamic viscosity of the blends with the additive was compared with that of pure new and pure used engine oil. The temperature dependence of the dynamic viscosity of the samples was evaluated using a rotary viscometer with a standard spindle. The concern was that the oil additive could move the engine oil up several viscosity grades, which can lead to engine failure. Mathematical models were used to fit the experimental values of dynamic viscosity. An exponential fit function was selected, which was very accurate, as the coefficient of determination R2 achieved high values (0.98–0.99). These models are able to predict the viscosity behaviour of blends of engine oil and additive.
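
    An exponential viscosity-temperature fit of the kind the abstract describes can be sketched as follows (the parameterization eta = A*exp(b*T) and the data points are illustrative assumptions, not the paper's measurements):

```python
# Exponential viscosity-temperature fit, eta = A*exp(b*T), estimated by
# log-linear least squares. Both the model form and the data points are
# illustrative assumptions, not the paper's values.
import numpy as np

def fit_exponential(temps_c, viscosities):
    """Return (A, b) for eta = A*exp(b*T) via a log-linear fit."""
    t = np.asarray(temps_c, dtype=float)
    log_eta = np.log(np.asarray(viscosities, dtype=float))
    b, log_a = np.polyfit(t, log_eta, 1)  # slope first, then intercept
    return np.exp(log_a), b

temps = [20, 40, 60, 80, 100]            # deg C
etas = [287.0, 103.0, 46.0, 24.0, 14.0]  # mPa*s, synthetic 15W-40-like values
a, b = fit_exponential(temps, etas)
pred = a * np.exp(b * np.asarray(temps, dtype=float))
r2 = 1 - np.sum((etas - pred) ** 2) / np.sum((etas - np.mean(etas)) ** 2)
```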

  4. 34 CFR 668.166 - Excess cash.

    Science.gov (United States)

    2010-07-01

    ... the Secretary for the costs the Secretary incurred in providing that excess cash to the institution... 34 Education 3 2010-07-01 2010-07-01 false Excess cash. 668.166 Section 668.166 Education..., DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cash Management § 668.166 Excess cash. (a...

  5. 10 CFR 904.10 - Excess energy.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Excess energy. 904.10 Section 904.10 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.10 Excess energy. (a) If excess Energy is determined by the United States to be available...

  6. 7 CFR 985.56 - Excess oil.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Excess oil. 985.56 Section 985.56 Agriculture... HANDLING OF SPEARMINT OIL PRODUCED IN THE FAR WEST Order Regulating Handling Volume Limitations § 985.56 Excess oil. Oil of any class in excess of a producer's applicable annual allotment shall be identified as...

  7. Average Potential Temperature of the Upper Mantle and Excess Temperatures Beneath Regions of Active Upwelling

    Science.gov (United States)

    Putirka, K. D.

    2006-05-01

    The question as to whether any particular oceanic island is the result of a thermal mantle plume is a question of whether volcanism is the result of passive upwelling, as at mid-ocean ridges, or active upwelling, driven by thermally buoyant material. When upwelling is passive, mantle temperatures reflect average or ambient upper mantle values. In contrast, sites of thermally driven active upwellings will have elevated (or excess) mantle temperatures, driven by some source of excess heat. Skeptics of the plume hypothesis suggest that the maximum temperatures at ocean islands are similar to maximum temperatures at mid-ocean ridges (Anderson, 2000; Green et al., 2001). Olivine-liquid thermometry, when applied to Hawaii, Iceland, and global MORB, belies this hypothesis. Olivine-liquid equilibria provide the most accurate means of estimating mantle temperatures, which are highly sensitive to the forsterite (Fo) contents of olivines, and the FeO content of coexisting liquids. Their application shows that mantle temperatures in the MORB source region are less than temperatures at both Hawaii and Iceland. The Siqueiros Transform may provide the most precise estimate of TpMORB because high MgO glass compositions there have been affected only by olivine fractionation, so primitive FeOliq is known; olivine thermometry yields TpSiqueiros = 1430 ± 59°C. A global database of 22,000 MORB shows that most MORB have slightly higher FeOliq than at Siqueiros, which translates to higher calculated mantle potential temperatures. If the values for Fomax (= 91.5) and KD (Fe-Mg)ol-liq (= 0.29) at Siqueiros apply globally, then upper mantle Tp is closer to 1485 ± 59°C. Averaging this global estimate with that recovered at Siqueiros yields TpMORB = 1458 ± 78°C, which is used to calculate plume excess temperatures, Te. The estimate for TpMORB defines the convective mantle geotherm, and is consistent with estimates from sea floor bathymetry and heat flow (Stein and Stein, 1992), and
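
    The abstract's headline numbers follow from simple arithmetic (the averaging of the two Tp estimates is stated in the text; the hotspot value in the example is a placeholder, not a quantity from the abstract):

```python
# Hedged sketch: average the Siqueiros and global MORB potential-temperature
# estimates (1430 and 1485 deg C, from the abstract) and compute a plume
# excess temperature Te relative to that ambient value. The hotspot Tp in
# the example call is hypothetical.

def combined_tp(tp_a, tp_b):
    """Average of two mantle potential-temperature estimates (deg C)."""
    return (tp_a + tp_b) / 2.0

TP_MORB = combined_tp(1430.0, 1485.0)  # 1457.5, ~ the abstract's 1458 +/- 78 deg C

def excess_temperature(tp_hotspot, tp_morb=TP_MORB):
    """Te = hotspot potential temperature minus ambient MORB-source value."""
    return tp_hotspot - tp_morb

print(TP_MORB, excess_temperature(1600.0))  # prints 1457.5 142.5
```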

  8. Phytoextraction of excess soil phosphorus

    International Nuclear Information System (INIS)

    Sharma, Nilesh C.; Starnes, Daniel L.; Sahi, Shivendra V.

    2007-01-01

    In the search for a suitable plant to be used in P phytoremediation, several species belonging to legume, vegetable and herb crops were grown in P-enriched soils, and screened for P accumulation potentials. A large variation in P concentrations of different plant species was observed. Some vegetable species such as cucumber (Cucumis sativus) and yellow squash (Cucurbita pepo var. melopepo) were identified as potential P accumulators with >1% (dry weight) P in their shoots. These plants also displayed a satisfactory biomass accumulation while growing on a high concentration of soil P. The elevated activities of phosphomonoesterase and phytase were observed when plants were grown in P-enriched soils, this possibly contributing to high P acquisition in these species. Sunflower plants also demonstrated an increased shoot P accumulation. This study shows that the phytoextraction of phosphorus can be effective using appropriate plant species. - Crop plants such as cucumber, squash and sunflower accumulate phosphorus and thus can be used in the phytoextraction of excess phosphorus from soils

  9. Phytoextraction of excess soil phosphorus

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Nilesh C. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States); Starnes, Daniel L. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States); Sahi, Shivendra V. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States)]. E-mail: shiv.sahi@wku.edu

    2007-03-15

    In the search for a suitable plant to be used in P phytoremediation, several species belonging to legume, vegetable and herb crops were grown in P-enriched soils, and screened for P accumulation potentials. A large variation in P concentrations of different plant species was observed. Some vegetable species such as cucumber (Cucumis sativus) and yellow squash (Cucurbita pepo var. melopepo) were identified as potential P accumulators with >1% (dry weight) P in their shoots. These plants also displayed a satisfactory biomass accumulation while growing on a high concentration of soil P. The elevated activities of phosphomonoesterase and phytase were observed when plants were grown in P-enriched soils, this possibly contributing to high P acquisition in these species. Sunflower plants also demonstrated an increased shoot P accumulation. This study shows that the phytoextraction of phosphorus can be effective using appropriate plant species. - Crop plants such as cucumber, squash and sunflower accumulate phosphorus and thus can be used in the phytoextraction of excess phosphorus from soils.

  10. Pharmacological treatment of aldosterone excess

    NARCIS (Netherlands)

    Deinum, J.; Riksen, N.P.; Lenders, J.W.M.

    2015-01-01

    Primary aldosteronism, caused by autonomous secretion of aldosterone by the adrenals, is estimated to account for at least 5% of hypertension cases. Hypertension explains the considerable cardiovascular morbidity caused by aldosteronism only partly, calling for specific anti-aldosterone drugs. The

  11. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Sargent, T.O.

    1981-01-01

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge-based behavior. Methods which assure accurate conditioned behavior and provide for the recovery of knowledge-based behavior are described in detail.

  12. Excess mortality during the warm summer of 2015 in Switzerland.

    Science.gov (United States)

    Vicedo-Cabrera, Ana M; Ragettli, Martina S; Schindler, Christian; Röösli, Martin

    2016-01-01

    In Switzerland, summer 2015 was the second warmest summer for 150 years (after summer 2003). For summer 2003, a 6.9% excess mortality was estimated for Switzerland, which corresponded to 975 extra deaths. The impact of the heat in summer 2015 in Switzerland has not so far been evaluated. Daily age group-, gender- and region-specific all-cause excess mortality during summer (June-August) 2015 was estimated, based on predictions derived from quasi-Poisson regression models fitted to the daily mortality data for the 10 previous years. Estimates of excess mortality were derived for 1 June to 31 August, at national and regional level, as well as by month and for specific heat episodes identified in summer 2015 by use of seven different definitions. 804 excess deaths (5.4%, 95% confidence interval [CI] 3.0‒7.9%) were estimated for summer 2015 compared with previous summers, with the highest percentage obtained for July (11.6%, 95% CI 3.7‒19.4%). Seventy-seven percent of deaths occurred in people aged 75 years and older. Ticino (10.3%, 95% CI -1.8‒22.4%), Northwestern Switzerland (9.5%, 95% CI 2.7‒16.3%) and Espace Mittelland (8.9%, 95% CI 3.7‒14.1%) showed the highest excess mortality during this three-month period, whereas fewer deaths than expected (-3.3%, 95% CI -9.2‒2.6%) were observed in Eastern Switzerland, the coldest region. The largest excess estimate of 23.7% was obtained during days when both maximum apparent and minimum night-time temperature reached extreme values (+32 and +20 °C, respectively), with 31.0% extra deaths for periods of three days or more. Heat during summer 2015 was associated with an increase in mortality in the warmer regions of Switzerland, and it mainly affected older people. Estimates for 2015 were only slightly lower than those for summer 2003, indicating that mitigation measures to prevent heat-related mortality in Switzerland have not become noticeably effective in the last 10 years.
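
    The observed-minus-expected logic behind such estimates can be sketched as follows (the study fitted quasi-Poisson regression models with age/gender/region strata to ten previous years of daily data; the plain baseline mean and all numbers below are illustrative simplifications):

```python
# Simplified observed-vs-expected excess mortality: expected deaths are
# taken as the mean of the previous summers (the study used quasi-Poisson
# regression on daily data); all numbers are illustrative, not the study's.
import numpy as np

def excess_mortality(observed, baseline_years):
    """Absolute and percentage excess of observed deaths over the baseline mean."""
    expected = float(np.mean(baseline_years))
    excess = observed - expected
    return excess, 100.0 * excess / expected

baseline = [14800, 14950, 15100, 14700, 15050, 14900, 15000, 14850, 15200, 14950]
abs_excess, pct = excess_mortality(15750, baseline)
```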

  13. Androgen excess: Investigations and management.

    Science.gov (United States)

    Lizneva, Daria; Gavrilova-Jordan, Larisa; Walker, Walidah; Azziz, Ricardo

    2016-11-01

    Androgen excess (AE) is a key feature of polycystic ovary syndrome (PCOS) and results in, or contributes to, the clinical phenotype of these patients. Although AE will contribute to the ovulatory and menstrual dysfunction of these patients, the most recognizable sign of AE includes hirsutism, acne, and androgenic alopecia or female pattern hair loss (FPHL). Evaluation includes not only scoring facial and body terminal hair growth using the modified Ferriman-Gallwey method but also recording and possibly scoring acne and alopecia. Moreover, assessment of biochemical hyperandrogenism is necessary, particularly in patients with unclear or absent hirsutism, and will include assessing total and free testosterone (T), and possibly dehydroepiandrosterone sulfate (DHEAS) and androstenedione, although these latter contribute limitedly to the diagnosis. Assessment of T requires use of the highest quality assays available, generally radioimmunoassays with extraction and chromatography or mass spectrometry preceded by liquid or gas chromatography. Management of clinical hyperandrogenism involves primarily either androgen suppression, with a hormonal combination contraceptive, or androgen blockade, as with an androgen receptor blocker or a 5α-reductase inhibitor, or a combination of the two. Medical treatment should be combined with cosmetic treatment including topical eflornithine hydrochloride and short-term (shaving, chemical depilation, plucking, threading, waxing, and bleaching) and long-term (electrolysis, laser therapy, and intense pulse light therapy) cosmetic treatments. Generally, acne responds to therapy relatively rapidly, whereas hirsutism is slower to respond, with improvements observed as early as 3 months, but routinely only after 6 or 8 months of therapy. Finally, FPHL is the slowest to respond to therapy, if it will at all, and it may take 12 to 18 months of therapy for an observable response. Copyright © 2016. Published by Elsevier Ltd.

  14. Robust and accurate vectorization of line drawings.

    Science.gov (United States)

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.

  15. Excess cash holdings and shareholder value

    OpenAIRE

    Lee, Edward; Powell, Ronan

    2011-01-01

    We examine the determinants of corporate cash holdings in Australia and the impact on shareholder wealth of holding excess cash. Our results show that a trade-off model best explains the level of a firm’s cash holdings in Australia. We find that 'transitory' excess cash firms earn significantly higher risk-adjusted returns compared to 'persistent' excess cash firms, suggesting that the market penalises firms that hoard cash. The marginal value of cash also declines with larger cash balances, ...

  16. Syndromes associated with nutritional deficiency and excess.

    Science.gov (United States)

    Jen, Melinda; Yan, Albert C

    2010-01-01

    Normal functioning of the human body requires a balance between nutritional intake and metabolism, and imbalances manifest as nutritional deficiencies or excess. Nutritional deficiency states are associated with social factors (war, poverty, famine, and food fads), medical illnesses with malabsorption (such as Crohn disease, cystic fibrosis, and after bariatric surgery), psychiatric illnesses (eating disorders, autism, alcoholism), and medications. Nutritional excess states result from inadvertent or intentional excessive intake. Cutaneous manifestations of nutritional imbalance can herald other systemic manifestations. This contribution discusses nutritional deficiency and excess syndromes with cutaneous manifestations of particular interest to clinical dermatologists. Copyright © 2010. Published by Elsevier Inc.

  17. [Excess mortality associated with influenza in Spain in winter 2012].

    Science.gov (United States)

    León-Gómez, Inmaculada; Delgado-Sanz, Concepción; Jiménez-Jorge, Silvia; Flores, Víctor; Simón, Fernando; Gómez-Barroso, Diana; Larrauri, Amparo; de Mateo Ontañón, Salvador

    2015-01-01

    An excess of mortality was detected in Spain in February and March 2012 by the Spanish daily mortality surveillance system and the «European monitoring of excess mortality for public health action» program. The objective of this article was to determine whether this excess could be attributed to influenza in this period. Excess mortality from all causes from 2006 to 2012 was studied using time series in the Spanish daily mortality surveillance system, and Poisson regression in the European mortality surveillance system, as well as the FluMOMO model, which estimates the mortality attributable to influenza. Excess mortality due to influenza and pneumonia attributable to influenza was studied by a modification of the Serfling model. To detect the periods of excess, we compared observed and expected mortality. In February and March 2012, the Spanish daily mortality surveillance system and the European mortality surveillance system detected mortality excesses of 8,110 and 10,872 deaths, respectively (mortality ratio (MR): 1.22 (95% CI: 1.21-1.23) and 1.32 (95% CI: 1.29-1.31)). In the 2011-12 season, the FluMOMO model identified the maximum percentage (97%) of deaths attributable to influenza in people older than 64 years with respect to the total mortality associated with influenza (13,822 deaths). The rates of excess mortality due to influenza and pneumonia and due to respiratory causes in people older than 64 years, obtained by the Serfling model, also reached a peak in the 2011-2012 season: 18.07 and 77.20 deaths per 100,000 inhabitants, respectively. A significant increase in mortality of elderly people in Spain was detected by the Spanish daily mortality surveillance system and by the European mortality surveillance system in the winter of 2012, coinciding with a late influenza season with a predominance of the A(H3N2) virus and a cold wave in Spain. This study suggests that influenza could have been one of the main factors contributing to the mortality excess.
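
    A minimal Serfling-type baseline of the kind mentioned above can be sketched as a least-squares fit of a linear trend plus annual harmonics (the exact model form used in the study is not given here, so this is an assumed textbook variant on synthetic data):

```python
# Assumed minimal Serfling-type baseline: linear trend + annual harmonic,
# fitted by ordinary least squares; excess mortality is observed minus the
# fitted baseline. Synthetic weekly data, for illustration only.
import numpy as np

def serfling_baseline(weeks, deaths):
    """Fit deaths ~ a + b*t + c*sin(2*pi*t/52) + d*cos(2*pi*t/52); return the fit."""
    t = np.asarray(weeks, dtype=float)
    X = np.column_stack([
        np.ones_like(t),
        t,
        np.sin(2 * np.pi * t / 52.0),
        np.cos(2 * np.pi * t / 52.0),
    ])
    coef, *_ = np.linalg.lstsq(X, np.asarray(deaths, dtype=float), rcond=None)
    return X @ coef

t = np.arange(104)                                           # two years, weekly
true = 1000 + 0.5 * t + 120 * np.cos(2 * np.pi * t / 52.0)   # winter-peaking baseline
obs = true + np.random.default_rng(0).normal(0, 10, t.size)  # noisy observations
excess = obs - serfling_baseline(t, obs)
```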

  18. BENEFITS OF WILDERNESS EXPANSION WITH EXCESS DEMAND FOR INDIAN PEAKS

    OpenAIRE

    Walsh, Richard G.; Gilliam, Lynde O.

    1982-01-01

    The contingent valuation approach was applied to the problem of estimating the recreation benefits from alleviating congestion at the Indian Peaks wilderness area, Colorado. A random sample of 126 individuals was interviewed while hiking and backpacking at the study site in 1979. The results provide an empirical test and confirmation of the Cesario and Freeman proposals that under conditions of excess recreational demand for existing sites, enhanced opportunities to substitute newly designated s...

  19. The Accurate Particle Tracer Code

    OpenAIRE

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...

  20. Accurate x-ray spectroscopy

    International Nuclear Information System (INIS)

    Deslattes, R.D.

    1987-01-01

    Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Ions having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in measurement of these spectra. This report aims to provide background about spectroscopic limitations and discuss how accelerator operations may be selected to permit attaining intrinsically limited data.

  1. Variation in excess oxidant factor in combustion products of MHD generator. [Natural gas fuel

    Energy Technology Data Exchange (ETDEWEB)

    Pinkhasik, M S; Mironov, V D; Zakharko, Yu A; Plavinskii, A I

    1977-12-01

    Methods and difficulties associated with determining the excess oxidant factor for natural gas-fired MHD generators are discussed. Measurement of this factor is noted to be essential for optimizing the combustion chamber and the operation of MHD generators. A gas analyzer of the electrochemical type is considered as a quick-response sensor capable of analyzing the composition of the combustion products and thus accurately determining the excess oxidant factor. The principle of operation of this sensor is discussed, and the dependence of the electrochemical sensor emf on the excess oxidant factor is shown. Three types of sensors are illustrated and tables of test results are provided.
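
    Once a sensor yields the O2 content of the combustion products, a textbook approximation converts it to the excess-oxidant (excess-air) factor (this dry-flue-gas formula is a standard simplification and is not taken from the paper):

```python
# Standard dry-flue-gas approximation relating measured O2 (vol%) to the
# excess-air factor alpha; 21 is the O2 percentage of air. This formula is
# a textbook simplification, not taken from the paper.

def excess_air_factor(o2_percent):
    """Approximate excess-air (excess-oxidant) factor from O2 in dry flue gas."""
    if not 0.0 <= o2_percent < 21.0:
        raise ValueError("O2 must be in [0, 21) vol%")
    return 21.0 / (21.0 - o2_percent)

print(round(excess_air_factor(3.0), 2))  # prints 1.17
```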

  2. Bladder calculus presenting as excessive masturbation.

    Science.gov (United States)

    De Alwis, A C D; Senaratne, A M R D; De Silva, S M P D; Rodrigo, V S D

    2006-09-01

    Masturbation in childhood is a normal behaviour that most commonly begins at 2 months of age and peaks at 4 years and in adolescence. However, excessive masturbation causes anxiety in parents. We describe a boy with a bladder calculus presenting as excessive masturbation.

  3. The excessively crying infant : etiology and treatment

    NARCIS (Netherlands)

    Akhnikh, S.; Engelberts, A.C.; Sleuwen, B.E. van; Hoir, M.P. L’; Benninga, M.A.

    2014-01-01

    Excessive crying, often described as infantile colic, is the cause of 10% to 20% of all early pediatrician visits of infants aged 2 weeks to 3 months. Although usually benign and self-limiting, excessive crying is associated with parental exhaustion and stress. However, an underlying organic cause

  4. Part B Excess Cost Quick Reference Document

    Science.gov (United States)

    Ball, Wayne; Beridon, Virginia; Hamre, Kent; Morse, Amanda

    2011-01-01

    This Quick Reference Document has been prepared by the Regional Resource Center Program ARRA/Fiscal Priority Team to aid RRCP State Liaisons and other Technical Assistance (TA) providers in understanding the general context of state questions surrounding excess cost. As a "first-stop" for TA providers in investigating excess cost…

  5. Predictive Value of Intraoperative Thromboelastometry for the Risk of Perioperative Excessive Blood Loss in Infants and Children Undergoing Congenital Cardiac Surgery: A Retrospective Analysis.

    Science.gov (United States)

    Kim, Eunhee; Shim, Haeng Seon; Kim, Won Ho; Lee, Sue-Young; Park, Sun-Kyung; Yang, Ji-Hyuk; Jun, Tae-Gook; Kim, Chung Su

    2016-10-01

    Laboratory hemostatic variables and parameters of rotational thromboelastometry (ROTEM) were evaluated for their ability to predict perioperative excessive blood loss (PEBL) after congenital cardiac surgery. Retrospective and observational. Single, large university hospital. The study comprised 119 children younger than 10 years old undergoing congenital cardiac surgery with cardiopulmonary bypass (CPB). Intraoperative excessive blood loss was defined as estimated blood loss ≥50% of estimated blood volume (EBV). Postoperative excessive blood loss was defined as measured postoperative chest tube and Jackson-Pratt drainage ≥30% of EBV over 12 hours or ≥50% of EBV over 24 hours in the intensive care unit. PEBL was defined as either intraoperative or postoperative excessive blood loss. External temogram (EXTEM) and fibrinogen temogram (FIBTEM) were analyzed before and after CPB with ROTEM and laboratory hemostatic variables. Multivariate logistic regression was performed. Incidence of PEBL was 19.3% (n = 23). Independent risk factors for PEBL were CPB time >120 minutes, post-CPB FIBTEM alpha-angle, clot firmness after 10 minutes20%. Laboratory hemostatic variables were not significant in multivariate analysis. The risk prediction model was developed from the results of multivariate analysis. The area under the receiver operating characteristic curve was 0.94 (95% confidence interval: 0.90-0.99). Post-CPB ROTEM may be useful for predicting both intraoperative and postoperative excessive blood loss in congenital cardiac surgery. This study provided an accurate prediction model for PEBL and supported intraoperative transfusion guidance using post-CPB FIBTEM-A10 and EXTEM-A10. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Thermophysical and excess properties of hydroxamic acids in DMSO

    International Nuclear Information System (INIS)

    Thakur, Piyush Kumar; Patre, Sandhya; Pande, Rama

    2013-01-01

    Graphical abstract: Excess molar volumes (V^E) vs. mole fraction (x_2) of (A) N-o-tolyl-2-nitrobenzo- and (B) N-o-tolyl-4-nitrobenzo-hydroxamic acids in DMSO at 298.15, 303.15, 308.15, 313.15 and 318.15 K. Highlights: ► ρ and n of the system hydroxamic acids in DMSO are reported. ► Apparent molar volume indicates superior solute–solvent interactions. ► Limiting apparent molar expansibility and coefficient of thermal expansion are derived. ► The behaviour of these parameters suggests that hydroxamic acids act as structure makers. ► The excess properties are interpreted in terms of molecular interactions. -- Abstract: In this work, densities (ρ) and refractive indices (n) of N-o-tolyl-2-nitrobenzo- and N-o-tolyl-4-nitrobenzo-hydroxamic acids have been determined in dimethyl sulfoxide (DMSO) as a function of their concentrations at T = (298.15, 303.15, 308.15, 313.15, and 318.15) K. These measurements were carried out to evaluate some important parameters, viz. molar volume (V), apparent molar volume (V_ϕ), limiting apparent molar volume (V_ϕ^0), slope (S_V^*), molar refraction (R_M) and polarizability (α). The related parameters determined are the limiting apparent molar expansivity (ϕ_E^0), thermal expansion coefficient (α_2) and the Hepler constant (∂^2V_ϕ^0/∂T^2). Excess properties such as excess molar volume (V^E), deviation from the additivity rule of refractive index (n^E), and excess molar refraction (R_M^E) have also been evaluated. The excess properties were fitted to the Redlich–Kister equations to estimate their coefficients, and standard deviations were determined. The variations of these excess parameters with composition were discussed from the viewpoint of intermolecular interactions in these solutions. The excess properties are found to be either positive or negative depending on the molecular interactions and the nature of the solutions. Further, these parameters have been interpreted
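
    The Redlich–Kister fit named in the abstract can be sketched by linear least squares (the number of terms is an assumption; the data below are synthetic):

```python
# Sketch: fit excess molar volume V^E to a Redlich-Kister polynomial,
# V^E = x(1-x) * sum_k A_k (2x-1)^k, by linear least squares. A three-term
# expansion is assumed; the data are synthetic.
import numpy as np

def redlich_kister(x, coef):
    """Evaluate the Redlich-Kister expansion at mole fractions x."""
    x = np.asarray(x, dtype=float)
    return x * (1 - x) * sum(a * (2 * x - 1) ** k for k, a in enumerate(coef))

def fit_redlich_kister(x, ve, n_terms=3):
    """Least-squares estimate of the coefficients A_k."""
    x = np.asarray(x, dtype=float)
    basis = np.column_stack([x * (1 - x) * (2 * x - 1) ** k for k in range(n_terms)])
    coef, *_ = np.linalg.lstsq(basis, np.asarray(ve, dtype=float), rcond=None)
    return coef

x = np.linspace(0.05, 0.95, 19)
ve = redlich_kister(x, [-1.2, 0.4, 0.1])  # synthetic "measured" V^E data
fitted = fit_redlich_kister(x, ve)
```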

  7. 75 FR 27572 - Monthly Report of Excess Income and Annual Report of Uses of Excess Income

    Science.gov (United States)

    2010-05-17

    ... Income and Annual Report of Uses of Excess Income AGENCY: Office of the Chief Information Officer, HUD... permitted to retain Excess Income for projects under terms and conditions established by HUD. Owners must request to retain some or all of their Excess Income. The request must be submitted through http://www.pay...

  8. Cardiovascular investigations of airline pilots with excessive cardiovascular risk.

    Science.gov (United States)

    Wirawan, I Made Ady; Aldington, Sarah; Griffiths, Robin F; Ellis, Chris J; Larsen, Peter D

    2013-06-01

This study examined the prevalence of airline pilots who have an excessive cardiovascular disease (CVD) risk score according to the New Zealand Guideline Group (NZGG) Framingham-based Risk Chart, and describes their cardiovascular risk assessment and investigations. A cross-sectional study was performed among 856 pilots employed in an Oceania-based airline. Pilots with elevated CVD risk who had been evaluated at various times over the previous 19 yr were reviewed retrospectively from the airline's medical records, and the subsequent cardiovascular investigations were then described. There were 30 (3.5%) pilots found to have a 5-yr CVD risk score of 10-15% or higher. Of the 29 pilots with complete cardiac investigation data, 26 underwent exercise electrocardiography (ECG), 2 progressed directly to coronary angiograms, and 1 pilot with an abnormal echocardiogram was not examined further. Of the 26 pilots, 7 had positive or borderline exercise tests, all of whom subsequently had angiograms. One patient with a negative exercise test also had a coronary angiogram. Of the 9 patients who had coronary angiograms as a consequence of screening, 5 had significant disease that required treatment and 4 had either trivial disease or normal coronary arteries. The current approach to investigating excessive cardiovascular risk in pilots relies heavily on exercise electrocardiograms as a diagnostic test, and may be optimal neither for detecting disease nor for protecting pilots from unnecessary invasive procedures. A more comprehensive and accurate cardiac investigation algorithm to assess excessive CVD risk in pilots is required.

  9. Millisecond Pulsars and the Galactic Center Excess

    Science.gov (United States)

    Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice; Ferrara, Elizabeth C.

    2017-08-01

Various groups, including the Fermi team, have confirmed the spectrum of the gamma-ray excess in the Galactic Center (GCE). While some authors interpret the GCE as evidence for the annihilation of dark matter (DM), others have pointed out that the GCE spectrum is nearly identical to the average spectrum of Fermi millisecond pulsars (MSPs). Assuming the Galactic Center (GC) is populated by a yet unobserved source of MSPs with properties similar to those of MSPs in the Galactic Disk (GD), we present results of a population synthesis of MSPs from the GC. We establish parameters of various models implemented in the simulation code by matching characteristics of 54 detected Fermi MSPs in the first point source catalog and 92 detected radio MSPs in a select group of thirteen radio surveys, targeting a birth rate of 45 MSPs per mega-year. As a check of our simulation, we find excellent agreement with the estimated numbers of MSPs in eight globular clusters. In order to reproduce the gamma-ray spectrum of the GCE, we need to populate the GC with 10,000 MSPs having a Navarro-Frenk-White distribution suggested by the halo density of DM. It may be possible for Fermi to detect some of these MSPs in the near future; the simulation also predicts that many GC MSPs have radio fluxes S1400 above 10 μJy, observable by future pointed radio observations. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), the Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).

  10. Excessive crying in infants with regulatory disorders.

    Science.gov (United States)

    Maldonado-Duran, M; Sauceda-Garcia, J M

    1996-01-01

    The authors point out a correlation between regulatory disorders in infants and the problem of excessive crying. The literature describes other behavioral problems involving excessive crying in very young children, but with little emphasis on this association. The recognition and diagnosis of regulatory disorders in infants who cry excessively can help practitioners design appropriate treatment interventions. Understanding these conditions can also help parents tailor their caretaking style, so that they provide appropriate soothing and stimulation to their child. In so doing, they will be better able to develop and preserve a satisfactory parent-child relationship, as well as to maintain their own sense of competence and self-esteem as parents.

  11. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  12. Accurate thickness measurement of graphene

    International Nuclear Information System (INIS)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-01-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. (paper)

  13. Predictors of excessive use of social media and excessive online gaming in Czech teenagers.

    Science.gov (United States)

    Spilková, Jana; Chomynová, Pavla; Csémy, Ladislav

    2017-12-01

Background and aims Young people's involvement in online gaming and the use of social media are increasing rapidly, resulting in a high number of excessive Internet users in recent years. The objective of this paper is to analyze the situation of excessive Internet use among adolescents in the Czech Republic and to reveal determinants of excessive use of social media and excessive online gaming. Methods Data from secondary school students (N = 4,887) were collected within the 2015 European School Survey Project on Alcohol and Other Drugs. Logistic regression models were constructed to describe the individual and familial discriminative factors and the impact of health risk behavior for (a) excessive users of social media and (b) excessive players of online games. Results The models confirmed important gender-specific distinctions: while girls are more prone to online communication and social media use, online gaming is far more prevalent among boys. The analysis did not indicate an influence of family composition on either the excessive use of social media or excessive online gaming, and only marginal effects for the type of school attended. We found a connection between the excessive use of social media and binge drinking, and an inverse relation between excessive online gaming and daily smoking. Discussion and conclusion The absence of significant associations between family environment and excessive Internet use confirms how widespread this phenomenon is across the social and economic strata of the teenage population, indicating a need for further studies on the topic.

  14. Accurate predictions for the LHC made easy

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The data recorded by the LHC experiments is of a very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce as much as possible theoretical bias in the experimental analyses. Recently, significant progress has been made in computing Next-to-Leading Order (NLO) computations, including matching to the parton shower, that allow for these accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, that aims at the complete automation of predictions at the NLO accuracy within the SM as well as New Physics theories. I’ll illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, as well as describe the future plans.

  15. Explaining CMS lepton excesses with supersymmetry

    CERN Multimedia

    CERN. Geneva; Prof. Allanach, Benjamin

    2014-01-01

    1) Kostas Theofilatos will give an introduction to CMS result 2) Ben Allanach: Several CMS analyses involving di-leptons have recently reported small 2.4-2.8 sigma local excesses: nothing to get too excited about, but worth keeping an eye on nonetheless. In particular, a search in the $lljj p_T$(miss) channel, a search for $W_R$ in the $lljj$ channel and a di-leptoquark search in the $lljj$ channel and $ljj p_T$(miss) channel have all yielded small excesses. We interpret the first excess in the MSSM, showing that the interpretation is viable in terms of other constraints, despite only having squark masses of around 1 TeV. We can explain the last three excesses with a single R-parity violating coupling that predicts a non-zero contribution to the neutrinoless double beta decay rate.

  16. Romanian welfare state between excess and failure

    Directory of Open Access Journals (Sweden)

    Cristina Ciuraru-Andrica

    2012-12-01

Full Text Available Timely or not, our topic can revive some prolific and sometimes diametrically opposed discussions. We address social assistance, where it is still uncertain whether, once the excess has been unleashed, failure will follow inevitably or whether there is a "Salvation Ark". However, the difference between the excess and the failure of the welfare state is almost intangible, the reason for its potential failure being, in fact, the abuses committed before the onset of the depression.

  17. Brazilian air traffic controllers exhibit excessive sleepiness.

    Science.gov (United States)

    Ribas, Valdenilson Ribeiro; de Almeida, Cláudia Ângela Vilela; Martins, Hugo André de Lima; Alves, Carlos Frederico de Oliveira; Alves, Marcos José Pinheiro Cândido; Carneiro, Severino Marcos de Oliveira; Ribas, Valéria Ribeiro; de Vasconcelos, Carlos Augusto Carvalho; Sougey, Everton Botelho; de Castro, Raul Manhães

    2011-01-01

Excessive sleepiness (ES) is an increased tendency to initiate involuntary sleep for naps at inappropriate times. The objective of this study was to assess ES in air traffic controllers (ATCo). 45 flight protection professionals were evaluated, comprising 30 ATCo, subdivided into ATCo with ten or more years in the profession (ATCo≥10, n=15) and ATCo with less than ten years in the profession (ATCo<10, n=15). Brazilian air traffic controllers exhibit excessive sleepiness.

  18. Stellar origin of the 22Ne excess in cosmic rays

    International Nuclear Information System (INIS)

    Casse, M.; Paul, J.A.

    1982-01-01

The ²²Ne excess at the cosmic-ray source is discussed in terms of a ²²Ne-rich component injected and accelerated by carbon-rich Wolf-Rayet stars. The overabundance of ²²Ne relative to ²⁰Ne predicted at the surface of these stars is estimated to be a factor of about 120 with respect to solar system abundances. In order to give rise to a ²²Ne excess of about 3 at the cosmic-ray sources, as inferred from observations, the carbon-rich Wolf-Rayet contribution to the primary cosmic-ray flux must be at most 1/60. This component would be energized by strong stellar winds producing quasi-standing shocks around the Wolf-Rayet stars.

  19. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.

  20. Is Excessive Polypharmacy a Transient or Persistent Phenomenon? A Nationwide Cohort Study in Taiwan

    Directory of Open Access Journals (Sweden)

    Yi-Jen Wang

    2018-02-01

Full Text Available Objectives: Target populations with persistent polypharmacy should be identified prior to implementing strategies against inappropriate medication use, yet limited information regarding such populations is available. The main objectives were to explore the trends of excessive polypharmacy, whether transient or persistent, at the individual level. The secondary objectives were to identify the factors associated with persistently excessive polypharmacy and to estimate the probabilities for repeatedly excessive polypharmacy. Methods: Retrospective cohort analyses of excessive polypharmacy, defined as prescription of ≥ 10 medicines at an ambulatory visit, from 2001 to 2013 were conducted using a nationally representative claims database in Taiwan. Survival analyses with log-rank test of adult patients with first-time excessive polypharmacy were conducted to predict the probabilities, stratified by age and sex, of having repeatedly excessive polypharmacy. Results: During the study period, excessive polypharmacy occurred in 5.4% of patients for the first time. Among them, 63.9% had repeatedly excessive polypharmacy and the probabilities were higher in men and old people. Men versus women, and old versus middle-aged and young people, had shorter median excessive polypharmacy-free times (9.4 vs. 5.5 months, 5.3 vs. 10.1 and 35.0 months, both p < 0.001). Overall, the probabilities of having no repeatedly excessive polypharmacy within 3 months, 6 months, and 1 year were 59.9, 53.6, and 48.1%, respectively. Conclusion: Although male and old patients were more likely to have persistently excessive polypharmacy, most cases of excessive polypharmacy were transient or did not re-appear in the short run. Systemic deprescribing measures should be tailored to at-risk groups.

  1. Is Excessive Polypharmacy a Transient or Persistent Phenomenon? A Nationwide Cohort Study in Taiwan

    Science.gov (United States)

    Wang, Yi-Jen; Chiang, Shu-Chiung; Lee, Pei-Chen; Chen, Yu-Chun; Chou, Li-Fang; Chou, Yueh-Ching; Chen, Tzeng-Ji

    2018-01-01

    Objectives: Target populations with persistent polypharmacy should be identified prior to implementing strategies against inappropriate medication use, yet limited information regarding such populations is available. The main objectives were to explore the trends of excessive polypharmacy, whether transient or persistent, at the individual level. The secondary objectives were to identify the factors associated with persistently excessive polypharmacy and to estimate the probabilities for repeatedly excessive polypharmacy. Methods: Retrospective cohort analyses of excessive polypharmacy, defined as prescription of ≥ 10 medicines at an ambulatory visit, from 2001 to 2013 were conducted using a nationally representative claims database in Taiwan. Survival analyses with log-rank test of adult patients with first-time excessive polypharmacy were conducted to predict the probabilities, stratified by age and sex, of having repeatedly excessive polypharmacy. Results: During the study period, excessive polypharmacy occurred in 5.4% of patients for the first time. Among them, 63.9% had repeatedly excessive polypharmacy and the probabilities were higher in men and old people. Men versus women, and old versus middle-aged and young people had shorter median excessive polypharmacy-free times (9.4 vs. 5.5 months, 5.3 vs. 10.1 and 35.0 months, both p < 0.001). Overall, the probabilities of having no repeatedly excessive polypharmacy within 3 months, 6 months, and 1 year were 59.9, 53.6, and 48.1%, respectively. Conclusion: Although male and old patients were more likely to have persistently excessive polypharmacy, most cases of excessive polypharmacy were transient or did not re-appear in the short run. Systemic deprescribing measures should be tailored to at-risk groups. PMID:29515446
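The survival probabilities reported above (the chance of remaining free of repeatedly excessive polypharmacy at 3, 6, and 12 months) are the kind of quantity a Kaplan–Meier estimator produces. A minimal sketch under stated assumptions: the follow-up times and event flags below are hypothetical, and the study's stratified log-rank comparison is not reproduced here.

```python
def kaplan_meier(times, events):
    """Minimal Kaplan-Meier estimator.
    times  : follow-up time for each patient (any comparable unit)
    events : 1 if the event occurred (here: repeated excessive
             polypharmacy), 0 if the patient was censored
    Returns a list of (time, survival probability) steps at event times."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = n = 0
        # Group all patients sharing this follow-up time
        while i < len(data) and data[i][0] == t:
            n += 1
            d += data[i][1]
            i += 1
        if d:  # survival drops only at times with observed events
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= n
    return curve

# Hypothetical cohort of five patients followed for 1-5 months;
# patients 3 and 5 are censored without the event
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

For this toy cohort the curve steps down to 0.8, 0.6 and 0.3 at months 1, 2 and 4; the censored patients reduce the risk set without producing a step.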

  2. Cancers attributable to excess body weight in Canada in 2010

    Directory of Open Access Journals (Sweden)

    Dianne Zakaria

    2017-07-01

Full Text Available Introduction: Excess body weight (body mass index [BMI] ≥ 25.00 kg/m²) is an established risk factor for diabetes, hypertension and cardiovascular disease, but its relationship to cancer is lesser known. This study used population attributable fractions (PAFs) to estimate the cancer burden attributable to excess body weight in Canadian adults (aged 25+ years) in 2010. Methods: We estimated PAFs using relative risk (RR) estimates from the World Cancer Research Fund International Continuous Update Project, BMI-based estimates of overweight (25.00 kg/m²–29.99 kg/m²) and obesity (30.00+ kg/m²) from the 2000–2001 Canadian Community Health Survey, and cancer case counts from the Canadian Cancer Registry. PAFs were based on BMI corrected for the bias in self-reported height and weight. Results: In Canada in 2010, an estimated 9645 cancer cases were attributable to excess body weight, representing 5.7% of all cancer cases (males 4.9%, females 6.5%). When limiting the analysis to types of cancer associated with high BMI, the PAF increased to 14.9% (males 17.5%, females 13.3%). Types of cancer with the highest PAFs were esophageal adenocarcinoma (42.2%), kidney (25.4%), gastric cardia (20.7%), liver (20.5%), colon (20.5%) and gallbladder (20.2%) for males, and esophageal adenocarcinoma (36.1%), uterus (35.2%), gallbladder (23.7%) and kidney (23.0%) for females. Types of cancer with the greatest number of attributable cases were colon (1445), kidney (780) and advanced prostate (515) for males, and uterus (1825), postmenopausal breast (1765) and colon (675) for females. Irrespective of sex or type of cancer, PAFs were highest in the Prairies (except Alberta) and the Atlantic region, and lowest in British Columbia and Quebec. Conclusion: The cancer burden attributable to excess body weight is substantial and will continue to rise in the near future because of the rising prevalence of overweight and obesity in Canada.
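Population attributable fractions of the kind used above are commonly computed with Levin's formula from exposure prevalence and relative risk. A minimal sketch under stated assumptions: the prevalences, relative risks, and case count below are hypothetical illustrations, not the study's inputs.

```python
def paf_levin(prevalences, relative_risks):
    """Levin's population attributable fraction for several exposure
    categories (e.g. overweight and obese, vs. normal weight):
        PAF = sum(p_i*(RR_i - 1)) / (1 + sum(p_i*(RR_i - 1)))"""
    excess = sum(p * (rr - 1.0) for p, rr in zip(prevalences, relative_risks))
    return excess / (1.0 + excess)

# Hypothetical inputs: 35% overweight with RR 1.3 and 20% obese with
# RR 1.8 for some cancer site, applied to 10,000 incident cases
paf = paf_levin([0.35, 0.20], [1.3, 1.8])
attributable_cases = paf * 10000
```

Multiplying the PAF by the registry case count gives the attributable case numbers reported in the abstract; a category with RR = 1 contributes nothing, so the PAF is zero when exposure carries no excess risk.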

  3. The effect of external dynamic loads on the lifetime of rolling element bearings: accurate measurement of the bearing behaviour

    International Nuclear Information System (INIS)

    Jacobs, W; Boonen, R; Sas, P; Moens, D

    2012-01-01

    Accurate prediction of the lifetime of rolling element bearings is a crucial step towards a reliable design of many rotating machines. Recent research emphasizes an important influence of external dynamic loads on the lifetime of bearings. However, most lifetime calculations of bearings are based on the classical ISO 281 standard, neglecting this influence. For bearings subjected to highly varying loads, this leads to inaccurate estimations of the lifetime, and therefore excessive safety factors during the design and unexpected failures during operation. This paper presents a novel test rig, developed to analyse the behaviour of rolling element bearings subjected to highly varying loads. Since bearings are very precise machine components, their motion can only be measured in an accurately controlled environment. Otherwise, noise from other components and external influences such as temperature variations will dominate the measurements. The test rig is optimised to perform accurate measurements of the bearing behaviour. Also, the test bearing is fitted in a modular structure, which guarantees precise mounting and allows testing different types and sizes of bearings. Finally, a fully controlled multi-axial static and dynamic load is imposed on the bearing, while its behaviour is monitored with capacitive proximity probes.

  4. Excess under-5 female mortality across India: a spatial analysis using 2011 census data

    Directory of Open Access Journals (Sweden)

    Christophe Z Guilmoto, PhD

    2018-06-01

Full Text Available Summary: Background: Excess female mortality causes half of the missing women (estimated deficit of women in countries with a suspiciously low proportion of females in their population) today. Globally, most of these avoidable deaths of women occur during childhood in China and India. We aimed to estimate excess female under-5 mortality rate (U5MR) for India's 35 states and union territories and 640 districts. Methods: Using the summary birth history method (or Brass method), we derived district-level estimates of U5MR by sex from 2011 census data. We used data from 46 countries with no evidence of gender bias for mortality to estimate the effects and intensity of excess female mortality at district level. We used a detailed spatial and statistical analysis to highlight the correlates of excess mortality at district level. Findings: Excess female U5MR was 18·5 per 1000 livebirths (95% CI 13·1–22·6) in India 2000–2005, which corresponds to an estimated 239 000 excess deaths (169 000–293 000) per year. More than 90% of districts had excess female mortality, but the four largest states in northern India (Uttar Pradesh, Bihar, Rajasthan, and Madhya Pradesh) accounted for two-thirds of India's total number. Low economic development, gender inequity, and high fertility were the main predictors of excess female mortality. Spatial analysis confirmed the strong spatial clustering of postnatal discrimination against girls in India. Interpretation: The considerable effect of gender bias on mortality in India highlights the need for more proactive engagement with the issue of postnatal sex discrimination and a focus on the northern districts. Notably, these regions are not the same as those most affected by skewed sex ratio at birth. Funding: None.

  5. Excess under-5 female mortality across India: a spatial analysis using 2011 census data.

    Science.gov (United States)

    Guilmoto, Christophe Z; Saikia, Nandita; Tamrakar, Vandana; Bora, Jayanta Kumar

    2018-06-01

    Excess female mortality causes half of the missing women (estimated deficit of women in countries with suspiciously low proportion of females in their population) today. Globally, most of these avoidable deaths of women occur during childhood in China and India. We aimed to estimate excess female under-5 mortality rate (U5MR) for India's 35 states and union territories and 640 districts. Using the summary birth history method (or Brass method), we derived district-level estimates of U5MR by sex from 2011 census data. We used data from 46 countries with no evidence of gender bias for mortality to estimate the effects and intensity of excess female mortality at district level. We used a detailed spatial and statistical analysis to highlight the correlates of excess mortality at district level. Excess female U5MR was 18·5 per 1000 livebirths (95% CI 13·1-22·6) in India 2000-2005, which corresponds to an estimated 239 000 excess deaths (169 000-293 000) per year. More than 90% of districts had excess female mortality, but the four largest states in northern India (Uttar Pradesh, Bihar, Rajasthan, and Madhya Pradesh) accounted for two-thirds of India's total number. Low economic development, gender inequity, and high fertility were the main predictors of excess female mortality. Spatial analysis confirmed the strong spatial clustering of postnatal discrimination against girls in India. The considerable effect of gender bias on mortality in India highlights the need for more proactive engagement with the issue of postnatal sex discrimination and a focus on the northern districts. Notably, these regions are not the same as those most affected by skewed sex ratio at birth. None. Copyright © 2018 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY-NC-ND 4.0 license. Published by Elsevier Ltd.. All rights reserved.
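The excess-mortality arithmetic behind these figures can be sketched schematically: excess female U5MR is the observed female rate minus the rate expected from the male rate and a reference female-to-male mortality ratio (taken from countries with no evidence of gender bias), and excess deaths scale with the size of the female birth cohort. The rates and the ratio below are hypothetical; the ~12.9 million annual female livebirths in the consistency check is back-derived from the abstract's 18.5 per 1000 and 239 000 figures, not stated in the abstract.

```python
def excess_female_u5mr(female_u5mr, male_u5mr, ref_f_to_m_ratio):
    """Schematic excess female under-5 mortality rate (per 1000 livebirths):
    observed female rate minus the rate expected from the male rate and a
    reference female-to-male mortality ratio."""
    return female_u5mr - male_u5mr * ref_f_to_m_ratio

def excess_deaths(excess_u5mr_per_1000, female_livebirths):
    """Excess deaths implied by an excess U5MR and a birth cohort size."""
    return excess_u5mr_per_1000 / 1000.0 * female_livebirths

# Hypothetical district: female U5MR 90, male U5MR 85 per 1000 livebirths,
# reference female-to-male ratio 0.93 -> excess of 10.95 per 1000
excess_rate = excess_female_u5mr(90.0, 85.0, 0.93)

# Consistency check against the abstract's national figures:
# 18.5 per 1000 times ~12.9 million female livebirths/year (inferred)
implied_deaths = excess_deaths(18.5, 12.9e6)   # close to the reported 239 000
```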

  6. 3D Volumetry and its Correlation Between Postoperative Gastric Volume and Excess Weight Loss After Sleeve Gastrectomy.

    Science.gov (United States)

    Hanssen, Andrés; Plotnikov, Sergio; Acosta, Geylor; Nuñez, José Tomas; Haddad, José; Rodriguez, Carmen; Petrucci, Claudia; Hanssen, Diego; Hanssen, Rafael

    2018-03-01

The volume of the postoperative gastric remnant is a key factor in excess weight loss (EWL) after sleeve gastrectomy (SG). Traditional methods to estimate gastric volume (GV) after bariatric procedures are often inaccurate; usually conventional biplanar contrast studies are used. Thirty patients who underwent SG were followed prospectively and evaluated 6 months after the surgical procedure, performing 3D CT reconstruction and gastric volumetry to establish its relationship with EWL. The gastric remnant was distended with effervescent sodium bicarbonate given orally. Helical CT images were acquired and reconstructed; GV was estimated with the software of the CT device. The relationship between GV and EWL was analyzed. GV could be estimated in all patients. A scatter plot showed an inverse relationship between GV and %EWL. 55.5% of patients with GV ≤ 100 ml had %EWL of 25-75% and 38.8% had %EWL above 75%, whereas patients with GV ≥ 100 ml had %EWL under 25% (50% of patients) or between 25 and 75% (50% of this group). The Pearson correlation coefficient was R = 6.62, with two-tailed significance (p ≤ .01). The Chi-square result correlating GV and EWL showed a significance of .005 (p ≤ .01). The 3D reconstructions accurately showed the shape and anatomic details of the gastric remnant. 3D volumetry CT scans accurately estimate GV after SG. A significant relationship between GV and EWL 6 months after SG was established, suggesting that a GV ≥ 100 ml at 6 months after SG is associated with poor EWL.
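The inverse GV-%EWL relationship above is quantified with Pearson's product-moment correlation. A minimal pure-Python sketch, using hypothetical gastric-volume and %EWL pairs (not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient, always in [-1, 1]."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical gastric volumes (ml) and %EWL pairs with an inverse relation:
# larger remnants correspond to smaller excess weight loss
gv = [60, 75, 90, 105, 120, 150]
ewl = [82, 78, 60, 55, 40, 22]
r = pearson_r(gv, ewl)   # strongly negative
```

Note that by construction |r| ≤ 1, so a value like the R = 6.62 quoted in the abstract cannot be a correlation coefficient as printed; it is presumably a transcription artifact in the record.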

  7. Annealing behaviour of excess carriers in neutron-transmutation-doped silicon

    International Nuclear Information System (INIS)

    Maekawa, T.; Nogami, S.; Inoue, S.

    1993-01-01

In neutron-transmutation-doped silicon wafers excess carriers are clearly generated over the transmuted phosphorus atoms. The generation occurs for annealing temperatures above 900 °C. The maximum percentage of excess carriers obtained is about 24.5% of the final carrier concentration. Due to the difference in energy of generation and removal, the excess carriers can be removed by annealing above 800 °C. The radiation damage responsible for generation of excess carriers is fairly thermostable in the range of annealing temperatures below 800 °C. From deep-level transient spectroscopy measurements, it is found that the radiation damage remains insensitive to changes in carrier concentration. The activation energies of excess carrier generation and removal are estimated from the analysis of the thermal and temporal behaviours of radiation damage in the annealing process. (Author)

  8. Antidepressant induced excessive yawning and indifference

    Directory of Open Access Journals (Sweden)

    Bruno Palazzo Nazar

    2015-03-01

Full Text Available Introduction Antidepressant-induced excessive yawning has been described as a possible side effect of pharmacotherapy. A syndrome of indifference has also been described as another possible side effect. The frequency of these phenomena and their physiopathology are unknown. They are both considered benign and reversible after antidepressant discontinuation, but severe cases with complications, such as temporomandibular lesions, have been described. Methods We report two unprecedented cases in which excessive yawning and indifference occurred simultaneously as side effects of antidepressant therapy, discussing possible physiopathological mechanisms for this co-occurrence. Case 1: A male patient presented excessive yawning (approximately 80/day) and apathy after venlafaxine XR treatment. Symptoms diminished after a switch to escitalopram, with a reduction to 50 yawns/day. Case 2: A female patient presented excessive yawning (approximately 25/day) and inability to react to environmental stressors with desvenlafaxine. Conclusion Induction of indifference and excessive yawning may be modulated by serotonergic and noradrenergic mechanisms. One proposal to unify these side effects would be enhancement of serotonin in the midbrain, especially the paraventricular and raphe nuclei.

  9. New vector bosons and the diphoton excess

    Directory of Open Access Journals (Sweden)

    Jorge de Blas

    2016-08-01

Full Text Available We consider the possibility that the recently observed diphoton excess at ∼750 GeV can be explained by the decay of a scalar particle (φ) to photons. If the scalar is the remnant of a symmetry-breaking sector of some new gauge symmetry, its coupling to photons can be generated by loops of the charged massive vectors of the broken symmetry. If these new W′ vector bosons carry color, they can also generate an effective coupling to gluons. In this case the diphoton excess could be entirely explained in a simplified model containing just φ and W′. On the other hand, if W′ does not carry color, we show that, provided additional colored particles exist to generate the required φ-to-gluon coupling, the diphoton excess could be explained by the same W′ commonly invoked to explain the diboson excess at ∼2 TeV. We also explore possible connections between the diphoton and diboson excesses and the anomalous tt¯ forward-backward asymmetry.

  10. Accurate determination of light elements by charged particle activation analysis

    International Nuclear Information System (INIS)

    Shikano, K.; Shigematsu, T.

    1989-01-01

    To develop accurate determination of light elements by CPAA, accurate and practical standardization methods and uniform chemical etching were studied, based on the determination of carbon in gallium arsenide using the ¹²C(d,n)¹³N reaction. The following results were obtained: (1) The average stopping power method with thick target yield is useful as an accurate and practical standardization method. (2) The front surface of the sample has to be etched for an accurate estimate of the incident energy. (3) CPAA can be utilized for calibration of light element analysis by physical methods. (4) The calibration factor of carbon analysis in gallium arsenide using the IR method is determined to be (9.2 ± 0.3) × 10¹⁵ cm⁻¹. (author)

  11. Using an eye tracker for accurate eye movement artifact correction

    NARCIS (Netherlands)

    Kierkels, J.J.M.; Riani, J.; Bergmans, J.W.M.; Boxtel, van G.J.M.

    2007-01-01

    We present a new method to correct eye movement artifacts in electroencephalogram (EEG) data. By using an eye tracker, whose data cannot be corrupted by any electrophysiological signals, an accurate method for correction is developed. The eye-tracker data is used in a Kalman filter to estimate which

  12. Explaining excess morbidity amongst homeless shelter users

    DEFF Research Database (Denmark)

    Benjaminsen, Lars; Birkelund, Jesper Fels

    2018-01-01

    AIMS: This article analyses excess morbidity amongst homeless shelter users compared to the general Danish population. The study provides an extensive control for confounding and investigates to what extent excess morbidity is explained by homelessness or other risk factors. METHODS: Data set...... includes administrative micro-data for 4,068,926 Danes who were 23 years or older on 1 January 2007. Nationwide data on shelter use identified 14,730 individuals as shelter users from 2002 to 2006. Somatic diseases were measured from 2007 to 2011 through diagnosis data from hospital discharges. The risk...... of somatic diseases amongst shelter users was analysed through a multivariate model that decomposed the total effect into a direct effect and indirect effects mediated by other risk factors. RESULTS: The excess morbidity associated with shelter use is substantially lower than in studies that did not include...

  13. Same-sign dilepton excesses and vector-like quarks

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chuan-Ren [Department of Physics, National Taiwan Normal University,Ting-Chou Road, Taipei 116, Taiwan (China); Cheng, Hsin-Chia [Department of Physics, University of California,One Shields Avenue, Davis, CA 95616 (United States); Low, Ian [Department of Physics and Astronomy, Northwestern University,Sheridan Road, Evanston, IL 60208 (United States); High Energy Physics Division, Argonne National Laboratory,S. Cass Avenue, Argonne, IL 60439 (United States)

    2016-03-15

    Multiple analyses from ATLAS and CMS collaborations, including searches for ttH production, supersymmetric particles and vector-like quarks, observed excesses in the same-sign dilepton channel containing b-jets and missing transverse energy in the LHC Run 1 data. In the context of little Higgs theories with T parity, we explain these excesses using vector-like T-odd quarks decaying into a top quark, a W boson and the lightest T-odd particle (LTP). For heavy vector-like quarks, decay topologies containing the LTP have not been searched for at the LHC. The bounds on the masses of the T-odd quarks can be estimated in a simplified model approach by adapting the search limits for top/bottom squarks in supersymmetry. Assuming a realistic decay branching fraction, a benchmark with a 750 GeV T-odd b′ quark is proposed. We also comment on the possibility to fit excesses in different analyses in a common framework.

  14. Excess Sodium Tetraphenylborate and Intermediates Decomposition Studies

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, M.J.

    1998-12-07

    The stability of excess amounts of sodium tetraphenylborate (NaTPB) in the In-Tank Precipitation (ITP) facility depends on a number of variables. Concentration of palladium, initial benzene, and sodium ion as well as temperature provide the best opportunities for controlling the decomposition rate. This study examined the influence of these four variables on the reactivity of palladium-catalyzed sodium tetraphenylborate decomposition. Also, single effects tests investigated the reactivity of simulants with continuous stirring and nitrogen ventilation, with very high benzene concentrations, under washed sodium concentrations, with very high palladium concentrations, and with minimal quantities of excess NaTPB.

  15. Thermodynamic interrelation between excess limiting partial molar characteristics of a liquid nonelectrolyte

    International Nuclear Information System (INIS)

    Ivanov, Evgeniy V.

    2012-01-01

    Highlights: ► Excess limiting molar volume may be regarded as a solvation-related characteristic. ► Volumetric and enthalpic effects of dissolution are interrelated thermodynamically. ► Possibility to estimate the partial change in solute compressibility is described. - Abstract: On the basis of thermodynamic analysis, it is concluded that the excess limiting partial molar volume, like the excess limiting partial molar enthalpy, can be considered as a solvation-related characteristic of a liquid nonelectrolyte. A thermodynamically grounded interrelation between standard volumetric and enthalpic effects of solution of a liquid nonelectrolyte (or series of nonelectrolytes) is suggested.

  16. Is acne a sign of androgen excess disorder or not?

    Science.gov (United States)

    Uysal, Gulsum; Sahin, Yılmaz; Unluhizarci, Kursad; Ferahbas, Ayten; Uludag, Semih Zeki; Aygen, Ercan; Kelestimur, Fahrettin

    2017-04-01

    Acne is not solely a cosmetic problem. The clinical importance of acne in the estimation of androgen excess disorders is controversial. Recently, the Amsterdam ESHRE/ASRM-sponsored third PCOS Consensus Workshop Group suggested that acne is not commonly associated with hyperandrogenemia and therefore should not be regarded as evidence of hyperandrogenemia. Our aim was to investigate whether acne is a sign of androgen excess disorder or not. This is a cross-sectional study performed in a university hospital involving 207 women, aged between 18 and 45 years, suffering mainly from acne. The women were classified as having polycystic ovary syndrome (PCOS), idiopathic hirsutism (IH), or idiopathic hyperandrogenemia (IHA). Women with acne associated with any of the androgen excess disorders mentioned above were termed hyperandrogenemia-associated acne (HAA). Women with acne but without hirsutism and hyperandrogenemia and having ovulatory cycles were termed "isolated acne". Serum luteinizing hormone, follicle stimulating hormone, estradiol, progesterone, 17-hydroxyprogesterone, dehydroepiandrosterone-sulfate (DHEAS), androstenedione, total testosterone and lipid levels were measured. Acne scores were similar between the women with isolated acne and those with HAA. The most common cause of acne was PCOS, and only 28% of the women had isolated acne. 114 (55%) women had at least one raised serum androgen level. In this study, 72% of acneic women had clinical and/or biochemical hyperandrogenemia. In contrast to the suggestion of the ESHRE/ASRM-sponsored third PCOS Consensus Workshop Group, our data indicate that the presence of androgen excess disorders should be evaluated in women presenting with acne. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. 26 CFR 54.4981A-1T - Tax on excess distributions and excess accumulations (temporary).

    Science.gov (United States)

    2010-04-01

    ... Revenue Code of 1986, as added by section 1133 of the Tax Reform Act of 1986 (Pub. L. 99-514) (TRA '86...) Determine the value of the individual's adjusted account balance on the next valuation date by adding (or... 26 Internal Revenue 17 2010-04-01 2010-04-01 false Tax on excess distributions and excess...

  18. Excessive daytime sleepiness among depressed patients | Mume ...

    African Journals Online (AJOL)

    Background: Excessive daytime sleepiness (EDS) has been reported among depressed patients in many populations. Many depressed patients seek medical attention partly to deal with EDS, but this sleep disorder is often overlooked in clinical practice. Objectives: The objectives of this study were to determine the ...

  19. Excessive daytime sleepiness, nocturnal sleep duration and ...

    African Journals Online (AJOL)

    Background and objectives. Short nocturnal sleep duration resulting in sleep debt may be a cause of excessive daytime sleepiness (EDS). Severity of depression (psychopathology) has been found to be directly related to EDS. There is an association between sleep duration and mental health, so there may therefore be an ...

  20. Excess Sodium Tetraphenylborate and Intermediates Decomposition Studies

    International Nuclear Information System (INIS)

    Barnes, M.J.; Peterson, R.A.

    1998-04-01

    The stability of excess amounts of sodium tetraphenylborate (NaTPB) in the In-Tank Precipitation (ITP) facility depends on a number of variables. Concentration of palladium, initial benzene, and sodium ion as well as temperature provide the best opportunities for controlling the decomposition rate. This study examined the influence of these four variables on the reactivity of palladium-catalyzed sodium tetraphenylborate decomposition. Also, single effects tests investigated the reactivity of simulants with continuous stirring and nitrogen ventilation, with very high benzene concentrations, under washed sodium concentrations, with very high palladium concentrations, and with minimal quantities of excess NaTPB. These tests showed the following. The testing demonstrates that the current facility configuration does not provide assured safety of operations relative to the hazards of benzene (in particular, to maintain the tank headspace below 60 percent of the lower flammability limit (LFL) for benzene generation rates greater than 7 mg/(L·h)) from possible accelerated reaction of excess NaTPB. Current maximal operating temperatures of 40 degrees C and the lack of protection against palladium entering Tank 48H provide insufficient protection against the onset of the reaction. Similarly, control of the amount of excess NaTPB, purification of the organic, or limiting the benzene content of the slurry (via stirring) and ionic strength of the waste mixture prove inadequate to assure safe operation.

  1. Excess Sodium Tetraphenylborate and Intermediates Decomposition Studies

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, M.J. [Westinghouse Savannah River Company, Aiken, SC (United States); Peterson, R.A.

    1998-04-01

    The stability of excess amounts of sodium tetraphenylborate (NaTPB) in the In-Tank Precipitation (ITP) facility depends on a number of variables. Concentration of palladium, initial benzene, and sodium ion as well as temperature provide the best opportunities for controlling the decomposition rate. This study examined the influence of these four variables on the reactivity of palladium-catalyzed sodium tetraphenylborate decomposition. Also, single effects tests investigated the reactivity of simulants with continuous stirring and nitrogen ventilation, with very high benzene concentrations, under washed sodium concentrations, with very high palladium concentrations, and with minimal quantities of excess NaTPB. These tests showed the following. The testing demonstrates that the current facility configuration does not provide assured safety of operations relative to the hazards of benzene (in particular, to maintain the tank headspace below 60 percent of the lower flammability limit (LFL) for benzene generation rates greater than 7 mg/(L·h)) from possible accelerated reaction of excess NaTPB. Current maximal operating temperatures of 40 degrees C and the lack of protection against palladium entering Tank 48H provide insufficient protection against the onset of the reaction. Similarly, control of the amount of excess NaTPB, purification of the organic, or limiting the benzene content of the slurry (via stirring) and ionic strength of the waste mixture prove inadequate to assure safe operation.

  2. Can Excess Bilirubin Levels Cause Learning Difficulties?

    Science.gov (United States)

    Pretorius, E.; Naude, H.; Becker, P. J.

    2002-01-01

    Examined learning problems in South African sample of 7- to 14-year-olds whose mothers reported excessively high infant bilirubin shortly after the child's birth. Found that this sample had lowered verbal ability with the majority also showing impaired short-term and long-term memory. Findings suggested that impaired formation of astrocytes…

  3. Excessive daytime sleepiness among depressed patients | Mume ...

    African Journals Online (AJOL)

    Abstract. Background: Excessive daytime sleepiness (EDS) has been reported among depressed patients in many populations. Many depressed patients seek medical attention partly to deal with EDS, but this sleep disorder is often overlooked in clinical practice. Objectives: The objectives of this study were to determine the ...

  4. Phospholipids as Biomarkers for Excessive Alcohol Use

    Science.gov (United States)

    2013-10-01

    Report documentation excerpt: Phospholipids as Biomarkers for Excessive Alcohol Use; Grant Number W81XWH-12-1-0497. Cited references include: ... suspected of alcohol abuse. Toxicol Lett, 151(1), 235-241; Graham, D. P., Cardon, A. L., & Uhl, G. R. (2008). An update on substance use and treatment

  5. Excessive infant crying: Definitions determine risk groups

    NARCIS (Netherlands)

    Reijneveld, S.A.; Brugman, E.; Hirasing, R.A.

    2002-01-01

    We assessed risk groups for excessive infant crying using 10 published definitions, in 3179 children aged 1-6 months (response: 96.5%). Risk groups regarding parental employment, living area, lifestyle, and obstetric history varied by definition. This may explain the existence of conflicting

  6. Excessive prices as abuse of dominance?

    DEFF Research Database (Denmark)

    la Cour, Lisbeth; Møllgaard, Peter

    2007-01-01

    firm abused its position by charging excessive prices. We also test whether tightening of the Danish competition act has altered the pricing behaviour on the market. We discuss our results in the light of a Danish competition case against the dominant cement producer that was abandoned by the authority...

  7. Excessive oral intake caffeine altered cerebral cortex ...

    African Journals Online (AJOL)

    Caffeine is commonly consumed in an effort to enhance speed in performance and wakefulness. However, little is known about the deleterious effects it can produce on the brain, this study aimed at determining the extents of effects and damage that can be caused by excessive consumption of caffeine on the cerebral cortex ...

  8. 43 CFR 426.12 - Excess land.

    Science.gov (United States)

    2010-10-01

    ... advertise the sale of the property in farm journals and in newspapers within the county in which the land...; (ii) A recordable contract is amended to remove excess land when the landowner's entitlement increases... eligible buyer at a price and on terms approved by Reclamation; (C) The sale from the previous landowner is...

  9. Excessive Gambling and Online Gambling Communities.

    Science.gov (United States)

    Sirola, Anu; Kaakinen, Markus; Oksanen, Atte

    2018-04-05

    The Internet provides an accessible context for online gambling and gambling-related online communities, such as discussion forums for gamblers. These communities may be particularly attractive to young gamblers who are active Internet users. The aim of this study was to examine the use of gambling-related online communities and their relevance to excessive gambling among 15-25-year-old Finnish Internet users (N = 1200). Excessive gambling was assessed using the South Oaks Gambling Screen. Respondents were asked in a survey about their use of various kinds of gambling-related online communities, and sociodemographic and behavioral factors were adjusted for. The results of the study revealed that over half (54.33%) of respondents who had visited gambling-related online communities were either at-risk gamblers or probable pathological gamblers. Discussion in these communities was mainly based on sharing gambling tips and experiences, and very few respondents said that it related to gambling problems and recovery. In three different regression models, visiting gambling-related online communities was a significant predictor of excessive gambling (at the 95% confidence level) even after adjusting for confounding factors. The association was even stronger among probable pathological gamblers than among at-risk gamblers. Health professionals working with young people should be aware of the role of online communities in the development and persistence of excessive gambling. Monitoring the use of online gambling communities, as well as offering recovery-oriented support both offline and online, would be important in preventing further problems. Gambling platforms should also include warnings about excessive gambling and provide links to helpful resources.

  10. [Excess weight and abdominal obesity in Galician children and adolescents].

    Science.gov (United States)

    Pérez-Ríos, Mónica; Santiago-Pérez, María Isolina; Leis, Rosaura; Martínez, Ana; Malvar, Alberto; Hervada, Xurxo; Suanzes, Jorge

    2017-12-06

    Excess weight, mainly obesity, during childhood and adolescence increases the risk of morbidity and mortality in adulthood. The aim of this article is to estimate the prevalence, overall and by age and gender, of underweight, overweight, obesity and abdominal obesity among schoolchildren aged 6 to 15 years in the school year 2013-2014. Data were taken from a cross-sectional community-based study carried out on a representative sample, by gender and age, of the Galician population aged 6 to 15 years. The prevalence of underweight, overweight and obesity (Cole's cut-off criteria) and of abdominal obesity (Taylor's cut-off criteria) was estimated after performing objective measurements of height, weight and waist circumference at school. A total of 7,438 students were weighed and measured in 137 schools. The prevalence of overweight and obesity was 24.9% and 8.2%, respectively. The prevalence of abdominal obesity was 25.8%, with 4% of children with normal weight having abdominal obesity. These data highlight the need to promote primary prevention measures at early ages in order to decrease the occurrence of premature onset of disease in the future. The prevalence of excess weight is underestimated if abdominal obesity is not taken into consideration. Copyright © 2017. Published by Elsevier España, S.L.U.

  11. A transimpedance amplifier for excess noise measurements of high junction capacitance avalanche photodiodes

    International Nuclear Information System (INIS)

    Green, James E; David, John P R; Tozer, Richard C

    2012-01-01

    This paper reports a novel and versatile system for measuring excess noise and multiplication in avalanche photodiodes (APDs), using a bipolar-junction-transistor-based transimpedance amplifier front-end together with phase-sensitive detection, which permits accurate measurement in the presence of a high dark current. The system can reliably measure the excess noise factor of devices with capacitances up to 5 nF. It has been used to measure thin, large-area Si p-i-n APDs, and the resulting data are in good agreement with measurements of the same devices obtained from a different noise measurement system, which will be reported separately. (paper)
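
    For reference, the excess noise factor F that such systems measure is commonly modeled by McIntyre's local-field expression (a standard APD result, not stated in this record), where M is the mean multiplication and k the ratio of hole to electron ionization coefficients:

    ```latex
    F(M) = kM + (1 - k)\left(2 - \frac{1}{M}\right)
    ```

    For k = 0, F approaches 2 at high gain, while larger k drives F toward kM, which is why low-k materials such as thin silicon are favored for low-noise APDs.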

  12. Analysis of excess reactivity of JOYO MK-III performance test core

    International Nuclear Information System (INIS)

    Maeda, Shigetaka; Yokoyama, Kenji

    2003-10-01

    JOYO is currently being upgraded to the high-performance irradiation bed 'JOYO MK-III core'. The MK-III core is divided into two fuel regions with different plutonium contents. To obtain a higher neutron flux, the active core height was reduced from 55 cm to 50 cm. The reflector subassemblies were replaced by shielding subassemblies in the outer two rows. Twenty of the MK-III outer core fuel subassemblies in the performance test core were partially burned in the transition core. Four irradiation test rigs, which do not contain any fuel material, were loaded in the center of the performance test core. In order to evaluate the excess reactivity of the MK-III performance test core accurately, we evaluated it by applying not only the JOYO MK-II core management code system MAGI, but also the MK-III core management code system HESTIA, the JUPITER standard analysis method, and the Monte Carlo method with the JFS-3-J3.2R constant set. The excess reactivity evaluations obtained by the JUPITER standard analysis method were corrected to results based on transport theory with zero mesh size in space and angle. A bias factor based on the MK-II 35th core, whose sensitivity was similar to that of the MK-III performance test core, was also applied, except in the case where an adjusted nuclear cross-section library was used. Exact three-dimensional, pin-by-pin geometry and continuous-energy cross sections were used in the Monte Carlo calculation. The estimated error components associated with cross sections, method correction factors, and the bias factor were combined based on Takeda's theory. The independently calculated values agree well and range from 2.8 to 3.4%Δk/kk'. The calculation result of the MK-III core management code system HESTIA was 3.13%Δk/kk'. The estimated errors for the bias method range from 0.1 to 0.2%Δk/kk'. The error in the case using the adjusted cross-section library was 0.3%Δk/kk'. (author)
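
    The abstract combines independent error components (cross sections, method correction factors, bias factor) into a total uncertainty. A minimal sketch of one conventional way to do this, root-sum-square combination of independent 1σ components, is given below; the numeric values are illustrative assumptions, not figures from the report:

    ```python
    import math

    # Illustrative 1-sigma uncertainty components on excess reactivity [%dk/kk'].
    # These values are assumptions for the sketch, not the report's actual numbers.
    components = {
        "cross_sections": 0.25,
        "method_correction_factors": 0.15,
        "bias_factor": 0.10,
    }

    # Independent components combine in quadrature (root-sum-square).
    total_error = math.sqrt(sum(e ** 2 for e in components.values()))
    print(f"total 1-sigma error: {total_error:.3f} %dk/kk'")
    ```

    Correlated components would instead require the full covariance between them, which is part of what a sensitivity-based treatment such as Takeda's theory provides.
    
    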

  13. Estimation of combined sewer overflow discharge

    DEFF Research Database (Denmark)

    Ahm, Malte; Thorndahl, Søren Liedtke; Nielsen, Jesper Ellerbæk

    2016-01-01

    Combined sewer overflow (CSO) structures are constructed to effectively discharge excess water during heavy rainfall, to protect the urban drainage system from hydraulic overload. Consequently, most CSO structures are not constructed according to basic hydraulic principles for ideal measurement......-balance in combined sewer catchments. A closed mass-balance is an advantage for calibration of all urban drainage models based on mass-balance principles. This study presents three different software sensor concepts based on local water level sensors, which can be used to estimate CSO discharge volumes from hydraulically...... complex CSO structures. The three concepts were tested and verified under real practical conditions. All three concepts were accurate when compared to electromagnetic flow measurements....
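
    As a rough illustration of the software-sensor idea, the sketch below converts a local water-level time series into an overflow volume assuming an idealized sharp-crested rectangular weir; the paper's actual structures are hydraulically more complex, and all geometry, coefficient, and level values here are invented:

    ```python
    import math

    def overflow_discharge(level, crest_height, width, cd=0.6):
        """Rectangular sharp-crested weir: Q = Cd * (2/3) * sqrt(2g) * b * h^1.5."""
        head = max(level - crest_height, 0.0)  # water level above crest [m]
        g = 9.81                               # gravitational acceleration [m/s^2]
        return cd * (2.0 / 3.0) * math.sqrt(2.0 * g) * width * head ** 1.5  # [m^3/s]

    # Hypothetical 1-minute water-level samples during an overflow event [m].
    levels = [1.18, 1.25, 1.32, 1.30, 1.22, 1.15]
    dt = 60.0  # sampling interval [s]

    # CSO volume as the time integral of the estimated discharge.
    volume = sum(overflow_discharge(h, crest_height=1.20, width=2.0) for h in levels) * dt
    print(f"estimated overflow volume: {volume:.1f} m^3")
    ```

    In practice the discharge relation would be calibrated per structure (as the paper does against electromagnetic flow measurements) rather than taken from textbook weir coefficients.
    
    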

  14. Excess Readmission vs Excess Penalties: Maximum Readmission Penalties as a Function of Socioeconomics and Geography.

    Science.gov (United States)

    Caracciolo, Chris; Parker, Devin; Marshall, Emily; Brown, Jeremiah

    2017-08-01

    The Hospital Readmission Reduction Program (HRRP) penalizes hospitals with "excess" readmissions by up to 3% of Medicare reimbursement. Approximately 75% of eligible hospitals received penalties, worth an estimated $428 million, in fiscal year 2015. To identify demographic and socioeconomic disparities between matched and localized maximum-penalty and no-penalty hospitals. A case-control study in which cases were hospitals that received the maximum 3% penalty under the HRRP during the 2015 fiscal year. Controls were drawn from no-penalty hospitals and matched to cases by hospital characteristics (primary analysis) or geographic proximity (secondary analysis). A selection of 3383 US hospitals eligible for the HRRP. Thirty-nine case and 39 control hospitals from the HRRP cohort. Socioeconomic status variables were collected from the American Community Survey. Hospital and health system characteristics were drawn from the Centers for Medicare and Medicaid Services, the American Hospital Association, and the Dartmouth Atlas of Health Care. The statistical analysis was conducted using Student t tests. Thirty-nine hospitals received a maximum penalty. Relative to controls, maximum-penalty hospitals were located in counties with lower SES profiles, defined by increased poverty rates (19.1% vs 15.5%, P = 0.015) and lower rates of high school graduation (82.2% vs 87.5%, P = 0.001). County-level age, sex, and ethnicity distributions were similar between cohorts. Cases were more likely than controls to be in counties with low socioeconomic status, highlighting potential unintended consequences of national benchmarks for phenomena underpinned by environmental factors; specifically, whether maximum penalties under the HRRP are a consequence of underperforming hospitals or a manifestation of underserved communities. © 2017 Society of Hospital Medicine.

  15. Should excessive worry be required for a diagnosis of generalized anxiety disorder? Results from the US National Comorbidity Survey Replication.

    Science.gov (United States)

    Ruscio, Ayelet Meron; Lane, Michael; Roy-Byrne, Peter; Stang, Paul E; Stein, Dan J; Wittchen, Hans-Ulrich; Kessler, Ronald C

    2005-12-01

    Excessive worry is required by DSM-IV, but not ICD-10, for a diagnosis of generalized anxiety disorder (GAD). No large-scale epidemiological study has ever examined the implications of this requirement for estimates of prevalence, severity, or correlates of GAD. Data were analyzed from the US National Comorbidity Survey Replication, a nationally representative, face-to-face survey of adults in the USA household population that was fielded in 2001-2003. DSM-IV GAD was assessed with Version 3.0 of the WHO Composite International Diagnostic Interview. Non-excessive worriers meeting all other DSM-IV criteria for GAD were compared with respondents who met full GAD criteria as well as with other survey respondents to consider the implications of removing the excessiveness requirement. The estimated lifetime prevalence of GAD increases by approximately 40% when the excessiveness requirement is removed. Excessive GAD begins earlier in life, has a more chronic course, and is associated with greater symptom severity and psychiatric co-morbidity than non-excessive GAD. However, non-excessive cases nonetheless evidence substantial persistence and impairment of GAD, high rates of treatment-seeking, and significantly elevated co-morbidity compared with respondents without GAD. Non-excessive cases also have sociodemographic characteristics and familial aggregation of GAD comparable to excessive cases. Individuals who meet all criteria for GAD other than excessiveness have a somewhat milder presentation than those with excessive worry, yet resemble excessive worriers in a number of important ways. These findings challenge the validity of the excessiveness requirement and highlight the need for further research into the optimal definition of GAD.

  16. Androgen excess in women: experience with over 1000 consecutive patients.

    Science.gov (United States)

    Azziz, R; Sanchez, L A; Knochenhauer, E S; Moran, C; Lazenby, J; Stephens, K C; Taylor, K; Boots, L R

    2004-02-01

    The objective of the present study was to estimate the prevalence of the different pathological conditions causing clinically evident androgen excess and to document the degree of long-term success of suppressive and/or antiandrogen hormonal therapy in a large consecutive population of patients. All patients presenting for evaluation of symptoms potentially related to androgen excess between October 1987 and June 2002 were evaluated, and the data were maintained prospectively in a computerized database. For the assessment of therapeutic response, a retrospective review of the medical chart was performed, after the exclusion of those patients seeking fertility therapy only, or with inadequate follow-up or poor compliance. A total of 1281 consecutive patients were seen during the study period. Excluded from analysis were 408 patients in whom we were unable to evaluate hormonal status, determine ovulatory status, or find any evidence of androgen excess. In the remaining population of 873 patients, the unbiased prevalence of androgen-secreting neoplasms was 0.2%, 21-hydroxylase-deficient classic adrenal hyperplasia (CAH) was 0.6%, 21-hydroxylase-deficient nonclassic adrenal hyperplasia (NCAH) was 1.6%, hyperandrogenic insulin-resistant acanthosis nigricans (HAIRAN) syndrome was 3.1%, idiopathic hirsutism was 4.7%, and polycystic ovary syndrome (PCOS) was 82.0%. Fifty-nine (6.75%) patients had elevated androgen levels and hirsutism but normal ovulation. A total of 257 patients were included in the assessment of the response to hormonal therapy. The mean duration of follow-up was 33.5 months (range, 6-155). Hirsutism improved in 86%, menstrual dysfunction in 80%, acne in 81%, and hair loss in 33% of patients. The major side effects noted were irregular vaginal bleeding (16.1%), nausea (13.0%), and headaches (12.6%); only 36.6% of patients never complained of side effects. In this large study of consecutive patients presenting with clinically evident androgen excess

  17. Stroke Volume estimation using aortic pressure measurements and aortic cross sectional area: Proof of concept.

    Science.gov (United States)

    Kamoi, S; Pretty, C G; Chiew, Y S; Pironet, A; Davidson, S; Desaive, T; Shaw, G M; Chase, J G

    2015-08-01

    Accurate Stroke Volume (SV) monitoring is essential for patients with cardiovascular dysfunction. However, direct SV measurements are not clinically feasible due to the highly invasive nature of measurement devices. Current devices for indirect monitoring of SV are shown to be inaccurate during sudden hemodynamic changes. This paper presents a novel SV estimation using readily available aortic pressure measurements and aortic cross-sectional area, using data from a porcine experiment in which medical interventions such as fluid replacement, dobutamine infusions, and recruitment maneuvers induced SV changes in a pig with circulatory shock. Measurements of left ventricular volume, proximal aortic pressure, and descending aortic pressure waveforms were made simultaneously during the experiment. From the measured data, proximal aortic pressure was separated into reservoir and excess pressures. Beat-to-beat aortic characteristic impedance values were calculated using both aortic pressure measurements and an estimate of the aortic cross-sectional area. SV was estimated using the calculated aortic characteristic impedance and the excess component of the proximal aortic pressure. The median difference between directly measured SV and estimated SV was -1.4 ml, with 95% limits of agreement of ±6.6 ml. This method demonstrates that SV can be accurately captured beat-to-beat during sudden changes in hemodynamic state. This novel SV estimation could enable improved cardiac and circulatory treatment in the critical care environment by titrating treatment to its effect on SV.
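
    A minimal sketch of the estimation chain described in this abstract, assuming the water-hammer relation Zc = ρc/A for the characteristic impedance and a synthetic excess-pressure pulse; all numeric values are illustrative assumptions, not data from the experiment:

    ```python
    import numpy as np

    # Synthetic systolic ejection: a half-sine excess-pressure pulse over one beat.
    fs = 1000.0                                  # sampling rate [Hz] (assumed)
    t = np.arange(0.0, 0.3, 1.0 / fs)            # ejection window [s]
    p_excess = 4000.0 * np.sin(np.pi * t / 0.3)  # excess pressure [Pa]

    # Characteristic impedance from the water-hammer relation Zc = rho * c / A.
    rho = 1050.0       # blood density [kg/m^3]
    c = 5.0            # pulse wave speed [m/s] (assumed)
    A = 5.0e-4         # aortic cross-sectional area [m^2] (assumed)
    Zc = rho * c / A   # [Pa*s/m^3]

    # Flow is excess pressure divided by Zc; stroke volume is its time integral.
    q = p_excess / Zc                     # aortic flow [m^3/s]
    sv_ml = float(np.sum(q) / fs) * 1e6   # Riemann-sum integral -> [ml]
    print(f"estimated stroke volume: {sv_ml:.1f} ml")
    ```

    Beat-to-beat operation would repeat this per detected beat, with Zc recomputed from the two pressure measurements and the area estimate as the abstract describes.
    
    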

  18. ILLUSION OF EXCESSIVE CONSUMPTION AND ITS EFFECTS

    Directory of Open Access Journals (Sweden)

    MUNGIU-PUPĂZAN MARIANA CLAUDIA

    2015-12-01

    Full Text Available The aim is to explore, explain and describe this phenomenon for a better understanding of it, as well as the relationship between advertising and members of the consumer society. This paper presents an analysis of excessive and unsustainable consumption, the evolution of the phenomenon, and possible ways to combat it. Unfortunately, studies show that the tendency to accumulate and consume more than we need has been established in almost all civilizations and placed among the values that children learn early in life. This has been perpetuated since the time when goods were not obtained as easily as they are today. Anti-consumerism has emerged in response to this economic system, which is not sustainable in the long term. Over the last two decades we have witnessed the establishment of a new phase of consumer capitalism: the hyperconsumption society.

  19. Country Fundamentals and Currency Excess Returns

    Directory of Open Access Journals (Sweden)

    Daehwan Kim

    2014-06-01

    Full Text Available We examine whether country fundamentals help explain the cross-section of currency excess returns. For this purpose, we consider fundamental variables such as default risk, foreign exchange rate regime, capital control as well as interest rate in the multi-factor model framework. Our empirical results show that fundamental factors explain a large part of the cross-section of currency excess returns. The zero-intercept restriction of the factor model is not rejected for most currencies. They also reveal that our factor model with country fundamentals performs better than a factor model with usual investment-style factors. Our main empirical results are based on 2001-2010 balanced panel data of 19 major currencies. This paper may fill the gap between country fundamentals and practitioners' strategies on currency investment.

  20. Excess plutonium disposition: The deep borehole option

    International Nuclear Information System (INIS)

    Ferguson, K.L.

    1994-01-01

    This report reviews the current status of technologies required for the disposition of plutonium in Very Deep Holes (VDH). It responds to a recent National Academy of Sciences (NAS) report that addressed the management of excess weapons plutonium and recommended three approaches to ultimate disposition: (1) fabrication and use as fuel in existing or modified reactors in a once-through cycle, (2) vitrification with high-level radioactive waste for repository disposition, and (3) burial in deep boreholes. As the NAS report indicated, substantial effort would be required to address the broad range of issues related to deep borehole emplacement. Subjects reviewed in this report include geology and hydrology, design and engineering, safety and licensing, policy decisions that can affect the viability of the concept, and applicable international programs. Key technical areas that would require attention, should a decision be made to further develop the borehole emplacement option, are identified

  1. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

    Science.gov (United States)

    Xu, Yue; Zhao, Tingchen; Li, Ding

    2018-01-01

    An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling, and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model, and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and the behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good agreement with the test data, validating the high simulation accuracy.

  2. Neurological manifestations of excessive alcohol consumption.

    Science.gov (United States)

    Planas-Ballvé, Anna; Grau-López, Laia; Morillas, Rosa María; Planas, Ramón

    2017-12-01

    This article reviews the different acute and chronic neurological manifestations of excessive alcohol consumption that affect the central or peripheral nervous system. Several mechanisms can be implicated depending on the disorder, ranging from nutritional factors, alcohol-related toxicity, metabolic changes and immune-mediated mechanisms. Recognition and early treatment of these manifestations is essential given their association with high morbidity and significantly increased mortality. Copyright © 2017 Elsevier España, S.L.U., AEEH y AEG. All rights reserved.

  3. Equine goiter associated with excess dietary iodine.

    Science.gov (United States)

    Eroksuz, H; Eroksuz, Y; Ozer, H; Ceribasi, A O; Yaman, I; Ilhan, N

    2004-06-01

    Naturally occurring goiter is described in 2 newborn Arabian foals whose mares were supplemented with excess iodine during the final 24 weeks of pregnancy. Six nursing foals and 2 mares were also clinically affected with thyroid hypertrophy. At least 12 times the maximum tolerable level of iodine supplementation was given, as the daily iodine intake for each mare was 299 mg. The prevalence of goiter was 2% and 9% in the mares and foals, respectively.

  4. Contrast induced hyperthyroidism due to iodine excess

    OpenAIRE

    Mushtaq, Usman; Price, Timothy; Laddipeerla, Narsing; Townsend, Amanda; Broadbridge, Vy

    2009-01-01

    Iodine induced hyperthyroidism is a thyrotoxic condition caused by exposure to excessive iodine. Historically this type of hyperthyroidism has been described in areas of iodine deficiency. With advances in medicine, iodine induced hyperthyroidism has been observed following the use of drugs containing iodine—for example, amiodarone, and contrast agents used in radiological imaging. In elderly patients it is frequently difficult to diagnose and control contrast related hyperthyroidism, as most...

  5. Excessive current in wide superconducting films

    International Nuclear Information System (INIS)

    Volotskaya, V.G.; Sivakov, A.G.; Turutanov, O.G.

    1986-01-01

    The resistive state of a wide, long film due to the destruction of superconductivity by current is studied. A voltage-independent excess current I₀ is observed on I-V curves at high transport currents. The two-dimensional image of the current-carrying sample, obtained by a laser scanning technique in this current range, indicates that the whole film is in the resistive state. The current I₀ is measured as a function of magnetic field and SHF power

  6. Search for bright stars with infrared excess

    Energy Technology Data Exchange (ETDEWEB)

    Raharto, Moedji, E-mail: moedji@as.itb.ac.id [Astronomy Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia)

    2014-03-24

    Full Text Available Bright stars, stars with visual magnitude smaller than 6.5, can be studied using a small telescope. In general, if stars are assumed to radiate as black bodies, their color in the infrared (IR) region is close to zero. Infrared data from IRAS observations at 12 and 25 μm with good flux quality are used to search for bright stars (from the Bright Star Catalogue) with infrared excess. In magnitude scale, a star with IR excess is defined as one with IR color m12 − m25 > 0, where m12 − m25 = −2.5 log(F12/F25) + 1.56, and F12 and F25 are the flux densities in Jansky at 12 and 25 μm, respectively. Stars of similar spectral type are expected to have similar color, so the existence of an infrared excess within the same spectral type indicates the presence of circumstellar dust, probably a remnant of pre-main-sequence evolution during star formation or of post-AGB evolution, or the result of physical processes such as stellar rotation.
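
    The selection criterion in this record is a one-line formula, sketched below for concreteness; the helper names and example flux values are illustrative, not taken from the catalogue.

```python
import math

def ir_color(f12_jy, f25_jy):
    """IRAS [12]-[25] colour: m12 - m25 = -2.5*log10(F12/F25) + 1.56,
    with flux densities F12, F25 in Jansky."""
    return -2.5 * math.log10(f12_jy / f25_jy) + 1.56

def has_ir_excess(f12_jy, f25_jy):
    """The abstract defines IR excess as a positive [12]-[25] colour."""
    return ir_color(f12_jy, f25_jy) > 0.0

# A bare photosphere in the Rayleigh-Jeans regime has F12/F25 near
# (25/12)^2 ~ 4.3, which the +1.56 zero-point maps to a colour near zero.
print(round(ir_color(4.3, 1.0), 2))   # ~0: no excess
print(has_ir_excess(2.0, 1.0))        # relatively strong 25 um flux -> True
```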

  7. Earnings Quality Measures and Excess Returns.

    Science.gov (United States)

    Perotti, Pietro; Wagenhofer, Alfred

    2014-06-01

    This paper examines how commonly used earnings quality measures fulfill a key objective of financial reporting, i.e., improving decision usefulness for investors. We propose a stock-price-based measure for assessing the quality of earnings quality measures. We predict that firms with higher earnings quality will be less mispriced than other firms. Mispricing is measured by the difference of the mean absolute excess returns of portfolios formed on high and low values of a measure. We examine persistence, predictability, two measures of smoothness, abnormal accruals, accruals quality, earnings response coefficient and value relevance. For a large sample of US non-financial firms over the period 1988-2007, we show that all measures except for smoothness are negatively associated with absolute excess returns, suggesting that smoothness is generally a favorable attribute of earnings. Accruals measures generate the largest spread in absolute excess returns, followed by smoothness and market-based measures. These results lend support to the widespread use of accruals measures as overall measures of earnings quality in the literature.

  8. An update on the LHC monojet excess

    Science.gov (United States)

    Asadi, Pouya; Buckley, Matthew R.; DiFranzo, Anthony; Monteux, Angelo; Shih, David

    2018-03-01

    In previous work, we identified an anomalous number of events in the LHC jets+MET searches characterized by low jet multiplicity and low-to-moderate transverse energy variables. Here, we update this analysis with results from a new ATLAS search in the monojet channel which also shows a consistent excess. As before, we find that this "monojet excess" is well-described by the resonant production of a heavy colored state decaying to a quark and a massive invisible particle. In the combined ATLAS and CMS data, we now find a local (global) preference of 3.3σ (2.5σ) for the new physics model over the Standard Model-only hypothesis. As the signal regions containing the excess are systematics-limited, we consider additional cuts to enhance the signal-to-background ratio. We show that binning finer in HT and requiring the jets to be more central can increase S/B by a factor of ~1.5.

  9. Internet addiction or excessive internet use.

    Science.gov (United States)

    Weinstein, Aviv; Lejoyeux, Michel

    2010-09-01

    Problematic Internet addiction or excessive Internet use is characterized by excessive or poorly controlled preoccupations, urges, or behaviors regarding computer use and Internet access that lead to impairment or distress. Currently, there is no recognition of Internet addiction within the spectrum of addictive disorders and, therefore, no corresponding diagnosis. It has, however, been proposed for inclusion in the next version of the Diagnostic and Statistical Manual of Mental Disorders (DSM). We review the literature on Internet addiction covering diagnosis, phenomenology, epidemiology, and treatment, based on literature published between 2000 and 2009 in Medline and PubMed using the term "internet addiction". Surveys in the United States and Europe have indicated prevalence rates between 1.5% and 8.2%, although the diagnostic criteria and assessment questionnaires used for diagnosis vary between countries. Cross-sectional studies on samples of patients report high comorbidity of Internet addiction with psychiatric disorders, especially affective disorders (including depression), anxiety disorders (generalized anxiety disorder, social anxiety disorder), and attention deficit hyperactivity disorder (ADHD). Several factors are predictive of problematic Internet use, including personality traits, parenting and familial factors, alcohol use, and social anxiety. Although Internet-addicted individuals have difficulty suppressing their excessive online behaviors in real life, little is known about the pathophysiological and cognitive mechanisms responsible for Internet addiction. Due to the lack of methodologically adequate research, it is currently impossible to recommend any evidence-based treatment of Internet addiction.

  10. On the Incidence of Wise Infrared Excess Among Solar Analog, Twin, and Sibling Stars

    Energy Technology Data Exchange (ETDEWEB)

    Da Costa, A. D.; Martins, B. L. Canto; Lima Jr, J. E.; Silva, D. Freire da; Medeiros, J. R. De [Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, Campus Universitário, Natal, RN, 59072-970 (Brazil); Leão, I. C. [European Southern Observatory, Karl-Schwarzschild-Str. 2, D-85748 Garching (Germany); Freitas, D. B. de, E-mail: dgerson@fisica.ufrn.br [Departamento de Física, Universidade Federal do Ceará, Caixa Postal 6030, Campus do Pici, 60455-900, Fortaleza, Ceará (Brazil)

    2017-03-01

    This study presents a search for infrared (IR) excess in the 3.4, 4.6, 12, and 22 μm bands in a sample of 216 targets, composed of solar sibling, twin, and analog stars observed by the Wide-field Infrared Survey Explorer (WISE) mission. In general, an IR excess suggests the existence of warm dust around a star. We detected 12 μm and/or 22 μm excesses at the 3σ level of confidence in five solar analog stars, corresponding to a frequency of 4.1% of the entire sample of solar analogs analyzed, and in one out of 29 solar sibling candidates, confirming previous studies. The estimation of the dust properties shows that the sources with IR excesses possess circumstellar material with temperatures that, within the uncertainties, are similar to that of the material found in the asteroid belt in our solar system. No photospheric flux excess was identified at the W1 (3.4 μm) and W2 (4.6 μm) WISE bands, indicating that, in the majority of stars of the present sample, no detectable dust is generated. Interestingly, among the 60 solar twin stars analyzed in this work, no WISE photospheric flux excess was detected. However, a null-detection excess does not necessarily indicate the absence of dust around a star because different causes, including dynamic processes and instrument limitations, can mask its presence.

  11. The Role of Androgen Excess in Metabolic Dysfunction in Women : Androgen Excess and Female Metabolic Dysfunction.

    Science.gov (United States)

    Escobar-Morreale, Héctor F

    2017-01-01

    Polycystic ovary syndrome (PCOS) is characterized by the association of androgen excess with chronic oligoovulation and/or polycystic ovarian morphology, yet metabolic disorders and classic and nonclassic cardiovascular risk factors cluster in these women from very early in life. This chapter focuses on the mechanisms underlying the association of PCOS with metabolic dysfunction, focusing on the role of androgen excess on the development of visceral adiposity and adipose tissue dysfunction.

  12. Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture

    Directory of Open Access Journals (Sweden)

    Zhiquan Gao

    2015-09-01

    Full Text Available Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally constant, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain.

  13. Luminosity excesses in low-mass young stellar objects - a statistical study

    International Nuclear Information System (INIS)

    Strom, K.M.; Strom, S.E.; Kenyon, S.J.; Hartmann, L.

    1988-01-01

    This paper presents a statistical study in which the observed total luminosity is compared quantitatively with an estimate of the stellar luminosity for a sample of 59 low-mass young stellar objects (YSOs) in the Taurus-Auriga complex. In 13 of the analyzed YSOs, luminosity excesses greater than 0.20 are observed together with greater than 0.6 IR excesses, which typically contribute the bulk of the observed excess luminosity and are characterized by spectral energy distributions which are flat or rise toward long wavelengths. The analysis suggests that YSOs showing the largest luminosity excesses typically power optical jets and/or molecular outflows or have strong winds, as evidenced by the presence of O I emission, indicating a possible correlation between accretion and mass-outflow properties. 38 references

  14. Excess Mortality Attributable to Extreme Heat in New York City, 1997-2013.

    Science.gov (United States)

    Matte, Thomas D; Lane, Kathryn; Ito, Kazuhiko

    2016-01-01

    Extreme heat event excess mortality has been estimated statistically to assess impacts, evaluate heat emergency response, and project climate change risks. We estimated annual excess non-external-cause deaths associated with extreme heat events in New York City (NYC). Extreme heat events were defined as days meeting current National Weather Service forecast criteria for issuing heat advisories in NYC, based on observed maximum daily heat index values from LaGuardia Airport. Outcomes were daily non-external-cause death counts for NYC residents from May through September from 1997 to 2013 (n = 337,162). The cumulative relative risk (CRR) of death associated with extreme heat events was estimated in a Poisson time-series model for each year using an unconstrained distributed lag for days 0-3, accommodating overdispersion and adjusting for within-season trends and day of week. Attributable death counts were computed by year based on individual-year CRRs. The pooled CRR per extreme heat event day was 1.11 (95% CI 1.08-1.14). The estimated annual excess non-external-cause deaths attributable to heat waves ranged from -14 to 358, with a median of 121. Point estimates of heat wave-attributable deaths were greater than 0 in all years but one and were correlated with the number of heat wave days (r = 0.81). Average excess non-external-cause deaths associated with extreme heat events were nearly 11-fold greater than hyperthermia deaths. Estimated extreme heat event-associated excess deaths may be a useful indicator of the impact of extreme heat events, but single-year estimates are currently too imprecise to identify short-term changes in risk.
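
    Once a CRR has been fitted, the attributable-death computation described above is simple arithmetic: expected deaths in the absence of heat are observed/CRR, so the excess is observed × (1 − 1/CRR). A minimal sketch using the abstract's pooled CRR and a made-up death count (the model fitting itself is not reproduced here):

```python
def attributable_deaths(observed_on_heat_days, crr):
    """Excess deaths on extreme-heat days, given the observed death count
    on those days and a fitted cumulative relative risk (CRR).
    Expected deaths without heat = observed / CRR, so the excess is
    observed * (1 - 1/CRR)."""
    return observed_on_heat_days * (1.0 - 1.0 / crr)

crr = 1.11        # pooled CRR per extreme-heat-event day, from the abstract
observed = 1300   # hypothetical total deaths over all heat-event days
print(round(attributable_deaths(observed, crr)))  # -> 129
```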

  15. Accurate deuterium spectroscopy for fundamental studies

    Science.gov (United States)

    Wcisło, P.; Thibault, F.; Zaborowski, M.; Wójtewicz, S.; Cygan, A.; Kowzan, G.; Masłowski, P.; Komasa, J.; Puchalski, M.; Pachucki, K.; Ciuryło, R.; Lisak, D.

    2018-07-01

    We present an accurate measurement of the weak quadrupole S(2) 2-0 line in self-perturbed D₂ and theoretical ab initio calculations of both collisional line-shape effects and the energy of this rovibrational transition. The spectra were collected in the 247-984 Torr pressure range with a frequency-stabilized cavity ring-down spectrometer linked to an optical frequency comb (OFC) referenced to a primary time standard. Our line-shape modeling employed quantum calculations of molecular scattering (the pressure broadening and shift and their speed dependencies were calculated, while the complex frequency of optical velocity-changing collisions was fitted to experimental spectra). The velocity-changing collisions are handled with the hard-sphere collisional kernel. The experimental and theoretical pressure broadening and shift are consistent within 5% and 27%, respectively (the discrepancy for the shift is 8% when referred not to the speed-averaged value, which is close to zero, but to the range of variability of the speed-dependent shift). We use our high-pressure measurement to determine the energy, ν₀, of the S(2) 2-0 transition. The ab initio line-shape calculations allowed us to mitigate the expected collisional systematics, reaching 410 kHz accuracy for ν₀. We report a theoretical determination of ν₀ taking into account relativistic and QED corrections up to α⁵. Our estimate of the accuracy of the theoretical ν₀ is 1.3 MHz. We observe a 3.4σ discrepancy between the experimental and theoretical ν₀.

  16. Partitioning of excess mortality in population-based cancer patient survival studies using flexible parametric survival models

    Directory of Open Access Journals (Sweden)

    Eloranta Sandra

    2012-06-01

    Full Text Available Abstract Background Relative survival is commonly used for studying survival of cancer patients, as it captures both the direct and indirect contributions of a cancer diagnosis to mortality by comparing the observed survival of the patients to the expected survival in a comparable cancer-free population. However, existing methods do not allow estimation of the impact of isolated conditions (e.g., excess cardiovascular mortality) on the total excess mortality. For this purpose we extend flexible parametric survival models for relative survival, which use restricted cubic splines for the baseline cumulative excess hazard and for any time-dependent effects. Methods In the extended model we partition the excess mortality associated with a diagnosis of cancer by estimating a separate baseline excess hazard function for the outcomes under investigation. This is done by incorporating mutually exclusive background mortality rates, stratified by the underlying causes of death reported in the Swedish population, and by introducing cause of death as a time-dependent effect in the extended model. This approach thereby enables modeling of temporal trends in, e.g., excess cardiovascular mortality and remaining cancer excess mortality simultaneously. Furthermore, we illustrate how the results from the proposed model can be used to derive crude probabilities of death due to the component parts, i.e., probabilities estimated in the presence of competing causes of death. Results The method is illustrated with examples where the total excess mortality experienced by patients diagnosed with breast cancer is partitioned into excess cardiovascular mortality and remaining cancer excess mortality. Conclusions The proposed method can be used to simultaneously study disease patterns and temporal trends for various causes of cancer-consequent deaths. Such information should be of interest for patients and clinicians as one way of improving prognosis after cancer is

  17. Accurate Online Full Charge Capacity Modeling of Smartphone Batteries

    OpenAIRE

    Hoque, Mohammad A.; Siekkinen, Matti; Koo, Jonghoe; Tarkoma, Sasu

    2016-01-01

    Full charge capacity (FCC) refers to the amount of energy a battery can hold. It is the fundamental property of smartphone batteries that diminishes as the battery ages and is charged/discharged. We investigate the behavior of smartphone batteries while charging and demonstrate that the battery voltage and charging rate information can together characterize the FCC of a battery. We propose a new method for accurately estimating FCC without exposing low-level system details or introducing new ...

  18. Association between blood cholesterol and sodium intake in hypertensive women with excess weight.

    Science.gov (United States)

    Padilha, Bruna Merten; Ferreira, Raphaela Costa; Bueno, Nassib Bezerra; Tassitano, Rafael Miranda; Holanda, Lidiana de Souza; Vasconcelos, Sandra Mary Lima; Cabral, Poliana Coelho

    2018-04-01

    Restricted sodium intake has been recommended for more than a century for the treatment of hypertension. However, restriction appears to increase blood cholesterol. In women with excess weight, blood cholesterol may increase even more because of insulin resistance and the high lipolytic activity of adipose tissue. The aim of this study was to assess the association between blood cholesterol and sodium intake in hypertensive women with and without excess weight. This was a cross-sectional study of hypertensive, nondiabetic women aged 20 to 59 years, recruited at the primary healthcare units of Maceio, Alagoas, in the Brazilian Northeast. Excess weight was defined as body mass index (BMI) ≥25.0 kg/m². Sodium intake was estimated from the 24-hour urinary excretion of sodium. Blood cholesterol was the primary outcome, and its relationship with sodium intake and other variables was assessed by Pearson correlation and multivariate linear regression at a significance level of 5%. The study included 165 hypertensive women, of whom 135 (81.8%) had excess weight. The mean sodium intake was 3.7 g (±1.9) and 3.4 g (±2.4) in hypertensive women with and without excess weight, respectively. The multiple linear regression models fitted to blood cholesterol in the two groups reveal that, for hypertensive women without excess weight, only age is statistically significant in explaining the variability of blood cholesterol levels, whereas for hypertensive women with excess weight both age and sodium intake statistically explain variations in blood cholesterol levels. Blood cholesterol was thus inversely related to sodium intake in hypertensive women with excess weight, but not in hypertensive women without excess weight.
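
    As a minimal sketch of the statistical idea only, the following computes a Pearson correlation between sodium intake and cholesterol on made-up numbers; the data and variable names are illustrative, not the study's.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sodium = [2.1, 2.8, 3.4, 3.9, 4.6, 5.2]        # g/day, hypothetical
cholesterol = [228, 219, 210, 206, 198, 190]    # mg/dL, hypothetical

r = pearson_r(sodium, cholesterol)
print(round(r, 3))   # strongly negative: higher sodium, lower cholesterol
```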

  19. Excessive sleep duration and quality of life.

    Science.gov (United States)

    Ohayon, Maurice M; Reynolds, Charles F; Dauvilliers, Yves

    2013-06-01

    Using population-based data, we document the comorbidities (medical, neurologic, and psychiatric) and consequences for daily functioning of excessive quantity of sleep (EQS), defined as a main sleep period or 24-hour sleep duration ≥ 9 hours accompanied by complaints of impaired functioning or distress due to excessive sleep, and its links to excessive sleepiness. A cross-sectional telephone study using a representative sample of 19,136 noninstitutionalized individuals living in the United States, aged ≥ 18 years (participation rate = 83.2%). The Sleep-EVAL expert system administered questions on life and sleeping habits; health; and sleep, mental, and organic disorders (Diagnostic and Statistical Manual of Mental Disorders, 4th edition, text revision; International Classification of Sleep Disorders: Diagnostic and Coding Manual II, International Classification of Diseases and Related Health Problems, 10th edition). Sleeping at least 9 hours per 24-hour period was reported by 8.4% (95% confidence interval = 8.0-8.8%) of participants; EQS (prolonged sleep episode with distress/impairment) was observed in 1.6% (1.4-1.8%) of the sample. The likelihood of EQS was 3 to 12× higher among individuals with a mood disorder. EQS individuals were 2 to 4× more likely to report poor quality of life than non-EQS individuals as well as interference with socioprofessional activities and relationships. Although between 33 and 66% of individuals with prolonged sleep perceived it as a major problem, only 6.3 to 27.5% of them reported having sought medical attention. EQS is widespread in the general population, co-occurring with a broad spectrum of sleep, medical, neurologic, and psychiatric disorders. Therefore, physicians must recognize EQS as a mixed clinical entity indicating careful assessment and specific treatment planning. © 2013 American Neurological Association.

  20. Severe excessive daytime sleepiness induced by hydroxyurea.

    Science.gov (United States)

    Revol, Bruno; Joyeux-Faure, Marie; Albahary, Marie-Victoire; Gressin, Remy; Mallaret, Michel; Pepin, Jean-Louis; Launois, Sandrine H

    2017-06-01

    Excessive daytime sleepiness (EDS) has been reported with many drugs, either as an extension of a hypnotic effect (e.g. central nervous system depressants) or as an idiosyncratic response of the patient. Here, we report unexpected and severe subjective and objective EDS induced by hydroxyurea therapy, with a favorable outcome after withdrawal. Clinical history, sleep log, polysomnography, and multiple sleep latency tests confirming the absence of other EDS causes are presented. © 2016 Société Française de Pharmacologie et de Thérapeutique.

  1. psi and excess leptons in photoproduction

    International Nuclear Information System (INIS)

    Ritson, D.M.

    1976-03-01

    The A-dependence of ψ photoproduction was measured on beryllium and tantalum, from which σ(ψN) = 2.75 ± 0.90 mb is found. A study was made of excess leptons relative to pion production in photoproduction; a μ/π ratio of (1.40 ± 0.25) × 10⁻⁴ was found at 20 GeV incident photon energy. The energy dependence of ψ photoproduction was determined and appeared to have a "pseudo-threshold" at 12 GeV

  2. Desaturation of excess intramyocellular triacylglycerol in obesity

    DEFF Research Database (Denmark)

    Haugaard, S B; Madsbad, S; Mu, Huiling

    2010-01-01

    OBJECTIVE: Excess intramyocellular triacylglycerol (IMTG), found especially in obese women, is slowly metabolized and therefore prone to longer exposure to intracellular desaturases. Accordingly, it was hypothesized that IMTG content correlates inversely with IMTG fatty acid (FA) saturation in sedentary subjects. In addition, it was validated whether IMTG palmitic acid is associated with insulin resistance as suggested earlier. DESIGN: Cross-sectional human study. SUBJECTS: In skeletal muscle biopsies, which were obtained from sedentary subjects (34 women, age 48+/-2 years (27 obese including 7 type 2... ..., however, was increased twofold in obese women compared to obese men (P... fasting glucose (P...

  3. Conservative treatment of excessive anterior pelvic tilt

    DEFF Research Database (Denmark)

    Brekke, Anders Falk

    Conservative treatment of excessive anterior pelvic tilt: A systematic review. Anders Falk Brekke1,2,3, Søren Overgaard1,2, Asbjørn Hróbjartsson4, Anders Holsgaard-Larsen1,2. 1Orthopaedic Research Unit, Department of Orthopaedic Surgery and Traumatology, Odense University Hospital; 2Department of Clinical Research, University of Southern Denmark, Denmark; 3Department of Physiotherapy, University College Zealand, Denmark; 4Center for Evidence-Based Medicine, Odense University Hospital, Denmark. Correspondence: Anders Falk Brekke, e-mail afbrekke@health.sdu.dk

  4. Evaluation of Excess Heat Utilization in District Heating Systems by Implementing Levelized Cost of Excess Heat

    Directory of Open Access Journals (Sweden)

    Borna Doračić

    2018-03-01

    Full Text Available District heating plays a key role in achieving high primary energy savings and the reduction of the overall environmental impact of the energy sector. This was recently recognized by the European Commission, which emphasizes the importance of these systems, especially when integrated with renewable energy sources, like solar, biomass, geothermal, etc. On the other hand, high amounts of heat are currently being wasted in the industry sector, which causes low energy efficiency of these processes. This excess heat can be utilized and transported to the final customer by a distribution network. The main goal of this research was to calculate the potential for excess heat utilization in district heating systems by implementing the levelized cost of excess heat method. Additionally, this paper proves the economic and environmental benefits of switching from individual heating solutions to a district heating system. This was done by using the QGIS software. The variation of different relevant parameters was taken into account in the sensitivity analysis. Therefore, the final result was the determination of the maximum potential distance of the excess heat source from the demand, for different available heat supplies, costs of pipes, and excess heat prices.
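
    The levelized-cost idea named above can be sketched as discounted lifetime costs divided by discounted lifetime heat delivered; the paper's exact formulation may differ, and all parameter values below are illustrative assumptions.

```python
def levelized_cost_of_heat(capex, annual_opex, annual_heat_mwh,
                           discount_rate, years):
    """Illustrative levelized cost (EUR/MWh) of excess heat delivered:
    discounted lifetime costs over discounted lifetime heat."""
    disc_costs = capex + sum(
        annual_opex / (1 + discount_rate) ** t for t in range(1, years + 1))
    disc_heat = sum(
        annual_heat_mwh / (1 + discount_rate) ** t for t in range(1, years + 1))
    return disc_costs / disc_heat

# Hypothetical network: 2 MEUR pipe investment, 50 kEUR/yr operation,
# 10 GWh/yr of excess heat delivered, 5% discount rate, 25-year lifetime.
lcoh = levelized_cost_of_heat(capex=2_000_000, annual_opex=50_000,
                              annual_heat_mwh=10_000,
                              discount_rate=0.05, years=25)
print(round(lcoh, 1))      # EUR/MWh
```

    Comparing this figure against the heat price achievable at the demand site is what bounds the maximum feasible transport distance discussed in the abstract.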

  5. Influences of optical-spectrum errors on excess relative intensity noise in a fiber-optic gyroscope

    Science.gov (United States)

    Zheng, Yue; Zhang, Chunxi; Li, Lijing

    2018-03-01

    The excess relative intensity noise (RIN) generated from broadband sources degrades the angular-random-walk performance of a fiber-optic gyroscope dramatically. Many methods have been proposed and managed to suppress the excess RIN. However, the properties of the excess RIN under the influences of different optical errors in the fiber-optic gyroscope have not been systematically investigated. Therefore, it is difficult for the existing RIN-suppression methods to achieve the optimal results in practice. In this work, the influences of different optical-spectrum errors on the power spectral density of the excess RIN are theoretically analyzed. In particular, the properties of the excess RIN affected by the raised-cosine-type ripples in the optical spectrum are elaborately investigated. Experimental measurements of the excess RIN corresponding to different optical-spectrum errors are in good agreement with our theoretical analysis, demonstrating its validity. This work provides a comprehensive understanding of the properties of the excess RIN under the influences of different optical-spectrum errors. Potentially, it can be utilized to optimize the configurations of the existing RIN-suppression methods by accurately evaluating the power spectral density of the excess RIN.
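    For broadband thermal-like light, the excess-RIN power spectral density is proportional to the self-convolution (autocorrelation over optical frequency) of the normalized optical spectrum, which is why spectrum ripples reshape the RIN. A numerical sketch of that standard relation (detector and polarization prefactors ignored; this is an assumption-laden illustration, not the paper's full analysis):

```python
import numpy as np

def excess_rin_shape(nu, power):
    """Autocorrelation of the unit-area optical spectrum over frequency
    lag; for polarized thermal-like light the excess-RIN PSD at electrical
    frequency f is proportional to this quantity at lag f."""
    d_nu = nu[1] - nu[0]
    p = power / (power.sum() * d_nu)                # normalize to unit area
    corr = np.correlate(p, p, mode="full") * d_nu   # integral over all lags
    lags = (np.arange(corr.size) - (p.size - 1)) * d_nu
    return lags, corr

# Gaussian spectrum (sigma = 1 THz), with and without a 100 GHz-period
# raised-cosine ripple of 30% depth
nu = np.linspace(-5e12, 5e12, 4001)
sigma = 1e12
smooth = np.exp(-nu**2 / (2 * sigma**2))
rippled = smooth * (1 + 0.3 * np.cos(2 * np.pi * nu / 1e11))

lags, r_smooth = excess_rin_shape(nu, smooth)
_, r_rippled = excess_rin_shape(nu, rippled)
```

    The rippled spectrum enhances the autocorrelation at lags equal to multiples of the ripple period, i.e. it raises the excess RIN at the corresponding electrical frequencies, consistent with the effect the paper analyzes.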

  6. Cancer incidence attributable to excess body weight in Alberta in 2012.

    Science.gov (United States)

    Brenner, Darren R; Poirier, Abbey E; Grundy, Anne; Khandwala, Farah; McFadden, Alison; Friedenreich, Christine M

    2017-04-28

    Excess body weight has been consistently associated with colorectal, breast, endometrial, esophageal, gall bladder, pancreatic and kidney cancers. The objective of this analysis was to estimate the proportion of total and site-specific cancers attributable to excess body weight in adults in Alberta in 2012. We estimated the proportions of attributable cancers using population attributable risk. Risk estimates were obtained from recent meta-analyses, and exposure prevalence estimates were obtained from the Canadian Community Health Survey. People with a body mass index of 25.00-29.99 kg/m2 and of 30 kg/m2 or more were categorized as overweight and obese, respectively. About 14%-47% of men and 9%-35% of women in Alberta were classified as either overweight or obese; the proportion increased with increasing age for both sexes. We estimate that roughly 17% and 12% of obesity-related cancers among men and women, respectively, could be attributed to excess body weight in Alberta in 2012. The heaviest absolute burden in terms of number of cases was seen for breast cancer among women and for colorectal cancer among men. Overall, about 5% of all cancers in adults in Alberta in 2012 were estimated to be attributable to excess body weight in 2000-2003. Excess body weight contributes to a substantial proportion of cases of cancers associated with overweight and obesity annually in Alberta. Strategies to improve energy imbalance and reduce the proportion of obese and overweight Albertans may have a notable impact on cancer incidence in the future. Copyright 2017, Joule Inc. or its licensors.
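    The population attributable risk used here is Levin's formula: with exposure prevalence p and relative risk RR, the attributable fraction is p(RR − 1) / [1 + p(RR − 1)], summed over exposure levels. A sketch with purely illustrative numbers (not the study's estimates):

```python
def attributable_fraction(prevalence, relative_risk):
    """Levin's population attributable fraction, single exposure level."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

def combined_paf(strata):
    """Multi-level form of Levin's formula, e.g. separate overweight
    and obese strata with their own prevalences and relative risks."""
    excess = sum(p * (rr - 1) for p, rr in strata)
    return excess / (1 + excess)

# Illustrative only: 30% overweight with RR 1.2 and 20% obese with
# RR 1.5 for a given cancer site
paf = combined_paf([(0.30, 1.2), (0.20, 1.5)])
attributable_cases = paf * 1000   # of 1000 incident cases
```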

  7. Nonintrusive verification attributes for excess fissile materials

    International Nuclear Information System (INIS)

    Nicholas, N.J.; Eccleston, G.W.; Fearey, B.L.

    1997-10-01

    Under US initiatives, over two hundred metric tons of fissile materials have been declared to be excess to national defense needs. These excess materials are in both classified and unclassified forms. The US has expressed the intent to place these materials under international inspections as soon as practicable. To support these commitments, members of the US technical community are examining a variety of nonintrusive approaches (i.e., those that would not reveal classified or sensitive information) for verification of a range of potential declarations for these classified and unclassified materials. The most troublesome and potentially difficult issues involve approaches for international inspection of classified materials. The primary focus of the work to date has been on the measurement of signatures of relevant materials attributes (e.g., element, identification number, isotopic ratios, etc.), especially those related to classified materials and items. The authors are examining potential attributes and related measurement technologies in the context of possible verification approaches. The paper will discuss the current status of these activities, including their development, assessment, and benchmarking status

  8. Preferential solvation: dividing surface vs excess numbers.

    Science.gov (United States)

    Shimizu, Seishi; Matubayasi, Nobuyuki

    2014-04-10

    How do osmolytes affect the conformation and configuration of supramolecular assembly, such as ion channel opening and actin polymerization? The key to the answer lies in the excess solvation numbers of water and osmolyte molecules; these numbers are determinable solely from experimental data, as guaranteed by the phase rule, as we show through the exact solution theory of Kirkwood and Buff (KB). The osmotic stress technique (OST), in contrast, purports to yield alternative hydration numbers through the use of the dividing surface borrowed from the adsorption theory. However, we show (i) OST is equivalent, when it becomes exact, to the crowding effect in which the osmolyte exclusion dominates over hydration; (ii) crowding is not the universal driving force of the osmolyte effect (e.g., actin polymerization); (iii) the dividing surface for solvation is useful only for crowding, unlike in the adsorption theory which necessitates its use due to the phase rule. KB thus clarifies the true meaning and limitations of the older perspectives on preferential solvation (such as solvent binding models, crowding, and OST), and enables excess number determination without any further assumptions.
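    The excess solvation numbers invoked here have a compact definition in KB theory. As a reminder (these are the standard KB relations, not equations quoted from this paper), the excess number of species i around a solute u is the Kirkwood-Buff integral scaled by the bulk concentration:

```latex
N^{\mathrm{xs}}_{ui} = c_i \, G_{ui}, \qquad
G_{ui} = \int_0^{\infty} \left[ g_{ui}(r) - 1 \right] 4\pi r^2 \, dr
```

    where g_ui(r) is the solute-species pair distribution function and c_i the bulk concentration of species i. Preferential solvation is then read directly from the difference between the water and osmolyte excess numbers, with no dividing surface required.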

  9. Control rod excess withdrawal prevention device

    International Nuclear Information System (INIS)

    Takayama, Yoshihito.

    1992-01-01

    Excess withdrawal of a control rod of a BWR type reactor is prevented. That is, the device comprises (1) a speed detector for detecting the driving speed of a control rod, (2) a judging circuit for outputting an abnormal signal if the driving speed is greater than a predetermined level and (3) a direction control valve compulsory closing circuit for controlling the driving direction of inserting and withdrawing a control rod based on an abnormal signal. With such a constitution, when the withdrawing speed of a control rod is greater than a predetermined level, it is detected by the speed detector and the judging circuit. Then, all of the direction control valves are closed by way of the direction control valve compulsory closing circuit. As a result, the operation of the control rod is stopped compulsorily and the withdrawing speed of the control rod can be lowered to a speed corresponding to that upon gravitational withdrawal. Accordingly, excess withdrawal can be prevented. (I.S)
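    The three components described, speed detector, judging circuit, and compulsory valve-closing circuit, amount to a simple threshold interlock. A minimal software sketch of that logic (all names and the threshold value are hypothetical; the actual device is hardware):

```python
class Valve:
    """Stand-in for one direction-control valve."""
    def __init__(self):
        self.open = True
    def close(self):
        self.open = False

class RodWithdrawalInterlock:
    """Sketch of the described interlock: if the measured drive speed
    exceeds the predetermined level, force every direction-control
    valve closed so the rod drive stops."""

    def __init__(self, speed_limit, valves):
        self.speed_limit = speed_limit   # threshold used by the judging circuit
        self.valves = valves             # direction-control valves

    def on_speed_sample(self, speed):
        abnormal = speed > self.speed_limit   # (2) judging circuit
        if abnormal:
            for v in self.valves:             # (3) compulsory closing circuit
                v.close()
        return abnormal

valves = [Valve() for _ in range(4)]
interlock = RodWithdrawalInterlock(speed_limit=1.0, valves=valves)
interlock.on_speed_sample(0.8)            # normal speed: valves stay open
tripped = interlock.on_speed_sample(1.5)  # abnormal: all valves closed
```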

  10. Total-body sodium and sodium excess

    International Nuclear Information System (INIS)

    Aloia, J.F.; Cohn, S.H.; Abesamis, C.; Babu, T.; Zanzi, I.; Ellis, K.

    1980-01-01

    Total-body levels of sodium (TBNa), chlorine (TBCl), calcium (TBCa), and potassium (TBK) were measured by neutron activation analysis with whole-body counting in 66 postmenopausal women. The relationships between TBNa, TBCl, TBK, and TBCa on the one hand, and height and weight on the other, were found to be comparable with those previously reported. The hypothesis that TBNa and TBCl are distributed normally could not be rejected. The sodium excess (Na/sub es/) is defined as the sodium that is present in excess of that associated with the extracellular fluid (chlorine) space; the Na/sub es/ approximates nonexchangeable bone sodium. In these 66 postmenopausal women, and in patients with different endocrinopathies previously described, the values of Na/sub es/ did not differ from the normal values except in the thyrotoxicosis patients, where they were decreased. A close relationship between Na/sub es/ and TBCa was maintained in the endocrinopathies studied. This relationship was found in conditions accompanied by either an increment or a loss of skeletal mass. It appears that the Na/sub es/ value is primarily dependent upon the calcium content of bone
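    The verbal definition of the sodium excess corresponds to a simple difference. One plausible algebraic form is shown below; the proportionality constant given is only an illustrative extracellular-fluid Na:Cl ratio from typical serum concentrations, not a value taken from this paper:

```latex
\mathrm{Na_{es}} = \mathrm{TBNa} - R \cdot \mathrm{TBCl}, \qquad
R \approx \frac{[\mathrm{Na}]_{\mathrm{ECF}}}{[\mathrm{Cl}]_{\mathrm{ECF}}}
\approx \frac{140\ \mathrm{mmol/L}}{103\ \mathrm{mmol/L}} \approx 1.36
```

    That is, the chlorine measurement serves as a marker of the extracellular space, and whatever sodium remains after subtracting the extracellular share is attributed largely to nonexchangeable bone sodium.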

  11. The excess radio background and fast radio transients

    International Nuclear Information System (INIS)

    Kehayias, John; Kephart, Thomas W.; Weiler, Thomas J.

    2015-01-01

    In the last few years ARCADE 2, combined with older experiments, has detected an additional radio background, measured as a temperature and ranging in frequency from 22 MHz to 10 GHz, not accounted for by known radio sources and the cosmic microwave background. One type of source which has not been considered in the radio background is that of fast transients (those with event times much less than the observing time). We present a simple estimate, and a more detailed calculation, for the contribution of radio transients to the diffuse background. As a timely example, we estimate the contribution from the recently-discovered fast radio bursts (FRBs). Although their contribution is likely 6 or 7 orders of magnitude too small (though there are large uncertainties in FRB parameters) to account for the ARCADE 2 excess, our development is general and so can be applied to any fast transient sources, discovered or yet to be discovered. We estimate parameter values necessary for transient sources to noticeably contribute to the radio background

  12. Passive dosing of pyrethroid insecticides to Daphnia magna: Expressing excess toxicity by chemical activity

    DEFF Research Database (Denmark)

    Nørgaard Schmidt, Stine; Gan, Jay; Kretschmann, A. C.

    2015-01-01

    ) Effective chemical activities resulting in 50% immobilisation (Ea50) will be estimated from pyrethroid EC50 values via the correlation of sub-cooled liquid solubility (S L, [mmol/L], representing a=1) and octanol to water partitioning ratios (Kow), (3) The excess toxicity observed for pyrethroids...

  13. Design flood hydrograph estimation procedure for small and fully-ungauged basins

    Science.gov (United States)

    Grimaldi, S.; Petroselli, A.

    2013-12-01

    The Rational Formula is the most applied equation in practical hydrology due to its simplicity and the effective compromise between theory and data availability. Although the Rational Formula is affected by several drawbacks, it is reliable and surprisingly accurate considering the paucity of input information. However, after more than a century, recent computational, theoretical, and large-scale monitoring advances compel us to suggest a more advanced yet still empirical procedure for estimating peak discharge in small and ungauged basins. In this contribution an alternative empirical procedure (named EBA4SUB - Event Based Approach for Small and Ungauged Basins), based on the common modelling steps of design hyetograph, rainfall excess, and rainfall-runoff transformation, is described. The proposed approach, adapted for the fully-ungauged basin condition, provides a potentially better estimation of the peak discharge, a design hydrograph shape, and, most importantly, reduces the subjectivity of the hydrologist in its application.
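    The Rational Formula referred to here is Q = C·i·A: peak discharge from a runoff coefficient, a design rainfall intensity, and the drainage area. A minimal sketch with illustrative inputs (the 0.278 factor is the standard unit conversion for intensity in mm/h and area in km²):

```python
def rational_peak_discharge(c_runoff, intensity_mm_per_h, area_km2):
    """Rational Formula Q = C * i * A.
    The 0.278 factor (= 1/3.6) converts mm/h over km^2 into m^3/s."""
    return 0.278 * c_runoff * intensity_mm_per_h * area_km2

# Illustrative small ungauged basin: C = 0.4, i = 50 mm/h, A = 2 km^2
q_peak = rational_peak_discharge(0.4, 50.0, 2.0)   # m^3/s
```

    EBA4SUB replaces this single empirical step with the chain of design hyetograph, rainfall-excess estimation, and rainfall-runoff transformation, while keeping the same minimal data requirements.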

  14. Molecular simulation of excess isotherm and excess enthalpy change in gas-phase adsorption.

    Science.gov (United States)

    Do, D D; Do, H D; Nicholson, D

    2009-01-29

    We present a new approach to calculating excess isotherm and differential enthalpy of adsorption on surfaces or in confined spaces by the Monte Carlo molecular simulation method. The approach is very general and, most importantly, is unambiguous in its application to any configuration of solid structure (crystalline, graphite layer or disordered porous glass), to any type of fluid (simple or complex molecule), and to any operating conditions (subcritical or supercritical). The behavior of the adsorbed phase is studied using the partial molar energy of the simulation box. However, to characterize adsorption for comparison with experimental data, the isotherm is best described by the excess amount, and the enthalpy of adsorption is defined as the change in the total enthalpy of the simulation box with the change in the excess amount, keeping the total number (gas + adsorbed phases) constant. The excess quantities (capacity and energy) require a choice of a reference gaseous phase, which is defined as the adsorptive gas phase occupying the accessible volume and having a density equal to the bulk gas density. The accessible volume is defined as the mean volume space accessible to the center of mass of the adsorbate under consideration. With this choice, the excess isotherm passes through a maximum but always remains positive. This is in stark contrast to the literature where helium void volume is used (which is always greater than the accessible volume) and the resulting excess can be negative. Our definition of enthalpy change is equivalent to the difference between the partial molar enthalpy of the gas phase and the partial molar enthalpy of the adsorbed phase. There is no need to assume ideal gas or negligible molar volume of the adsorbed phase as is traditionally done in the literature. We illustrate this new approach with adsorption of argon, nitrogen, and carbon dioxide under subcritical and supercritical conditions.
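    The excess amount described above is the total number of molecules in the box minus the number the bulk gas would hold in the chosen reference volume. A toy sketch of that bookkeeping, and of the abstract's point that the (larger) helium void volume can drive the excess negative at high density while the accessible-volume reference keeps it positive (all numbers are illustrative, not simulation results):

```python
def excess_amount(n_total, bulk_density, reference_volume):
    """Excess adsorbed amount: total molecules in the simulation box
    minus those the bulk gas would occupy in the reference volume."""
    return n_total - bulk_density * reference_volume

n_ads = 100.0         # adsorbed-phase molecules (toy value)
v_accessible = 50.0   # nm^3 accessible to the adsorbate's centre of mass
v_helium = 60.0       # nm^3 helium void volume (always larger)
rho_bulk = 12.0       # molecules / nm^3, high-pressure supercritical gas

n_total = n_ads + rho_bulk * v_accessible
excess_acc = excess_amount(n_total, rho_bulk, v_accessible)  # stays positive
excess_he = excess_amount(n_total, rho_bulk, v_helium)       # goes negative
```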

  15. Skills Training via Smartphone App for University Students with Excessive Alcohol Consumption: a Randomized Controlled Trial.

    Science.gov (United States)

    Gajecki, Mikael; Andersson, Claes; Rosendahl, Ingvar; Sinadinovic, Kristina; Fredriksson, Morgan; Berman, Anne H

    2017-10-01

    University students in a study on estimated blood alcohol concentration (eBAC) feedback apps were offered participation in a second study if they reported continued excessive consumption at 6-week follow-up. This study evaluated the effects on excessive alcohol consumption of offering access to an additional skills training app. A total of 186 students with excessive alcohol consumption were randomized to an intervention group or a wait-list group. Both groups completed online follow-ups regarding alcohol consumption after 6 and 12 weeks. Wait-list participants were given access to the intervention at the 6-week follow-up. Assessment-only controls (n = 144) with excessive alcohol consumption from the ongoing study were used for comparison. The proportion of participants with excessive alcohol consumption declined in both the intervention and wait-list groups compared to controls at the first follow-up. The effects of app use could not be separated from those of the emailed feedback on excessive alcohol consumption and study participation. NCT02064998.

  16. Equipment upgrade - Accurate positioning of ion chambers

    International Nuclear Information System (INIS)

    Doane, Harry J.; Nelson, George W.

    1990-01-01

    Five adjustable clamps were made to firmly support and accurately position the ion chambers that provide signals to the power channels of the University of Arizona TRIGA reactor. The design requirements, fabrication procedure and installation are described

  17. A multiple regression analysis for accurate background subtraction in 99Tcm-DTPA renography

    International Nuclear Information System (INIS)

    Middleton, G.W.; Thomson, W.H.; Davies, I.H.; Morgan, A.

    1989-01-01

    A technique for accurate background subtraction in 99 Tc m -DTPA renography is described. The technique is based on a multiple regression analysis of the renal curves and separate heart and soft tissue curves which together represent background activity. It is compared, in over 100 renograms, with a previously described linear regression technique. Results show that the method provides accurate background subtraction, even in very poorly functioning kidneys, thus enabling relative renal filtration and excretion to be accurately estimated. (author)
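    The idea of the technique is to model the renal region-of-interest curve as a linear combination of the heart and soft-tissue curves and subtract the fitted combination. A simplified sketch on synthetic data: here the regression coefficients are fitted over the early frames before tracer uptake, which is an assumption of this illustration; the paper fits the full multiple-regression model to the renogram curves:

```python
import numpy as np

def background_subtract(renal, heart, tissue, fit_frames):
    """Fit renal ~ a*heart + b*tissue by least squares over the chosen
    frames, then subtract the fitted background from the whole curve."""
    X = np.column_stack([heart[fit_frames], tissue[fit_frames]])
    coef, *_ = np.linalg.lstsq(X, renal[fit_frames], rcond=None)
    background = coef[0] * heart + coef[1] * tissue
    return renal - background, coef

# Synthetic renogram: renal ROI = true kidney uptake + a known mixture
# of heart (blood pool) and soft-tissue background curves
t = np.arange(40, dtype=float)
heart = 100 * np.exp(-t / 10)
tissue = 30 * np.exp(-t / 25)
kidney = np.where(t > 5, 4.0 * (t - 5), 0.0)    # uptake starts at frame 5
renal = kidney + 0.4 * heart + 0.8 * tissue

corrected, coef = background_subtract(renal, heart, tissue,
                                      fit_frames=slice(0, 5))
```

    On this synthetic example the fit recovers the mixing coefficients exactly, so the corrected curve equals the true kidney uptake; real renograms add noise and overlapping uptake, which is where the multiple-regression formulation matters.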

  18. 31 CFR 353.12 - Disposition of excess.

    Science.gov (United States)

    2010-07-01

    ... necessary to adjust the excess. Instructions for adjustment of the excess can be obtained by email at [email protected] or by writing to Bureau of the Public Debt, Parkersburg, WV 26106-1328. [68 FR 24805...

  19. Iodine Excess is a Risk Factor for Goiter Formation

    African Journals Online (AJOL)

    Key Words: Iodine excess, Goiter, Sub Saharan Africa. Iodine Excess is a ... synthesis leading to increased thyroid stimulating hormone ..... study done in Uganda revealed a similar picture ... significant association, probably due to recall bias.

  20. Excessive Fragmentary Myoclonus: What Do We Know?

    Directory of Open Access Journals (Sweden)

    Jiří Nepožitek

    2017-03-01

    Full Text Available Excessive fragmentary myoclonus (EFM) is a polysomnographic finding registered by surface electromyography (EMG) and characterized as muscle activity consisting of sudden, isolated, arrhythmic, asynchronous and asymmetric brief twitches. The EMG potentials are defined by exact criteria in The International Classification of Sleep Disorders, 3rd edition, and they appear with high intensity in all sleep stages. The clinical significance of EFM is unclear. It has been observed in combination with other diseases and conditions such as obstructive and central sleep apnea, narcolepsy, periodic limb movements, insomnia, neurodegenerative disorders and peripheral nerve dysfunction. Its relation to such a wide range of diseases supports the opinion that EFM is neither a specific sleep disorder nor a specific polysomnographic sign. The possibility that EFM is a normal variant has also not been ruled out.

  1. Di-photon excess illuminates dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Backović, Mihailo [Center for Cosmology, Particle Physics and Phenomenology - CP3,Universite Catholique de Louvain, Louvain-la-neuve (Belgium); Mariotti, Alberto [Theoretische Natuurkunde and IIHE/ELEM, Vrije Universiteit Brussel,Pleinlaan 2, B-1050 Brussels (Belgium); International Solvay Institutes,Pleinlaan 2, B-1050 Brussels (Belgium); Redigolo, Diego [Laboratoire de Physique Théorique et Hautes Energies, CNRS UMR 7589,Universiteé Pierre et Marie Curie, 4 place Jussieu, F-75005, Paris (France)

    2016-03-22

    We propose a simplified model of dark matter with a scalar mediator to accommodate the di-photon excess recently observed by the ATLAS and CMS collaborations. Decays of the resonance into dark matter can easily account for a relatively large width of the scalar resonance, while the magnitude of the total width combined with the constraint on dark matter relic density leads to sharp predictions on the parameters of the Dark Sector. Under the assumption of a rather large width, the model predicts a signal consistent with ∼300 GeV dark matter particle and ∼750 GeV scalar mediator in channels with large missing energy. This prediction is not yet severely bounded by LHC Run I searches and will be accessible at the LHC Run II in the jet plus missing energy channel with more luminosity. Our analysis also considers astro-physical constraints, pointing out that future direct detection experiments will be sensitive to this scenario.

  2. Di-photon excess illuminates dark matter

    International Nuclear Information System (INIS)

    Backović, Mihailo; Mariotti, Alberto; Redigolo, Diego

    2016-01-01

    We propose a simplified model of dark matter with a scalar mediator to accommodate the di-photon excess recently observed by the ATLAS and CMS collaborations. Decays of the resonance into dark matter can easily account for a relatively large width of the scalar resonance, while the magnitude of the total width combined with the constraint on dark matter relic density leads to sharp predictions on the parameters of the Dark Sector. Under the assumption of a rather large width, the model predicts a signal consistent with ∼300 GeV dark matter particle and ∼750 GeV scalar mediator in channels with large missing energy. This prediction is not yet severely bounded by LHC Run I searches and will be accessible at the LHC Run II in the jet plus missing energy channel with more luminosity. Our analysis also considers astro-physical constraints, pointing out that future direct detection experiments will be sensitive to this scenario.

  3. Research of Precataclysmic Variables with Radius Excesses

    Science.gov (United States)

    Deminova, N. R.; Shimansky, V. V.; Borisov, N. V.; Gabdeev, M. M.; Shimanskaya, N. N.

    2017-06-01

    The results of spectroscopic observations of the pre-cataclysmic variable NSVS 14256825, an HW Vir binary system, were analyzed. The chemical composition was determined, and the radial velocities and equivalent widths of the star were measured. The fundamental parameters of the components were determined (R1 = 0.166 R⊙, M2 = 0.100 M⊙, R2 = 0.122 R⊙). It is shown that the secondary component has a mass close to that of brown dwarfs. A comparison of two close binary systems is made: HS 2333+3927 and NSVS 14256825. A radius-mass relationship for the secondary components of the studied pre-cataclysmic variables is constructed. It is concluded that an excess of radii relative to model predictions for MS stars is observed in virtually all systems.

  4. Excess plutonium disposition using ALWR technology

    International Nuclear Information System (INIS)

    Phillips, A.; Buckner, M.R.; Radder, J.A.; Angelos, J.G.; Inhaber, H.

    1993-02-01

    The Office of Nuclear Energy of the Department of Energy chartered the Plutonium Disposition Task Force in August 1992. The Task Force was created to assess the range of practicable means of disposition of excess weapons-grade plutonium. Within the Task Force, working groups were formed to consider: (1) storage, (2) disposal, and (3) fission options for this disposition, and a separate group to evaluate nonproliferation concerns of each of the alternatives. As a member of the Fission Working Group, the Savannah River Technology Center acted as a sponsor for light water reactor (LWR) technology. The information contained in this report details the submittal that was made to the Fission Working Group of the technical assessment of LWR technology for plutonium disposition. The following aspects were considered: (1) proliferation issues, (2) technical feasibility, (3) technical availability, (4) economics, (5) regulatory issues, and (6) political acceptance

  5. Propylene Glycol Poisoning From Excess Whiskey Ingestion

    Directory of Open Access Journals (Sweden)

    Courtney A. Cunningham MD

    2015-09-01

    Full Text Available In this report, we describe a case of high anion gap metabolic acidosis with a significant osmolal gap attributed to the ingestion of liquor containing propylene glycol. Recently, several reports have characterized severe lactic acidosis occurring in the setting of iatrogenic unintentional overdosing of medications that use propylene glycol as a diluent, including lorazepam and diazepam. To date, no studies have explored potential effects of excess propylene glycol in the setting of alcohol intoxication. Our patient endorsed drinking large volumes of cinnamon flavored whiskey, which was likely Fireball Cinnamon Whisky. To our knowledge, this is the first case of propylene glycol toxicity from an intentional ingestion of liquor containing propylene glycol.

  6. [Impact on the development of parental awareness of excess weight in children].

    Science.gov (United States)

    Łupińska, Anna; Chlebna-Sokół, Danuta

    2015-04-01

    Many publications emphasize the special role of parents' eating habits and lifestyle in the prevalence of excess body weight in children. The aim of this study was to answer the question whether parents of children who are overweight or obese are aware of this problem, and what factors affect their perception of the degree of excess body weight in their offspring. The study included 137 children aged 6.5-13.5 years; 23 were overweight and 76 obese. The comparison group consisted of 113 children. All patients underwent physical examination with anthropometric measurements. Parents were asked to complete a questionnaire in which they evaluated the degree of their child's excess body weight. We also asked about both parents' body weight and height, their education, and chronic diseases occurring in the family. In the group of obese children, 56.2% of the respondents came from families where one parent had excess body weight, while 32.9% came from families where this problem affected both parents. In 51.3% of patients with a body mass index (BMI) above the 95th percentile, parents wrongly assessed the degree of their child's excess body weight; in the overweight group this proportion was 8.7%. There was a statistically significant (p = 0.007) correlation between the degree of children's excess body weight and the parents' ability to estimate it. Parents' education had no influence on the incidence of excess body weight in children or on their ability to determine its extent. In the group of obese and overweight children only 4% of parents recognized obesity as a chronic disease. Parents of overweight and obese children have lower awareness of their child's weight than parents of children with normal weight. There is a statistical correlation between parents' perception of excess body weight and the development of obesity in children. © 2015 MEDPRESS.

  7. Prevalence of functional disorders of androgen excess in unselected premenopausal women: a study in blood donors.

    Science.gov (United States)

    Sanchón, Raúl; Gambineri, Alessandra; Alpañés, Macarena; Martínez-García, M Ángeles; Pasquali, Renato; Escobar-Morreale, Héctor F

    2012-04-01

    The polycystic ovary syndrome (PCOS) is one of the most common endocrine disorders in women. On the contrary, the prevalences of other disorders of androgen excess such as idiopathic hyperandrogenism and idiopathic hirsutism remain unknown. We aimed to obtain an unbiased estimate of the prevalence in premenopausal women of (i) signs of androgen excess and (ii) PCOS, idiopathic hyperandrogenism and idiopathic hirsutism. A multicenter prevalence survey included 592 consecutive premenopausal women (393 from Madrid, Spain and 199 from Bologna, Italy) reporting spontaneously for blood donation. Immediately before donation, we conducted clinical and biochemical phenotyping for androgen excess disorders. We determined the prevalence of (i) hirsutism, acne and alopecia as clinical signs of androgen excess and (ii) functional disorders of androgen excess, including PCOS, defined by the National Institute of Child Health and Human Development/National Institute of Health criteria, idiopathic hyperandrogenism and idiopathic hirsutism. Regarding clinical signs of hyperandrogenism, hirsutism and acne were equally frequent [12.2% prevalence; 95% confidence interval (CI): 9.5-14.8%], whereas alopecia was uncommon (1.7% prevalence, 95% CI: 0.7-2.7%). Regarding functional disorders of androgen excess, PCOS and idiopathic hirsutism were equally frequent (5.4% prevalence, 95% CI: 3.6-7.2) followed by idiopathic hyperandrogenism (3.9% prevalence, 95% CI: 2.3-5.4). Clinical signs of hyperandrogenism and functional disorders of androgen excess show a high prevalence in premenopausal women. The prevalences of idiopathic hyperandrogenism and idiopathic hirsutism are similar to that of PCOS, highlighting the need for further research on the pathophysiology, consequences for health and clinical implications of these functional forms of androgen excess.

  8. Origin of Tungsten Excess in Komatiites

    Science.gov (United States)

    Becker, H.; Brandon, A. D.; Walker, R. J.

    2004-12-01

    The limited database available for W abundances in komatiites (n=7, Newsom et al., 1996) suggests that when melting and fractional crystallization effects are filtered out, these komatiites have about 10 times higher W compared to other mantle-derived mafic-ultramafic magmas (MORB, OIB). The excess of W in the komatiites relative to lithophile highly incompatible elements becomes obvious when compared with the low concentrations of the light REE Ce and Nd (about 1-2 ug/g in many komatiites, compared to > 10 ug/g in most MORB and OIB). In order to increase the komatiite W database, komatiite samples from Phanerozoic (Gorgona Island) and Archean terranes (Boston Creek/Canada, Belingwe/Zimbabwe, 2.7 Ga) were dissolved and W was separated to obtain W concentrations by isotope dilution. Except for one sample from Gorgona Island with low W (23 ng/g), samples from all three locales show high W (516 to 2643 ng/g), with most samples containing near 700 ng/g W. Three Hawaiian picrites (H23, LO-02-04, MK-1-6) were also analyzed for comparative purposes and contain 75, 163 and 418 ng/g W, respectively. The W concentrations in the Hawaiian picrites are comparable to or lower than W concentrations in Hawaiian tholeiites (Newsom et al., 1996). Mass balance considerations suggest that it is unlikely that the W excess in komatiites reflects W contributions to the mantle sources of komatiites from the outer core. The W enrichment could result from shallow-level alteration processes if primary W abundances of komatiites were low and W was added via fluids containing W and other fluid-mobile elements derived from crustal rocks. Because most W in such samples would be of crustal origin, small contributions from the outer core may be difficult to detect using 182W systematics (Schersten et al., 2003).

  9. Vitamin paradox in obesity: Deficiency or excess?

    Science.gov (United States)

    Zhou, Shi-Sheng; Li, Da; Chen, Na-Na; Zhou, Yiming

    2015-08-25

    Since synthetic vitamins were used to fortify food and as supplements in the late 1930s, vitamin intake has significantly increased. This has been accompanied by an increased prevalence of obesity, a condition associated with diabetes, hypertension, cardiovascular disease, asthma and cancer. Paradoxically, obesity is often associated with low levels of fasting serum vitamins, such as folate and vitamin D. Recent studies on folic acid fortification have revealed another paradoxical phenomenon: obesity exhibits low fasting serum but high erythrocyte folate concentrations, with high levels of serum folate oxidation products. High erythrocyte folate status is known to reflect long-term excess folic acid intake, while increased folate oxidation products suggest an increased folate degradation because obesity shows an increased activity of cytochrome P450 2E1, a monooxygenase enzyme that can use folic acid as a substrate. There is also evidence that obesity increases niacin degradation, manifested by increased activity/expression of niacin-degrading enzymes and high levels of niacin metabolites. Moreover, obesity most commonly occurs in those with a low excretory reserve capacity (e.g., due to low birth weight/preterm birth) and/or a low sweat gland activity (black race and physical inactivity). These lines of evidence raise the possibility that low fasting serum vitamin status in obesity may be a compensatory response to chronic excess vitamin intake, rather than vitamin deficiency, and that obesity could be one of the manifestations of chronic vitamin poisoning. In this article, we discuss vitamin paradox in obesity from the perspective of vitamin homeostasis.

  10. Excessive anticoagulation with warfarin or phenprocoumon may have multiple causes

    DEFF Research Database (Denmark)

    Meegaard, Peter Martin; Holck, Line H V; Pottegård, Anton

    2012-01-01

    Excessive anticoagulation with vitamin K antagonists is a serious condition with a substantial risk of an adverse outcome. We thus found it of interest to review a large case series to characterize the underlying causes of excessive anticoagulation.

  11. Rough Electrode Creates Excess Capacitance in Thin-Film Capacitors.

    Science.gov (United States)

    Torabi, Solmaz; Cherry, Megan; Duijnstee, Elisabeth A; Le Corre, Vincent M; Qiu, Li; Hummelen, Jan C; Palasantzas, George; Koster, L Jan Anton

    2017-08-16

    The parallel-plate capacitor equation is widely used in contemporary material research for nanoscale applications and nanoelectronics. To apply this equation, flat and smooth electrodes are assumed for a capacitor. This essential assumption is often violated for thin-film capacitors because the formation of nanoscale roughness at the electrode interface is very probable for thin films grown via common deposition methods. In this work, we experimentally and theoretically show that the electrical capacitance of thin-film capacitors with realistic interface roughness is significantly larger than the value predicted by the parallel-plate capacitor equation. The degree of the deviation depends on the strength of the roughness, which is described by three roughness parameters for a self-affine fractal surface. By applying an extended parallel-plate capacitor equation that includes the roughness parameters of the electrode, we are able to calculate the excess capacitance of the electrode with weak roughness. Moreover, we introduce the roughness parameter limits for which the simple parallel-plate capacitor equation is sufficiently accurate for capacitors with one rough electrode. Our results imply that the interface roughness beyond the proposed limits cannot be dismissed unless the independence of the capacitance from the interface roughness is experimentally demonstrated. The practical protocols suggested in our work for the reliable use of the parallel-plate capacitor equation can be applied as general guidelines in various fields of interest.
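    The baseline the authors compare against is the ideal parallel-plate relation C = ε0·εr·A/d. A minimal sketch of how an excess capacitance would be quantified against that baseline (the material constants, geometry and the 10% excess below are hypothetical illustration values, not numbers from the paper):

```python
# Excess capacitance relative to the ideal parallel-plate prediction.
# All numbers are illustrative, not taken from the paper.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(eps_r, area_m2, thickness_m):
    """Ideal flat-electrode capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / thickness_m

# Hypothetical thin-film capacitor: eps_r = 3.5, 1 mm^2 electrode, 100 nm film.
c_ideal = parallel_plate_capacitance(3.5, 1e-6, 100e-9)

# Suppose the measured capacitance is 10% larger because of electrode roughness.
c_measured = 1.10 * c_ideal
excess_fraction = c_measured / c_ideal - 1.0
print(f"C_ideal = {c_ideal:.3e} F, excess = {excess_fraction:.1%}")
```

The paper's extended equation additionally folds the three self-affine roughness parameters into the prediction; only the flat-electrode baseline is reproduced here.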

  12. Can blind persons accurately assess body size from the voice?

    Science.gov (United States)

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. © 2016 The Author(s).

  13. Excessive Testing and Pupils in the Public Schools

    Science.gov (United States)

    Ediger, Marlow

    2017-01-01

    This article explores the question of excessive testing in public schools, its value in the educational process, and the impact that excessive testing may have on the student and the family unit. While assessments are valuable when used properly, excessive testing may lead to problems with unforeseen consequences.

  14. A COMPREHENSIVE CENSUS OF NEARBY INFRARED EXCESS STARS

    Energy Technology Data Exchange (ETDEWEB)

    Cotten, Tara H.; Song, Inseok, E-mail: tara@physast.uga.edu, E-mail: song@physast.uga.edu [Department of Physics and Astronomy, University of Georgia, Athens, GA 30602 (United States)

    2016-07-01

    The conclusion of the Wide-Field Infrared Survey Explorer (WISE) mission presents an opportune time to summarize the history of using excess emission in the infrared as a tracer of circumstellar material and to exploit all available data for future missions such as the James Webb Space Telescope. We have compiled a catalog of infrared excess stars from peer-reviewed articles and perform an extensive search for new infrared excess stars by cross-correlating the Tycho-2 and all-sky WISE (AllWISE) catalogs. We define a significance of excess in four spectral type divisions and select stars showing greater than either 3σ or 5σ significance of excess in the mid- and far-infrared. Through procedures including spectral energy distribution fitting and various image analyses, each potential excess source was rigorously vetted to eliminate false positives. The infrared excess stars from the literature and the new stars found through the Tycho-2 and AllWISE cross-correlation produced nearly 500 “Prime” infrared excess stars, of which 74 are new sources of excess, and >1200 “Reserved” stars, of which 950 are new sources of excess. The stars in the main catalog are nearby, bright, and either demonstrate excess in more than one passband or have infrared spectroscopy confirming the infrared excess. This study identifies stars that display a spectral energy distribution suggestive of a secondary or post-protoplanetary generation of dust, and they are ideal targets for future optical and infrared imaging observations. The final catalogs of stars summarize the past work using infrared excess to detect dust disks, and with the most extensive compilation of infrared excess stars (∼1750) to date, we investigate various relationships among stellar and disk parameters.
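    A significance-of-excess selection of the kind described above is typically computed as the observed flux minus the predicted photospheric flux, in units of the combined uncertainty. A sketch under that assumption (the authors' exact per-spectral-type criterion is not reproduced here, and the fluxes below are invented):

```python
# Illustrative sigma-significance of an infrared excess: how far the observed
# flux sits above the photospheric prediction, in combined-uncertainty units.
import math

def excess_significance(f_obs, f_phot, sigma_obs, sigma_phot):
    """(observed - photosphere) / quadrature-summed uncertainty."""
    return (f_obs - f_phot) / math.hypot(sigma_obs, sigma_phot)

# Hypothetical mid-infrared fluxes in mJy:
sig = excess_significance(f_obs=12.0, f_phot=6.0, sigma_obs=1.0, sigma_phot=0.5)
print(f"significance = {sig:.1f} sigma")  # compare against a 3-sigma or 5-sigma cut
```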

  15. EXCESS RF POWER REQUIRED FOR RF CONTROL OF THE SPALLATION NEUTRON SOURCE (SNS) LINAC, A PULSED HIGH-INTENSITY SUPERCONDUCTING PROTON ACCELERATOR

    International Nuclear Information System (INIS)

    Lynch, M.; Kwon, S.

    2001-01-01

    A high-intensity proton linac, such as that being planned for the SNS, requires accurate RF control of cavity fields for the entire pulse in order to avoid beam spill. The current design requirement for the SNS is RF field stability within ±0.5% and ±0.5° [1]. This RF control capability is achieved by the control electronics using the excess RF power to correct disturbances. To minimize the initial capital costs, the RF system is designed with 'just enough' RF power. All the usual disturbances exist, such as beam noise, klystron/HVPS noise, coupler imperfections, transport losses, turn-on and turn-off transients, etc. As a superconducting linac, there are added disturbances of large magnitude, including Lorentz detuning and microphonics. The effects of these disturbances and the power required to correct them are estimated, and the result shows that the highest-power systems in the SNS have just enough margin, with little or no excess margin.

  16. ATLAS Z Excess in Minimal Supersymmetric Standard Model

    International Nuclear Information System (INIS)

    Lu, Xiaochuan; Terada, Takahiro

    2015-06-01

    Recently the ATLAS collaboration reported a 3 sigma excess in the search for events containing a dilepton pair from a Z boson and large missing transverse energy. Although the excess is not yet sufficiently significant, it is quite tempting to explain it with a well-motivated model beyond the standard model. In this paper we study whether the minimal supersymmetric standard model (MSSM) can account for this excess. In particular, we focus on the MSSM spectrum in which the sfermions are heavier than the gauginos and Higgsinos. We show that the excess can be explained by a reasonable MSSM mass spectrum.

  17. 75 FR 30846 - Monthly Report of Excess Income and Annual Report of Uses of Excess Income (Correction)

    Science.gov (United States)

    2010-06-02

    ... Income and Annual Report of Uses of Excess Income (Correction) AGENCY: Office of the Chief Information.... Project owners are permitted to retain Excess Income for projects under terms and conditions established by HUD. Owners must request to retain some or all of their Excess Income. The request must be...

  18. Dark matter "transporting" mechanism explaining positron excesses

    Science.gov (United States)

    Kim, Doojin; Park, Jong-Chul; Shin, Seodong

    2018-04-01

    We propose a novel mechanism to explain the positron excesses, which are observed by satellite-based telescopes including PAMELA and AMS-02, in dark matter (DM) scenarios. The novelty behind the proposal is that it makes direct use of DM around the Galactic Center, where DM populates most densely, allowing us to avoid tensions from cosmological and astrophysical measurements. The key ingredients of this mechanism include DM annihilation into unstable states with a very long laboratory-frame lifetime and their "retarded" decay near the Earth to electron-positron pair(s), possibly with other (in)visible particles. We argue that this sort of explanation is not in conflict with relevant constraints from big bang nucleosynthesis and the cosmic microwave background. Regarding the resultant positron spectrum, we provide a generalized source term in the associated diffusion equation, which can be readily applied to any type of two-"stage" DM scenario wherein production of Standard Model particles occurs at completely different places from those of DM annihilation. We then conduct a data analysis with the recent AMS-02 data to validate our proposal.

  19. Medical Ethics and Protection from Excessive Radiation

    International Nuclear Information System (INIS)

    Ruzicka, I.

    1998-01-01

    Among artificial sources of ionizing radiation, people are most often exposed to X-ray diagnostic equipment; responsible use of X-ray diagnostic methods may therefore considerably reduce the overall exposure of the population. A study on the rational use of X-ray diagnostic methods conducted at the X-ray Cabinet of the Tresnjevka Health Center was followed by a control survey eight years later, which showed that the number of unnecessary diagnostic examinations had been reduced by 34 % and that diagnostic indications were 10-40 % more precise. The results thus showed that radiation exposure was reduced accordingly. The measures applied consisted of additional training for health care workers and better education of the population. The basic element was the awareness, among both health care workers and patients, that excessive radiation should be avoided; the precondition for achieving this lies in the moral responsibility of protecting the patients' health. A radiologist, as the person who promotes and carries out this moral responsibility, should organize and hold continual additional training of medical doctors as well as education for patients, and should apply modern equipment. The basis of such an approach should be established by teaching medical ethics at all medical schools and faculties, together with the promotion of a wider intellectual and moral integrity of each medical doctor. (author)

  20. Iron excess in recreational marathon runners.

    Science.gov (United States)

    Mettler, S; Zimmermann, M B

    2010-05-01

    Iron deficiency and anemia may impair athletic performance, and iron supplements are commonly consumed by athletes. However, iron overload should be avoided because of possible long-term adverse health effects. We investigated the iron status of 170 male and female recreational runners participating in the Zürich marathon. Iron deficiency was defined either as a low plasma ferritin (PF) concentration (iron depletion) or as a ratio of soluble transferrin receptor (sTfR) to PF of ≥4.5 (functional iron deficiency). After excluding subjects with elevated C-reactive protein concentrations, iron overload was defined as PF >200 microg/l. Iron depletion was found in only 2 out of 127 men (1.6% of the male study population) and in 12 out of 43 (28.0%) women. Functional iron deficiency was found in 5 (3.9%) and 11 (25.5%) male and female athletes, respectively. Body iron stores, calculated from the sTfR/PF ratio, were significantly higher in male than in female marathon runners. Median PF among males was 104 microg/l, and the upper limit of the PF distribution in males was 628 microg/l. Iron overload was found in 19 out of 127 (15.0%) men but in only 2 out of 43 (4.7%) women. Male sex, but not age, was a predictor of higher PF. Although iron deficiency may impair performance, our findings indicate that excess body iron may be common in male recreational runners and suggest that supplements should only be used if tests of iron status indicate deficiency.

  1. Excessive Neural Responses and Visual Discomfort

    Directory of Open Access Journals (Sweden)

    L O'Hare

    2014-08-01

    Spatially and temporally periodic patterns can look aversive to some individuals (Wilkins et al., 1984, Brain, 107, 989-1017), especially clinical populations such as those with migraine (Marcus and Soso, 1989, Arch Neurol., 46(10), 1129-32) or epilepsy (Wilkins, Darby and Binnie, 1979, Brain, 102, 1-25). It has been suggested that this might be due to excessive neural responses (Juricevic, Land, Wilkins and Webster, 2010, Perception, 39(7), 884-899). Spatial frequency content has been shown to affect both relative and absolute discomfort judgements for spatially periodic riloid stimuli (Clark, O'Hare and Hibbard, 2013, Perception, ECVP Supplement; O'Hare, Clark and Hibbard, 2013, Perception, ECVP Supplement). The current study investigated whether neural correlates of visual discomfort from periodic stimuli could be measured using EEG. Stimuli were first matched for perceived contrast using a self-adjustment task. EEG measurements were then obtained, alongside subjective discomfort judgements. The subjective discomfort judgements support those found previously, under various circumstances, indicating that spatial frequency plays a role in the perceived discomfort of periodic images. However, trends in the EEG responses do not appear to have a straightforward relationship to the subjective discomfort judgements.

  2. What controls deuterium excess in global precipitation?

    Directory of Open Access Journals (Sweden)

    S. Pfahl

    2014-04-01

    The deuterium excess (d) of precipitation is widely used in the reconstruction of past climatic changes from ice cores. However, its most common interpretation, as moisture source temperature, cannot directly be inferred from present-day water isotope observations. Here, we use a new empirical relation between d and near-surface relative humidity (RH), together with reanalysis data, to globally predict d of surface evaporation from the ocean. The very good quantitative agreement of the predicted hemispherically averaged seasonal cycle with observed d in precipitation indicates that moisture source relative humidity, and not sea surface temperature, is the main driver of d variability on seasonal timescales. Furthermore, we review arguments for an interpretation of long-term palaeoclimatic d changes in terms of moisture source temperature, and we conclude that there remains no sufficient evidence that would justify neglecting the influence of RH on such palaeoclimatic d variations. Hence, we suggest either that the interpretation of d variations in palaeorecords be adapted to reflect climatic influences on RH during evaporation, in particular atmospheric circulation changes, or that new arguments for an interpretation in terms of moisture source temperature be provided by future research.
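    For reference, the deuterium excess used here is the standard Dansgaard (1964) definition on the δ scale (values in per mil):

```latex
d = \delta^{2}\mathrm{H} - 8\,\delta^{18}\mathrm{O}
```

The slope of 8 corresponds to the global meteoric water line, so d measures the departure of a sample from purely equilibrium fractionation.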

  3. Complementary technologies for verification of excess plutonium

    International Nuclear Information System (INIS)

    Langner, D.G.; Nicholas, N.J.; Ensslin, N.; Fearey, B.L.; Mitchell, D.J.; Marlow, K.W.; Luke, S.J.; Gosnell, T.B.

    1998-01-01

    Three complementary measurement technologies have been identified as candidates for use in the verification of excess plutonium of weapons origin. These technologies: high-resolution gamma-ray spectroscopy, neutron multiplicity counting, and low-resolution gamma-ray spectroscopy, are mature, robust technologies. The high-resolution gamma-ray system, Pu-600, uses the 630--670 keV region of the emitted gamma-ray spectrum to determine the ratio of 240 Pu to 239 Pu. It is useful in verifying the presence of plutonium and the presence of weapons-grade plutonium. Neutron multiplicity counting is well suited for verifying that the plutonium is of a safeguardable quantity and is weapons-quality material, as opposed to residue or waste. In addition, multiplicity counting can independently verify the presence of plutonium by virtue of a measured neutron self-multiplication and can detect the presence of non-plutonium neutron sources. The low-resolution gamma-ray spectroscopic technique is a template method that can provide continuity of knowledge that an item that enters a verification regime remains under the regime. In the initial verification of an item, multiple regions of the measured low-resolution spectrum form a unique, gamma-radiation-based template for the item that can be used for comparison in subsequent verifications. In this paper the authors discuss these technologies as they relate to the different attributes that could be used in a verification regime

  4. Cool WISPs for stellar cooling excesses

    Energy Technology Data Exchange (ETDEWEB)

    Giannotti, Maurizio [Physical Sciences, Barry University, 11300 NE 2nd Avenue, Miami Shores, FL 33161 (United States); Irastorza, Igor; Redondo, Javier [Departamento de Física Teórica, Universidad de Zaragoza, Pedro Cerbuna 12, E-50009, Zaragoza, España (Spain); Ringwald, Andreas, E-mail: mgiannotti@barry.edu, E-mail: igor.irastorza@cern.ch, E-mail: jredondo@unizar.es, E-mail: andreas.ringwald@desy.de [Theory group, Deutsches Elektronen-Synchrotron DESY, Notkestraße 85, D-22607 Hamburg (Germany)

    2016-05-01

    Several stellar systems (white dwarfs, red giants, horizontal branch stars and possibly the neutron star in the supernova remnant Cassiopeia A) show a mild preference for a non-standard cooling mechanism when compared with theoretical models. This exotic cooling could be provided by Weakly Interacting Slim Particles (WISPs), produced in the hot cores and abandoning the star unimpeded, contributing directly to the energy loss. Taken individually, these excesses do not show a strong statistical weight. However, if one mechanism could consistently explain several of them, the hint could be significant. We analyze the hints in terms of neutrino anomalous magnetic moments, minicharged particles, hidden photons and axion-like particles (ALPs). Among them, the ALP or a massless HP represent the best solution. Interestingly, the hinted ALP parameter space is accessible to the next generation proposed ALP searches, such as ALPS II and IAXO and the massless HP requires a multi TeV energy scale of new physics that might be accessible at the LHC.

  5. Cool WISPs for stellar cooling excesses

    International Nuclear Information System (INIS)

    Giannotti, Maurizio; Irastorza, Igor; Redondo, Javier; Ringwald, Andreas

    2016-01-01

    Several stellar systems (white dwarfs, red giants, horizontal branch stars and possibly the neutron star in the supernova remnant Cassiopeia A) show a mild preference for a non-standard cooling mechanism when compared with theoretical models. This exotic cooling could be provided by Weakly Interacting Slim Particles (WISPs), produced in the hot cores and abandoning the star unimpeded, contributing directly to the energy loss. Taken individually, these excesses do not show a strong statistical weight. However, if one mechanism could consistently explain several of them, the hint could be significant. We analyze the hints in terms of neutrino anomalous magnetic moments, minicharged particles, hidden photons and axion-like particles (ALPs). Among them, the ALP or a massless HP represent the best solution. Interestingly, the hinted ALP parameter space is accessible to the next generation proposed ALP searches, such as ALPS II and IAXO and the massless HP requires a multi TeV energy scale of new physics that might be accessible at the LHC.

  6. Cryolipolysis for reduction of excess adipose tissue.

    Science.gov (United States)

    Nelson, Andrew A; Wasserman, Daniel; Avram, Mathew M

    2009-12-01

    Controlled cold exposure has long been reported to be a cause of panniculitis in cases such as popsicle panniculitis. Cryolipolysis is a new technology that uses cold exposure, or energy extraction, to result in localized panniculitis and modulation of fat. Presently, the Zeltiq cryolipolysis device is FDA cleared for skin cooling, as well as various other indications, but not for lipolysis. There is, however, a pending premarket notification for noninvasive fat layer reduction. Initial animal and human studies have demonstrated significant reductions in the superficial fat layer thickness, ranging from 20% to 80%, following a single cryolipolysis treatment. The decrease in fat thickness occurs gradually over the first 3 months following treatment, and is most pronounced in patients with limited, discrete fat bulges. Erythema of the skin, bruising, and temporary numbness at the treatment site are commonly observed following treatment with the device, though these effects largely resolve in approximately 1 week. To date, there have been no reports of scarring, ulceration, or alterations in blood lipid or liver function profiles. Cryolipolysis is a new, noninvasive treatment option that may be of benefit in the treatment of excess adipose tissue.

  7. Isotope effect and deuterium excess parameter revolution in ice and snow melt

    International Nuclear Information System (INIS)

    Yin Guan; Ni Shijun; Fan Xiao; Wu Hao

    2003-01-01

    The isotopic composition of water is an integrated response to environmental change. The melting of ice and snow in high mountains in different seasons can markedly influence the isotope composition and deuterium excess parameter of surface flow and shallow groundwater. Understanding the isotopic fractionation caused by this special natural background, and exploring its formation and evolution, is very important for assessing the relationship between environment, climate and water resources in an area. Taking as an example the isotope composition of surface flow and shallow groundwater in Daocheng, Sichuan, this paper introduces the seasonal variation of the isotope composition and deuterium excess parameter of surface flow and hot springs under conditions of ice and snow melt in high mountains, and discusses the isotope effect and the evolution of the deuterium excess parameter during ice and snow melting, together with its causes. (authors)

  8. Di-photon excess at LHC and the gamma ray excess at the Galactic Centre

    Energy Technology Data Exchange (ETDEWEB)

    Hektor, Andi [National Institute of Chemical Physics and Biophysics,Rävala pst. 10, 10143 Tallinn (Estonia); Marzola, Luca [National Institute of Chemical Physics and Biophysics,Rävala pst. 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu,Ravila 14c, 50411 Tartu (Estonia)

    2016-07-25

    Motivated by the recent indications of a 750 GeV resonance in the di-photon final state at the LHC, in this work we analyse the compatibility of the excess with the broad photon excess detected at the Galactic Centre. Intriguingly, by analysing the parameter space of an effective model in which a 750 GeV pseudoscalar particle mediates the interaction between the Standard Model and a scalar dark sector, we prove the compatibility of the two signals. We show, however, that the LHC mono-jet searches and the Fermi-LAT measurements strongly limit the viable parameter space. We comment on the possible impact of cosmic antiproton flux measurements by the AMS-02 experiment.

  9. Exploring the relationship between sequence similarity and accurate phylogenetic trees.

    Science.gov (United States)

    Cantarel, Brandi L; Morrison, Hilary G; Pearson, William

    2006-11-01

    significantly decrease phylogenetic accuracy. In general, although less-divergent sequence families produce more accurate trees, the likelihood of estimating an accurate tree is most dependent on whether radiation in the family was ancient or recent. Accuracy can be improved by combining genes from the same organism when creating species trees or by selecting protein families with the best bootstrap values in comprehensive studies.

  10. The Impact of Nonequilibrium and Equilibrium Fractionation on Two Different Deuterium Excess Definitions

    Science.gov (United States)

    Dütsch, Marina; Pfahl, Stephan; Sodemann, Harald

    2017-12-01

    The deuterium excess (d) is a useful measure for nonequilibrium effects of isotopic fractionation and can therefore provide information about the meteorological conditions in evaporation regions or during ice cloud formation. In addition to nonequilibrium fractionation, two other effects can change d during phase transitions. The first is the dependence of the equilibrium fractionation factors on temperature, and the second is the nonlinearity of the δ scale on which d is defined. The second effect can be avoided by using an alternative definition that is based on the logarithmic scale. However, in this case d is not conserved when air parcels mix, which can lead to changes without phase transitions. Here we provide a systematic analysis of the benefits and limitations of both deuterium excess definitions by separately quantifying the impact of the nonequilibrium effect, the temperature effect, the δ-scale effect, and the mixing effect in a simple Rayleigh model simulating the isotopic composition of air parcels during moist adiabatic ascent. The δ-scale effect is important in depleted air parcels, for which it can change the sign of the traditional deuterium excess in the remaining vapor from negative to positive. The alternative definition mainly reflects the nonequilibrium and temperature effect, while the mixing effect is about 2 orders of magnitude smaller. Thus, the alternative deuterium excess definition appears to be a more accurate measure for nonequilibrium effects in situations where moisture is depleted and the δ-scale effect is large, for instance, at high latitudes or altitudes.
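    The δ-scale effect described above can be made concrete with a toy comparison of the two definitions. This sketch uses the traditional definition and, for the logarithmic variant, keeps the slope of 8 for simplicity (published logarithmic definitions use empirically fitted coefficients and differ in detail):

```python
import math

def d_excess(delta_2h, delta_18o):
    """Traditional deuterium excess on the delta scale (inputs and result in per mil)."""
    return delta_2h - 8.0 * delta_18o

def d_excess_ln(delta_2h, delta_18o):
    """Logarithmic-scale variant; the slope of 8 is an assumption for illustration."""
    return 1000.0 * (math.log(1 + delta_2h / 1000.0)
                     - 8.0 * math.log(1 + delta_18o / 1000.0))

# Moderately depleted vapor: the two definitions give broadly similar values.
print(d_excess(-80.0, -11.0), d_excess_ln(-80.0, -11.0))

# Strongly depleted vapor (high latitudes/altitudes): the definitions diverge
# sharply, illustrating the delta-scale effect discussed in the abstract.
print(d_excess(-400.0, -52.0), d_excess_ln(-400.0, -52.0))
```

The divergence (including a change of sign between the two measures for the strongly depleted case) is exactly the regime in which the abstract argues the alternative definition is the more faithful measure of nonequilibrium effects.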

  11. More accurate picture of human body organs

    International Nuclear Information System (INIS)

    Kolar, J.

    1985-01-01

    Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they make it possible to obtain a more accurate image of human body organs. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)

  12. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency

  13. Accurate activity recognition in a home setting

    NARCIS (Netherlands)

    van Kasteren, T.; Noulas, A.; Englebienne, G.; Kröse, B.

    2008-01-01

    A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its

  14. Highly accurate surface maps from profilometer measurements

    Science.gov (United States)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single-line-trace profilometer, which is limiting because accurate 3D corrections are not possible with the single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and to a small extent by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.

  15. Study on the Effectiveness of Infiltration Wells to Reduce Excess Surface Run Off In ITB

    Directory of Open Access Journals (Sweden)

    Mardiah Afifah Muhsinatu

    2018-01-01

    Institut Teknologi Bandung (ITB), Ganesha Campus, Indonesia, has an area of 28.86 hectares. The campus is located in Bandung. Starting from 2012, new buildings were constructed within the area, significantly reducing the area of permeable surface. In the past few years, there have been several excess runoff incidents on the campus. The insufficient area of permeable surface, as well as the inadequate capacity of the drainage system, contributes to the excess surface runoff. The drainage system has only two outlets; moreover, in some areas the drains are disconnected, so most of the surface runoff is stored within the drainage system. The purpose of this study is to evaluate the effectiveness of infiltration wells for reducing the local excess runoff at ITB. Precipitation data and the drained service area are used to estimate the design discharge from each building at ITB. To avoid excess surface runoff at certain locations, infiltration wells are proposed to balance the area of impermeable surface. The effectiveness of the infiltration wells is evaluated by assessing how many are needed and their contribution to reducing the excess surface runoff.
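    A design-discharge estimate of the kind described above is commonly made with the rational method, Q = C·i·A. A minimal sketch with hypothetical values (the runoff coefficient, storm intensity, drained area and well capacity are illustrative assumptions, not the study's data):

```python
# Rational-method peak discharge and a rough infiltration-well count.
# All parameter values are hypothetical, not taken from the ITB study.
import math

def rational_discharge(c_runoff, intensity_mm_per_hr, area_m2):
    """Peak runoff Q (m^3/s) = C * i * A, converting intensity from mm/hr to m/s."""
    i_m_per_s = intensity_mm_per_hr / 1000.0 / 3600.0
    return c_runoff * i_m_per_s * area_m2

# Hypothetical roof: C = 0.9, 80 mm/hr design storm, 500 m^2 drained area.
q_peak = rational_discharge(0.9, 80.0, 500.0)

# If one infiltration well can absorb, say, 2 L/s, how many are needed?
well_capacity_m3_per_s = 0.002
n_wells = math.ceil(q_peak / well_capacity_m3_per_s)
print(f"Q = {q_peak * 1000:.1f} L/s -> {n_wells} wells")
```

In practice the well capacity would come from the soil's infiltration rate and the well geometry; the point of the sketch is only the balance between design discharge and total infiltration capacity.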

  16. Dynamics of the G-excess illusion

    Science.gov (United States)

    Baylor, K. A.; Reschke, M.; Guedry, F. E.; Mcgrath, B. J.; Rupert, A. H.

    1992-01-01

    The G-excess illusion is increasingly recognized as a cause of aviation mishaps especially when pilots perform high-speed, steeply banked turns at low altitudes. Centrifuge studies of this illusion have examined the perception of subject orientation and/or target displacement during maintained hypergravity with the subject's head held stationary. The transient illusory perceptions produced by moving the head in hypergravity are difficult to study onboard centrifuges because the high angular velocity ensures the presence of strong Coriolis cross-coupled semicircular canal effects that mask immediate transient otolith-organ effects. The present study reports perceptions following head movements in hypergravity produced by high-speed aircraft maintaining a banked attitude with low angular velocity to minimize cross-coupled effects. Methods: Fourteen subjects flew on the NASA KC-135 and were exposed to resultant gravity forces of 1.3, 1.5, and 1.8 G for 3 minute periods. On command, seated subjects made controlled head movements in roll, pitch, and yaw at 30 second intervals both in the dark and with faint targets at a distance of 5 feet. Results: head movement produced transient perception of target displacement and velocity at levels as low as 1.3 G. Reports of target velocity without appropriate corresponding displacement were common. At 1.8 G when yaw head movements were made from a face down position, 4 subjects reported oscillatory rotational target displacement with fast and slow alternating components suggestive of torsional nystagmus. Head movements evoked symptoms of nausea in most subjects, with 2 subjects and 1 observer vomiting. Conclusions: The transient percepts present conflicting signals, which introduced confusion in target and subject orientation. Repeated head movements in hypergravity generate nausea by mechanisms distinct from cross-coupled Coriolis effects.

  17. Implication of zinc excess on soil health.

    Science.gov (United States)

    Wyszkowska, Jadwiga; Boros-Lajszner, Edyta; Borowik, Agata; Baćmaga, Małgorzata; Kucharski, Jan; Tomkiel, Monika

    2016-01-01

    This study was undertaken to evaluate zinc's influence on the resistance of organotrophic bacteria, actinomyces, fungi, dehydrogenases, catalase and urease. The experiment was conducted in a greenhouse of the University of Warmia and Mazury (UWM) in Olsztyn, Poland. Plastic pots were filled with 3 kg of sandy loam with pH(KCl) of 7.0 each. The experimental variables were: zinc applied to soil at six doses: 100, 300, 600, 1,200, 2,400 and 4,800 mg of Zn(2+) kg(-1) in the form of ZnCl2 (zinc chloride), and species of plant: oat (Avena sativa L.) cv. Chwat and white mustard (Sinapis alba) cv. Rota. Soil without the addition of zinc served as the control. During the growing season, soil samples were subjected to microbiological analyses on experimental days 25 and 50 to determine the abundance of organotrophic bacteria, actinomyces and fungi, and the activity of dehydrogenases, catalase and urease, which provided a basis for determining the soil resistance index (RS). The physicochemical properties of soil were determined after harvest. The results of this study indicate that excessive concentrations of zinc have an adverse impact on microbial growth and the activity of soil enzymes. The resistance of organotrophic bacteria, actinomyces, fungi, dehydrogenases, catalase and urease decreased with an increase in the degree of soil contamination with zinc. Dehydrogenases were most sensitive and urease was least sensitive to soil contamination with zinc. Zinc also exerted an adverse influence on the physicochemical properties of soil and plant development. The growth of oat and white mustard plants was almost completely inhibited in response to the highest zinc doses of 2,400 and 4,800 mg Zn(2+) kg(-1).

  18. Kinetic model of excess activated sludge thermohydrolysis.

    Science.gov (United States)

    Imbierowicz, Mirosław; Chacuk, Andrzej

    2012-11-01

    Thermal hydrolysis of excess activated sludge suspensions was carried out at temperatures ranging from 423 K to 523 K and under pressures of 0.2-4.0 MPa. Changes of total organic carbon (TOC) concentration in the solid and liquid phases were measured during these studies. At the temperature 423 K, after 2 h of the process, TOC concentration in the reaction mixture decreased by 15-18% of the initial value. At 473 K total organic carbon removal from activated sludge suspension increased to 30%. It was also found that the solubilisation of particulate organic matter strongly depended on the process temperature. At 423 K the transfer of TOC from solid particles into liquid phase after 1 h of the process reached 25% of the initial value; however, at the temperature of 523 K the conversion degree of 'solid' TOC attained 50% just after 15 min of the process. In the article a lumped kinetic model of the process of activated sludge thermohydrolysis has been proposed. It was assumed that during heating of the activated sludge suspension to a temperature in the range of 423-523 K two parallel reactions occurred. One, connected with thermal destruction of activated sludge particles, caused solubilisation of organic carbon and an increase of dissolved organic carbon concentration in the liquid phase (hydrolysate). The parallel reaction led to a new kind of insoluble solid phase, which was further decomposed into gaseous products (CO(2)). The collected experimental data were used to identify unknown parameters of the model, i.e. activation energies and pre-exponential factors of elementary reactions. The mathematical model of activated sludge thermohydrolysis appropriately describes the kinetics of reactions occurring in the studied system. Copyright © 2012 Elsevier Ltd. All rights reserved.
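The lumped scheme described in the abstract (two parallel first-order channels consuming solid TOC, one producing hydrolysate and one a refractory solid) can be sketched as an Arrhenius-plus-forward-Euler integration. The pre-exponential factors and activation energies below are hypothetical placeholders chosen only to reproduce the qualitative trend, not the parameters fitted in the study:

```python
import math

R_GAS = 8.314  # J mol^-1 K^-1

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R_GAS * T))

def thermohydrolysis(T, t_end, dt=1.0, A1=1e5, Ea1=70e3, A2=1e4, Ea2=80e3):
    """Forward-Euler integration of the lumped scheme
         solid TOC --k1--> dissolved TOC (hydrolysate)
         solid TOC --k2--> refractory solid (later oxidised to CO2)
       Returns (solid, dissolved) as fractions of the initial TOC."""
    k1, k2 = arrhenius(A1, Ea1, T), arrhenius(A2, Ea2, T)
    solid, dissolved, t = 1.0, 0.0, 0.0
    while t < t_end:
        r1, r2 = k1 * solid, k2 * solid
        solid -= (r1 + r2) * dt
        dissolved += r1 * dt
        t += dt
    return solid, dissolved

s_423, d_423 = thermohydrolysis(423.0, 3600.0)   # 1 h at 423 K
s_523, d_523 = thermohydrolysis(523.0, 3600.0)   # 1 h at 523 K
```

With these assumed parameters, raising the temperature from 423 K to 523 K sharply accelerates solubilisation, matching the qualitative behaviour reported above.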

  19. Suppression of excess noise in Transition-Edge Sensors using magnetic field and geometry

    International Nuclear Information System (INIS)

    Ullom, J.N.; Doriese, W.B.; Hilton, G.C.; Beall, J.A.; Deiker, S.; Irwin, K.D.; Reintsema, C.D.; Vale, L.R.; Xu, Y.

    2004-01-01

    We report recent progress at NIST on Mo/Cu Transition-Edge Sensors (TESs). While the signal-band noise of our sensors agrees with theory, we observe excess high-frequency noise. We describe this noise and demonstrate that it can be strongly suppressed by a magnetic field perpendicular to the plane of the sensor. Both the excess noise and α=(T/R)(dR/dT) depend strongly on field so our results show that accurate comparisons between devices are only possible when the field is well known or constant. We also present results showing the noise performance of TES designs incorporating parallel and perpendicular normal metal bars, an array of normal metal islands, and in wedge-shaped devices. We demonstrate significant reduction of high-frequency noise with the perpendicular bar devices at the cost of reduced α. Both the bars and the magnetic field are useful noise reduction techniques for bolometers.
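The sensitivity parameter α = (T/R)(dR/dT) quoted above can be evaluated numerically from any measured R(T) curve. The sigmoid transition model and its parameter values here are illustrative assumptions, not NIST device data:

```python
import math

def alpha(T, R_of_T, dT=1e-6):
    """alpha = (T/R) * dR/dT, evaluated by a central finite difference."""
    R = R_of_T(T)
    dRdT = (R_of_T(T + dT) - R_of_T(T - dT)) / (2 * dT)
    return (T / R) * dRdT

# Hypothetical sigmoid transition: R(T) = Rn / (1 + exp(-(T - Tc) / w))
Rn, Tc, w = 10e-3, 0.100, 0.5e-3   # normal resistance (ohm), Tc (K), width (K)
R_model = lambda T: Rn / (1 + math.exp(-(T - Tc) / w))

a_mid = alpha(Tc, R_model)         # analytically Tc/(2w) = 100 for this model
```

At the midpoint of this model transition the analytic value is α = Tc/(2w), so the finite-difference estimate should land very close to 100.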

  20. How many standard area diagram sets are needed for accurate disease severity assessment

    Science.gov (United States)

    Standard area diagram sets (SADs) are widely used in plant pathology: a rater estimates disease severity by comparing an unknown sample to actual severities in the SADs and interpolates an estimate as accurately as possible (although some SADs have been developed for categorizing disease too). Most ...

  1. Estimating Delays In ASIC's

    Science.gov (United States)

    Burke, Gary; Nesheiwat, Jeffrey; Su, Ling

    1994-01-01

    Verification is an important aspect of the process of designing an application-specific integrated circuit (ASIC). The design must not only be functionally accurate, but must also maintain correct timing. IFA, the Intelligent Front Annotation program, assists in verifying the timing of an ASIC early in the design process. This program speeds the design-and-verification cycle by estimating delays before layouts are completed. Written in the C language.

  2. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Full Text Available Modern cochlear implant (CI users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
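The beat-listening strategy inferred above can be illustrated with elementary trigonometry: two tones mistuned by Δf cancel and reinforce with period 1/Δf, so a 0.5 Hz tuning error is audible as one beat every 2 seconds. A minimal numeric sketch (the frequencies are illustrative, not the study's stimuli):

```python
import math

def beat_period(f1, f2):
    """Perceived beat period of two nearly equal tones: 1 / |f1 - f2|."""
    return 1.0 / abs(f1 - f2)

def two_tone(t, f1, f2):
    """Sum of two unit-amplitude sinusoids at time t (seconds)."""
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

# A string 0.5 Hz sharp of a 110 Hz reference beats every 2 seconds:
T_beat = beat_period(110.0, 110.5)         # 2.0 s
# At half a beat period the envelope passes through a node (near cancellation):
node = two_tone(T_beat / 2, 110.0, 110.5)
```

Listening for the slowing of these beats turns a pitch (spectral) comparison into a temporal one, which is the effect the abstract describes.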

  3. Excess Weapons Plutonium Immobilization in Russia

    International Nuclear Information System (INIS)

    Jardine, L.; Borisov, G.B.

    2000-01-01

    The joint goal of the Russian work is to establish a full-scale plutonium immobilization facility at a Russian industrial site by 2005. To achieve this requires that the necessary engineering and technical basis be developed in these Russian projects and the needed Russian approvals be obtained to conduct industrial-scale immobilization of plutonium-containing materials at a Russian industrial site by the 2005 date. This meeting and future work will provide the basis for joint decisions. Supporting R and D projects are being carried out at Russian Institutes that directly support the technical needs of Russian industrial sites to immobilize plutonium-containing materials. Special R and D on plutonium materials is also being carried out to support excess weapons disposition in Russia and the US, including nonproliferation studies of plutonium recovery from immobilization forms and accelerated radiation damage studies of the US-specified plutonium ceramic for immobilizing plutonium. This intriguing and extraordinary cooperation on certain aspects of the weapons plutonium problem is now progressing well and much work with plutonium has been completed in the past two years. Because much excellent and unique scientific and engineering technical work has now been completed in Russia in many aspects of plutonium immobilization, this meeting in St. Petersburg was both timely and necessary to summarize, review, and discuss these efforts among those who performed the actual work. The results of this meeting will help the US and Russia jointly define the future direction of the Russian plutonium immobilization program, and make it an even stronger and more integrated Russian program. The two objectives for the meeting were to: (1) Bring together the Russian organizations, experts, and managers performing the work into one place for four days to review and discuss their work with each other; and (2) Publish a meeting summary and a proceedings to compile reports of all the

  4. On accurate determination of contact angle

    Science.gov (United States)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  5. Highly Accurate Prediction of Jobs Runtime Classes

    OpenAIRE

    Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi

    2016-01-01

    Separating the short jobs from the long is a known technique to improve scheduling performance. In this paper we describe a method we developed for accurately predicting the runtimes classes of the jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions, in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
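The idea of thresholding between two overlapping Gaussian runtime classes can be sketched by solving for the point where two weighted Gaussian densities cross. The paper itself trains a CART classifier on the mixture; this sketch only shows the mixture-boundary idea, and the mixture parameters below are invented:

```python
import math

def gaussian_boundary(mu1, s1, w1, mu2, s2, w2):
    """x where w1*N(x; mu1, s1) == w2*N(x; mu2, s2): equate the two
       weighted log-densities and solve the resulting quadratic in x."""
    a = 1 / (2 * s2**2) - 1 / (2 * s1**2)
    b = mu1 / s1**2 - mu2 / s2**2
    c = (mu2**2 / (2 * s2**2) - mu1**2 / (2 * s1**2)
         + math.log((w1 * s2) / (w2 * s1)))
    if abs(a) < 1e-12:                    # equal widths: boundary is linear
        return -c / b
    d = math.sqrt(b * b - 4 * a * c)
    roots = [(-b - d) / (2 * a), (-b + d) / (2 * a)]
    lo, hi = min(mu1, mu2), max(mu1, mu2)
    return next(x for x in roots if lo <= x <= hi)

# Hypothetical log-runtime mixture: short jobs vs long jobs
t_cut = gaussian_boundary(1.0, 0.5, 0.6,   # short: mean 1.0, sd 0.5, weight 0.6
                          4.0, 1.0, 0.4)   # long:  mean 4.0, sd 1.0, weight 0.4
# jobs whose log-runtime falls below t_cut are classified "short"
```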

  6. Accurate multiplicity scaling in isotopically conjugate reactions

    International Nuclear Information System (INIS)

    Golokhvastov, A.I.

    1989-01-01

    The generation of accurate scaling of multiplicity distributions is presented. The distributions of π - mesons (negative particles) and π + mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter which determines the stretching factor for the unit function Ψ(z) to obtain the desired multiplicity distribution. 29 refs.; 6 figs

  7. A practical method for accurate quantification of large fault trees

    International Nuclear Information System (INIS)

    Choi, Jong Soo; Cho, Nam Zin

    2007-01-01

    This paper describes a practical method to accurately quantify top event probability and importance measures from incomplete minimal cut sets (MCS) of a large fault tree. The MCS-based fault tree method is extensively used in probabilistic safety assessments. Several sources of uncertainties exist in MCS-based fault tree analysis. The paper is focused on quantification of the following two sources of uncertainties: (1) the truncation neglecting low-probability cut sets and (2) the approximation in quantifying MCSs. The method proposed in this paper is based on a Monte Carlo simulation technique to estimate probability of the discarded MCSs and the sum of disjoint products (SDP) approach complemented by the correction factor approach (CFA). The method provides capability to accurately quantify the two uncertainties and estimate the top event probability and importance measures of large coherent fault trees. The proposed fault tree quantification method has been implemented in the CUTREE code package and is tested on the two example fault trees
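A toy version of the Monte Carlo idea described above (not the paper's CUTREE implementation): sample the basic events directly to obtain a reference top-event probability, and compare it with the rare-event approximation that simply sums cut-set products. The event probabilities and cut sets are hypothetical:

```python
import math
import random

def rare_event_approx(cut_sets, p):
    """Sum of minimal-cut-set products; overestimates when cut sets overlap."""
    return sum(math.prod(p[e] for e in cs) for cs in cut_sets)

def monte_carlo_top(cut_sets, p, n=100_000, seed=1):
    """Reference estimate: sample every basic event, count trials in which
       at least one cut set has all of its events failed."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        failed = [rng.random() < pe for pe in p]
        if any(all(failed[e] for e in cs) for cs in cut_sets):
            hits += 1
    return hits / n

# Hypothetical 4-event tree with two overlapping minimal cut sets
p = [0.1, 0.2, 0.3, 0.05]
cut_sets = [[0, 1], [1, 2]]
approx = rare_event_approx(cut_sets, p)   # 0.02 + 0.06 = 0.08
exact = monte_carlo_top(cut_sets, p)      # near 0.074 (0.08 minus the overlap)
```

Because both cut sets share event 1, inclusion-exclusion gives 0.02 + 0.06 − 0.006 = 0.074, which the sampling estimate recovers while the sum-of-products approximation overshoots.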

  8. Radiation. A buzz word for excessive fears

    International Nuclear Information System (INIS)

    Rickover, H.G.

    1980-01-01

    The necessity of accepting that risk is an inherent part of daily life and also of acquiring a sense of perspective with respect to such risks, especially with respect to radiation, is discussed. Estimations of radiation risks are examined and compared to other risk factors such as overweight and cigarette smoking. It is stated that public perception of radiation has a direct bearing on the use of nuclear power, that balancing risks and benefits must become a standard approach to evaluating environmental matters and that the present crisis in confidence over energy requires this approach. (UK)

  9. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
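The transition rates at the heart of such mental models can be estimated from an experience-sampling stream with a simple first-order count. The emotion labels and the sequence here are invented for illustration:

```python
from collections import Counter, defaultdict

def transition_model(sequence):
    """Row-normalized first-order transition probabilities P(next | current)
       estimated from a sequence of categorical emotion labels."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
            for cur, ctr in counts.items()}

# Toy experience-sampling stream (labels hypothetical)
stream = ["calm", "happy", "happy", "calm", "sad", "calm", "happy", "calm"]
model = transition_model(stream)
# model["calm"] is the empirical distribution of what follows "calm"
```

Participants' rated likelihoods in the studies were compared against exactly this kind of empirical transition matrix built from experience-sampling data.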

  10. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  11. Excessive exposures of diagnostic X-ray workers in India

    International Nuclear Information System (INIS)

    Ambiger, T.Y.; Shenoy, K.S.; Patel, P.H.

    1980-01-01

    The excessive exposures (i.e. exceeding 400 mrems per fortnight) of diagnostic X-ray workers revealed under the countrywide personnel monitoring programme in India have been analysed. The analysis covers the data collected over a period of ten years during 1965-1974. The radiation workers in the medical X-ray diagnostic group receiving an excess dose are found to be less than 1%. Each case of excess dose is thoroughly investigated; nongenuine cases are separated and the causes of genuine excessive exposures are traced. The causes and the corrective measures are enumerated. (M.G.B.)

  12. Child mortality estimation: consistency of under-five mortality rate estimates using full birth histories and summary birth histories.

    Directory of Open Access Journals (Sweden)

    Romesh Silva

    Full Text Available Given the lack of complete vital registration data in most developing countries, for many countries it is not possible to accurately estimate under-five mortality rates from vital registration systems. Heavy reliance is often placed on direct and indirect methods for analyzing data collected from birth histories to estimate under-five mortality rates. Yet few systematic comparisons of these methods have been undertaken. This paper investigates whether analysts should use both direct and indirect estimates from full birth histories, and under what circumstances indirect estimates derived from summary birth histories should be used. Using Demographic and Health Surveys data from West Africa, East Africa, Latin America, and South/Southeast Asia, I quantify the differences between direct and indirect estimates of under-five mortality rates, analyze data quality issues, note the relative effects of these issues, and test whether these issues explain the observed differences. I find that indirect estimates are generally consistent with direct estimates, after adjustment for fertility change and birth transference, but don't add substantial additional insight beyond direct estimates. However, choice of direct or indirect method was found to be important in terms of both the adjustment for data errors and the assumptions made about fertility. Although adjusted indirect estimates are generally consistent with adjusted direct estimates, some notable inconsistencies were observed for countries that had experienced either a political or economic crisis or stalled health transition in their recent past. This result suggests that when a population has experienced a smooth mortality decline or only short periods of excess mortality, both adjusted methods perform equally well.
However, the observed inconsistencies identified suggest that the indirect method is particularly prone to bias resulting from violations of its strong assumptions about recent mortality

  13. Climate change impacts on projections of excess mortality at ...

    Science.gov (United States)

    We project the change in ozone-related mortality burden attributable to changes in climate between a historical (1995-2005) and near-future (2025-2035) time period while incorporating a non-linear and synergistic effect of ozone and temperature on mortality. We simulate air quality from climate projections varying only biogenic emissions and holding anthropogenic emissions constant, thus attributing changes in ozone only to changes in climate and independent of changes in air pollutant emissions. We estimate non-linear, spatially varying, ozone-temperature risk surfaces for 94 US urban areas using observed data. Using the risk surfaces and climate projections we estimate daily mortality attributable to ozone exceeding 40 p.p.b. (moderate level) and 75 p.p.b. (US ozone NAAQS) for each time period. The average increases in city-specific median April-October ozone and temperature between time periods are 1.02 p.p.b. and 1.94 °F; however, the results varied by region. Increases in ozone because of climate change result in an increase in ozone mortality burden. Mortality attributed to ozone exceeding 40 p.p.b. increases by 7.7% (1.6-14.2%). Mortality attributed to ozone exceeding 75 p.p.b. increases by 14.2% (1.6-28.9%). The absolute increase in excess ozone mortality is larger for changes in moderate ozone levels, reflecting the larger number of days with moderate ozone levels. In this study we evaluate changes in ozone-related mortality due to changes in biogenic f

  14. Preventing excessive radon exposure in U.K. housing

    International Nuclear Information System (INIS)

    Miles, J.C.H.; Cliff, K.D.; Green, B.M.R.; Dixon, D.W.

    1992-01-01

    In the United Kingdom (UK) it has been recognized for some years that some members of the population received excessive radiation exposure in their homes from radon and its decay products. To prevent such exposures, an Action Level of 400 Bq m-3 was adopted in 1987. In January, 1990, the National Radiological Protection Board (NRPB) advised that the Action Level should be reduced to 200 Bq m-3, and this advice was accepted by the Government. It is estimated that exposures in up to 100,000 UK homes exceed this Action Level; this amounts to about 0.5% of the available housing. The UK authorities have developed a strategy for preventing such exposures: (1) Areas in which it is estimated that >1% of homes exceed the Action Level for radon are being designated as Affected Areas, and a program to map such areas is under way. Households in these areas are advised to have radon measurements made by NRPB under a "free" (Government-funded) scheme. (2) Householders found to have whole-house, whole-year average radon concentrations >200 Bq m-3 are advised to take remedial action and are provided with information on how this can be done. Partial grants toward remedial work are available in cases of financial need. So far, around 3000 such households have been identified. (3) Within Affected Areas, localities are being defined where new homes must incorporate precautions against radon exposure. In addition to this strategy, a joint case-control study of the risks of radon in homes is being undertaken by the Imperial Cancer Research Fund and NRPB, supported by the UK Government and the Commission of the European Communities

  15. Low-mass Stars with Extreme Mid-Infrared Excesses: Potential Signatures of Planetary Collisions

    Science.gov (United States)

    Theissen, Christopher; West, Andrew

    2018-01-01

    I investigate the occurrence of extreme mid-infrared (MIR) excesses, a tracer of large amounts of dust orbiting stars, in low-mass stellar systems. Extreme MIR excesses, defined as an excess IR luminosity greater than 1% of the stellar luminosity (LIR/L* ≥ 0.01), have previously only been observed around a small number of solar-mass (M⊙) stars. The origin of this excess has been hypothesized to be massive amounts of orbiting dust, created by collisions between terrestrial planets or large planetesimals. Until recently, there was a dearth of low-mass (M* ≤ 0.6M⊙) stars exhibiting extreme MIR excesses, even though low-mass stars are ubiquitous (~70% of all stars), and known to host multiple terrestrial planets (≥ 3 planets per star).I combine the spectroscopic sample of low-mass stars from the Sloan Digital Sky Survey (SDSS) Data Release 7 (70,841 stars) with MIR photometry from the Wide-field Infrared Survey Explorer (WISE), to locate stars exhibiting extreme MIR excesses. I find the occurrence frequency of low-mass field stars (stars with ages ≥ 1 Gyr) exhibiting extreme MIR excesses is much larger than that for higher-mass field stars (0.41 ± 0.03% versus 0.00067 ± 0.00033%, respectively).In addition, I build a larger sample of low-mass stars based on stellar colors and proper motions using SDSS, WISE, and the Two-Micron All-Sky Survey (8,735,004 stars). I also build a galactic model to simulate stellar counts and kinematics to estimate the number of stars missing from my sample. I perform a larger, more complete study of low-mass stars exhibiting extreme MIR excesses, and find a lower occurrence frequency (0.020 ± 0.001%) than found in the spectroscopic sample but that is still orders of magnitude larger than that for higher-mass stars. I find a slight trend for redder stars (lower-mass stars) to exhibit a higher occurrence frequency of extreme MIR excesses, as well as a lower frequency with increased stellar age. These samples probe important

  16. Investigations of the Local supercluster velocity field. II. A study using Tolman-Bondi solution and galaxies with accurate distances from the Cepheid PL-relation

    Science.gov (United States)

    Ekholm, T.; Lanoix, P.; Teerikorpi, P.; Paturel, G.; Fouqué, P.

    1999-11-01

    A sample of 32 galaxies with accurate distance moduli from the Cepheid PL-relation (Lanoix 1999) has been used to study the dynamical behaviour of the Local (Virgo) supercluster. We used analytical Tolman-Bondi (TB) solutions for a spherically symmetric density excess embedded in the Einstein-deSitter universe (q_0 = 0.5). Using 12 galaxies within θ = 30° from the centre we found a mass estimate of 1.62 M_virial for the Virgo cluster. This agrees with the finding of Teerikorpi et al. (1992) that the TB-estimate may be larger than the virial mass estimate from Tully & Shaya (1984). Our conclusions do not critically depend on our primary choice of the global H_0 = 57 km s-1 Mpc-1 established from SNe Ia (Lanoix 1999). The remaining galaxies outside the Virgo region do not disagree with this value. Finally, we also found a TB-solution with the H_0 and q_0 cited yielding exactly one virial mass for the Virgo cluster.

  17. Comparing the Effects of Negative and Mixed Emotional Messages on Predicted Occasional Excessive Drinking

    OpenAIRE

    Carrera, Pilar; Caballero, Amparo; Muñoz, Dolores

    2008-01-01

    In this work we present two types of emotional message, negative (sadness) versus mixed (joy and sadness), with the aim of studying their differential effect on attitude change and the probability estimated by participants of repeating the behavior of occasional excessive drinking in the near future. The results show that for the group of participants with moderate experience in this behavior the negative message, compared to the mixed one, is associated with higher probability of repeating t...

  18. Comparing the effects of negative and mixed emotional messages on predicted occasional excessive drinking

    OpenAIRE

    Carrera Levillain, Pilar; Caballero González, Amparo; Muñoz Cáceres, María Dolores

    2008-01-01

    In this work we present two types of emotional message, negative (sadness) versus mixed (joy and sadness), with the aim of studying their differential effect on attitude change and the probability estimated by participants of repeating the behavior of occasional excessive drinking in the near future. The results show that for the group of participants with moderate experience in this behavior the negative message, compared to the mixed one, is associated with higher probability of repeating t...

  19. Disposing of the world's excess plutonium

    International Nuclear Information System (INIS)

    McCormick, J.M.; Bullen, D.B.

    1998-01-01

    The authors undertake three key objectives in addressing the issue of plutonium disposition at the end of the Cold War. First, the authors estimate the total global inventory of plutonium both from weapons dismantlement and civil nuclear power reactors. Second, they review past and current policy toward handling this metal by the US, Russia, and other key countries. Third, they evaluate the feasibility of several options (but especially the vitrification and mixed oxide fuel options announced by the Clinton administration) for disposing of the increasing amounts of plutonium available today. To undertake this analysis, the authors consider both the political and scientific problems confronting policymakers in dealing with this global plutonium issue. Interview data with political and technical officials in Washington and at the International Atomic Energy Agency in Vienna, Austria, and empirical inventory data on plutonium from a variety of sources form the basis of their analysis

  20. THE NANOGRAV NINE-YEAR DATA SET: EXCESS NOISE IN MILLISECOND PULSAR ARRIVAL TIMES

    Energy Technology Data Exchange (ETDEWEB)

    Lam, M. T.; Jones, M. L.; McLaughlin, M. A.; Pennucci, T. T. [Department of Physics, West Virginia University, White Hall, Morgantown, WV 26506 (United States); Cordes, J. M.; Chatterjee, S. [Department of Astronomy and Cornell Center for Astrophysics and Planetary Science, Cornell University, Ithaca, NY 14853 (United States); Arzoumanian, Z. [Center for Research and Exploration in Space Science and Technology and X-Ray Astrophysics Laboratory, NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771 (United States); Crowter, K.; Fonseca, E.; Gonzalez, M. E. [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 (Canada); Demorest, P. B. [National Radio Astronomy Observatory, P.O. Box 0, Socorro, NM, 87801 (United States); Dolch, T. [Department of Physics, Hillsdale College, 33 E. College Street, Hillsdale, MI 49242 (United States); Ellis, J. A [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena CA, 91109 (United States); Ferdman, R. D. [Department of Physics, McGill University, 3600 rue Universite, Montreal, QC H3A 2T8 (Canada); Jones, G. [Department of Physics, Columbia University, 550 W. 120th Street, New York, NY 10027 (United States); Levin, L. [Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL (United Kingdom); Madison, D. R.; Ransom, S. M. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Nice, D. J. [Department of Physics, Lafayette College, Easton, PA 18042 (United States); Shannon, R. M., E-mail: michael.lam@mail.wvu.edu [CSIRO Astronomy and Space Science, Australia Telescope National Facility, Box 76, Epping NSW 1710 (Australia); and others

    2017-01-01

    Gravitational wave (GW) astronomy using a pulsar timing array requires high-quality millisecond pulsars (MSPs), correctable interstellar propagation delays, and high-precision measurements of pulse times of arrival. Here we identify noise in timing residuals that exceeds that predicted for arrival time estimation for MSPs observed by the North American Nanohertz Observatory for Gravitational Waves. We characterize the excess noise using variance and structure function analyses. We find that 26 out of 37 pulsars show inconsistencies with a white-noise-only model based on the short timescale analysis of each pulsar, and we demonstrate that the excess noise has a red power spectrum for 15 pulsars. We also decompose the excess noise into chromatic (radio-frequency-dependent) and achromatic components. Associating the achromatic red-noise component with spin noise and including additional power-spectrum-based estimates from the literature, we estimate a scaling law in terms of spin parameters (frequency and frequency derivative) and data-span length and compare it to the scaling law of Shannon and Cordes. We briefly discuss our results in terms of detection of GWs at nanohertz frequencies.
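A first-order structure function, one of the diagnostics named above, can be sketched as the mean squared difference of residuals at a given lag: white residuals sit near 2σ² at all lags, while red noise rises with lag. The evenly sampled synthetic residuals below are an assumption for illustration, not NANOGrav data:

```python
import random

def structure_function(times, residuals, lag_bins):
    """First-order structure function D(tau) = <[r(t_j) - r(t_i)]^2>,
       averaged over pairs whose separation falls in each (lo, hi) lag bin."""
    out = []
    for lo, hi in lag_bins:
        sq, n = 0.0, 0
        for i in range(len(times)):
            for j in range(i + 1, len(times)):
                if lo <= times[j] - times[i] < hi:
                    sq += (residuals[j] - residuals[i]) ** 2
                    n += 1
        out.append(sq / n if n else float("nan"))
    return out

# White residuals with unit variance: D(tau) should hover near 2 at any lag
rng = random.Random(0)
t = [float(i) for i in range(200)]
white = [rng.gauss(0.0, 1.0) for _ in t]
sf = structure_function(t, white, [(1, 2), (50, 51)])
```

A red power spectrum would instead show the long-lag bin sitting well above the short-lag one, which is the signature used to flag excess noise.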

  1. Excess entropy scaling for the segmental and global dynamics of polyethylene melts.

    Science.gov (United States)

    Voyiatzis, Evangelos; Müller-Plathe, Florian; Böhm, Michael C

    2014-11-28

The range of validity of the Rosenfeld and Dzugutov excess entropy scaling laws is analyzed for unentangled linear polyethylene chains. We consider two segmental dynamical quantities, i.e. the bond and the torsional relaxation times, and two global ones, i.e. the chain diffusion coefficient and the viscosity. The excess entropy is approximated either by a series expansion of the entropy in terms of the pair correlation function or by an equation of state for polymers developed in the context of the statistical associating fluid theory. For the whole range of temperatures and chain lengths considered, the two estimates of the excess entropy are linearly correlated. The scaled bond and torsional relaxation times fall onto a master curve irrespective of the chain length and the employed scaling scheme. Both quantities depend non-linearly on the excess entropy. For a fixed chain length, the reduced diffusion coefficient and viscosity scale linearly with the excess entropy. An empirical reduction to a chain-length-independent master curve is accessible for both dynamic quantities. The Dzugutov scheme predicts an increased value of the scaled diffusion coefficient with increasing chain length, which contradicts physical expectations. The origin of this trend can be traced back to the density dependence of the scaling factors. This finding has not been observed previously for Lennard-Jones chain systems (Macromolecules, 2013, 46, 8710-8723). Thus, it limits the applicability of the Dzugutov approach to polymers. In connection with diffusion coefficients and viscosities, the Rosenfeld scaling law appears to be of higher quality than the Dzugutov approach. An empirical excess entropy scaling is also proposed which leads to a chain-length-independent correlation. It is expected to be valid for polymers in the Rouse regime.
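Rosenfeld scaling, invoked above, posits that a reduced transport coefficient depends exponentially on the excess entropy per particle: D* ≈ A exp(α s_ex), so ln D* is linear in s_ex. A toy consistency check under that assumption (synthetic state points with invented parameters, not the paper's polyethylene data):

```python
import numpy as np

# Rosenfeld ansatz: D* = A * exp(alpha * s_ex), with s_ex < 0 (the excess
# entropy is negative). Reduced units: D* = D * rho**(1/3) * (m/(k*T))**0.5.
rng = np.random.default_rng(1)
alpha, A = 0.8, 0.6                       # assumed scaling parameters
s_ex = np.linspace(-4.0, -0.5, 20)        # synthetic excess entropies
D_star = A * np.exp(alpha * s_ex) * np.exp(rng.normal(0, 0.02, s_ex.size))

# Linearity of ln D* in s_ex is the signature of the scaling law:
slope, intercept = np.polyfit(s_ex, np.log(D_star), 1)
```

A fitted slope close to the assumed α indicates the data obey the scaling; deviations from linearity would signal a breakdown of the law, as reported above for the Dzugutov variant at long chain lengths.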

  2. Excess of {sup 236}U in the northwest Mediterranean Sea

    Energy Technology Data Exchange (ETDEWEB)

    Chamizo, E., E-mail: echamizo@us.es [Centro Nacional de Aceleradores, Universidad de Sevilla, Consejo Superior de Investigaciones Científicas, Junta de Andalucía, Thomas Alva Edison 7, 41092 Seville (Spain); López-Lora, M., E-mail: mlopezlora@us.es [Centro Nacional de Aceleradores, Universidad de Sevilla, Consejo Superior de Investigaciones Científicas, Junta de Andalucía, Thomas Alva Edison 7, 41092 Seville (Spain); Bressac, M., E-mail: matthieu.bressac@utas.edu.au [IAEA-Environment Laboratories, Monte Carlo 98000 (Monaco); Institute for Marine and Antarctic Studies, University of Tasmania, Hobart, TAS (Australia); Levy, I., E-mail: I.N.Levy@iaea.org [IAEA-Environment Laboratories, Monte Carlo 98000 (Monaco); Pham, M.K., E-mail: M.Pham@iaea.org [IAEA-Environment Laboratories, Monte Carlo 98000 (Monaco)

    2016-09-15

In this work, we present the first {sup 236}U results in the northwestern Mediterranean. {sup 236}U is studied in a seawater column sampled at DYFAMED (Dynamics of Atmospheric Fluxes in the Mediterranean Sea) station (Ligurian Sea, 43°25′N, 07°52′E). The obtained {sup 236}U/{sup 238}U atom ratios in the dissolved phase, ranging from about 2 × 10{sup −9} at 100 m depth to about 1.5 × 10{sup −9} at 2350 m depth, indicate that anthropogenic {sup 236}U dominates the whole seawater column. The corresponding deep-water column inventory (12.6 ng/m{sup 2} or 32.1 × 10{sup 12} atoms/m{sup 2}) exceeds by a factor of 2.5 the expected one for global fallout at similar latitudes (5 ng/m{sup 2} or 13 × 10{sup 12} atoms/m{sup 2}), evidencing the influence of local or regional {sup 236}U sources in the western Mediterranean basin. On the other hand, the input of {sup 236}U associated with Saharan dust outbreaks is evaluated. An additional annual {sup 236}U deposition of about 0.2 pg/m{sup 2} is estimated, based on the study of atmospheric particles collected in Monaco during different Saharan dust intrusions. The results obtained in the corresponding suspended solids collected at DYFAMED station indicate that about 64% of that {sup 236}U stays in solution in seawater. Overall, this source accounts for about 0.1% of the {sup 236}U inventory excess observed at DYFAMED station. The influence of the so-called Chernobyl fallout and the radioactive effluents produced by the different nuclear installations located in the Mediterranean basin might explain the inventory gap; however, further studies are necessary to reach a conclusion about its origin. - Highlights: • First {sup 236}U results in the northwest Mediterranean Sea are reported. • Anthropogenic {sup 236}U dominates the whole seawater column at DYFAMED station. • {sup 236}U deep-water column inventory exceeds by a factor of 2.5 the global fallout one. • Saharan dust intrusions are responsible for an annual
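The factor-of-2.5 excess quoted above follows directly from the two column inventories; a quick check of the arithmetic (inventory values from the abstract, deposition duration an assumption):

```python
# Deep-water column inventories of 236U at the DYFAMED station (from the
# abstract): observed vs. expected from global fallout at similar latitudes.
observed_ng_m2 = 12.6      # ng/m^2 (32.1e12 atoms/m^2)
fallout_ng_m2 = 5.0        # ng/m^2 (13e12 atoms/m^2)

excess_factor = observed_ng_m2 / fallout_ng_m2          # ~2.5
excess_inventory = observed_ng_m2 - fallout_ng_m2       # ng/m^2 unexplained

# Saharan dust adds ~0.2 pg/m^2 per year, of which ~64% stays in solution;
# the ~60-year deposition window is an assumed figure, not from the paper.
dust_ng_m2_per_yr = 0.2e-3 * 0.64
years = 60
share = dust_ng_m2_per_yr * years / excess_inventory    # ~0.001, i.e. ~0.1%
```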

  3. Accurate and efficient calculation of response times for groundwater flow

    Science.gov (United States)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞ . This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L2 / D , where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
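The L²/D heuristic that the abstract puts on a rigorous footing can be checked with a small finite-difference experiment (a sketch, not the authors' higher-moment algorithm): the mean action time at a location x equals the time integral of the unreached fraction of the steady state, T(x) = ∫₀^∞ [u_ss(x) − u(x,t)] / [u_ss(x) − u(x,0)] dt, and for a homogeneous 1D aquifer it scales as L²/D.

```python
import numpy as np

def mean_action_time(L, D, n=101):
    """Mean action time at the midpoint for 1D diffusion u_t = D u_xx on
    [0, L] with u(x,0) = 0, u(0,t) = 1, u(L,t) = 0 (linear steady state).
    T(x) = integral over t of (u_ss - u) / (u_ss - u0)."""
    x = np.linspace(0, L, n)
    dx = x[1] - x[0]
    dt = 0.4 * dx**2 / D                    # explicit-scheme stability limit
    u = np.zeros(n); u[0] = 1.0
    u_ss = 1.0 - x / L
    mid = n // 2
    mat = 0.0
    while u_ss[mid] - u[mid] > 1e-8:
        u[1:-1] += dt * D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        mat += (u_ss[mid] - u[mid]) * dt
    return mat / u_ss[mid]                  # u0 = 0 at the midpoint

D = 1.0
T1 = mean_action_time(1.0, D)
T2 = mean_action_time(2.0, D)
# Doubling L quadruples the response time: T ~ c * L^2 / D.
```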

  4. Excess relative risk of solid cancer mortality after prolonged exposure to naturally occurring high background radiation in Yangjiang, China

    Energy Technology Data Exchange (ETDEWEB)

    Sun Quanfu; Tao Zufan [Ministry of Health, Beijing (China). Lab. of Industrial Hygiene; Akiba, Suminori (and others)

    2000-10-01

    A study was made on cancer mortality in the high-background radiation areas of Yangjiang, China. Based on hamlet-specific environmental doses and sex- and age-specific occupancy factors, cumulative doses were calculated for each subject. In this article, we describe how the indirect estimation was made on individual dose and the methodology used to estimate radiation risk. Then, assuming a linear dose response relationship and using cancer mortality data for the period 1979-1995, we estimate the excess relative risk per Sievert for solid cancer to be -0.11 (95% CI, -0.67, 0.69). Also, we estimate the excess relative risks of four leading cancers in the study areas, i.e., cancers of the liver, nasopharynx, lung and stomach. In addition, we evaluate the effects of possible bias on our risk estimation. (author)
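The linear dose-response model underlying the estimate above, RR(d) = 1 + β·d with β the excess relative risk per sievert, can be sketched as a toy Poisson maximum-likelihood fit (synthetic dose groups and rates, not the Yangjiang cohort data):

```python
import numpy as np

# Linear ERR model: expected deaths = PY * r0 * (1 + beta * dose),
# with PY person-years, r0 the baseline rate, beta the ERR per Sv.
rng = np.random.default_rng(2)
dose = np.array([0.05, 0.10, 0.20, 0.40])   # mean cumulative doses (Sv), assumed
py = np.array([4e5, 3e5, 2e5, 1e5])         # person-years per dose group, assumed
r0, beta_true = 1e-3, 0.5                   # assumed baseline rate and ERR/Sv
deaths = rng.poisson(py * r0 * (1 + beta_true * dose))

# Profile the Poisson log-likelihood over beta (r0 held at truth for brevity;
# a real analysis fits both and reports a likelihood-based 95% CI):
betas = np.linspace(-1.0, 3.0, 4001)
mu = py * r0 * (1 + betas[:, None] * dose)  # shape (n_beta, n_groups)
loglik = (deaths * np.log(mu) - mu).sum(axis=1)
beta_hat = betas[np.argmax(loglik)]
```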

  5. The first accurate description of an aurora

    Science.gov (United States)

    Schröder, Wilfried

    2006-12-01

As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  6. Accurate Charge Densities from Powder Diffraction

    DEFF Research Database (Denmark)

    Bindzus, Niels; Wahlberg, Nanna; Becker, Jacob

Synchrotron powder X-ray diffraction has in recent years advanced to a level where it has become realistic to probe extremely subtle electronic features. Compared to single-crystal diffraction, it may be superior for simple, high-symmetry crystals owing to negligible extinction effects and minimal...... peak overlap. Additionally, it offers the opportunity for collecting data on a single scale. For charge density studies, the critical task is to recover accurate and bias-free structure factors from the diffraction pattern. This is the focal point of the present study, scrutinizing the performance...

  7. Arbitrarily accurate twin composite π -pulse sequences

    Science.gov (United States)

    Torosov, Boyan T.; Vitanov, Nikolay V.

    2018-04-01

We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas, and the order of pulse area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform equally well as or better than previously published ones. Moreover, the current sequences are more flexible as they allow total pulse areas of arbitrary integer multiples of π.
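The error-compensation idea can be illustrated with the classic three-pulse composite inversion sequence 90°(x) 180°(y) 90°(x) (Levitt's pulse, used here as a stand-in since the paper's own phase formulas are not reproduced in the abstract): under a fractional pulse-area error ε, the composite's inversion error is the square of the single-pulse error.

```python
import numpy as np

def rot(theta, phi):
    """SU(2) propagator for a resonant pulse of area theta and phase phi:
    a rotation about the axis (cos phi, sin phi, 0) on the Bloch sphere."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    n = np.cos(phi) * sx + np.sin(phi) * sy
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n

def inversion_error(U):
    """1 - transition probability |<1|U|0>|^2."""
    return 1.0 - abs(U[1, 0]) ** 2

eps = 0.1                                   # 10% pulse-area error
single = inversion_error(rot(np.pi * (1 + eps), 0))
composite = inversion_error(
    rot(np.pi / 2 * (1 + eps), 0)           # 90(x)
    @ rot(np.pi * (1 + eps), np.pi / 2)     # 180(y)
    @ rot(np.pi / 2 * (1 + eps), 0))        # 90(x)
```

Working out the product analytically gives a single-pulse error of sin²(πε/2) but a composite error of sin⁴(πε/2), i.e. one extra order of compensation; the sequences in the paper extend this squaring to arbitrary order.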

  8. Systematization of Accurate Discrete Optimization Methods

    Directory of Open Access Journals (Sweden)

    V. A. Ovchinnikov

    2015-01-01

Full Text Available The object of study of this paper is accurate methods for solving combinatorial optimization problems of structural synthesis. The aim of the work is to systematize the exact methods of discrete optimization and to define their applicability to practical problems. The article presents the analysis, generalization and systematization of classical methods and algorithms described in the educational and scientific literature. As a result of the research, a systematic presentation of the combinatorial methods for discrete optimization described in various sources is given, their capabilities are described, and the properties of the tasks to be solved using the appropriate methods are specified.

  9. The effect of excessive iodine diet on thyroid function

    International Nuclear Information System (INIS)

    Wang Yuhua; Li Yaming

    2009-01-01

Excessive iodine intake can induce changes in thyroid cell structure, and the resulting disorder of thyroid function can lead to a number of thyroid diseases. Radionuclide thyroid imaging plays an important role in the diagnosis of thyroid disease. Clarifying the effect of an excessive iodine diet on thyroid function is therefore worthwhile for instructing what preparation should be done before thyroid nuclide imaging. (authors)

  10. Management of excessive movable tissue: a modified impression technique.

    Science.gov (United States)

    Shum, Michael H C; Pow, Edmond H N

    2014-08-01

    Excessive movable tissue is a challenge in complete denture prosthetics. A modified impression technique is presented with polyvinyl siloxane impression material and a custom tray with relief areas and perforations in the area of the excessive movable tissue. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  11. Excess isentropic compressibility and speed of sound of the ternary

    Indian Academy of Sciences (India)

These excess properties of the binary mixtures were fitted to the Redlich-Kister equation, while Cibulka's equation was used to fit the values related to the ternary system. These excess properties have been used to discuss the presence of significant interactions between the component molecules in the binary ...

  12. 30 CFR 75.323 - Actions for excessive methane.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Actions for excessive methane. 75.323 Section... excessive methane. (a) Location of tests. Tests for methane concentrations under this section shall be made.... (1) When 1.0 percent or more methane is present in a working place or an intake air course, including...

  13. Excess molar volumes and isentropic compressibilities of binary

    Indian Academy of Sciences (India)

Excess molar volumes (V{sup E}) and deviations in isentropic compressibility (Δκ{sub s}) have been investigated from the density and speed of sound measurements of six binary liquid mixtures containing n-alkanes over the entire range of composition at 298.15 K. Excess molar volume exhibits inversion in sign in one binary ...

  14. Goodwill, Excess Returns, and Determinants of Value Creation and Overpayment

    NARCIS (Netherlands)

    Lycklama a Nijeholt, M.; Grift, Y.K.|info:eu-repo/dai/nl/073586358

    2007-01-01

    In this article we have investigated whether the determinants of excess returns (especially of target excess returns) are valid for purchased goodwill as well. Among them are acquirer’s and target’s Tobin’s q, and debt assets ratio, that explain value creation of acquisitions, and relative size,

  15. Criminal Liability of Managers for Excessive Risk-Taking?

    NARCIS (Netherlands)

    Tosza, S.T.

    2016-01-01

    The aim of the thesis was to analyse and evaluate the criminalisation of excessively risky decisions taken by managers of limited liability companies. The potentially disastrous consequences of excessive risk-taking were powerfully highlighted by the most recent financial crunch, although its

  16. Sanitization and Disposal of Excess Information Technology Equipment

    Science.gov (United States)

    2009-09-21

Report No. D-2009-104, September 21, 2009: Sanitization and Disposal of Excess Information Technology Equipment.

  17. 12 CFR 740.3 - Advertising of excess insurance.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Advertising of excess insurance. 740.3 Section... ACCURACY OF ADVERTISING AND NOTICE OF INSURED STATUS § 740.3 Advertising of excess insurance. Any advertising that mentions share or savings account insurance provided by a party other than the NCUA must...

  18. 40 CFR 57.304 - Bypass, excess emissions and malfunctions.

    Science.gov (United States)

    2010-07-01

    ... (performance level of interim constant controls) or § 57.303 (plantwide emission limitation) of this subpart... limitation, as well as the operating data, documents, and calculations used in determining the magnitude of the excess emissions; (3) Time and duration of the excess emissions; (4) Identity of the equipment...

  19. 19 CFR 10.625 - Refunds of excess customs duties.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Refunds of excess customs duties. 10.625 Section 10.625 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT... and Apparel Goods § 10.625 Refunds of excess customs duties. (a) Applicability. Section 205 of the...

  20. A Practical Approach For Excess Bandwidth Distribution for EPONs

    KAUST Repository

    Elrasad, Amr

    2014-03-09

This paper introduces a novel approach called Delayed Excess Scheduling (DES), which practically reuses the excess bandwidth in EPON systems. DES is suitable for industrial deployment as it requires no timing constraints and achieves better performance compared to previously reported schemes.

  1. Excess mortality in mothers of patients with polycystic ovary syndrome

    NARCIS (Netherlands)

    Louwers, Y. V.; Roest-Schalken, M. E.; Kleefstra, N.; van Lennep, J. Roeters; van den Berg, M.; Fauser, B. C. J. M.; Bilo, H. J. G.; Sijbrands, E. J. G.; Laven, J. S. E.

    STUDY QUESTION: Do diabetic parents of patients with polycystic ovary syndrome (PCOS) encounter excess mortality compared with the mortality of men and women with type 2 diabetes, recruited without selection for PCOS? SUMMARY ANSWER: Type 2 diabetes among mothers of PCOS patients results in excess

  2. 40 CFR 76.13 - Compliance and excess emissions.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Compliance and excess emissions. 76.13 Section 76.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.13 Compliance and excess emissions...

  3. Teachers' Knowledge of Anxiety and Identification of Excessive Anxiety in

    Science.gov (United States)

    Headley, Clea; Campbell, Marilyn A.

    2013-01-01

    This study examined primary school teachers' knowledge of anxiety and excessive anxiety symptoms in children. Three hundred and fifteen primary school teachers completed a questionnaire exploring their definitions of anxiety and the indications they associated with excessive anxiety in primary school children. Results showed that teachers had an…

  4. ON INFRARED EXCESSES ASSOCIATED WITH Li-RICH K GIANTS

    Energy Technology Data Exchange (ETDEWEB)

    Rebull, Luisa M. [Spitzer Science Center (SSC) and Infrared Science Archive (IRSA), Infrared Processing and Analysis Center - IPAC, 1200 E. California Blvd., California Institute of Technology, Pasadena, CA 91125 (United States); Carlberg, Joleen K. [NASA Goddard Space Flight Center, Code 667, Greenbelt, MD 20771 (United States); Gibbs, John C.; Cashen, Sarah; Datta, Ashwin; Hodgson, Emily; Lince, Megan [Glencoe High School, 2700 NW Glencoe Rd., Hillsboro, OR 97124 (United States); Deeb, J. Elin [Bear Creek High School, 9800 W. Dartmouth Pl., Lakewood, CO 80227 (United States); Larsen, Estefania; Altepeter, Shailyn; Bucksbee, Ethan; Clarke, Matthew [Millard South High School, 14905 Q St., Omaha, NE 68137 (United States); Black, David V., E-mail: rebull@ipac.caltech.edu [Walden School of Liberal Arts, 4230 N. University Ave., Provo, UT 84604 (United States)

    2015-10-15

    Infrared (IR) excesses around K-type red giants (RGs) have previously been discovered using Infrared Astronomy Satellite (IRAS) data, and past studies have suggested a link between RGs with overabundant Li and IR excesses, implying the ejection of circumstellar shells or disks. We revisit the question of IR excesses around RGs using higher spatial resolution IR data, primarily from the Wide-field Infrared Survey Explorer. Our goal was to elucidate the link between three unusual RG properties: fast rotation, enriched Li, and IR excess. Our sample of RGs includes those with previous IR detections, a sample with well-defined rotation and Li abundance measurements with no previous IR measurements, and a large sample of RGs asserted to be Li-rich in the literature; we have 316 targets thought to be K giants, about 40% of which we take to be Li-rich. In 24 cases with previous detections of IR excess at low spatial resolution, we believe that source confusion is playing a role, in that either (a) the source that is bright in the optical is not responsible for the IR flux, or (b) there is more than one source responsible for the IR flux as measured in IRAS. We looked for IR excesses in the remaining sources, identifying 28 that have significant IR excesses by ∼20 μm (with possible excesses for 2 additional sources). There appears to be an intriguing correlation in that the largest IR excesses are all in Li-rich K giants, though very few Li-rich K giants have IR excesses (large or small). These largest IR excesses also tend to be found in the fastest rotators. There is no correlation of IR excess with the carbon isotopic ratio, {sup 12}C/{sup 13}C. IR excesses by 20 μm, though relatively rare, are at least twice as common among our sample of Li-rich K giants. If dust shell production is a common by-product of Li enrichment mechanisms, these observations suggest that the IR excess stage is very short-lived, which is supported by theoretical calculations. Conversely, the

  5. Development of a new method for estimating visceral fat area with multi-frequency bioelectrical impedance

    International Nuclear Information System (INIS)

    Nagai, Masato; Komiya, Hideaki; Mori, Yutaka; Ohta, Teruo; Kasahara, Yasuhiro; Ikeda, Yoshio

    2008-01-01

Excessive visceral fat area (VFA) is a major risk factor for conditions such as cardiovascular disease. In assessing VFA, computed tomography (CT) is adopted as the gold standard; however, this method is cost intensive and involves radiation exposure. In contrast, the bioelectrical impedance (BI) method for estimating body composition is simple and noninvasive, and thus its potential application in VFA assessment is being studied. To overcome differences in obtained impedance due to measurement conditions, we developed a more precise estimation method by selecting the optimum body posture, electrode arrangement, and frequency. The subjects were 73 healthy volunteers, 37 men and 36 women, who underwent CT scans to assess VFA and who were measured for anthropometry parameters, subcutaneous fat layer thickness, abdominal tissue area, and impedance. Impedance was measured by the tetrapolar impedance method using multi-frequency BI. Multiple regression analysis was conducted to estimate VFA. The results revealed a strong correlation between VFA observed by CT and VFA estimated by impedance (r=0.920). The regression equation accurately classified VFA≥100 cm{sup 2} in 13 out of 14 men and 1 of 1 woman. Moreover, it correctly classified VFA≥100 cm{sup 2} or <100 cm{sup 2} in 3 out of 4 men and 1 of 1 woman who had been misclassified by waist circumference (W), which is adopted as a simple index to evaluate VFA. Therefore, using this simple and convenient method for estimating VFA, we obtained an accurate assessment of VFA using the BI method. (author)
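The multiple regression step described above can be sketched with ordinary least squares (synthetic subjects; the predictor set and coefficients below are invented for illustration, not the paper's fitted equation):

```python
import numpy as np

# Toy model: VFA (cm^2) predicted from waist circumference (cm) and an
# abdominal impedance reading (ohm); all coefficients here are assumptions.
rng = np.random.default_rng(3)
n = 73                                   # same cohort size as the study
waist = rng.normal(85, 10, n)
impedance = rng.normal(50, 8, n)
vfa_ct = 3.0 * waist - 2.0 * impedance - 60 + rng.normal(0, 10, n)

# Ordinary least squares with an intercept column:
X = np.column_stack([waist, impedance, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, vfa_ct, rcond=None)
vfa_est = X @ coef
r = np.corrcoef(vfa_ct, vfa_est)[0, 1]   # analogous to the paper's r = 0.920
```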

  6. Can Selforganizing Maps Accurately Predict Photometric Redshifts?

    Science.gov (United States)

    Way, Michael J.; Klose, Christian

    2012-01-01

We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization called the self-organizing-map (SOM) approach. A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = z{sub phot} − z{sub spec}) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
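A self-organizing map can be sketched in a few lines of NumPy (toy data; not the authors' SDSS setup or hyperparameters): train a 2D grid of prototype vectors, then predict a redshift for each object as the mean redshift of the training points mapped to its best-matching unit.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy "photometry": 2 colors; toy "redshift" depends smoothly on them.
X = rng.uniform(0, 1, (500, 2))
z = 0.5 * X[:, 0] + 0.3 * X[:, 1]

# Train a 6x6 SOM with shrinking neighborhood and decaying learning rate.
grid = np.array([(i, j) for i in range(6) for j in range(6)], float)
W = rng.uniform(0, 1, (36, 2))            # prototype vectors
for t in range(3000):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(1))          # best-matching unit
    sigma = 2.0 * np.exp(-t / 1000)
    lr = 0.5 * np.exp(-t / 1500)
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma**2))
    W += lr * h[:, None] * (x - W)

# Each unit's redshift estimate = mean z of the training points it wins.
bmus = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(2), axis=1)
z_map = np.array([z[bmus == k].mean() if (bmus == k).any() else z.mean()
                  for k in range(36)])
z_pred = z_map[bmus]
rmse = np.sqrt(((z_pred - z) ** 2).mean())
```

Quantization onto a finite grid is what makes the solutions non-unique, as the abstract notes: different random initializations yield different prototype layouts and hence slightly different RMSEs.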

  7. Accurate shear measurement with faint sources

    International Nuclear Information System (INIS)

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys

  8. How Accurately can we Calculate Thermal Systems?

    International Nuclear Information System (INIS)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-01-01

I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k{sub eff}, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore, rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors

  9. Accurate control testing for clay liner permeability

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R J

    1991-08-01

Two series of centrifuge tests were carried out to evaluate the use of centrifuge modelling as a method of accurate control testing of clay liner permeability. The first series used a large 3 m radius geotechnical centrifuge and the second series a small 0.5 m radius machine built specifically for research on clay liners. Two permeability cells were fabricated in order to provide direct data comparisons between the two methods of permeability testing. In both cases, the centrifuge method proved to be effective and efficient, and was found to be free of both the technical difficulties and leakage risks normally associated with laboratory permeability testing of fine grained soils. Two materials were tested, a consolidated kaolin clay having an average permeability coefficient of 1.2 × 10{sup -9} m/s and a compacted illite clay having a permeability coefficient of 2.0 × 10{sup -11} m/s. Four additional tests were carried out to demonstrate that the 0.5 m radius centrifuge could be used for liner performance modelling to evaluate factors such as volumetric water content, compaction method and density, leachate compatibility and other construction effects on liner leakage. The main advantages of centrifuge testing of clay liners are rapid and accurate evaluation of hydraulic properties and realistic stress modelling for performance evaluations. 8 refs., 12 figs., 7 tabs.
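Permeability coefficients of the order quoted above are conventionally reduced from falling-head test data; a generic falling-head calculation (the standard formula, not the paper's centrifuge data reduction, and with hypothetical specimen dimensions) is:

```python
import math

def falling_head_k(a, A, L, t, h0, h1):
    """Falling-head permeability: k = (a*L / (A*t)) * ln(h0/h1),
    with standpipe area a, specimen area A, specimen length L,
    elapsed time t, and initial/final heads h0, h1 (SI units, k in m/s)."""
    return (a * L) / (A * t) * math.log(h0 / h1)

# Hypothetical falling-head run on a kaolin specimen (all values assumed):
k = falling_head_k(a=5e-5, A=8e-3, L=0.02, t=36000, h0=1.0, h1=0.7)
# k comes out around 1.2e-9 m/s, the order of magnitude reported for kaolin.
```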

  10. Industrial excess heat for district heating in Denmark

    International Nuclear Information System (INIS)

    Bühler, Fabian; Petrović, Stefan; Karlsson, Kenneth; Elmegaard, Brian

    2017-01-01

    Highlights: •Method for utilisation potential of industrial excess heat for district heating. •Industrial excess heat from thermal processes is quantified at single production units. •Linking of industrial excess heat sources and district heating demands done in GIS. •Excess heat recovery using direct heat transfer and heat pumps. •5.1% of the Danish district heating demand could be supplied by industrial excess heat. -- Abstract: Excess heat is available from various sources and its utilisation could reduce the primary energy use. The accessibility of this heat is however dependent amongst others on the source and sink temperature, amount and potential users in its vicinity. In this work a new method is developed which analyses excess heat sources from the industrial sector and how they could be used for district heating. This method first allocates excess heat to single production units by introducing and validating a new approach. Spatial analysis of the heat sources and consumers are then performed to evaluate the potential for using them for district heating. In this way the theoretical potential of using the excess heat for covering the heating demand of buildings is determined. Through the use of industry specific temperature profiles the heat usable directly or via heat pumps is further found. A sensitivity analysis investigates the impact of future energy efficiency measures in the industry, buildings and the district heating grid on the national potential. The results show that for the case study of Denmark, 1.36 TWh of district heat could be provided annually with industrial excess heat from thermal processes which equals 5.1% of the current demand. More than half of this heat was found to be usable directly, without the need for a heat pump.
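The headline figure above implies the size of the national district-heating demand; a quick back-of-envelope check (numbers from the abstract, with "more than half" taken as a 50% lower bound):

```python
excess_heat_twh = 1.36      # industrial excess heat usable for district heating
share = 0.051               # stated as 5.1% of the national DH demand

demand_twh = excess_heat_twh / share        # implied national demand, ~26.7 TWh/yr

direct_fraction = 0.5       # lower bound: "more than half" usable directly
direct_twh = excess_heat_twh * direct_fraction   # heat usable without heat pumps
```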

  11. Energetics and dynamics of excess electrons in simple fluids

    International Nuclear Information System (INIS)

    Space, B.

    1992-01-01

Excess electronic dynamical and equilibrium properties are modeled in both polarizable and nonpolarizable noble gas fluids. Explicit dynamical calculations are carried out for excess electrons in fluid helium, where excess electronic eigenstates are localized. Energetics and dynamics are considered for fluids which span the entire range of polarizability present in the rare gases. Excess electronic eigenstates and eigenvalues are calculated for fluids of helium, argon and xenon. Both equilibrium and dynamical information is obtained from the calculation of these wavefunctions. A surface hopping trajectory method for studying nonadiabatic excess electronic relaxation in condensed systems is used to explore the nonadiabatic relaxation after photoexciting an equilibrated excess electron in dense fluid helium. The different types of nonadiabatic phenomena which are important in excess electronic relaxation are surveyed. The same surface hopping trajectory method is also used to study the rapid nonadiabatic relaxation after an excess electron is injected into unperturbed fluid helium. Several distinctively different relaxation processes, characterized by their relative importance at different times during the relaxation to a localized equilibrium state, are detailed. Though the dynamical properties of excess electrons under the conditions considered here have never been studied before, the behavior is remarkably similar to that observed in both experimental and theoretical studies of electron hydration dynamics, indicating that the processes described may be very general relaxation mechanisms for localization and trapping in fluids. Additionally, ground state energies of an excess electron, e{sub 0}, are computed as a function of solvent density using model electron-atom pseudopotentials in fluid helium, argon, and xenon. The nonuniqueness of the pseudopotential description of electron-molecule interactions is demonstrated

  12. Mechanisms linking excess adiposity and carcinogenesis promotion

    Directory of Open Access Journals (Sweden)

    Ana I. Pérez-Hernández

    2014-05-01

Full Text Available Obesity constitutes one of the most important metabolic diseases, being associated with the development of insulin resistance and increased cardiovascular risk. The association between obesity and cancer has also been well established for several tumor types, such as breast cancer in postmenopausal women, colorectal cancer and prostate cancer. Cancer is the first cause of death in developed countries and the second in developing countries, with high incidence rates around the world. Furthermore, it has been estimated that 15-20% of all cancer deaths may be attributable to obesity. Tumor growth is regulated by interactions between tumor cells and their tissue microenvironment. In this sense, obesity may lead to cancer development through dysfunctional adipose tissue and altered signaling pathways. In this review, three main pathways relating obesity and cancer development are examined: (i) inflammatory changes leading to macrophage polarization and an altered adipokine profile; (ii) insulin resistance development; and (iii) adipose tissue hypoxia. Since obesity and cancer both present a high prevalence, the association between these conditions is of great public health significance, and studies showing the mechanisms by which obesity leads to cancer development and progression are needed to improve prevention and management of these diseases.

  13. Cardioprotective aspirin users and their excess risk of upper gastrointestinal complications.

    Science.gov (United States)

    Hernández-Díaz, Sonia; García Rodríguez, Luis A

    2006-09-20

    To balance the cardiovascular benefits from low-dose aspirin against the gastrointestinal harm caused, studies have considered the coronary heart disease risk for each individual but not their gastrointestinal risk profile. We characterized the gastrointestinal risk profile of low-dose aspirin users in real clinical practice, and estimated the excess risk of upper gastrointestinal complications attributable to aspirin among patients with different gastrointestinal risk profiles. To characterize aspirin users in terms of major gastrointestinal risk factors (i.e., advanced age, male sex, prior ulcer history and use of non-steroidal anti-inflammatory drugs), we used The General Practice Research Database in the United Kingdom and the Base de Datos para la Investigación Farmacoepidemiológica en Atención Primaria in Spain. To estimate the baseline risk of upper gastrointestinal complications according to major gastrointestinal risk factors and the excess risk attributable to aspirin within levels of these factors, we used previously published meta-analyses on both absolute and relative risks of upper gastrointestinal complications. Over 60% of aspirin users are above 60 years of age, 4 to 6% have a recent history of peptic ulcers and over 13% use other non-steroidal anti-inflammatory drugs. The estimated average excess risk of upper gastrointestinal complications attributable to aspirin is around 5 extra cases per 1,000 aspirin users per year. However, the excess risk varies in parallel to the underlying gastrointestinal risk and might be above 10 extra cases per 1,000 person-years in over 10% of aspirin users. In addition to the cardiovascular risk, the underlying gastrointestinal risk factors have to be considered when balancing harms and benefits of aspirin use for an individual patient. The gastrointestinal harms may offset the cardiovascular benefits in certain groups of patients where the gastrointestinal risk is high and the cardiovascular risk is low.
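The arithmetic behind excess-risk figures like these is simple attributable-risk algebra: extra cases per 1,000 person-years equal the baseline incidence times (RR − 1). A minimal sketch, using hypothetical numbers for illustration rather than the study's actual estimates:

```python
def excess_risk(baseline_per_1000, relative_risk):
    """Excess cases per 1,000 person-years attributable to an exposure:
    baseline incidence multiplied by (RR - 1)."""
    return baseline_per_1000 * (relative_risk - 1.0)

# Hypothetical figures (not taken from the study): a baseline of
# 2.5 upper-GI complications per 1,000 person-years and a relative
# risk of 3.0 under low-dose aspirin give 5 extra cases per 1,000
# person-years, the order of magnitude the abstract reports.
print(excess_risk(2.5, 3.0))  # → 5.0
```

The same formula shows why the excess varies in parallel with the underlying risk: doubling the baseline doubles the attributable cases at any fixed RR.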

  14. Excess mortality in Italy: Should we care about low influenza vaccine uptake?

    Science.gov (United States)

    Fausto, Francia; Paolo, Pandolfi; Anna, Odone; Carlo, Signorelli

    2018-03-01

The aims of this study were to explore 2015 mortality data further and to assess the determinants of excess deaths. We analysed data from a large metropolitan area in the north of Italy, the city of Bologna. We took advantage of a comprehensive local-level database and merged three different data sources to analytically explore the reported 2014-2015 excess mortality and its determinants. Effect estimates were derived from multivariable Poisson regression analysis, according to vaccination status and frailty index. We report 9.8% excess mortality in 2015 compared to 2014, with seasonal and age distribution patterns in line with national figures. All-cause mortality in the elderly population is 36% higher (risk ratio [RR]=1.36, 95% confidence interval [CI] 1.27-1.45) in subjects not vaccinated against seasonal flu compared to vaccinated subjects, with the risk of death from influenza or pneumonia being 43% higher (RR=1.43, 95% CI 1.02-2.00) in unvaccinated subjects. The determinants of the reported excess mortality in Italy should be further explored. Elderly subjects not vaccinated against the flu appear to have an increased risk of all-cause and cause-specific mortality compared to vaccinated subjects after accounting for possible confounders. Our findings raise awareness of the need to promote immunisation against the flu among elderly populations and offer insights to plan and implement effective public-health interventions.

  15. Neutron excess generation by fusion neutron source for self-consistency of nuclear energy system

    International Nuclear Information System (INIS)

    Saito, Masaki; Artisyuk, V.; Chmelev, A.

    1999-01-01

Present-day fission energy technology faces the problem of transmutation of dangerous radionuclides, which requires the generation of excess neutrons. A nuclear energy system based on fission reactors needs fuel breeding and therefore suffers from a lack of excess neutrons to support large-scale transmutation options, including elimination of fission products. A fusion neutron source (FNS) was proposed to improve the neutron balance in the nuclear energy system. The energy associated with the performance of the FNS should be small enough to keep it in the position of a neutron excess generator, thus leaving the role of dominant energy producer to the fission reactors. The present paper deals with the development of a general methodology to estimate the effect of neutron excess generation by an FNS on the performance of the nuclear energy system as a whole. Multiplication of fusion neutrons in both non-fissionable and fissionable multipliers was considered. Based on the present methodology, it was concluded that neutron self-consistency with respect to fuel breeding and transmutation of fission products can be attained with a small fraction of the energy associated with innovative fusion facilities. (author)

  16. A Structural Equation Model on Korean Adolescents' Excessive Use of Smartphones.

    Science.gov (United States)

    Lee, Hana; Kim, JooHyun

    2018-03-31

We develop a unified structural model that defines multi-relationships between systematic factors causing excessive use of smartphones and the corresponding results. We conducted a survey with adolescents who live in Seoul, Pusan, Gangneung, Donghae, and Samcheok from Feb. to Mar. 2016. We utilized SPSS Ver. 22 and Amos Ver. 22 to analyze the survey results at a 0.05 significance level. To investigate the demographic characteristics of the participants and their variations, we employed descriptive analysis. We adopted the maximum likelihood estimation method to verify the fitness of the hypothetical model and the hypotheses therein. We used χ² statistics, GFI, AGFI, CFI, NFI, IFI, RMR, and RMSEA to verify the fitness of our structural model. (1) Our proposed structural model demonstrated a fine fitness level. (2) Our proposed structural model could describe the excessive use of a smartphone with 88.6% accuracy. (3) The absence of family function, poor relationships with friends, impulsiveness, and low self-esteem were confirmed as key factors that cause excessive use of smartphones. (4) Further, impulsiveness and low self-esteem are closely related to the absence of family function and relationships with friends, by 68.3% and 54.4%, respectively. We suggest that nursing intervention programs from various angles are required to reduce adolescents' excessive use of smartphones. For example, family communication programs would be helpful for both parents and children, and counseling programs on friendships would also be meaningful. Copyright © 2018. Published by Elsevier B.V.

  17. IPHAS A-TYPE STARS WITH MID-INFRARED EXCESSES IN SPITZER SURVEYS

    International Nuclear Information System (INIS)

    Hales, Antonio S.; Barlow, Michael J.; Drew, Janet E.; Unruh, Yvonne C.; Greimel, Robert; Irwin, Michael J.; Gonzalez-Solares, Eduardo

    2009-01-01

We have identified 17 A-type stars in the Galactic Plane that have mid-infrared (mid-IR) excesses at 8 μm. From observed colors in the (r' - Hα) - (r' - i') plane, we first identified 23,050 early A-type main-sequence (MS) star candidates in the Isaac Newton Photometric H-Alpha Survey (IPHAS) point source database that are located in Spitzer Galactic Legacy Mid-Plane Survey Extraordinaire Galactic plane fields. Imposing the requirement that they be detected in all seven Two Micron All Sky Survey and Infrared Astronomical Satellite bands led to a sample of 2692 candidate A-type stars with fully sampled 0.6 to 8 μm spectral energy distributions (SEDs). Optical classification spectra of 18 of the IPHAS candidate A-type MS stars showed that all but one could be well fitted using MS A-type templates, with the other being an A-type supergiant. Out of the 2692 A-type candidates, 17 (0.6%) were found to have 8 μm excesses above the expected photospheric values. Taking into account non-A-type contamination estimates, the 8 μm excess fraction is adjusted to ∼0.7%. The distances to these sources range from 0.7 to 2.5 kpc. Only 10 out of the 17 excess stars had been covered by Spitzer MIPSGAL survey fields, of which five had detectable excesses at 24 μm. For sources with excesses detected in at least two mid-IR wavelength bands, blackbody fits to the excess SEDs yielded temperatures ranging from 270 to 650 K, and bolometric luminosity ratios L_IR/L_* from 2.2 × 10⁻³ to 1.9 × 10⁻², with a mean value of 7.9 × 10⁻³ (these bolometric luminosities are lower limits as cold dust is not detectable by this survey). Both the presence of mid-IR excesses and the derived bolometric luminosity ratios are consistent with many of these systems being in the planet-building transition phase between the early protoplanetary disk phase and the later debris disk phase.

  18. Multistage feature extraction for accurate face alignment

    NARCIS (Netherlands)

    Zuo, F.; With, de P.H.N.

    2004-01-01

    We propose a novel multistage facial feature extraction approach using a combination of 'global' and 'local' techniques. At the first stage, we use template matching, based on an Edge-Orientation-Map for fast feature position estimation. Using this result, a statistical framework applying the Active

  19. ASTRAL, DRAGON and SEDAN scores predict stroke outcome more accurately than physicians.

    Science.gov (United States)

    Ntaios, G; Gioulekas, F; Papavasileiou, V; Strbian, D; Michel, P

    2016-11-01

ASTRAL, SEDAN and DRAGON scores are three well-validated scores for stroke outcome prediction. Whether these scores predict stroke outcome more accurately compared with physicians interested in stroke was investigated. Physicians interested in stroke were invited to an online anonymous survey to provide outcome estimates in randomly allocated structured scenarios of recent real-life stroke patients. Their estimates were compared to the scores' predictions in the same scenarios. An estimate was considered accurate if it was within the 95% confidence intervals of the actual outcome. In all, 244 participants from 32 different countries responded, assessing 720 real scenarios and 2636 outcomes. The majority of physicians' estimates were inaccurate (1422/2636, 53.9%). 400 (56.8%) of physicians' estimates about the percentage probability of a 3-month modified Rankin score (mRS) > 2 were accurate, compared with 609 (86.5%) of ASTRAL score estimates, with similar comparisons favouring the DRAGON and SEDAN score estimates. ASTRAL, DRAGON and SEDAN scores predict the outcome of acute ischaemic stroke patients with higher accuracy compared to physicians interested in stroke. © 2016 EAN.

  20. Molar excess volumes of liquid hydrogen and neon mixtures from path integral simulation

    International Nuclear Information System (INIS)

    Challa, S.R.; Johnson, J.K.

    1999-01-01

Volumetric properties of liquid mixtures of neon and hydrogen have been calculated using path integral hybrid Monte Carlo simulations. Realistic potentials have been used for the three interactions involved. Molar volumes and excess volumes of these mixtures have been evaluated for various compositions at 29 and 31.14 K, and 30 atm. Significant quantum effects are observed in molar volumes. Quantum simulations agree well with experimental molar volumes. Calculated excess volumes agree qualitatively with experimental values. However, contrary to the existing understanding that the large positive deviations from ideal mixing in Ne–H₂ mixtures are caused by quantum effects, both classical and quantum simulations predict the large positive deviations from ideal mixing. Further investigations using two other Ne–H₂ potentials of Lennard-Jones (LJ) type show that excess volumes are very sensitive to the cross-interaction potential. We conclude that the cross-interaction potential employed in our simulations is accurate for volumetric properties. This potential is more repulsive than the two LJ potentials tested, which were obtained by two different combining rules. This repulsion and a comparatively lower potential well depth can explain the positive deviations from ideal mixing. copyright 1999 American Institute of Physics
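The excess molar volume discussed here is the deviation of the mixture's molar volume from the ideal mole-fraction-weighted average, V_E = V_mix − (x₁V₁ + x₂V₂). A minimal sketch of that definition, with hypothetical volumes rather than the paper's data:

```python
def excess_molar_volume(v_mix, x1, v1, v2):
    """V_E = V_mix - (x1*V1 + x2*V2): the deviation of a binary mixture's
    molar volume from the ideal mole-fraction-weighted average."""
    x2 = 1.0 - x1
    return v_mix - (x1 * v1 + x2 * v2)

# Hypothetical numbers for illustration: pure-component molar volumes of
# 16.0 and 28.0 cm^3/mol and a simulated mixture volume of 23.0 cm^3/mol
# at x1 = 0.5 give a positive excess volume of +1.0 cm^3/mol, i.e. a
# positive deviation from ideal mixing of the kind described above.
print(excess_molar_volume(23.0, 0.5, 16.0, 28.0))  # → 1.0
```

A positive V_E means the mixture occupies more volume than ideal mixing predicts, which is what a more repulsive cross-interaction potential produces.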

  1. Application of revised procedure on determining large excess reactivity of operating reactor. Fuel addition method

    International Nuclear Information System (INIS)

    Nagao, Yoshiharu

    2002-01-01

The fuel addition method and the neutron absorption substitution method have been used to determine the large excess multiplication factor of large-sized reactors. It has been pointed out, however, that none of these experimental methods is free from potentially large systematic errors of up to 20% when the value of the excess multiplication factor exceeds about 15%Δk. A basic idea for a revised procedure was therefore proposed to cope with this problem: it converts the increase of the multiplication factor in an actual core to that in a virtual core by calculation, because the value is in principle defined not for the former but for the latter core. This paper shows that the revised procedure is applicable to large-sized research and test reactors through theoretical analyses of the measurements undertaken at the JMTRC and JMTR cores. The values of the excess multiplication factor are accurately determined utilizing whole-core calculations with the Monte Carlo code MCNP4A. (author)

  2. The e-index, complementing the h-index for excess citations.

    Directory of Open Access Journals (Sweden)

    Chun-Ting Zhang

Full Text Available BACKGROUND: The h-index has already been used by major citation databases to evaluate the academic performance of individual scientists. Although effective and simple, the h-index suffers from some drawbacks that limit its use in accurately and fairly comparing the scientific output of different researchers. These drawbacks include information loss and low resolution: the former refers to the fact that, beyond the h² citations counted for papers in the h-core, excess citations are completely ignored, whereas the latter means that it is common for a group of researchers to have an identical h-index. METHODOLOGY/PRINCIPAL FINDINGS: To solve these problems, I here propose the e-index, where e² represents the ignored excess citations, in addition to the h² citations for h-core papers. Citation information can be completely depicted by using the h-index together with the e-index, which are independent of each other. Some other h-type indices, such as a and R, are h-dependent, have information redundancy with h, and therefore, when used together with h, mask the real differences in excess citations of different researchers. CONCLUSIONS/SIGNIFICANCE: Although simple, the e-index is a necessary h-index complement, especially for evaluating highly cited scientists or for precisely comparing the scientific output of a group of scientists having an identical h-index.
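The definitions above are straightforward to compute: h is the largest rank such that the paper at that rank has at least h citations, and e² is the total citation count of the h-core minus the h² minimum. A minimal sketch:

```python
def h_and_e_index(citations):
    """Return (h, e): h is the largest h such that h papers each have
    at least h citations; e^2 is the h-core's citations in excess of
    the h^2 minimum implied by the h-index."""
    cites = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)
    core_citations = sum(cites[:h])
    e = (core_citations - h * h) ** 0.5
    return h, e

# Two authors with an identical h-index can differ sharply in e:
h1, e1 = h_and_e_index([10, 8, 5, 4, 3])  # h = 4, core = 27, e^2 = 11
h2, e2 = h_and_e_index([4, 4, 4, 4, 0])   # h = 4, core = 16, e^2 = 0
```

This illustrates the low-resolution problem: both citation records give h = 4, and only the e-index separates them.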

  3. Diagnostic accuracy of the defining characteristics of the excessive fluid volume diagnosis in hemodialysis patients

    Directory of Open Access Journals (Sweden)

    Maria Isabel da Conceição Dias Fernandes

    2015-12-01

Full Text Available Objective: to evaluate the accuracy of the defining characteristics of the excess fluid volume nursing diagnosis of NANDA International in patients undergoing hemodialysis. Method: this was a study of diagnostic accuracy, with a cross-sectional design, performed in two stages. The first, involving 100 patients from a dialysis clinic and a university hospital in northeastern Brazil, investigated the presence and absence of the defining characteristics of excess fluid volume. In the second stage, these characteristics were evaluated by diagnostic nurses, who judged the presence or absence of the diagnosis. To analyze the measures of accuracy, sensitivity, specificity, and positive and negative predictive values were calculated. Approval was given by the Research Ethics Committee under authorization No. 148.428. Results: the most sensitive indicator was edema, and the most specific were pulmonary congestion, adventitious breath sounds and restlessness. Conclusion: the most accurate defining characteristics, considered valid for the diagnostic inference of excess fluid volume in patients undergoing hemodialysis, were edema, pulmonary congestion, adventitious breath sounds and restlessness. Thus, in the presence of these, the nurse may safely assume the presence of the diagnosis studied.
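The four accuracy measures named above all come from a 2×2 table of each defining characteristic against the diagnostic judgment. A minimal sketch, with hypothetical counts rather than the study's data:

```python
def accuracy_measures(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy measures: sensitivity (true
    positive rate), specificity (true negative rate), and positive and
    negative predictive values."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for one indicator (not taken from the study):
# 40 true positives, 10 false positives, 5 false negatives, 45 true negatives.
sens, spec, ppv, npv = accuracy_measures(tp=40, fp=10, fn=5, tn=45)
```

A highly sensitive indicator (like edema here) rarely misses true cases; a highly specific one (like pulmonary congestion) rarely flags non-cases.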

  4. Prevalence of excessive screen time and associated factors in adolescents

    Directory of Open Access Journals (Sweden)

    Joana Marcela Sales de Lucena

    2015-12-01

Full Text Available Objective: To determine the prevalence of excessive screen time and to analyze associated factors among adolescents. Methods: This was a cross-sectional school-based epidemiological study with 2874 high school adolescents aged 14-19 years (57.8% female) from public and private schools in the city of João Pessoa, PB, Northeast Brazil. Excessive screen time was defined as watching television and playing video games or using the computer for more than 2 h/day. The associated factors analyzed were: sociodemographic factors (gender, age, economic class, and skin color), physical activity and nutritional status of adolescents. Results: The prevalence of excessive screen time was 79.5% (95%CI 78.1-81.1) and it was higher in males (84.3%) compared to females (76.1%; p<0.001). In multivariate analysis, male adolescents, those aged 14-15 years and those in the highest economic class had higher odds of exposure to excessive screen time. The level of physical activity and nutritional status of adolescents were not associated with excessive screen time. Conclusions: The prevalence of excessive screen time was high and varied according to the sociodemographic characteristics of adolescents. It is necessary to develop interventions to reduce excessive screen time among adolescents, particularly in subgroups with higher exposure.

  5. Accurate metacognition for visual sensory memory representations.

    Science.gov (United States)

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  6. An accurate nonlinear Monte Carlo collision operator

    International Nuclear Information System (INIS)

    Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.

    1995-03-01

A three-dimensional nonlinear Monte Carlo collision model is developed based on Coulomb binary collisions, with emphasis on both accuracy and implementation efficiency. The operator, of simple form, fulfills the particle number, momentum and energy conservation laws and is equivalent to the exact Fokker-Planck operator by correctly reproducing the friction coefficient and diffusion tensor; in addition, it can effectively ensure small-angle collisions with a binary scattering angle distributed in a limited range near zero. Two highly vectorizable algorithms are designed for its fast implementation. Various test simulations regarding relaxation processes, electrical conductivity, etc. are carried out in velocity space. The test results, which are in good agreement with theory, and timing results on vector computers show that the operator is practically applicable. The operator may be used for accurately simulating collisional transport problems in magnetized and unmagnetized plasmas. (author)

  7. Apparatus for accurately measuring high temperatures

    Science.gov (United States)

    Smith, D.D.

The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of airborne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  8. Accurate Modeling Method for Cu Interconnect

    Science.gov (United States)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15 μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90 nm, 65 nm and 55 nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  9. How accurately can 21cm tomography constrain cosmology?

    Science.gov (United States)

    Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver

    2008-07-01

There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from 6 < z < 20: sensitivity to experimental noise, to uncertainties in the reionization history, and to the level of contamination from astrophysical foregrounds. We derive simple analytic estimates for how various assumptions affect an experiment's sensitivity, and we find that the modeling of reionization is the most important, followed by the array layout. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩ_k ≈ 0.0002 and Δm_ν ≈ 0.007 eV, and give a 4σ detection of the spectral index running predicted by the simplest inflation models.

  10. Funnel metadynamics as accurate binding free-energy method

    Science.gov (United States)

    Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele

    2013-01-01

    A detailed description of the events ruling ligand/protein interaction and an accurate estimation of the drug affinity to its target is of great help in speeding drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that allows the ligand to enhance the sampling of the target binding sites and its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein–ligand binding free energy. We illustrate our protocol in two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation has been found as the lowest free-energy pose, and the computed protein–ligand binding free energy in good agreement with experiments. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results achieved at an affordable computational cost make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and material science. PMID:23553839

  11. AMID: Accurate Magnetic Indoor Localization Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Namkyoung Lee

    2018-05-01

Full Text Available Geomagnetic-based indoor positioning has drawn great attention from academia and industry due to its advantage of being operable without infrastructure support and its reliable signal characteristics. However, it must overcome the problem of ambiguity that originates from the nature of geomagnetic data. Most studies manage this problem by incorporating particle filters along with inertial sensors. However, they cannot yield reliable positioning results because the inertial sensors in smartphones cannot precisely predict the movement of users. There have been attempts to recognize the magnetic sequence pattern, but these attempts have been proven only in a one-dimensional space, because magnetic intensity fluctuates severely with even a slight change of location. This paper proposes accurate magnetic indoor localization using deep learning (AMID), an indoor positioning system that recognizes magnetic sequence patterns using a deep neural network. Features are extracted from magnetic sequences, and then the deep neural network is used for classifying the sequences by patterns that are generated by nearby magnetic landmarks. Locations are estimated by detecting the landmarks. AMID demonstrated the proposed features and deep learning to be an outstanding classifier, revealing the potential of accurate magnetic positioning with smartphone sensors alone. The landmark detection accuracy was over 80% in a two-dimensional environment.

  12. Fast and accurate determination of modularity and its effect size

    International Nuclear Information System (INIS)

    Treviño, Santiago III; Nyberg, Amy; Bassler, Kevin E; Del Genio, Charo I

    2015-01-01

    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős–Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links. (paper)

  13. A Modified Proportional Navigation Guidance for Accurate Target Hitting

    Directory of Open Access Journals (Sweden)

    A. Moharampour

    2010-03-01

First, pure proportional navigation guidance (PPNG) in the 3-dimensional state is explained from a new point of view. The main idea is based on the distinction between the angular rate vector and rotation vector conceptions. The current innovation is based on the selection of line-of-sight (LOS) coordinates. A comparison between the two available choices for the LOS coordinate system is proposed. An improvement is made by adding two additional terms. The first term includes a cross-range compensator which is used to provide and enhance path observability and to obtain convergent estimates of state variables. The second term is a new concept, the lead bias term, which is calculated by assuming an equivalent acceleration along the target's longitudinal axis. Simulation results indicate that the lead bias term properly provides terminal conditions for accurate target interception.

  14. Land application for disposal of excess water: an overview

    International Nuclear Information System (INIS)

    Riley, G.H.

    1992-01-01

Water management is an important factor in the operation of uranium mines in the Alligator Rivers Region, located in the Wet-Dry tropics. For many project designs, especially open cut operations, sole reliance on evaporative disposal of waste water is ill-advised in years where the Wet season is above average. Instead, spray irrigation, or the application of excess water to suitable areas of land, has been practised at both Nabarlek and Ranger. The method depends on water losses by evaporation from spray droplets, from vegetation surfaces and from the ground surface; residual water is carried to the groundwater system by percolation. The solutes are largely transferred to the soils, where heavy metals and metallic radionuclides attach to particles in the soil profile with varying efficiency depending on soil type. Major solutes that can occur in waste water from uranium mines are less successfully immobilised in soil: sulphate is essentially conservative and not bound within the soil profile, and ammonia is affected by soil reactions leading to its decomposition. The retrospective viewpoint of history indicates the application of a technology inadequately researched for local conditions. The consequences at Nabarlek have been the death of trees on one application area and the creation of contaminated groundwater which has moved into the biosphere down gradient and affected the ecology of a local stream. At Ranger, the outcome of land application has been less severe in the short term, but the effective adsorption of radionuclides in surface soils has led to dose estimates which will necessitate restrictions on future public access unless extensive rehabilitation is carried out. 2 refs., 1 tab

  15. Timing of Excessive Weight Gain During Pregnancy Modulates Newborn Anthropometry.

    Science.gov (United States)

    Ruchat, Stephanie-May; Allard, Catherine; Doyon, Myriam; Lacroix, Marilyn; Guillemette, Laetitia; Patenaude, Julie; Battista, Marie-Claude; Ardilouze, Jean-Luc; Perron, Patrice; Bouchard, Luigi; Hivert, Marie-France

    2016-02-01

    Excessive gestational weight gain (GWG) is associated with increased birth weight and neonatal adiposity. However, timing of excessive GWG may have a differential impact on birth outcomes. The objective of this study was to compare the effect of early and mid/late excessive GWG on newborn anthropometry in the context of the Canadian clinical recommendations that are specific for first trimester and for second/third trimesters based on maternal pre-pregnancy BMI. We included 607 glucose-tolerant women in our main analyses, after excluding women who had less than the recommended total GWG. Maternal body weight was measured in early pregnancy, mid-pregnancy, and late pregnancy. Maternal and fetal clinical outcomes were collected, including newborn anthropometry. Women were divided into four groups according to the Canadian guidelines for GWG in the first and in the second/third trimesters: (1) "overall non-excessive" (reference group); (2) "early excessive GWG"; (3) "mid/late excessive GWG"; and (4) "overall excessive GWG." Differences in newborn anthropometry were tested across GWG categories. Women had a mean (±SD) pre-pregnancy BMI of 24.7 ± 5.2 kg/m² and total GWG of 15.3 ± 4.4 kg. Women with mid/late excessive GWG gave birth to heavier babies (gestational age-adjusted birth weight z-score 0.33 ± 0.91) compared with women in the reference group (0.00 ± 0.77, P = 0.007), whereas women with early excessive GWG gave birth to babies of similar weight (gestational age-adjusted z-score 0.01 ± 0.86) to the reference group (0.00 ± 0.77, P = 0.84). When we stratified our analyses and investigated women who gained within the recommendations for total GWG, mid/late excessive GWG specifically was associated with greater newborn size, similar to our main analyses. Excessive GWG in mid/late pregnancy in women who did not gain weight excessively in early pregnancy is associated with increased birth size, even in those who gained within the Canadian recommendations

  16. Thermodynamic properties of binary mixtures of tetrahydropyran with pyridine and isomeric picolines: Excess molar volumes, excess molar enthalpies and excess isentropic compressibilities

    International Nuclear Information System (INIS)

    Saini, Neeti; Jangra, Sunil K.; Yadav, J.S.; Sharma, Dimple; Sharma, V.K.

    2011-01-01

    Research highlights: → Densities, ρ, and speeds of sound, u, of tetrahydropyran (i) + pyridine or α-, β- or γ-picoline (j) binary mixtures at 298.15, 303.15 and 308.15 K, and excess molar enthalpies, H^E, of the same set of mixtures at 308.15 K, have been measured as a function of composition. → The observed density and speed of sound values have been employed to determine excess molar volumes, V^E, and excess isentropic compressibilities, κ_S^E. → The topology of the constituents of the mixtures has been utilized (Graph theory) successfully to predict the V^E, H^E and κ_S^E data of the investigated mixtures. → The thermodynamic data of the various mixtures have also been analyzed in terms of the Prigogine-Flory-Patterson (PFP) theory. - Abstract: Densities, ρ, and speeds of sound, u, of tetrahydropyran (i) + pyridine or α-, β- or γ-picoline (j) binary mixtures at 298.15, 303.15 and 308.15 K, and excess molar enthalpies, H^E, of the same set of mixtures at 308.15 K, have been measured as a function of composition using an Anton Paar vibrating-tube digital density and sound analyzer (model DSA 5000) and a 2-drop micro-calorimeter, respectively. The resulting density and speed of sound data of the investigated mixtures have been utilized to determine excess molar volumes, V^E, and excess isentropic compressibilities, κ_S^E. The observed data have been analyzed in terms of (i) Graph theory and (ii) the Prigogine-Flory-Patterson theory. It has been observed that the V^E, H^E and κ_S^E data predicted by Graph theory compare well with their experimental values.
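    The quantities V^E and κ_S reported here follow from standard definitions applied to measured densities and speeds of sound. A minimal sketch of those textbook relations (the molar masses and densities below are illustrative placeholders, not the paper's data):

```python
def molar_volume(x1, M1, M2, rho_mix):
    """Molar volume (m^3/mol) of a binary mixture from its density
    rho_mix (kg/m^3); M1, M2 are molar masses in kg/mol and x1 is the
    mole fraction of component 1."""
    x2 = 1.0 - x1
    return (x1 * M1 + x2 * M2) / rho_mix

def excess_molar_volume(x1, M1, M2, rho_mix, rho1, rho2):
    """V^E = V_mix - (x1*V1 + x2*V2): deviation from ideal mixing."""
    x2 = 1.0 - x1
    ideal = x1 * M1 / rho1 + x2 * M2 / rho2
    return molar_volume(x1, M1, M2, rho_mix) - ideal

def isentropic_compressibility(rho, u):
    """Newton-Laplace relation: kappa_S = 1/(rho * u^2), in Pa^-1,
    from density rho (kg/m^3) and speed of sound u (m/s)."""
    return 1.0 / (rho * u * u)
```

    Note that the excess compressibility κ_S^E is the difference between the mixture's κ_S and its ideal-mixing value, which strictly requires the Benson-Kiyohara ideal-mixing expression rather than a simple mole-fraction average.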

  17. Accurate thermodynamic characterization of a synthetic coal mine methane mixture

    International Nuclear Information System (INIS)

    Hernández-Gómez, R.; Tuma, D.; Villamañán, M.A.; Mondéjar, M.E.; Chamorro, C.R.

    2014-01-01

    Highlights: • Accurate density data of a 10-component synthetic coal mine methane mixture are presented. • Experimental data are compared with the densities calculated from the GERG-2008 equation of state. • Relative deviations in density were within a 0.2% band at temperatures above 275 K. • Densities at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations. -- Abstract: In the last few years, coal mine methane (CMM) has gained significance as a potential non-conventional gas fuel. The progressive depletion of common fossil fuel reserves and, on the other hand, the positive estimates of CMM resources as a by-product of mining promote this fuel gas as a promising alternative fuel. The increasing importance of its exploitation makes it necessary to check the capability of the present-day models and equations of state for natural gas to predict the thermophysical properties of gases with a considerably different composition, like CMM. In this work, accurate density measurements of a synthetic CMM mixture are reported in the temperature range from (250 to 400) K and pressures up to 15 MPa, as part of the research project EMRP ENG01 of the European Metrology Research Program for the characterization of non-conventional energy gases. Experimental data were compared with the densities calculated with the GERG-2008 equation of state. Relative deviations between experimental and estimated densities were within a 0.2% band at temperatures above 275 K, while data at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations

  18. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: travelers prefer the best-condition route under accurate information, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, which decreases capacity, increases oscillations, and drives the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful to improve efficiency in terms of capacity, oscillation and the gap from system equilibrium.
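    The boundedly rational routing rule described above can be sketched in a few lines (an illustrative rendering of the rule, not the authors' simulation code; the route costs and threshold values are arbitrary):

```python
import random

def choose_route(cost_a, cost_b, br):
    """Boundedly rational route choice: if the cost difference between
    the two routes is within the threshold br, pick either route with
    equal probability; otherwise take the cheaper route."""
    if abs(cost_a - cost_b) <= br:
        return random.choice(("A", "B"))
    return "A" if cost_a < cost_b else "B"
```

    With br = 0 the rule reduces to the fully rational best-route choice that, per the abstract, amplifies oscillations when the feedback is delayed.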

  19. Alterations of the Lipid Metabolome in Dairy Cows Experiencing Excessive Lipolysis Early Postpartum.

    Science.gov (United States)

    Humer, Elke; Khol-Parisini, Annabella; Metzler-Zebeli, Barbara U; Gruber, Leonhard; Zebeli, Qendrim

    2016-01-01

    A decrease in insulin sensitivity enhances adipose tissue lipolysis helping early lactation cows counteracting their energy deficit. However, excessive lipolysis poses serious health risks for cows, and its underlying mechanisms are not clearly understood. The present study used targeted ESI-LC-MS/MS-based metabolomics and indirect insulin sensitivity measurements to evaluate metabolic alterations in the serum of dairy cows of various parities experiencing variable lipolysis early postpartum. Thirty (12 primiparous and 18 multiparous) cows of Holstein Friesian and Simmental breeds, fed the same diet and kept under the same management conditions, were sampled at d 21 postpartum and classified as low (n = 10), medium (n = 8), and high (n = 12) lipolysis groups, based on serum concentration of nonesterified fatty acids. Overall, excessive lipolysis in the high group came along with impaired estimated insulin sensitivity and characteristic shifts in acylcarnitine, sphingomyelin, phosphatidylcholine and lysophospholipid metabolome profiles compared to the low group. From the detected phosphatidylcholines mainly those with diacyl-residues showed differences among lipolysis groups. Furthermore, more than half of the detected sphingomyelins were increased in cows experiencing high lipomobilization. Additionally, strong differences in serum acylcarnitines were noticed among lipolysis groups. The study suggests an altered serum phospholipidome in dairy cows associated with an increase in certain long-chain sphingomyelins and the progression of disturbed insulin function. In conclusion, the present study revealed 37 key metabolites as part of alterations in the synthesis or breakdown of sphingolipids and phospholipids associated with lowered estimated insulin sensitivity and excessive lipolysis in early-lactating cows.

  20. Determination of the excess noise of avalanche photodiodes integrated in 0.35-μm CMOS technologies

    Science.gov (United States)

    Jukić, Tomislav; Brandl, Paul; Zimmermann, Horst

    2018-04-01

    The excess noise of avalanche photodiodes (APDs) integrated in a high-voltage (HV) CMOS process and in a pin-photodiode CMOS process, both with 0.35-μm structure sizes, is described. A precise excess noise measurement technique is applied using a laser source, a spectrum analyzer, a voltage source, a current meter, a cheap transimpedance amplifier, and a personal computer with a MATLAB program. In addition, usage for on-wafer measurements is demonstrated. The measurement technique is verified with a low excess noise APD as a reference device with known ratio k = 0.01 of the impact ionization coefficients. The k-factor of an APD developed in HV CMOS is determined more accurately than known before. In addition, it is shown that the excess noise of the pin-photodiode CMOS APD depends on the optical power for avalanche gains above 35 and that modulation doping can suppress this power dependence. Modulation doping, however, increases the excess noise.
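    The k-factor quoted above enters the standard McIntyre expression for the excess noise factor, F(M) = kM + (1 − k)(2 − 1/M). A quick numerical sketch of that textbook relation (not code from this paper):

```python
def mcintyre_excess_noise(M, k):
    """McIntyre excess noise factor F(M) for mean avalanche gain M and
    impact-ionization coefficient ratio k."""
    return k * M + (1.0 - k) * (2.0 - 1.0 / M)

# For the reference device's k = 0.01, F stays close to 2 even at
# sizable gains, which is what makes it useful for verifying the
# measurement technique.
for gain in (10.0, 35.0, 100.0):
    print(gain, mcintyre_excess_noise(gain, 0.01))
```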

  1. 26 CFR 1.6655-7 - Addition to tax on account of excessive adjustment under section 6425.

    Science.gov (United States)

    2010-04-01

    ..., DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Additions to the Tax, Additional... section 6655(a) for failure to pay estimated income tax, the excessive adjustment under section 6425 is... section 6425. (a) Section 6655(h) imposes an addition to the tax under chapter 1 of the Internal Revenue...

  2. Evaluation of effective dose and excess lifetime cancer risk from ...

    African Journals Online (AJOL)

    Evaluation of effective dose and excess lifetime cancer risk from indoor and outdoor gamma dose rate of university of Port Harcourt Teaching Hospital, Rivers State. ... Therefore, the management of University of Port Harcourt teaching hospital ...

  3. Implications of ³⁶A excess on Venus

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, M [Tokyo Univ. (Japan). Inst. of Space and Aeronautical Science

    1979-05-01

    The finding of ³⁶A excess on Venus by the mass-spectroscopic measurement of the Pioneer Venus probe appears to endorse the theory that Venus accreted more rapidly than the Earth, and the secondary origin of the terrestrial atmosphere.

  4. Gene Linked to Excess Male Hormones in Female Infertility Disorder

    Science.gov (United States)

    ... April 15, 2014 Gene linked to excess male hormones in female infertility disorder Discovery by NIH-supported ... may lead to the overproduction of androgens — male hormones similar to testosterone — occurring in women with polycystic ...

  5. Excess molar volumes and isentropic compressibilities of binary ...

    Indian Academy of Sciences (India)

    Excess molar volume; binary liquid mixtures; isentropic compressibility; intermolecular interactions. ... mixtures are essential for fluid flow, mass flow and heat transfer processes in chemical ... Experimentally determined values of density (ρ).

  6. Characteristics of adolescent excessive drinkers compared with consumers and abstainers

    NARCIS (Netherlands)

    Tomcikova, Zuzana; Geckova, Andrea Madarasova; van Dijk, Jitse P.; Reijneveld, Sijmen A.

    Introduction and Aims. This study aimed at comparing adolescent abstainers, consumers and excessive drinkers in terms of family characteristics (structure of family, socioeconomic factors), perceived social support, personality characteristics (extraversion, self-esteem, aggression) and well-being.

  7. Industrial excess heat for district heating in Denmark

    DEFF Research Database (Denmark)

    Bühler, Fabian; Petrovic, Stefan; Karlsson, Kenneth Bernard

    2017-01-01

    analyses excess heat sources from the industrial sector and how they could be used for district heating. This method first allocates excess heat to single production units by introducing and validating a new approach. Spatial analysis of the heat sources and consumers are then performed to evaluate...... the potential for using them for district heating. In this way the theoretical potential of using the excess heat for covering the heating demand of buildings is determined. Through the use of industry specific temperature profiles the heat usable directly or via heat pumps is further found. A sensitivity...... analysis investigates the impact of future energy efficiency measures in the industry, buildings and the district heating grid on the national potential. The results show that for the case study of Denmark, 1.36 TWh of district heat could be provided annually with industrial excess heat from thermal...

  8. Targets to treat androgen excess in polycystic ovary syndrome.

    Science.gov (United States)

    Luque-Ramírez, Manuel; Escobar-Morreale, Héctor Francisco

    2015-01-01

    The polycystic ovary syndrome (PCOS) is a common androgen disorder in reproductive-aged women. Excessive biosynthesis and secretion of androgens by steroidogenic tissues is its central pathogenetic mechanism. The authors review the potential targets and new drugs to treat androgen excess in PCOS. Besides our lab's experience, a systematic search (MEDLINE, Cochrane library, ClinicalTrials.gov, EU Clinical Trials Register and hand-searching) regarding observational studies, randomized clinical trials, systematic reviews, meta-analyses and patents about this topic was performed. PCOS has a heterogeneous clinical presentation. It is unlikely that a single drug would cover all its possible manifestations. Available treatments for androgen excess are not free of side effects that are of particular concern in these women who suffer from cardiometabolic risk even without treatment. A precise characterization of the source of androgen excess must tailor antiandrogenic management in each woman, avoiding undesirable side effects.

  9. Modelling of excess noise attnuation by grass and forest | Onuu ...

    African Journals Online (AJOL)

    , guinea grass (panicum maximum) and forest which comprises iroko (milicia ezcelea) and white afara (terminalia superba) trees in the ratio of 2:1 approximately. Excess noise attenuation spectra have been plotted for the grass and forest for ...

  10. Iodine deficiency and iodine excess in Jiangsu Province, China

    NARCIS (Netherlands)

    Zhao, J.

    2001-01-01

    Keywords:
    iodine deficiency, iodine excess, endemic goiter, drinking water, iodine intake, thyroid function, thyroid size, iodized salt, iodized oil, IQ, physical development, hearing capacity, epidemiology, meta-analysis, IDD, randomized trial, intervention, USA, Bangladesh,

  11. Effect of temporal dependence and seasonality on return level estimates of excessive rainfall

    CSIR Research Space (South Africa)

    Khuluse, S

    2009-08-01

    Full Text Available insight into the mechanism of risk events is advancement towards gaining this understanding. Methodologies provided by Extreme Value Theory (EVT) can make a contribution in understanding the mechanistic behaviour of extreme events. In this paper...


  13. Excess electrons in methanol clusters: Beyond the one-electron picture

    Science.gov (United States)

    Pohl, Gábor; Mones, Letif; Turi, László

    2016-10-01

    We performed a series of comparative quantum chemical calculations on various size negatively charged methanol clusters, (CH₃OH)ₙ⁻. The clusters are examined in their optimized geometries (n = 2-4), and in geometries taken from mixed quantum-classical molecular dynamics simulations at finite temperature (n = 2-128). These latter structures model potential electron binding sites in methanol clusters and in bulk methanol. In particular, we compute the vertical detachment energy (VDE) of an excess electron from increasing size methanol cluster anions using quantum chemical computations at various levels of theory including a one-electron pseudopotential model, several density functional theory (DFT) based methods, MP2 and coupled-cluster CCSD(T) calculations. The results suggest that at least four methanol molecules are needed to bind an excess electron on a hydrogen bonded methanol chain in a dipole-bound state. Larger methanol clusters are able to form stronger interactions with an excess electron. The two simulated excess electron binding motifs in methanol clusters, interior and surface states, correlate well with distinct, experimentally found VDE tendencies with size. Interior states in a solvent cavity are stabilized significantly stronger than electron states on cluster surfaces. Although we find that all the examined quantum chemistry methods more or less overestimate the strength of the experimental excess electron stabilization, MP2, LC-BLYP, and BHandHLYP methods with diffuse basis sets provide a significantly better estimate of the VDE than traditional DFT methods (BLYP, B3LYP, X3LYP, PBE0). A comparison to the better performing many electron methods indicates that the examined one-electron pseudopotential can be reasonably used in simulations for systems of larger size.

  14. Excess Volumes and Excess Isentropic Compressibilities of Binary Liquid Mixtures of Trichloroethylene with Esters at 303.15 K

    Science.gov (United States)

    Ramanaiah, S.; Rao, C. Narasimha; Nagaraja, P.; Venkateswarlu, P.

    2015-11-01

    Excess volumes, V^E, and excess isentropic compressibilities, κ_S^E, have been reported as a function of composition for binary liquid mixtures of trichloroethylene with ethyl acetate, n-propyl acetate, and n-butyl acetate at 303.15 K. Isentropic compressibilities are calculated using measured sound speeds and density data for pure components and for binary mixtures. Excess volumes and excess isentropic compressibilities are found to be negative for the three systems studied over the entire composition range at 303.15 K, whereas these values become more negative with an increase of carbon chain length. The results are discussed in terms of intermolecular interactions between unlike molecules.

  15. State dependent pseudo-resonances and excess noise

    OpenAIRE

    Papoff, F.; D'Alessandro, G.; Oppo, G.Luca

    2008-01-01

    We show that strong response to nonresonant modulations and excess noise are state dependent in generic nonlinear systems; i.e., they affect some output states but are absent from others. This is demonstrated in complex Swift-Hohenberg models relevant to optics, where it is caused by the non-normality of the linearized stability operators around selected output states, even though the cavity modes are orthogonal. In particular, we find the effective parameters that control excess noise and th...

  16. Variables of excessive computer internet use in childhood and adolescence

    OpenAIRE

    Thalemann, Ralf

    2010-01-01

    The aim of this doctoral thesis is the characterization of excessive computer and video gaming in terms of a behavioral addiction. Therefore, the development of a diagnostic psychometric instrument was central to differentiate between normal and pathological computer gaming in adolescence. In study 1, 323 children were asked about their video game playing behavior to assess the prevalence of pathological computer gaming. Data suggest that excessive computer and video game players use thei...

  17. Excessive computer game playing : evidence for addiction and aggression?

    OpenAIRE

    Grüsser, SM; Thalemann, R; Griffiths, MD

    2007-01-01

    Computer games have become an ever-increasing part of many adolescents’ day-to-day lives. Coupled with this phenomenon, reports of excessive gaming (computer game playing) denominated as “computer/video game addiction” have been discussed in the popular press as well as in recent scientific research. The aim of the present study was the investigation of the addictive potential of gaming as well as the relationship between excessive gaming and aggressive attitudes and behavior. A sample compri...

  18. A Portable Burn Pan for the Disposal of Excess Propellants

    Science.gov (United States)

    2016-06-01

    2013 - 06/01/2016. A Portable Burn Pan for the Disposal of Excess Propellants. Michael Walsh, USA CRREL, 72 Lyme Road, Hanover, NH 03755. ... Army Alaska; XRF: X-Ray Fluorescence. ACKNOWLEDGEMENTS: Project ER-201323, A Portable Burn Pan for the Disposal of Gun Propellants, was a very ... contamination problem while allowing troops to train as they fight, we have developed a portable training device for burning excess gun propellants.

  19. Asymmetric Dark Matter Models and the LHC Diphoton Excess

    DEFF Research Database (Denmark)

    Frandsen, Mads T.; Shoemaker, Ian M.

    2016-01-01

    The existence of dark matter (DM) and the origin of the baryon asymmetry are persistent indications that the SM is incomplete. More recently, the ATLAS and CMS experiments have observed an excess of diphoton events with invariant mass of about 750 GeV. One interpretation of this excess is decays...... have for models of asymmetric DM that attempt to account for the similarity of the dark and visible matter abundances....

  20. Limiting excessive postoperative blood transfusion after cardiac procedures. A review.

    OpenAIRE

    Ferraris, V A; Ferraris, S P

    1995-01-01

    Analysis of blood product use after cardiac operations reveals that a few patients ( 80%). The risk factors that predispose a minority of patients to excessive blood use include patient-related factors, transfusion practices, drug-related causes, and procedure-related factors. Multivariate studies suggest that patient age and red blood cell volume are independent patient-related variables that predict excessive blood product transfusion aft...

  1. Accurate measurements of neutron activation cross sections

    International Nuclear Information System (INIS)

    Semkova, V.

    1999-01-01

    The applications of some recent achievements of the neutron activation method on high-intensity neutron sources are considered from the viewpoint of the associated errors of cross-section data for neutron-induced reactions. The important corrections in γ-spectrometry ensuring precise determination of the induced radioactivity, methods for accurate determination of the energy and flux density of neutrons produced by different sources, and investigations of the deuterium beam composition are considered as factors determining the precision of the experimental data. The influence of the ion beam composition on the mean energy of neutrons has been investigated by measurement of the energy of neutrons induced by different magnetically analysed deuterium ion groups. The Zr/Nb method for experimental determination of the neutron energy in the 13-15 MeV energy range allows the energy of neutrons from the D-T reaction to be measured with an uncertainty of 50 keV. Flux density spectra from D(d,n) at E d = 9.53 MeV and Be(d,n) at E d = 9.72 MeV are measured by PHRS and the foil activation method. Future applications of the activation method on NG-12 are discussed. (author)

  2. Implicit time accurate simulation of unsteady flow

    Science.gov (United States)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.

  3. Spectrally accurate initial data in numerical relativity

    Science.gov (United States)

    Battista, Nicholas A.

    Einstein's theory of general relativity has radically altered the way in which we perceive the universe. His breakthrough was to realize that the fabric of space is deformable in the presence of mass, and that space and time are linked into a continuum. Much evidence has been gathered in support of general relativity over the decades. Some of the indirect evidence for GR includes the phenomenon of gravitational lensing, the anomalous perihelion of mercury, and the gravitational redshift. One of the most striking predictions of GR, that has not yet been confirmed, is the existence of gravitational waves. The primary source of gravitational waves in the universe is thought to be produced during the merger of binary black hole systems, or by binary neutron stars. The starting point for computer simulations of black hole mergers requires highly accurate initial data for the space-time metric and for the curvature. The equations describing the initial space-time around the black hole(s) are non-linear, elliptic partial differential equations (PDE). We will discuss how to use a pseudo-spectral (collocation) method to calculate the initial puncture data corresponding to single black hole and binary black hole systems.

  4. A stiffly accurate integrator for elastodynamic problems

    KAUST Repository

    Michels, Dominik L.

    2017-07-21

    We present a new integration algorithm for the accurate and efficient solution of stiff elastodynamic problems governed by the second-order ordinary differential equations of structural mechanics. Current methods have the shortcoming that their performance is highly dependent on the numerical stiffness of the underlying system that often leads to unrealistic behavior or a significant loss of efficiency. To overcome these limitations, we present a new integration method which is based on a mathematical reformulation of the underlying differential equations, an exponential treatment of the full nonlinear forcing operator as opposed to more standard partially implicit or exponential approaches, and the utilization of the concept of stiff accuracy which ensures that the efficiency of the simulations is significantly less sensitive to increased stiffness. As a consequence, we are able to tremendously accelerate the simulation of stiff systems compared to established integrators and significantly increase the overall accuracy. The advantageous behavior of this approach is demonstrated on a broad spectrum of complex examples like deformable bodies, textiles, bristles, and human hair. Our easily parallelizable integrator enables more complex and realistic models to be explored in visual computing without compromising efficiency.

  5. Geodetic analysis of disputed accurate qibla direction

    Science.gov (United States)

    Saksono, Tono; Fulazzaky, Mohamad Ali; Sari, Zamah

    2018-04-01

    Ensuring that Muslims perform the prayers facing the correct qibla direction is one of the practical issues in linking theoretical studies with practice. The concept of facing towards the Kaaba in Mecca during the prayers has long been a source of controversy among Muslim communities, not only in poor and developing countries but also in developed countries. The aim of this study was to analyse the geodetic azimuths of the qibla calculated using three different models of the Earth. The use of an ellipsoidal model of the Earth could be the best method for determining the accurate direction of the Kaaba from anywhere on the Earth's surface. A Muslim cannot direct himself towards the qibla correctly if he cannot see the Kaaba; moreover, the setting-out process and certain motions during the prayer can significantly shift the qibla direction from the actual position of the Kaaba. The requirement that Muslims pray facing towards the Kaaba is more a spiritual prerequisite than a matter of physical evidence.

  6. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
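    The probabilistic resolution of multiple candidate holdup configurations via Bayes' Theorem can be sketched over a discrete hypothesis set (a generic illustration of the principle, not the project's actual solver; the priors and likelihoods below are made up):

```python
def posterior(priors, likelihoods):
    """Bayes' Theorem over candidate source configurations:
    P(config | data) is proportional to P(data | config) * P(config)."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    evidence = sum(unnormalized)
    return [u / evidence for u in unnormalized]

# Two a priori equally plausible holdup configurations; the measured
# radiation field fits the first one nine times better:
probs = posterior([0.5, 0.5], [0.9, 0.1])
```

    The normalizing "evidence" term is what lets the method rate each configuration's plausibility rather than forcing a single answer to the under-determined inverse problem.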

  7. Generalized estimating equations

    CERN Document Server

    Hardin, James W

    2002-01-01

    Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields.Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th
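The flavour of a generalized estimating equation can be shown in the simplest possible setting: a common mean estimated from clustered data under an exchangeable working correlation (identity link). This is a toy sketch with invented data, not the full GEE algorithm the book develops.

```python
import itertools

def gee_mean(clusters, iters=20):
    """GEE estimate of a common mean with an exchangeable working
    correlation (identity link, mean-only model).

    For exchangeable correlation rho, the estimating equation
    sum_i 1' V_i^{-1} (y_i - mu) = 0 has the closed-form solution
    mu = sum_i w_i * ybar_i / sum_i w_i, with w_i = n_i / (1 - rho + n_i*rho)."""
    n_total = sum(len(c) for c in clusters)
    mu = sum(y for c in clusters for y in c) / n_total
    rho = 0.0
    for _ in range(iters):
        resid = [[y - mu for y in c] for c in clusters]
        phi = sum(r * r for rs in resid for r in rs) / n_total
        # moment estimator of rho from within-cluster residual pairs
        num, cnt = 0.0, 0
        for rs in resid:
            for a, b in itertools.combinations(rs, 2):
                num += a * b
                cnt += 1
        rho = max(min(num / (cnt * phi), 0.99), 0.0) if cnt else 0.0
        w = [len(c) / (1 - rho + len(c) * rho) for c in clusters]
        mu = sum(wi * sum(c) / len(c) for wi, c in zip(w, clusters)) / sum(w)
    return mu, rho

clusters = [[1.0, 1.2, 1.1], [2.0, 2.1], [0.9, 1.0, 1.1, 1.0]]
mu_hat, rho_hat = gee_mean(clusters)
```

With rho = 0 the weights reduce to cluster sizes and the estimate collapses to the ordinary pooled mean; positive within-cluster correlation down-weights large clusters, which is exactly the adjustment GEE makes for correlated observations.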

  8. Excess pore water pressure induced in the foundation of a tailings dyke at Muskeg River Mine, Fort McMurray

    Energy Technology Data Exchange (ETDEWEB)

    Eshraghian, A.; Martens, S. [Klohn Crippen Berger Ltd., Calgary, AB (Canada)

    2010-07-01

    This paper discussed the effect of staged construction on the generation and dissipation of excess pore water pressure within the foundation clayey units of the External Tailings Facility dyke. Data were compiled from piezometers installed within the dyke foundation and used to estimate the dissipation parameters for the clayey units for a selected area of the foundation. Spatial and temporal variations in the pore water pressure generation parameters were explained. Understanding the process by which excess pore water pressure is generated and dissipates is critical to optimizing dyke design and performance. Piezometric data were shown to be useful in improving estimates of the construction-induced pore water pressure and dissipation rates within the clay layers in the foundation during dyke construction. In staged construction, a controlled rate of load application is used to increase foundation stability. Because excess pore water pressure dissipates between load applications, the most critical stability condition occurs immediately after each load is placed. Slow loading allows this pressure to dissipate, whereas fast loading leaves the pressure induced by previous loads in place. The dyke design must account for the rate of loading and the rate of pore pressure dissipation. Controlling the rate of loading and the rate of stress-induced excess pore water pressure generation is important to dyke stability during construction. Effective stress-strength parameters for the foundation require predictions of the pore water pressure induced during staged construction. It was found that both direct and indirect loading generates excess pore water pressure in the foundation clays. 2 refs., 2 tabs., 11 figs.
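The dissipation behind staged construction can be illustrated with classical Terzaghi one-dimensional consolidation theory (a standard simplification; the paper itself works from piezometer data). The soil parameters below are invented for illustration.

```python
import math

def degree_of_consolidation(Tv, terms=100):
    """Average degree of consolidation U for dimensionless time factor Tv,
    from the classical Terzaghi 1-D series solution:
    U = 1 - sum_m (2/M^2) * exp(-M^2 * Tv), with M = (pi/2)(2m+1)."""
    s = 0.0
    for m in range(terms):
        M = math.pi / 2 * (2 * m + 1)
        s += (2 / M ** 2) * math.exp(-M ** 2 * Tv)
    return 1 - s

# Hypothetical numbers: cv = 2 m^2/yr, doubly drained clay layer 8 m thick
cv, H = 2.0, 4.0            # drainage path = half the layer thickness (m)
t = 1.0                     # years after a load stage
Tv = cv * t / H ** 2        # time factor = 0.125
U = degree_of_consolidation(Tv)   # fraction of excess pore pressure dissipated
```

Comparing U at successive times shows why loading rate matters: a stage placed when U is only about 0.4 leaves roughly 60% of the previous excess pore pressure still acting on the foundation.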

  9. Quantifying recycled moisture fraction in precipitation of an arid region using deuterium excess

    Directory of Open Access Journals (Sweden)

    Yanlong Kong

    2013-01-01

    Full Text Available Terrestrial moisture recycling by evapotranspiration has recently been recognised as an important source of precipitation that can be characterised by its isotopic composition. Up to now, this isotope technique has mainly been applied to moisture recycling in some humid regions, including Brazil, Great Lakes in North America and the European Alps. In arid and semi-arid regions, the contribution of transpiration by plants to local moisture recycling can be small, so that evaporation by bare soil and surface water bodies dominates. Recognising that the deuterium excess (d-excess of evaporated moisture is significantly different from that of the original water, we made an attempt to use this isotopic parameter for estimating moisture recycling in the semi-arid region of Eastern Tianshan, China. We measured the d-excess of samples taken from individual precipitation events during a hydrological year from 2003 to 2004 at two Tianshan mountain stations, and we used long-term monthly average values of the d-excess for the station Urumqi, which are available from the International Atomic Energy Agency–World Meteorological Organization (IAEA–WMO Global Network of Isotopes in Precipitation (GNIP. Since apart from recycling of moisture from the ground, sub-cloud evaporation of falling raindrops also affects the d-excess of precipitation, the measured values had to be corrected for this evaporation effect. For the selected stations, the sub-cloud evaporation was found to change between 0.1 and 3.8%, and the d-excess decreased linearly with increasing sub-cloud evaporation at about 1.1‰ per 1% change of sub-cloud evaporation. Assuming simple mixing between advected and recycled moisture, the recycled fraction in precipitation has been estimated to be less than 2.0±0.6% for the Tianshan mountain stations and reach values up to 15.0±0.7% in the Urumqi region. The article includes a discussion of these findings in the context of water cycling in the
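The two steps of the estimate above, correcting for sub-cloud evaporation and then solving a two-component mixing equation for d-excess, fit in a few lines. The end-member values below are illustrative, not the paper's; the 1.1 permil per 1% slope is the one reported in the abstract.

```python
def recycled_fraction(d_precip, d_advected, d_recycled,
                      subcloud_evap_pct=0.0, slope=1.1):
    """Two-component mixing on deuterium excess: first undo the sub-cloud
    evaporation effect (about 1.1 permil d-excess decrease per 1% of the
    raindrop evaporated), then solve
        d_corrected = f * d_recycled + (1 - f) * d_advected
    for the recycled moisture fraction f."""
    d_corr = d_precip + slope * subcloud_evap_pct  # restore pre-evaporation d-excess
    return (d_corr - d_advected) / (d_recycled - d_advected)

# Illustrative end-members: advected moisture d = 10 permil, evaporated soil
# moisture d = 30 permil; measured precipitation d = 11 permil after 1%
# sub-cloud evaporation.
f = recycled_fraction(11.0, 10.0, 30.0, subcloud_evap_pct=1.0)
# f = 0.105, i.e. about 10% recycled moisture in this made-up case
```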

  10. Lifetime excess cancer risk due to carcinogens in food and beverages: Urban versus rural differences in Canada.

    Science.gov (United States)

    Cheasley, Roslyn; Keller, C Peter; Setton, Eleanor

    2017-09-14

    To explore differences in urban versus rural lifetime excess risk of cancer from five specific contaminants found in food and beverages. Probable contaminant intake is estimated using Monte Carlo simulations of contaminant concentrations in combination with dietary patterns. Contaminant concentrations for arsenic, benzene, lead, polychlorinated biphenyls (PCBs) and tetrachloroethylene (PERC) were derived from government dietary studies. The dietary patterns of 34 944 Canadians from 10 provinces were available from Health Canada's Canadian Community Health Survey, Cycle 2.2, Nutrition (2004). Associated lifetime excess cancer risk (LECR) was subsequently calculated from the results of the simulations. In the calculation of LECR from food and beverages for the five selected substances, two (lead and PERC) were shown to have excess risk below 10 per million; for the remaining three (arsenic, benzene and PCBs), at least 50% of the population was shown to exceed 10 excess cancers per million. Arsenic residues, ingested via rice and rice cereal, registered the greatest disparity between urban and rural intake, with LECR levels well above 1000 per million at the upper bound. The majority of PCB ingestion comes from meat, with values slightly higher for urban populations and LECR estimates between 50 and 400 per million. Drinking water is the primary contributor of benzene intake in both urban and rural populations, with LECR estimates of 35 extra cancers per million in the top 1% of the sampled population. Overall, there are few disparities between urban and rural lifetime excess cancer risk from contaminants found in food and beverages. Estimates could be improved with more complete Canadian dietary intake and concentration data in support of detailed exposure assessments in estimating LECR.
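A Monte Carlo intake simulation of the kind described can be sketched as follows; the concentration samples, intake rate and slope factor are invented placeholders, not values from the Canadian surveys.

```python
import random

def lecr_per_million(conc_samples, intake_kg_day, slope_factor,
                     body_weight=70.0, n=100000, seed=1):
    """Monte Carlo sketch of lifetime excess cancer risk: draw a contaminant
    concentration (mg per kg of food), combine with daily intake and a
    cancer slope factor (per mg/kg body weight/day), and report the median
    and 95th percentile of the resulting risk distribution."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n):
        c = rng.choice(conc_samples)               # mg contaminant / kg food
        dose = c * intake_kg_day / body_weight     # mg / kg body weight / day
        risks.append(dose * slope_factor * 1e6)    # excess cancers per million
    risks.sort()
    return risks[n // 2], risks[int(n * 0.95)]     # median, 95th percentile

# Hypothetical inputs: three measured concentrations, 100 g/day intake
median, p95 = lecr_per_million([0.01, 0.02, 0.05], 0.1, 1.5)
```

In a real assessment the concentration draws would come from the full measured distribution per food item and the intakes from the survey's individual dietary records.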

  11. Accurate Recovery of H i Velocity Dispersion from Radio Interferometers

    Energy Technology Data Exchange (ETDEWEB)

    Ianjamasimanana, R. [Max-Planck Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg (Germany); Blok, W. J. G. de [Netherlands Institute for Radio Astronomy (ASTRON), Postbus 2, 7990 AA Dwingeloo (Netherlands); Heald, George H., E-mail: roger@mpia.de, E-mail: blok@astron.nl, E-mail: George.Heald@csiro.au [Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV, Groningen (Netherlands)

    2017-05-01

    Gas velocity dispersion measures the amount of disordered motion of a rotating disk. Accurate estimates of this parameter are of the utmost importance because the parameter is directly linked to disk stability and star formation. A global measure of the gas velocity dispersion can be inferred from the width of the atomic hydrogen (H i) 21 cm line. We explore how several systematic effects involved in the production of H i cubes affect the estimate of H i velocity dispersion. We do so by comparing the H i velocity dispersion derived from different types of data cubes provided by The H i Nearby Galaxy Survey. We find that residual-scaled cubes best recover the H i velocity dispersion, independent of the weighting scheme used and for a large range of signal-to-noise ratio. For H i observations, where the dirty beam is substantially different from a Gaussian, the velocity dispersion values are overestimated unless the cubes are cleaned close to (e.g., ∼1.5 times) the noise level.
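A common way to measure the dispersion from a spectrum is the intensity-weighted second moment of the line profile. The sketch below (synthetic Gaussian line, invented numbers) also reproduces the bias the authors warn about: un-removed positive flux, e.g. from a poorly cleaned dirty beam, inflates the recovered dispersion.

```python
import math

def moment2_dispersion(velocities, intensities):
    """Intensity-weighted second moment of a line profile, a standard
    estimator of velocity dispersion from an H I spectrum (sketch only;
    real pipelines mask noise and blank low-significance channels first)."""
    tot = sum(intensities)
    mean = sum(v * i for v, i in zip(velocities, intensities)) / tot
    var = sum(i * (v - mean) ** 2 for v, i in zip(velocities, intensities)) / tot
    return math.sqrt(var)

# synthetic Gaussian line: sigma = 8 km/s, channel width 2 km/s
vel = list(range(-100, 101, 2))
prof = [math.exp(-0.5 * (v / 8.0) ** 2) for v in vel]
sigma = moment2_dispersion(vel, prof)

# an uncorrected positive baseline across all channels biases the
# second moment high, as the abstract warns for under-cleaned cubes
prof_offset = [p + 0.02 for p in prof]
sigma_biased = moment2_dispersion(vel, prof_offset)
```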

  12. Excess Molar Volumes and Excess Molar Enthalpies in Binary Systems N-alkyl-triethylammonium bis(trifluoromethylsulfonyl)imide + Methanol

    Czech Academy of Sciences Publication Activity Database

    Machanová, Karolina; Troncoso, J.; Jacquemin, J.; Bendová, Magdalena

    2014-01-01

    Roč. 363, FEB 15 (2014), s. 156-166 ISSN 0378-3812 Institutional support: RVO:67985858 Keywords : ionic liquids * excess properties * binary mixtures Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.200, year: 2014
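For context, the excess molar volume reported in studies like this one is obtained from measured densities as V_E = V_mix - (x1*V1 + x2*V2); the sketch below uses invented density values, not the paper's data.

```python
def excess_molar_volume(x1, M1, M2, rho_mix, rho1, rho2):
    """Excess molar volume of a binary mixture from measured densities:
    V_E = V_mix - (x1*V1 + x2*V2).
    Units: molar masses M in g/mol, densities rho in g/cm^3,
    result in cm^3/mol."""
    x2 = 1.0 - x1
    v_mix = (x1 * M1 + x2 * M2) / rho_mix            # molar volume of mixture
    return v_mix - (x1 * M1 / rho1 + x2 * M2 / rho2)  # minus ideal-mixing volume

# Equimolar mixture of a hypothetical ionic liquid (M = 480 g/mol,
# rho = 1.40 g/cm^3) with methanol (M = 32.04 g/mol, rho = 0.79 g/cm^3),
# assuming a made-up mixture density of 1.20 g/cm^3.
ve = excess_molar_volume(0.5, 480.0, 32.04, 1.20, 1.40, 0.79)
```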

  13. Dietary patterns are associated with excess weight and abdominal obesity in a cohort of young Brazilian adults.

    Science.gov (United States)

    Machado Arruda, Soraia Pinheiro; da Silva, Antônio Augusto Moura; Kac, Gilberto; Vilela, Ana Amélia Freitas; Goldani, Marcelo; Bettiol, Heloisa; Barbieri, Marco Antônio

    2016-09-01

    The objective of the present study was to investigate whether dietary patterns are associated with excess weight and abdominal obesity among young adults (23-25 years). A cross-sectional study was conducted on 2061 participants of a birth cohort from Ribeirão Preto, Brazil, started in 1978-1979. Twenty-seven subjects with caloric intake outside ±3 standard deviation range were excluded, leaving 2034 individuals. Excess weight was defined as body mass index (BMI ≥ 25 kg/m(2)), abdominal obesity as waist circumference (WC > 80 cm for women; >90 cm for men) and waist/hip ratio (WHR > 0.85 for women; >0.90 for men). Poisson regression with robust variance adjustment was used to estimate the prevalence ratio (PR) adjusted for socio-demographic and lifestyle variables. Four dietary patterns were identified by principal component analysis: healthy, traditional Brazilian, bar and energy dense. In the adjusted analysis, the bar pattern was associated with a higher prevalence of excess weight (PR 1.46; 95 % CI 1.23-1.73) and abdominal obesity based on WHR (PR 2.19; 95 % CI 1.59-3.01). The energy-dense pattern was associated with a lower prevalence of excess weight (PR 0.73; 95 % CI 0.61-0.88). Men with greater adherence to the traditional Brazilian pattern showed a lower prevalence of excess weight (PR 0.65; 95 % CI 0.51-0.82), but no association was found for women. There was no association between the healthy pattern and excess weight/abdominal obesity. In this sample, the bar pattern was associated with higher prevalences of excess weight and abdominal obesity, while the energy-dense (for both genders) and traditional Brazilian (only for men) patterns were associated with lower prevalences of excess weight.
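The prevalence ratio (PR) used throughout the study can be illustrated in its crude, unadjusted form on a 2x2 table (the study itself uses Poisson regression with robust variance to adjust for covariates); the counts below are invented.

```python
import math

def prevalence_ratio(cases_hi, n_hi, cases_lo, n_lo):
    """Crude prevalence ratio with a 95% Wald confidence interval on the
    log scale -- the unadjusted analogue of the adjusted PRs reported
    above. cases_hi/n_hi: outcome counts in the higher-adherence group;
    cases_lo/n_lo: in the reference group."""
    p1, p0 = cases_hi / n_hi, cases_lo / n_lo
    pr = p1 / p0
    se = math.sqrt((1 - p1) / cases_hi + (1 - p0) / cases_lo)
    lo, hi = pr * math.exp(-1.96 * se), pr * math.exp(1.96 * se)
    return pr, (lo, hi)

# e.g. 150/500 with excess weight in the top tertile of a pattern score
# versus 100/500 in the bottom tertile (made-up counts)
pr, ci = prevalence_ratio(150, 500, 100, 500)
```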

  14. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  15. How flatbed scanners upset accurate film dosimetry

    Science.gov (United States)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 to 9 Gy, i.e. an optical density range between 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of LSE, therefore, determination of the LSE per color channel and dose delivered to the film.
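A lateral-scan-effect correction of the kind the authors call for can be sketched as an invertible per-position model. The quadratic form and the coefficients here are hypothetical; in practice the correction is calibrated per colour channel from films of known dose scanned at known lateral positions.

```python
def correct_lateral(od_measured, x_mm, coeffs):
    """Undo a lateral scan effect modelled (hypothetically) as
    OD_meas = OD_true * (1 + a*x^2) + b*x^2, with x the lateral
    distance (mm) from the scanner's central axis."""
    a, b = coeffs
    return (od_measured - b * x_mm ** 2) / (1 + a * x_mm ** 2)

# round trip with invented calibration coefficients for one colour channel
a, b = 1e-5, 1e-6
od_true, x = 0.50, 100.0
od_meas = od_true * (1 + a * x ** 2) + b * x ** 2   # simulate the LSE
od_back = correct_lateral(od_meas, x, (a, b))       # recovers od_true
```

Because the abstract finds the effect is dose-dependent and differs between the red, green and blue channels, a real correction would carry a separate coefficient set per channel and interpolate over dose.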

  16. How flatbed scanners upset accurate film dosimetry

    International Nuclear Information System (INIS)

    Van Battum, L J; Verdaasdonk, R M; Heukelom, S; Huizenga, H

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2–2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 to 9 Gy, i.e. an optical density range between 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red–green–blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of LSE, therefore, determination of the LSE per color channel and dose delivered to the film. (paper)

  17. How Dusty Is Alpha Centauri? Excess or Non-excess over the Infrared Photospheres of Main-sequence Stars

    Science.gov (United States)

    Wiegert, J.; Liseau, R.; Thebault, P.; Olofsson, G.; Mora, A.; Bryden, G.; Marshall, J. P.; Eiroa, C.; Montesinos, B.; Ardila, D.; hide

    2014-01-01

    Context. Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby, solar-type binary α Centauri have metallicities that are higher than solar, which is thought to promote giant planet formation. Aims. We aim to determine the level of emission from debris around the stars in the α Cen system. This requires knowledge of their photospheres. Having already detected the temperature minimum, Tmin, of α Cen A at far-infrared wavelengths, we here attempt to do the same for the more active companion α Cen B. Using the α Cen stars as templates, we study the possible effects that Tmin may have on the detectability of unresolved dust discs around other stars. Methods. We used Herschel-PACS, Herschel-SPIRE, and APEX-LABOCA photometry to determine the stellar spectral energy distributions in the far infrared and submillimetre. In addition, we used APEX-SHeFI observations for spectral line mapping to study the complex background around α Cen seen in the photometric images. Models of stellar atmospheres and of particulate discs, based on particle simulations and in conjunction with radiative transfer calculations, were used to estimate the amount of debris around these stars. Results. For solar-type stars more distant than α Cen, a fractional dust luminosity f_d = L_dust/L_star ≈ 2 × 10⁻⁷ could account for SEDs that do not exhibit the Tmin effect. This is comparable to estimates of f_d for the Edgeworth-Kuiper belt of the solar system. In contrast to the far infrared, slight excesses at the 2.5σ level are observed at 24 μm for both α Cen A and B, which, if interpreted as due to zodiacal-type dust emission, would correspond to f_d ≈ (1-3) × 10⁻⁵, i.e. some 10² times that of the local zodiacal cloud. Assuming simple power-law size distributions of the dust grains, dynamical disc modelling leads to rough mass estimates of the putative Zodi belts around the α Cen stars, viz. ∼4 × 10⁻⁶ M⊕ of 4 to 1000 μm size grains, distributed according to n(a) ∝ a^(-3.5). Similarly, for filled-in Tmin

  18. EXCESSIVE INTERNET USE AND PSYCHOPATHOLOGY: THE ROLE OF COPING

    Directory of Open Access Journals (Sweden)

    Daria J. Kuss

    2017-02-01

    Full Text Available Objective: In 2013, the American Psychiatric Association included Internet Gaming Disorder in the diagnostic manual as a condition which requires further research, indicating the scientific and clinical community are aware of potential health concerns as a consequence of excessive Internet use. From a clinical point of view, it appears that excessive/addictive Internet use is often comorbid with further psychopathologies and assessing comorbidity is relevant in clinical practice, treatment outcome and prevention as the probability to become addicted to using the Internet accelerates with additional (subclinical symptoms. Moreover, research indicates individuals play computer games excessively to cope with everyday stressors and to regulate their emotions by applying media-focused coping strategies, suggesting pathological computer game players play in order to relieve stress and to avoid daily hassles. The aims of this research were to replicate and extend previous findings and explanations of the complexities of the relationships between excessive Internet use and Internet addiction, psychopathology and dysfunctional coping strategies. Method: Participants included 681 Polish university students sampled using an online battery of validated psychometric instruments. Results: Results of structural equation models revealed dysfunctional coping strategies (i.e., distraction, denial, self-blame, substance use, venting, media use, and behavioural disengagement significantly predict excessive Internet use, and the data fit the theoretical model well. A second SEM showed media-focused coping and substance use coping significantly mediate the relationship between psychopathology (operationalised via the Global Severity Index and excessive Internet use. Conclusions: The findings lend support to the self-medication hypothesis of addictive disorders, and suggest psychopathology and dysfunctional coping have additive effects on excessive Internet use.

  19. Calculating excess lifetime risk in relative risk models

    International Nuclear Information System (INIS)

    Vaeth, M.; Pierce, D.A.

    1990-01-01

    When assessing the impact of radiation exposure it is common practice to present the final conclusions in terms of excess lifetime cancer risk in a population exposed to a given dose. The present investigation is mainly a methodological study focusing on some of the major issues and uncertainties involved in calculating such excess lifetime risks and related risk projection methods. The age-constant relative risk model used in the recent analyses of the cancer mortality that was observed in the follow-up of the cohort of A-bomb survivors in Hiroshima and Nagasaki is used to describe the effect of the exposure on the cancer mortality. In this type of model the excess relative risk is constant in age-at-risk, but depends on the age-at-exposure. Calculation of excess lifetime risks usually requires rather complicated life-table computations. In this paper we propose a simple approximation to the excess lifetime risk; the validity of the approximation for low levels of exposure is justified empirically as well as theoretically. This approximation provides important guidance in understanding the influence of the various factors involved in risk projections. Among the further topics considered are the influence of a latent period, the additional problems involved in calculations of site-specific excess lifetime cancer risks, the consequences of a leveling off or a plateau in the excess relative risk, and the uncertainties involved in transferring results from one population to another. The main part of this study relates to the situation with a single, instantaneous exposure, but a brief discussion is also given of the problem with a continuous exposure at a low-dose rate
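The simple approximation discussed above can be written down directly: under an age-constant relative-risk model the excess lifetime risk is approximately the survival-weighted sum of baseline mortality times the excess relative risk. The rates and survival curve below are toy inputs, not A-bomb-survivor data.

```python
def excess_lifetime_risk(baseline_rate, err, survival, start_age=0):
    """Life-table sketch of excess lifetime risk under an age-constant
    relative-risk model:
        ELR ~= sum over attained ages a of S(a) * m0(a) * ERR,
    where S is the survival probability to age a, m0 the baseline cancer
    mortality rate (per year) and ERR the constant excess relative risk
    conferred by the exposure. Inputs are on a yearly age grid."""
    return sum(s * m0 * err
               for s, m0 in zip(survival[start_age:], baseline_rate[start_age:]))

ages = range(100)
m0 = [1e-5 * (1.1 ** (a / 10)) for a in ages]   # toy rising cancer mortality
surv = [0.99 ** a for a in ages]                # toy survival curve
elr = excess_lifetime_risk(m0, err=0.5, survival=surv)
```

A full life-table calculation would also let the exposure feed back into the survival curve itself; dropping that feedback is exactly what makes the approximation simple, and it is accurate at the low exposure levels the paper considers.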

  20. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
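The simplest instance of the chapter's task, estimating a rate constant from reaction data, is a first-order decay fitted by ordinary least squares on the linearised model ln C = ln C0 - k*t (the chapter's own examples use full dynamic models and optimisation; this sketch and its data are illustrative).

```python
import math

def fit_first_order(times, concentrations):
    """Estimate a first-order rate constant k and initial concentration C0
    by ordinary least squares on the linearised model ln C = ln C0 - k*t."""
    xs = list(times)
    ys = [math.log(c) for c in concentrations]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope, math.exp(ybar - slope * xbar)   # (k, C0)

# synthetic noise-free data: C0 = 2.0, k = 0.3, so the fit recovers them
t = [0.0, 1.0, 2.0, 3.0, 4.0]
c = [2.0 * math.exp(-0.3 * ti) for ti in t]
k, c0 = fit_first_order(t, c)
```

With noisy data the linearisation distorts the error structure at low concentrations, which is one reason the chapter turns to maximum likelihood and direct nonlinear estimation against the dynamic model.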