Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan
2015-08-11
Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.
Chan, Wing Cheuk; Papaconstantinou, Dean; Lee, Mildred; Telfer, Kendra; Jo, Emmanuel; Drury, Paul L; Tobias, Martin
2018-05-01
To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory results and to develop an improved algorithm for estimating diabetes prevalence at a population level. The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory results stored in the Auckland regional laboratory result repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive predictive value of each of the VDR algorithm rules, individually and in combination. The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive predictive value of 76% and negative predictive value of 98%. The modified VDR algorithm improved the positive predictive value by 6.1% and the specificity by 1.4%, with modest reductions in sensitivity (2.2%) and negative predictive value (0.3%). At an aggregated level the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe. The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more accurate diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long-term-condition register constructed from both laboratory results and administrative data.
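The four validation metrics reported above follow from a standard 2×2 comparison of register status against the laboratory-defined reference. A minimal sketch; the counts below are illustrative only, not the study's actual figures:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 confusion-matrix metrics, with the laboratory
    (TestSafe) diagnosis taken as the reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true diabetics the register finds
        "specificity": tn / (tn + fp),  # fraction of non-diabetics the register excludes
        "ppv": tp / (tp + fp),          # register-positives who truly have diabetes
        "npv": tn / (tn + fn),          # register-negatives who truly do not
    }

# illustrative counts only
m = diagnostic_metrics(tp=890, fp=110, fn=110, tn=8890)
# m["sensitivity"] and m["ppv"] both come to 0.89 here
```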
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
Directory of Open Access Journals (Sweden)
Tweya Hannock
2012-07-01
Full Text Available Abstract Background Routine monitoring of patients on antiretroviral therapy (ART is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records, as compared to estimates of retention based on standardized paper- or electronic-based cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter, in order to estimate the number of patients retained on ART. Information on the time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among
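The pharmacy-based estimate rests on simple stock-card arithmetic: bottles dispensed in a quarter equal opening stock plus receipts minus closing stock, and dividing by the refills one patient collects per quarter approximates patients retained. A sketch under the assumption of one continuation bottle per patient per month (the study's exact refill assumptions are not stated in this excerpt, and the numbers are illustrative):

```python
def bottles_dispensed(opening_stock, received, closing_stock):
    # stock-card arithmetic for one quarter
    return opening_stock + received - closing_stock

def estimated_patients_retained(opening_stock, received, closing_stock,
                                bottles_per_patient_per_quarter=3):
    """Approximate patients retained on ART from pharmacy stock data,
    assuming one continuation bottle per patient per month."""
    dispensed = bottles_dispensed(opening_stock, received, closing_stock)
    return dispensed // bottles_per_patient_per_quarter

estimated_patients_retained(500, 2800, 300)  # → 1000
```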
Accurate estimation of indoor travel times
DEFF Research Database (Denmark)
Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan
2014-01-01
The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood, both for routes traveled as well as for sub-routes thereof. InTraTime allows specifying temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include …
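The general idea of mining position traces for query-conditional travel times can be sketched with a toy estimator. This is an illustration of the approach only, not the InTraTime algorithm itself: the class name, the (origin, destination, hour) key, and the median aggregation are all assumptions of this sketch.

```python
from collections import defaultdict
from statistics import median

class TravelTimeEstimator:
    """Toy trace-mining estimator: store observed travel times keyed by
    route and time-of-day, answer queries with the median observation."""

    def __init__(self):
        self.obs = defaultdict(list)  # (origin, dest, hour) -> list of seconds

    def record(self, origin, dest, hour, seconds):
        self.obs[(origin, dest, hour)].append(seconds)

    def estimate(self, origin, dest, hour):
        times = self.obs.get((origin, dest, hour))
        return median(times) if times else None

est = TravelTimeEstimator()
for secs in (60, 80, 70):
    est.record("lab", "lobby", hour=9, seconds=secs)
est.estimate("lab", "lobby", hour=9)  # → 70 (median of the three traces)
```

A real system would also learn sub-route times and likelihoods, as the abstract describes, rather than treating each origin-destination pair independently.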
Software Estimation: Developing an Accurate, Reliable Method
2011-08-01
based and size-based estimates is able to accurately plan, launch, and execute on schedule. Bob Sinclair, NAWCWD; Chris Rickets, NAWCWD; Brad Hodgins, NAWCWD. … Office by Carnegie Mellon University. PSP and TSP are service marks of Carnegie Mellon University. 1. Rickets, Chris A, "A TSP Software Maintenance Life Cycle", CrossTalk, March 2005. 2. Koch, Alan S, "TSP Can Be the Building Blocks for CMMI", CrossTalk, March 2005. 3. Hodgins, Brad; Rickets
Zadoff-Chu coded ultrasonic signal for accurate range estimation
AlSharif, Mohammed H.
2017-11-02
This paper presents a new adaptation of Zadoff-Chu sequences for the purpose of range estimation and movement tracking. The proposed method uses Zadoff-Chu sequences utilizing a wideband ultrasonic signal to estimate the range between two devices with very high accuracy and high update rate. This range estimation method is based on time of flight (TOF) estimation using cyclic cross correlation. The system was experimentally evaluated under different noise levels and multi-user interference scenarios. For a single user, the results show less than 7 mm error for 90% of range estimates in a typical indoor environment. Under the interference from three other users, the 90% error was less than 25 mm. The system provides high estimation update rate allowing accurate tracking of objects moving with high speed.
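The core of the method, a Zadoff-Chu probe whose cyclic cross-correlation with the received signal peaks at the propagation delay, can be sketched as follows. The sequence length, root, and noise level are illustrative choices, not the paper's parameters:

```python
import numpy as np

def zadoff_chu(u, N):
    """Root-u Zadoff-Chu sequence of odd length N: constant amplitude,
    ideal (delta-like) periodic autocorrelation."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

def estimate_delay(tx, rx):
    """Cyclic cross-correlation computed via FFT; the index of the peak
    magnitude is the delay in samples. Delay times the sampling period
    gives TOF; TOF times the speed of sound gives range."""
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(tx)))
    return int(np.argmax(np.abs(corr)))

zc = zadoff_chu(u=7, N=353)  # 353 is prime, so any root 1..352 works
rx = np.roll(zc, 40) + 0.05 * np.random.default_rng(0).standard_normal(353)
estimate_delay(zc, rx)  # → 40
```

Because the Zadoff-Chu autocorrelation is nearly a delta function, the peak stays sharp even under noise and multi-user interference, which is what makes the sub-centimetre errors reported above plausible.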
Zadoff-Chu coded ultrasonic signal for accurate range estimation
AlSharif, Mohammed H.; Saad, Mohamed; Siala, Mohamed; Ballal, Tarig; Boujemaa, Hatem; Al-Naffouri, Tareq Y.
2017-01-01
Accurate hydrocarbon estimates attained with radioactive isotope
International Nuclear Information System (INIS)
Hubbard, G.
1983-01-01
To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample
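The sensitivity of the oil-in-place figure to the connate-water estimate follows directly from the standard volumetric formula N = 7758 · A · h · φ · (1 − Sw) / Bo (stock-tank barrels, with area in acres and thickness in feet). The reservoir numbers below are illustrative, not from the article:

```python
def oil_in_place_bbl(area_acres, thickness_ft, porosity, sw, bo=1.2):
    """Volumetric original-oil-in-place estimate in stock-tank barrels.
    7758 converts acre-feet to barrels; sw is connate-water saturation;
    bo is the oil formation volume factor."""
    return 7758 * area_acres * thickness_ft * porosity * (1 - sw) / bo

base = oil_in_place_bbl(640, 50, 0.20, sw=0.30)
high = oil_in_place_bbl(640, 50, 0.20, sw=0.37)
# Mistaking mud filtrate for connate water and overstating Sw by 7 points
# (0.30 -> 0.37) understates oil in place by 10%: 0.63 / 0.70 = 0.90,
# matching the "ten percent or more" error the article warns about.
```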
ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION
Institute of Scientific and Technical Information of China (English)
Anonymous
2009-01-01
In this paper, a second-order linear differential equation is considered, and an accurate method for estimating its characteristic exponents is presented. Finally, we give some examples to verify the feasibility of our result.
Accurate position estimation methods based on electrical impedance tomography measurements
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and carries a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes both optimization-based and data-driven approaches. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors, for both simulation and experimental conditions, were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Fishing site mapping using local knowledge provides accurate and ...
African Journals Online (AJOL)
Accurate fishing ground maps are necessary for fisheries monitoring. In the Velondriake locally managed marine area (LMMA) we observed that the nomenclature of shared fishing sites (FS) is village-dependent. Additionally, the level of illiteracy makes data collection more complicated, leading to data collectors improvising ...
An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance
Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun
2015-01-01
Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
Do detour tasks provide accurate assays of inhibitory control?
Whiteside, Mark A.; Laker, Philippa R.; Beardsworth, Christine E.
2018-01-01
Transparent Cylinder and Barrier tasks are used to purportedly assess inhibitory control in a variety of animals. However, we suspect that performances on these detour tasks are influenced by non-cognitive traits, which may result in inaccurate assays of inhibitory control. We therefore reared pheasants under standardized conditions and presented each bird with two sets of similar tasks commonly used to measure inhibitory control. We recorded the number of times subjects incorrectly attempted to access a reward through transparent barriers, and their latencies to solve each task. Such measures are commonly used to infer the differential expression of inhibitory control. We found little evidence that their performances were consistent across the two different Putative Inhibitory Control Tasks (PICTs). Improvements in performance across trials showed that pheasants learned the affordances of each specific task. Critically, prior experience of transparent tasks, either Barrier or Cylinder, also improved subsequent inhibitory control performance on a novel task, suggesting that they also learned the general properties of transparent obstacles. Individual measures of persistence, assayed in a third task, were positively related to their frequency of incorrect attempts to solve the transparent inhibitory control tasks. Neophobia, Sex and Body Condition had no influence on individual performance. Contrary to previous studies of primates, pheasants with poor performance on PICTs had a wider dietary breadth assayed using a free-choice task. Our results demonstrate that in systems or taxa where prior experience and differences in development cannot be accounted for, individual differences in performance on commonly used detour-dependent PICTS may reveal more about an individual's prior experience of transparent objects, or their motivation to acquire food, than providing a reliable measure of their inhibitory control. PMID:29593115
Toward accurate and precise estimates of lion density.
Elliot, Nicholas B; Gopalaswamy, Arjun M
2017-08-01
Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km², and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and
Accurate location estimation of moving object In Wireless Sensor network
Directory of Open Access Journals (Sweden)
Vinay Bhaskar Semwal
2011-12-01
Full Text Available One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of storing data; an accurate estimate of the target object's location must be obtained under energy constraints. There is no mechanism to control and maintain these data, and the wireless communication bandwidth is very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to central air conditioning and ventilation systems. In this research paper, we propose a protocol based on prediction and an adaptive algorithm that reduces the number of sensor nodes required for an accurate estimate of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.
DEFF Research Database (Denmark)
Diwan, Vaibhav; Albrechtsen, Hans-Jørgen; Smets, Barth F.
2018-01-01
amplicon sequencing and from guild-targeted approaches. The universal amplicon sequencing provided 1) accurate estimates of nitrifier composition, 2) clustering of the samples based on these compositions consistent with sample origin, 3) estimates of the relative abundance of the guilds correlated ...
The description of a method for accurately estimating creatinine clearance in acute kidney injury.
Mellas, John
2016-05-01
Acute kidney injury (AKI) is a common and serious condition encountered in hospitalized patients. The severity of kidney injury is defined by the RIFLE, AKIN, and KDIGO criteria, which attempt to establish the degree of renal impairment. The KDIGO guidelines state that the creatinine clearance should be measured whenever possible in AKI and that the serum creatinine concentration and creatinine clearance remain the best clinical indicators of renal function. Neither the RIFLE, AKIN, nor KDIGO criteria estimate actual creatinine clearance. Furthermore, there are no accepted methods for accurately estimating creatinine clearance (K) in AKI. The present study describes a unique method for estimating K in AKI using urine creatinine excretion over an established time interval (E), an estimate of creatinine production over the same time interval (P), and the estimated static glomerular filtration rate (sGFR) at time zero, utilizing the CKD-EPI formula. Using these variables, estimated creatinine clearance Ke = (E/P) × sGFR. The method was tested for validity using simulated patients, where actual creatinine clearance (Ka) was compared to Ke in several patients, both male and female, and of various ages, body weights, and degrees of renal impairment. These measurements were made at several serum creatinine concentrations in an attempt to determine the accuracy of this method in the non-steady state. In addition, E/P and Ke were calculated in hospitalized patients with AKI seen in nephrology consultation by the author. In these patients the accuracy of the method was determined by looking at the following metrics: E/P>1, E/P1 and 0.907 (0.841, 0.973) for 0.95 ml/min accurately predicted the ability to terminate renal replacement therapy in AKI. Limitations include the need to measure urine volume accurately. Furthermore, the precision of the method requires accurate estimates of sGFR, while a reasonable measure of P is crucial to estimating Ke. The present study provides the
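The estimator described above can be written directly; units must be kept consistent between E and P. A sketch with illustrative inputs (the patient values are invented for the example):

```python
def estimated_clearance(E, P, sgfr):
    """Ke = (E / P) * sGFR, per the method described above:
    E    measured urine creatinine excretion over the interval (mg)
    P    estimated creatinine production over the same interval (mg)
    sgfr estimated static GFR at time zero from CKD-EPI (ml/min)
    E/P > 1 implies clearance above sGFR (recovering kidney);
    E/P < 1 implies clearance below it (worsening injury)."""
    return (E / P) * sgfr

estimated_clearance(E=600.0, P=1200.0, sgfr=60.0)  # → 30.0 ml/min
```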
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John
2016-01-01
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
Gupta, Puneet; Bhowmick, Brojeshwar; Pal, Arpan
2017-07-01
Camera-equipped devices are ubiquitous and proliferating in day-to-day life. Accurate heart rate (HR) estimation from face videos acquired by low-cost cameras in a non-contact manner can be used in many real-world scenarios and hence requires rigorous exploration. This paper presents an accurate and near real-time HR estimation system using these face videos. It is based on the phenomenon that the color and motion variations in the face video are closely related to the heart beat. The variations also contain noise due to facial expressions, respiration, eye blinking and environmental factors, which are handled by the proposed system. Neither Eulerian nor Lagrangian temporal signals can provide accurate HR in all cases. The cases where Eulerian temporal signals perform spuriously are determined using a novel poorness measure, and then both the Eulerian and Lagrangian temporal signals are employed for better HR estimation. Such a fusion is referred to as serial fusion. Experimental results reveal that the error introduced in the proposed algorithm is 1.8±3.6, which is significantly lower than that of existing well-known systems.
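In its simplest form, an Eulerian-style temporal signal is just the per-frame average of facial pixel intensities, and the dominant spectral peak inside the plausible cardiac band gives the heart rate. A minimal sketch of that baseline; the paper's actual pipeline, including the poorness measure and the serial fusion with Lagrangian signals, is far more involved:

```python
import numpy as np

def heart_rate_bpm(trace, fps, lo=0.7, hi=4.0):
    """Estimate HR from a per-frame mean-intensity trace by locating the
    dominant spectral peak inside the cardiac band (0.7-4 Hz, i.e. 42-240 bpm)."""
    x = trace - np.mean(trace)                     # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)   # frequency axis in Hz
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(power[band])]

# synthetic pulse: 1.2 Hz (72 bpm) sampled at 30 fps for 10 s, with noise
t = np.arange(300) / 30.0
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) \
        + 0.05 * np.random.default_rng(1).standard_normal(300)
heart_rate_bpm(trace, fps=30)  # → ~72 bpm
```

With only 10 s of video the spectral resolution is 0.1 Hz (6 bpm), which is one reason real systems track the signal over longer windows or refine the peak.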
International Nuclear Information System (INIS)
Song, N; Frey, E C; He, B; Wahl, R L
2011-01-01
Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric-mean-based planar quantification, using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a
An Accurate FFPA-PSR Estimator Algorithm and Tool for Software Effort Estimation
Directory of Open Access Journals (Sweden)
Senthil Kumar Murugesan
2015-01-01
Full Text Available Software companies are keen to provide secure software and to ensure the accuracy and reliability of their products, especially with respect to software effort estimation. Therefore, there is a need to develop a hybrid tool which provides all the necessary features. This paper proposes a hybrid estimator algorithm and model which incorporates quality metrics, a reliability factor, and a security factor into a fuzzy-based function point analysis. Initially, this method uses a fuzzy-based estimate to control the uncertainty in the software size with the help of a triangular fuzzy set at the early development stage. Secondly, the function point analysis is extended by the security and reliability factors in the calculation. Finally, the performance metrics are added to the effort estimation for accuracy. The experimentation is done with different project data sets on the hybrid tool, and the results are compared with the existing models. It shows that the proposed method not only improves the accuracy but also increases the reliability, as well as the security, of the product.
Hird, Sarah; Kubatko, Laura; Carstens, Bryan
2010-11-01
We describe a method for estimating species trees that relies on replicated subsampling of large data matrices. One application of this method is phylogeographic research, which has long depended on large datasets that sample intensively from the geographic range of the focal species; these datasets allow systematicists to identify cryptic diversity and understand how contemporary and historical landscape forces influence genetic diversity. However, analyzing any large dataset can be computationally difficult, particularly when newly developed methods for species tree estimation are used. Here we explore the use of replicated subsampling, a potential solution to the problem posed by large datasets, with both a simulation study and an empirical analysis. In the simulations, we sample different numbers of alleles and loci, estimate species trees using STEM, and compare the estimated to the actual species tree. Our results indicate that subsampling three alleles per species for eight loci nearly always results in an accurate species tree topology, even in cases where the species tree was characterized by extremely rapid divergence. Even more modest subsampling effort, for example one allele per species and two loci, was more likely than not (>50%) to identify the correct species tree topology, indicating that in nearly all cases, computing the majority-rule consensus tree from replicated subsampling provides a good estimate of topology. These results were supported by estimating the correct species tree topology and reasonable branch lengths for an empirical 10-locus great ape dataset.
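The replicated-subsampling scheme can be sketched generically. Here `infer_tree` stands in for an external species-tree estimator such as STEM; the function name, the modal-topology shortcut (in place of a true majority-rule consensus), and the toy data are all assumptions of this sketch:

```python
import random
from collections import Counter

def modal_topology(alleles_by_species, loci, infer_tree,
                   n_alleles=3, n_loci=8, reps=100, seed=0):
    """Draw `reps` random subsamples of alleles and loci, infer a species
    tree for each with the supplied estimator, and return the topology
    that occurs most often across replicates."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(reps):
        sub_alleles = {sp: rng.sample(a, min(n_alleles, len(a)))
                       for sp, a in alleles_by_species.items()}
        sub_loci = rng.sample(loci, min(n_loci, len(loci)))
        counts[infer_tree(sub_alleles, sub_loci)] += 1
    return counts.most_common(1)[0][0]

# toy usage with a stub estimator that always returns the same topology
stub = lambda alleles, loci: "((A,B),C);"
alleles = {"A": ["a1", "a2", "a3", "a4"],
           "B": ["b1", "b2", "b3"],
           "C": ["c1", "c2", "c3"]}
modal_topology(alleles, ["L%d" % i for i in range(10)], stub)  # → "((A,B),C);"
```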
Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods
International Nuclear Information System (INIS)
Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris
2016-01-01
Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates of both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case study of a 2nd-order equivalent circuit model shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurement by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.
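The discrete-time least-squares baseline that the paper compares against can be illustrated on a first-order model y[k+1] = a·y[k] + b·u[k]; this is a generic system-identification sketch, not the paper's continuous-time estimator.

```python
import random

def ls_first_order(u, y):
    """Least-squares fit of y[k+1] = a*y[k] + b*u[k] via the 2x2 normal
    equations (the discrete-time baseline; the continuous-time and
    instrumental-variable refinements build on the same principle)."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for k in range(len(y) - 1):
        s11 += y[k] * y[k]
        s12 += y[k] * u[k]
        s22 += u[k] * u[k]
        r1 += y[k] * y[k + 1]
        r2 += u[k] * y[k + 1]
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (r2 * s11 - r1 * s12) / det

# Noise-free simulation of a hypothetical first-order element; the fit
# recovers the true parameters (a, b) = (0.9, 0.5).
rng = random.Random(0)
u = [rng.uniform(-1.0, 1.0) for _ in range(200)]
y = [0.0]
for k in range(199):
    y.append(0.9 * y[k] + 0.5 * u[k])
a_hat, b_hat = ls_first_order(u, y)
```

With noisy measurements, this plain least-squares estimate becomes biased, which is exactly the situation the instrumental variable method is introduced to repair.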
Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model
Ahlgren, K.; Li, X.
2017-12-01
Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Secondly, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. The accuracy of each geoid model
A new geometric-based model to accurately estimate arm and leg inertial estimates.
Wicke, Jason; Dumas, Geneviève A
2014-06-03
Segment estimates of mass, center of mass and moment of inertia are required input parameters for analyzing the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole-body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing photographs, one frontal and one sagittal. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of the models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. Copyright © 2014. Published by Elsevier Ltd.
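The stacked-elliptical-slice volume computation can be sketched as follows; the slice count, dimensions, and the uniform density are hypothetical, since the paper uses sex-specific, non-uniform density functions.

```python
import math

def segment_volume(widths, depths, slice_height):
    """Volume of a limb segment modelled as a stack of elliptical slices:
    widths come from the frontal photograph, depths from the sagittal one,
    and each slice contributes an elliptical area pi/4 * width * depth."""
    return sum(math.pi / 4.0 * w * d * slice_height
               for w, d in zip(widths, depths))

# A hypothetical tapering forearm: ten 2.5 cm slices, dimensions in cm.
widths = [8.0, 7.8, 7.5, 7.2, 6.8, 6.4, 6.0, 5.6, 5.3, 5.0]
depths = [7.5, 7.4, 7.1, 6.8, 6.5, 6.1, 5.7, 5.4, 5.1, 4.9]
vol_cm3 = segment_volume(widths, depths, 2.5)
mass_g = vol_cm3 * 1.05   # uniform 1.05 g/cm^3 stand-in; the paper applies
                          # sex-specific, non-uniform density functions
```

Mass, center of mass, and moment of inertia would then follow by summing slice masses and their first and second moments about the joint axis.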
Energy Technology Data Exchange (ETDEWEB)
Hall, V.
2004-02-01
The use of customer energy information and its importance in building business-to-business and business-to-consumer demographic profiles is discussed, along with the role of certified meter data management agents, i.e. companies that have created infrastructures to manage the large volumes of energy data that can be used to drive marketing to energy customers. Short- and long-term load management planning, distribution planning, outage management and demand response programs, and efforts to streamline billing and create revenue-generating value-added services are just some of the areas that can benefit from comprehensively collected and accurate consumer data. The article emphasizes the process of certification, the benefits certified meter data management companies can provide to utilities as well as to consumers, their role in disaster recovery management, and the ways such companies bring the benefits of their operations to their client utilities and consumers.
SpotCaliper: fast wavelet-based spot detection with accurate size estimation.
Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael
2016-04-15
SpotCaliper is a novel wavelet-based image-analysis software tool providing a fast automatic detection scheme for circular patterns (spots), combined with a precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user is allowed to edit the results by modifying the measurements (in a semi-automated way) and to extract data for further analysis. The fine tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. Availability and implementation: http://bigwww.epfl.ch/algorithms/spotcaliper/. Contact: zsuzsanna.puspoki@epfl.ch. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Inamori, Takaya; Sako, Nobutada; Nakasuka, Shinichi
2011-06-01
Nano-satellites provide space access to a broader range of satellite developers and attract interest as an application of space development. Several new nano-satellite missions have recently been proposed with sophisticated objectives such as remote sensing and the observation of astronomical objects. In these advanced missions, some nano-satellites must meet strict attitude requirements for obtaining scientific data or images. For LEO nano-satellites, the magnetic attitude disturbance dominates over other environmental disturbances as a result of the small moment of inertia, and this effect should be cancelled for precise attitude control. This research focuses on how to cancel the magnetic disturbance in orbit. This paper presents a unique method to estimate and compensate for the residual magnetic moment, which interacts with the geomagnetic field and causes the magnetic disturbance. An extended Kalman filter is used to estimate the magnetic disturbance. For a more practical consideration of magnetic disturbance compensation, this method has been examined on PRISM (Pico-satellite for Remote-sensing and Innovative Space Missions). The method will also be used for a nano-astrometry satellite mission. This paper concludes that magnetic disturbance estimation and compensation are useful for nano-satellite missions that require highly accurate attitude control.
Accurate estimation of the RMS emittance from single current amplifier data
International Nuclear Information System (INIS)
Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.
2002-01-01
This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H⁻ ion source.
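The core SCUBEEx idea (treat the mean signal outside a chosen boundary as uniform background, subtract it, then form the current-weighted second moments inside) can be sketched as follows; the synthetic beam and boundary are illustrative, not the paper's data.

```python
import math

def rms_emittance(points, inside):
    """SCUBEEx-style estimate: the mean current density outside `inside`
    (a predicate on (x, xp)) is taken as a uniform background, subtracted
    everywhere, and the rms emittance sqrt(<x^2><xp^2> - <x*xp>^2) is then
    formed from the background-subtracted data inside the boundary."""
    outside = [c for x, xp, c in points if not inside(x, xp)]
    bg = sum(outside) / len(outside) if outside else 0.0
    kept = [(x, xp, c - bg) for x, xp, c in points if inside(x, xp)]
    total = sum(c for _, _, c in kept)
    mx = sum(x * c for x, _, c in kept) / total
    mp = sum(xp * c for _, xp, c in kept) / total
    sxx = sum((x - mx) ** 2 * c for x, _, c in kept) / total
    spp = sum((xp - mp) ** 2 * c for _, xp, c in kept) / total
    sxp = sum((x - mx) * (xp - mp) * c for x, xp, c in kept) / total
    return math.sqrt(max(sxx * spp - sxp ** 2, 0.0))

# Synthetic phase-space grid: Gaussian beam (sigma = 1) on a uniform
# background of 0.2, with a circular exclusion boundary of radius 3.
pts = [(i * 0.25, j * 0.25,
        math.exp(-((i * 0.25) ** 2 + (j * 0.25) ** 2) / 2.0) + 0.2)
       for i in range(-20, 21) for j in range(-20, 21)]
eps = rms_emittance(pts, lambda x, xp: x * x + xp * xp < 9.0)
```

Sweeping the boundary size and watching `bg` and `eps` plateau is what identifies the smallest acceptable exclusion boundary in the full method.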
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2017-01-01
velocity gradients reduce the residuals, the relative location uncertainties and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.
Single-cell entropy for accurate estimation of differentiation potency from a cell's transcriptome
Teschendorff, Andrew E.; Enver, Tariq
2017-01-01
The ability to quantify differentiation potential of single cells is a task of critical importance. Here we demonstrate, using over 7,000 single-cell RNA-Seq profiles, that differentiation potency of a single cell can be approximated by computing the signalling promiscuity, or entropy, of a cell's transcriptome in the context of an interaction network, without the need for feature selection. We show that signalling entropy provides a more accurate and robust potency estimate than other entropy-based measures, driven in part by a subtle positive correlation between the transcriptome and connectome. Signalling entropy identifies known cell subpopulations of varying potency and drug resistant cancer stem-cell phenotypes, including those derived from circulating tumour cells. It further reveals that expression heterogeneity within single-cell populations is regulated. In summary, signalling entropy allows in silico estimation of the differentiation potency and plasticity of single cells and bulk samples, providing a means to identify normal and cancer stem-cell phenotypes. PMID:28569836
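A toy version of signalling entropy, read as the entropy rate of an expression-weighted random walk on the interaction network, can be sketched as follows; this is a simplified reading of the idea, not the published implementation.

```python
import math

def signalling_entropy(expr, adj):
    """Entropy rate of the expression-weighted random walk on an interaction
    network: p_ij proportional to adj[i][j] * expr[j], and
    SR = sum_i pi_i * H(p_i), with pi the stationary distribution obtained
    by power iteration. A sketch of the concept, not the published code."""
    n = len(expr)
    P = []
    for i in range(n):
        row = [adj[i][j] * expr[j] for j in range(n)]
        s = sum(row)
        P.append([v / s for v in row])
    pi = [1.0 / n] * n
    for _ in range(500):                       # power iteration for pi = pi P
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    H = [-sum(p * math.log(p) for p in row if p > 0.0) for row in P]
    return sum(pi[i] * H[i] for i in range(n))

# A 4-node ring: uniform expression maximizes signalling promiscuity, while
# concentrating expression on one node lowers the entropy rate (lower potency).
ring = [[0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
e_uniform = signalling_entropy([1.0, 1.0, 1.0, 1.0], ring)
e_peaked = signalling_entropy([10.0, 0.1, 0.1, 0.1], ring)
```

The comparison mirrors the paper's central claim: promiscuous (uniform) signalling over the connectome yields high entropy, and differentiation (peaked expression) lowers it.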
An Accurate Estimate of the Free Energy and Phase Diagram of All-DNA Bulk Fluids
Directory of Open Access Journals (Sweden)
Emanuele Locatelli
2018-04-01
Full Text Available We present a numerical study in which large-scale bulk simulations of self-assembled DNA constructs have been carried out with a realistic coarse-grained model. The investigation aims at obtaining a precise, albeit numerically demanding, estimate of the free energy for such systems. We then, in turn, use these accurate results to validate a recently proposed theoretical approach that builds on a liquid-state theory, the Wertheim theory, to compute the phase diagram of all-DNA fluids. This hybrid theoretical/numerical approach, based on the lowest-order virial expansion and on a nearest-neighbor DNA model, can provide, in an undemanding way, a parameter-free thermodynamic description of DNA associating fluids that is in semi-quantitative agreement with experiments. We show that the predictions of the scheme are as accurate as those obtained with more sophisticated methods. We also demonstrate the flexibility of the approach by incorporating non-trivial additional contributions that go beyond the nearest-neighbor model to compute the DNA hybridization free energy.
Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements
DEFF Research Database (Denmark)
Christensen, Mads Græsbøll
2013-01-01
In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators…
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.
Saccà, Alessandro
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
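The Archimedes relationship the method builds on (a solid of revolution occupies two-thirds of its circumscribing cylinder) gives a one-line volume estimator; the way the "unellipticity" coefficient enters is an assumption here, shown with c = 1, the plain ellipsoid-of-revolution case.

```python
import math

def biovolume(area, width, unellipticity=1.0):
    """Biovolume from a 2D image: V ~ (2/3) * A * w * c, where A is the
    projected cross-sectional area, w the width normal to the axis of
    rotational symmetry, and c a shape-correction ('unellipticity')
    coefficient whose exact definition follows the paper."""
    return (2.0 / 3.0) * area * width * unellipticity

# Sanity check against a sphere of radius r: A = pi*r^2 and w = 2r
# should recover V = 4/3 * pi * r^3.
r = 5.0
v = biovolume(math.pi * r ** 2, 2 * r)
```

Because it needs only a projected area and one linear distance, the estimator plugs directly into any image-analysis pipeline that can segment cell outlines.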
Dougher, Carly E; Rifkin, Dena E; Anderson, Cheryl AM; Smits, Gerard; Persky, Martha S; Block, Geoffrey A; Ix, Joachim H
2016-01-01
Background: Sodium intake influences blood pressure and proteinuria, yet the impact on long-term outcomes is uncertain in chronic kidney disease (CKD). Accurate assessment is essential for clinical and public policy recommendations, but few large-scale studies use 24-h urine collections. Recent studies that used spot urine sodium and associated estimating equations suggest that they may provide a suitable alternative, but their accuracy in patients with CKD is unknown. Objective: We compared the accuracy of 4 equations [the Nerbass, INTERSALT (International Cooperative Study on Salt, Other Factors, and Blood Pressure), Tanaka, and Kawasaki equations] that use spot urine sodium to estimate 24-h sodium excretion in patients with moderate to advanced CKD. Design: We evaluated the accuracy of spot urine sodium to predict mean 24-h urine sodium excretion over 9 mo in 129 participants with stage 3–4 CKD. Spot morning urine sodium was used in 4 estimating equations. Bias, precision, and accuracy were assessed and compared across each equation. Results: The mean age of the participants was 67 y, 52% were female, and the mean estimated glomerular filtration rate was 31 ± 9 mL · min⁻¹ · 1.73 m⁻². The mean ± SD number of 24-h urine collections was 3.5 ± 0.8/participant, and the mean 24-h sodium excretion was 168.2 ± 67.5 mmol/d. Although the Tanaka equation demonstrated the least bias (mean: −8.2 mmol/d), all 4 equations had poor precision and accuracy. The INTERSALT equation demonstrated the highest accuracy but derived an estimate within 30% of the mean measured sodium excretion in only 57% of observations. Bland-Altman plots revealed systematic bias with the Nerbass, INTERSALT, and Tanaka equations, underestimating sodium excretion when intake was high. Conclusion: These findings do not support the use of spot urine specimens to estimate dietary sodium intake in patients with CKD and research studies enriched with patients with CKD. The parent data for this
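The bias/precision/accuracy comparison described in the Results can be reproduced with simple agreement metrics; the numbers below are hypothetical, and the paper may define precision via the IQR rather than the SD used here.

```python
def bias_precision_accuracy(estimated, measured, pct=0.30):
    """Agreement metrics for comparing equation estimates against measured
    24-h sodium excretion: bias = mean difference, precision = SD of the
    differences (an assumption; IQR is another common choice), and
    accuracy = fraction of estimates within pct (default 30%) of the
    measured value (the 'P30' metric)."""
    diffs = [e - m for e, m in zip(estimated, measured)]
    n = len(diffs)
    bias = sum(diffs) / n
    precision = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    accuracy = sum(abs(e - m) <= pct * m
                   for e, m in zip(estimated, measured)) / n
    return bias, precision, accuracy

# Hypothetical estimates vs. measured 24-h sodium excretion (mmol/d):
bias, sd, p30 = bias_precision_accuracy([150, 200, 100, 120],
                                        [160, 180, 160, 125])
```

A low bias with a poor P30, as in the INTERSALT result above, is exactly the pattern these three numbers are designed to expose.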
Fullerton, Simon; Taylor, Anne W; Dal Grande, Eleonora; Berry, Narelle
2014-01-01
Measures of screen time are often used to assess sedentary behaviour. Participation in activity-based video games (exergames) can contribute to estimates of screen time, as current practices of measuring it do not consider the growing evidence that playing exergames can provide light to moderate levels of physical activity. This study aimed to determine what proportion of time spent playing video games was actually spent playing exergames. Data were collected via a cross-sectional telephone survey in South Australia. Participants aged 18 years and above (n = 2026) were asked about their video game habits, as well as demographic and socioeconomic factors. In cases where children were in the household, the video game habits of a randomly selected child were also questioned. Overall, 31.3% of adults and 79.9% of children spend at least some time playing video games. Of these, 24.1% of adults and 42.1% of children play exergames, with these types of games accounting for a third of all time that adults spend playing video games and nearly 20% of children's video game time. A substantial proportion of time that would usually be classified as "sedentary" may actually be spent participating in light to moderate physical activity.
Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal
2012-01-01
Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of
Accurate Fuel Estimates using CAN Bus Data and 3D Maps
DEFF Research Database (Denmark)
Andersen, Ove; Torp, Kristian
2018-01-01
The focus on reducing CO2 emissions from the transport sector is larger than ever. Increasingly stricter reductions on fuel consumption and emissions are being introduced by the EU, e.g., to reduce the air pollution in many larger cities. Large sets of high-frequent GPS data from vehicles already exist. However, fuel-consumption data is still rarely collected, even though it is possible to measure fuel consumption with high accuracy, e.g., using an OBD-II device and a smartphone. This paper presents a method for comparing fuel-consumption estimates using the SIDRA TRIP model with real fuel measurements, and shows that adding elevation from 3D maps can improve the accuracy of fuel-consumption estimates by up to 40% on hilly roads. There is only very little improvement of the high-precision (H3D) map over the simple 3D map. The fuel-consumption estimates are most accurate on flat terrain, with average fuel estimates of up to 99% accuracy.
International Nuclear Information System (INIS)
Kroon, P.S.
2010-09-01
About 30% of the increased greenhouse gas (GHG) emissions of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) are related to land use changes and agricultural activities. In order to select effective measures, knowledge is required about GHG emissions from these ecosystems and how these emissions are influenced by management and meteorological conditions. Accurate emission values are therefore needed for all three GHGs to compile the full GHG balance. However, the current annual estimates of CH4 and N2O emissions from ecosystems have significant uncertainties, even larger than 50%. The present study showed that an advanced technique, the micrometeorological eddy covariance flux technique, can obtain more accurate estimates, with uncertainties even smaller than 10%. The current regional and global trace gas flux estimates of CH4 and N2O are possibly seriously underestimated due to incorrect measurement procedures. Accurate measurements of both gases are important since together they can contribute more than two-thirds of the total GHG emission. For example, the total GHG emission of a dairy farm site was estimated at 16 × 10³ kg ha⁻¹ yr⁻¹ in CO2-equivalents, of which 25% and 45% were contributed by CH4 and N2O, respectively. About 60% of the CH4 emission was emitted by ditches and their bordering edges. These emissions are not yet included in the national inventory reports. We recommend including these emissions in coming reports.
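Combining the three gases into a single CO2-equivalent total is a plain GWP-weighted sum; the GWP100 factors below are the AR4-era values (CH4 = 25, N2O = 298) and may differ from those used in the thesis, and the implied gas masses in the example are back-calculated, not measured.

```python
# GWP100 factors (IPCC AR4: CH4 = 25, N2O = 298); treat as illustrative
# assumptions, since different inventory years use different values.
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

def co2_equivalents(fluxes):
    """Total flux in kg CO2-eq ha^-1 yr^-1 from per-gas fluxes in
    kg ha^-1 yr^-1."""
    return sum(GWP[gas] * f for gas, f in fluxes.items())

# Back-of-envelope check against the dairy-farm example: 16e3 kg CO2-eq
# total, with 25% from CH4 (4,000 kg CO2-eq) and 45% from N2O (7,200).
farm = {"CO2": 4800.0, "CH4": 4000.0 / 25.0, "N2O": 7200.0 / 298.0}
total = co2_equivalents(farm)   # 16,000 kg CO2-eq (up to rounding)
```

The large GWP factors are why a ~24 kg ha⁻¹ yr⁻¹ N2O flux can contribute almost half of the farm's GHG balance.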
A Trace Data-Based Approach for an Accurate Estimation of Precise Utilization Maps in LTE
Directory of Open Access Journals (Sweden)
Almudena Sánchez
2017-01-01
For network planning and optimization purposes, mobile operators make use of Key Performance Indicators (KPIs), computed from Performance Measurements (PMs), to determine whether network performance needs to be improved. In current networks, PMs, and therefore KPIs, suffer from a lack of precision due to insufficient temporal and/or spatial granularity. In this work, an automatic method, based on data traces, is proposed to improve the accuracy of radio network utilization measurements collected in a Long-Term Evolution (LTE) network. The method's output is an accurate estimate of the spatial and temporal distribution of the cell utilization ratio that can be extended to other indicators. The method can be used to improve automatic network planning and optimization algorithms in a centralized Self-Organizing Network (SON) entity, since potential issues can be more precisely detected and located inside a cell thanks to the temporal and spatial precision. The proposed method is tested with real connection traces gathered in a large geographical area of a live LTE network and considers overload problems due to trace file size limitations, which is a key consideration when analysing a large network. Results show how these distributions provide very detailed information on network utilization compared to cell-based statistics.
Wang, Huai-Chun; Minh, Bui Quang; Susko, Edward; Roger, Andrew J
2018-03-01
Proteins have distinct structural and functional constraints at different sites that lead to site-specific preferences for particular amino acid residues as the sequences evolve. Heterogeneity in the amino acid substitution process between sites is not modeled by commonly used empirical amino acid exchange matrices. Such model misspecification can lead to artefacts in phylogenetic estimation such as long-branch attraction. Although sophisticated site-heterogeneous mixture models have been developed to address this problem in both Bayesian and maximum likelihood (ML) frameworks, their formidable computational time and memory usage severely limits their use in large phylogenomic analyses. Here we propose a posterior mean site frequency (PMSF) method as a rapid and efficient approximation to full empirical profile mixture models for ML analysis. The PMSF approach assigns a conditional mean amino acid frequency profile to each site calculated based on a mixture model fitted to the data using a preliminary guide tree. These PMSF profiles can then be used for in-depth tree-searching in place of the full mixture model. Compared with widely used empirical mixture models with k classes, our implementation of PMSF in IQ-TREE (http://www.iqtree.org) speeds up the computation by approximately k/1.5-fold and requires a small fraction of the RAM. Furthermore, this speedup allows, for the first time, full nonparametric bootstrap analyses to be conducted under complex site-heterogeneous models on large concatenated data matrices. Our simulations and empirical data analyses demonstrate that PMSF can effectively ameliorate long-branch attraction artefacts. In some empirical and simulation settings PMSF provided more accurate estimates of phylogenies than the mixture models from which they derive.
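The PMSF profile for a site is the posterior-probability-weighted mean of the mixture-class amino acid profiles. A toy sketch with a 4-letter alphabet (the real method uses 20 amino acids and site likelihoods computed on a guide tree):

```python
def pmsf_profile(class_weights, class_profiles, site_likelihoods):
    """Posterior mean site frequency profile for one site: mix the class
    amino-acid profiles by the posterior probability of each class given
    the site, P(k | site) proportional to w_k * L(site | k)."""
    post = [w * l for w, l in zip(class_weights, site_likelihoods)]
    z = sum(post)
    post = [p / z for p in post]
    n_states = len(class_profiles[0])
    return [sum(post[k] * class_profiles[k][a] for k in range(len(post)))
            for a in range(n_states)]

# Toy 4-letter alphabet, two classes; the site strongly favours class 0.
profile = pmsf_profile([0.5, 0.5],
                       [[0.7, 0.1, 0.1, 0.1], [0.1, 0.1, 0.1, 0.7]],
                       [0.9, 0.1])   # approximately [0.64, 0.10, 0.10, 0.16]
```

Once each site has its fixed PMSF profile, the expensive mixture sum disappears from the likelihood, which is the source of the roughly k/1.5-fold speedup.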
A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors
Directory of Open Access Journals (Sweden)
Beomsoo Hwang
2015-04-01
In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using a joint torque sensor, which contains the measurements of dynamic effects of the human body such as the inertial, Coriolis, and gravitational torques as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately in cases of relaxed and activated muscle conditions.
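Extracting the active muscular torque amounts to subtracting the modelled limb dynamics from the sensor reading. For a single pendulum-like segment (a simplification of the paper's multi-joint formulation, with hypothetical parameter values):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def muscular_torque(tau_measured, theta, ddtheta, m, l, inertia):
    """Active muscular torque for a single pendulum-like limb segment:
    subtract the modelled passive dynamics (inertial + gravitational; the
    Coriolis term vanishes for a single joint) from the joint torque
    sensor reading. m, l, inertia stand in for the identified
    user-specific parameters."""
    tau_dynamics = inertia * ddtheta + m * G * l * math.sin(theta)
    return tau_measured - tau_dynamics

# Static, relaxed-muscle check: the sensor sees only the gravity torque,
# so the estimated muscular torque should be ~0 (values are hypothetical).
tau_sensor = 3.0 * G * 0.2 * math.sin(math.pi / 2)
residual = muscular_torque(tau_sensor, math.pi / 2, 0.0, 3.0, 0.2, 0.05)
```

The relaxed-muscle case is the natural sanity check, mirroring the relaxed vs. activated conditions evaluated in the experiments.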
DEFF Research Database (Denmark)
Campbell, Duncan J; Nussberger, Juerg; Stowasser, Michael
2009-01-01
…into focus the differences in information provided by activity assays and immunoassays for renin and prorenin measurement and has drawn attention to the need for precautions to ensure their accurate measurement. CONTENT: Renin activity assays and immunoassays provide related but different information… provided by these assays and of the precautions necessary to ensure their accuracy…
Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples
Directory of Open Access Journals (Sweden)
Liu Xin
2015-09-01
Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single-tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectral leakage and the picket-fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase, and DC component are obtained by minimizing the least squares (LS) fitting error of a three-parameter sine fit. By setting reasonable stop conditions or a fixed number of iterations, an accurate frequency estimate can be obtained. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is compared against other methods with respect to the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimate follows the trend of the CRLB as SNR increases, even for a small number of samples. The average RMSE of the frequency estimate is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
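A minimal sketch of this two-stage idea follows: a coarse FFT peak brackets the frequency, then a golden-section search refines it by minimizing the LS residual of a three-parameter (cosine, sine, DC) fit. The signal parameters are illustrative, and this is a simplified reading of the algorithm, not the authors' exact DGSSA.

```python
import numpy as np

def sine_fit_residual(f, t, x):
    # Three-parameter LS fit: x ~ A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C
    M = np.column_stack([np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(M, x, rcond=None)
    return np.sum((x - M @ coef) ** 2)

def estimate_frequency(x, fs, iters=60):
    # Coarse search: the FFT peak and its two neighbours bracket the frequency.
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = np.argmax(spec[1:]) + 1
    a, b = (k - 1) * fs / n, (k + 1) * fs / n
    # Golden-section search for the frequency minimizing the fit residual.
    gr = (np.sqrt(5) - 1) / 2
    t = np.arange(n) / fs
    c, d = b - gr * (b - a), a + gr * (b - a)
    for _ in range(iters):
        if sine_fit_residual(c, t, x) < sine_fit_residual(d, t, x):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

fs, f0 = 1000.0, 123.4
t = np.arange(512) / fs
rng = np.random.default_rng(0)
x = 1.0 * np.sin(2*np.pi*f0*t + 0.7) + 0.2 + 0.05 * rng.standard_normal(512)
print(estimate_frequency(x, fs))  # close to 123.4
```

For a fixed frequency the fit is linear in the three parameters, so each residual evaluation is a small least-squares solve; only the frequency needs the 1-D search.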
Fast and accurate spectral estimation for online detection of partial broken bar in induction motors
Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti
2018-01-01
In this paper, an online and real-time system is presented for detecting a partial broken rotor bar (BRB) in inverter-fed squirrel-cage induction motors under light-load conditions. This system, with minor modifications, can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the Rayleigh quotient is proposed for detecting the spectral signature of a BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components has been improved by removing the high-amplitude fundamental frequency using an extended-Kalman-filter-based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and sensor cost are minimal, as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7-based embedded target deployed through Simulink Real-Time. Evaluation of the detection threshold and of fault detectability under different conditions of load and fault severity is carried out with an empirical cumulative distribution function.
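The BRB signature appears as sidebands at (1 ± 2s)f around the supply frequency f, where s is the slip. As a generic illustration of probing a known sideband frequency (not the Rayleigh-quotient estimator proposed above), a single-bin DFT correlation can read off the sideband amplitude; the synthetic stator current below is an assumption.

```python
import numpy as np

def single_bin_dft_amplitude(x, fs, f):
    """Amplitude of the component at frequency f via a single-bin DFT probe."""
    n = len(x)
    t = np.arange(n) / fs
    # Correlate with a complex exponential; the factor 2/n recovers the
    # amplitude of a real sinusoid (exact when f lies on a DFT bin).
    return 2.0 * abs(np.sum(x * np.exp(-2j * np.pi * f * t))) / n

fs, f_supply, slip = 1000.0, 50.0, 0.02
t = np.arange(4000) / fs
# Synthetic stator current: fundamental plus a small (1 - 2s)f fault sideband.
i_s = 10.0 * np.cos(2*np.pi*f_supply*t) + 0.05 * np.cos(2*np.pi*(1-2*slip)*f_supply*t)
print(single_bin_dft_amplitude(i_s, fs, (1-2*slip)*f_supply))  # ~0.05
```

In practice the fundamental dwarfs the sidebands, which is why the paper removes it with a signal conditioner before estimating the sideband amplitudes.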
Development of Star Tracker System for Accurate Estimation of Spacecraft Attitude
2009-12-01
For a high-cost spacecraft with accurate pointing requirements, the use of a star tracker is the preferred method for attitude determination. The … solutions, however there are certain costs with using this algorithm. There are significantly more features a triangle can provide when compared to an … to the other. The non-rotating geocentric equatorial frame provides an inertial frame for the two-body problem of a satellite in orbit.
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of the SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
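The pre-computed-PDF MAP idea can be illustrated generically. The measurement model below is a simple stand-in (Gaussian noise whose width shrinks with SNR), not the JM-OCT model, and the grids and sample counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in measurement model: the observed value is the true retardation plus
# SNR-dependent noise (the real JM-OCT model is more involved).
def simulate_measurement(true_val, snr, size):
    return true_val + rng.normal(0.0, 1.0 / np.sqrt(snr), size)

true_grid = np.linspace(0.0, 1.0, 101)
snr = 10.0
n_mc = 20000
edges = np.linspace(-0.5, 1.5, 201)

# Pre-compute P(measurement | true) by Monte Carlo, one histogram per grid value.
pdf = np.array([np.histogram(simulate_measurement(tv, snr, n_mc), bins=edges,
                             density=True)[0] for tv in true_grid])

def map_estimate(measured):
    """MAP under a flat prior: the grid value maximizing P(measured | true)."""
    j = np.clip(np.searchsorted(edges, measured) - 1, 0, len(edges) - 2)
    return true_grid[np.argmax(pdf[:, j])]

print(map_estimate(0.62))  # near 0.62 for this symmetric noise model
```

The pre-computation is what makes the estimator fast at measurement time: the expensive simulation happens once, and each pixel reduces to a table lookup and argmax.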
Naeem, Raeece; Rashid, Mamoon; Pain, Arnab
2012-11-28
Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).
International Nuclear Information System (INIS)
Mullen, R.; Thompson, J.M.; Moussa, O.; Vinnicombe, S.; Evans, A.
2014-01-01
Aim: To assess whether the size of peritumoural stiffness (PTS) on shear-wave elastography (SWE) for small primary breast cancers (≤15 mm) was associated with size discrepancies between grey-scale ultrasound (GSUS) and final histological size, and whether the addition of PTS size to GSUS size might result in more accurate tumour size estimation when compared to final histological size. Materials and methods: A retrospective analysis of 86 consecutive patients between August 2011 and February 2013 who underwent breast-conserving surgery for tumours of size ≤15 mm at ultrasound was carried out. The size of PTS stiffness was compared to mean GSUS size, mean histological size, and the extent of size discrepancy between GSUS and histology. PTS size and GSUS size were combined and compared to the final histological size. Results: PTS of >3 mm was associated with a larger mean final histological size (16 versus 11.3 mm, p < 0.001). PTS size of >3 mm was associated with a higher frequency of underestimation of final histological size by GSUS of >5 mm (63% versus 18%, p < 0.001). The combination of PTS and GSUS size led to accurate estimation of the final histological size (p = 0.03). The size of PTS was not associated with margin involvement (p = 0.27). Conclusion: PTS extending beyond 3 mm from the grey-scale abnormality is significantly associated with underestimation of tumour size of >5 mm for small invasive breast cancers. Taking into account the size of PTS also led to accurate estimation of the final histological size. Further studies are required to assess the relationship between the extent of SWE stiffness and margin status. Highlights: • Peritumoural stiffness of greater than 3 mm was associated with larger tumour size. • Underestimation of tumour size by ultrasound was associated with peritumoural stiffness size. • Combining peritumoural stiffness size with ultrasound produced accurate tumour size estimation.
Friedman, Lee; Harvey, Robert J.
1986-01-01
Job-naive raters provided with job descriptive information made Position Analysis Questionnaire (PAQ) ratings which were validated against ratings of job analysts who were also job content experts. None of the reduced job descriptive information conditions enabled job-naive raters to obtain either acceptable levels of convergent validity with…
International Nuclear Information System (INIS)
Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of)]; Lee, Minuk; Choi, Jong-su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)]
2015-01-01
In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.
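An AIC-guided threshold scan for a GPD tail fit can be sketched as follows. This is a simplified stand-in for the paper's criterion (which uses the overall samples): here the AIC is computed per candidate threshold from its exceedances, and comparing AIC across different exceedance sets is itself an approximation. The data and candidate quantiles are assumptions.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
data = rng.standard_cauchy(5000)  # heavy-tailed stand-in samples

def gpd_aic(data, threshold):
    """Fit a GPD to exceedances over `threshold` and return (AIC, fitted params)."""
    exc = data[data > threshold] - threshold
    if len(exc) < 30:          # too few exceedances to fit reliably
        return np.inf, None
    c, loc, scale = genpareto.fit(exc, floc=0.0)
    loglik = np.sum(genpareto.logpdf(exc, c, loc=0.0, scale=scale))
    return 2 * 2 - 2 * loglik, (c, scale)   # two free parameters: shape, scale

# Scan candidate thresholds (upper quantiles) and keep the lowest-AIC fit.
candidates = np.quantile(data, np.linspace(0.80, 0.98, 10))
aics = [gpd_aic(data, u)[0] for u in candidates]
best = candidates[int(np.argmin(aics))]
print(best)
```

Once the threshold is fixed, the fitted GPD replaces the empirical CDF above the threshold, giving a smooth, extrapolable tail model.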
Duhé, Abby F.; Gilmore, L. Anne; Burton, Jeffrey H.; Martin, Corby K.; Redman, Leanne M.
2016-01-01
Background Infant formula is a major source of nutrition for infants, with over half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used prior to reconstitution. Objective To assess the use of the Remote Food Photography Method (RFPM) to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. Methods For each serving size (1-scoop, 2-scoop, 3-scoop, and 4-scoop), a set of seven test bottles and photographs was prepared, including the recommended gram weight of powdered formula for the respective serving size by the manufacturer, three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended, and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard RFPM analysis procedures. The ratio estimates and the United States Department of Agriculture (USDA) data tables were used to generate food and nutrient information to provide the RFPM estimates. Statistical Analyses Performed Equivalence testing using the two one-sided t-test (TOST) approach was used to determine equivalence between the actual gram weights and the RFPM-estimated weights for all samples, within each serving size, and within under-prepared and over-prepared bottles. Results For all bottles, the gram weights estimated by the RFPM were within 5% equivalence bounds, with a slight under-estimation of 0.05 g (90% CI [−0.49, 0.40]; p<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. Conclusion The maximum observed mean error was an overestimation of 1.58% of powdered formula by the RFPM under
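The TOST equivalence test described under Statistical Analyses can be sketched as two one-sided one-sample t-tests against the ±5% bounds. The simulated percent errors below are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical data: percent errors of estimated vs. actual powder weights.
pct_error = rng.normal(0.5, 1.2, 28)   # n = 28 bottles, values in percent

def tost(x, low, high):
    """Two one-sided t-tests: is the mean within (low, high)?"""
    p_greater = stats.ttest_1samp(x, low, alternative='greater').pvalue
    p_less = stats.ttest_1samp(x, high, alternative='less').pvalue
    return max(p_greater, p_less)      # equivalence claimed if this is below alpha

p = tost(pct_error, -5.0, 5.0)         # +/-5% equivalence bounds, as in the study
print(p)
```

Unlike a standard t-test, rejecting both one-sided nulls supports the claim that the two measurements are practically equivalent within the chosen bounds.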
MIDAS robust trend estimator for accurate GPS station velocities without step detection
Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien
2016-03-01
Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes v_ij = (x_j - x_i)/(t_j - t_i) computed between all data pairs with j > i. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
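The pair-selection and median-with-trimming recipe can be sketched roughly as below. This simplification omits MIDAS's handling of gapped series and its uncertainty estimate, and the tolerance, trimming factor, and synthetic data are illustrative assumptions.

```python
import numpy as np

def midas_like_trend(t, x, pair_span=1.0, tol=0.1):
    """Median slope over data pairs separated by ~pair_span (years), with one
    outlier-trimming pass, loosely following the MIDAS recipe."""
    slopes = []
    for i in range(len(t)):
        # find a partner roughly one pair_span later
        j = np.argmin(np.abs(t - (t[i] + pair_span)))
        if abs(t[j] - t[i] - pair_span) < tol:
            slopes.append((x[j] - x[i]) / (t[j] - t[i]))
    slopes = np.array(slopes)
    med = np.median(slopes)
    mad = 1.4826 * np.median(np.abs(slopes - med))  # robust scale estimate
    kept = slopes[np.abs(slopes - med) < 2 * mad] if mad > 0 else slopes
    return np.median(kept)

rng = np.random.default_rng(4)
t = np.arange(0, 5, 7 / 365.25)                 # ~weekly samples over 5 years
x = 3.2 * t + rng.normal(0, 2.0, len(t))        # 3.2 mm/yr trend + noise
x[t > 2.5] += 15.0                              # an undetected step discontinuity
print(midas_like_trend(t, x))                   # near 3.2 despite the step
```

Because only pairs spanning the step produce (one-sided) outlier slopes, the median plus one trimming pass largely ignores the discontinuity that would badly bias a least-squares fit.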
GPS Water Vapor Tomography Based on Accurate Estimations of the GPS Tropospheric Parameters
Champollion, C.; Masson, F.; Bock, O.; Bouin, M.; Walpersdorf, A.; Doerflinger, E.; van Baelen, J.; Brenot, H.
2003-12-01
The Global Positioning System (GPS) is now a common technique for the retrieval of zenithal integrated water vapor (IWV). Further applications in meteorology also need slant integrated water vapor (SIWV), which allows the high variability of tropospheric water vapor to be characterized precisely at different temporal and spatial scales. Only precise estimations of IWV and horizontal gradients allow the estimation of accurate SIWV. We present studies developed to improve the estimation of tropospheric water vapor from GPS data. Results are obtained from several field experiments (MAP, ESCOMPTE, OHM-CV, IHOP, …). First, IWV is estimated using different GPS processing strategies and the results are compared to radiosondes. The role of the reference frame and the a priori constraints on the coordinates of the fiducial and local stations is generally underestimated; it seems to be of first order in the estimation of the IWV. Second, we validate the estimated horizontal gradients by comparing zenith delay gradients and single-site gradients. IWV, gradients, and post-fit residuals are used to construct slant integrated water delays. Validation of the SIWV is in progress, comparing GPS SIWV, lidar measurements, and high-resolution meteorological models (Meso-NH). A careful analysis of the post-fit residuals is needed to separate the tropospheric signal from multipath. The slant tropospheric delays are used to study the 3D heterogeneity of the troposphere. We have developed tomographic software to model the three-dimensional distribution of tropospheric water vapor from GPS data. The software is applied to the ESCOMPTE field experiment, a dense network of 17 dual-frequency GPS receivers operated in southern France. Three inversions have been successfully compared to three successive radiosonde launches. Good resolution is obtained up to heights of 3000 m.
Dingari, Narahara Chari; Horowitz, Gary L.; Kang, Jeon Woong; Dasari, Ramachandra R.; Barman, Ishan
2012-01-01
We present the first demonstration of glycated albumin detection and quantification using Raman spectroscopy without the addition of reagents. Glycated albumin is an important marker for monitoring the long-term glycemic history of diabetics, especially as its concentrations, in contrast to glycated hemoglobin levels, are unaffected by changes in erythrocyte life times. Clinically, glycated albumin concentrations show a strong correlation with the development of serious diabetes complications including nephropathy and retinopathy. In this article, we propose and evaluate the efficacy of Raman spectroscopy for determination of this important analyte. By utilizing the pre-concentration obtained through drop-coating deposition, we show that glycation of albumin leads to subtle, but consistent, changes in vibrational features, which with the help of multivariate classification techniques can be used to discriminate glycated albumin from the unglycated variant with 100% accuracy. Moreover, we demonstrate that the calibration model developed on the glycated albumin spectral dataset shows high predictive power, even at substantially lower concentrations than those typically encountered in clinical practice. In fact, the limit of detection for glycated albumin measurements is calculated to be approximately four times lower than its minimum physiological concentration. Importantly, in relation to the existing detection methods for glycated albumin, the proposed method is also completely reagent-free, requires barely any sample preparation and has the potential for simultaneous determination of glycated hemoglobin levels as well. Given these key advantages, we believe that the proposed approach can provide a uniquely powerful tool for quantification of glycation status of proteins in biopharmaceutical development as well as for glycemic marker determination in routine clinical diagnostics in the future. PMID:22393405
Improved Patient Size Estimates for Accurate Dose Calculations in Abdomen Computed Tomography
Energy Technology Data Exchange (ETDEWEB)
Lee, Chang-Lae [Yonsei University, Wonju (Korea, Republic of)
2017-07-15
The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient dose for different human body sizes because it relies on cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantoms. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with those of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to that of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
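The axial-image-based effective diameter used as the reference above is commonly computed as 2·sqrt(A/π), where A is the patient cross-sectional area segmented from one axial slice. A sketch follows; the synthetic slice and the segmentation threshold are assumptions, not the study's data.

```python
import numpy as np

def effective_diameter(axial_hu, pixel_mm, threshold_hu=-300):
    """Effective diameter (mm) = 2*sqrt(A/pi), where A is the patient
    cross-sectional area segmented from one axial CT slice."""
    area_mm2 = np.count_nonzero(axial_hu > threshold_hu) * pixel_mm ** 2
    return 2.0 * np.sqrt(area_mm2 / np.pi)

# Synthetic 'water cylinder' slice: 0 HU inside a radius-100 mm disk, air outside.
n, pixel_mm = 512, 0.8
yy, xx = np.mgrid[:n, :n]
r = np.hypot(xx - n / 2, yy - n / 2) * pixel_mm
slice_hu = np.where(r < 100.0, 0.0, -1000.0)
print(effective_diameter(slice_hu, pixel_mm))  # ~200 mm
```

The attenuation-based method in the abstract achieves nearly the same diameter from projection radiographs alone, which is what allows the dose to be estimated before the axial scan exists.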
Duhé, Abby F; Gilmore, L Anne; Burton, Jeffrey H; Martin, Corby K; Redman, Leanne M
2016-07-01
Infant formula is a major source of nutrition for infants, with more than half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used before reconstitution. Our aim was to assess the use of the Remote Food Photography Method to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. For each serving size (1 scoop, 2 scoops, 3 scoops, and 4 scoops), a set of seven test bottles and photographs was prepared as follows: the recommended gram weight of powdered formula of the respective serving size by the manufacturer; three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended; and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard Remote Food Photography Method analysis procedures. The ratio estimates and the US Department of Agriculture data tables were used to generate food and nutrient information to provide the Remote Food Photography Method estimates. Equivalence testing using the two one-sided t tests approach was used to determine equivalence between the actual gram weights and the Remote Food Photography Method estimated weights for all samples, within each serving size, and within underprepared and overprepared bottles. For all bottles, the gram weights estimated by the Remote Food Photography Method were within 5% equivalence bounds with a slight underestimation of 0.05 g (90% CI -0.49 to 0.40; P<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. The maximum observed mean error was an overestimation of 1.58% of powdered formula by the Remote
Accurate estimation of motion blur parameters in noisy remote sensing image
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite sensor and objects is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration. Identifying the motion blur direction and length accurately is crucial for the PSF and for restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain the parameters by using the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes unobvious, so the parameters are difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used to calculate the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
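As a sketch of the final restoration step, the Lucy-Richardson iteration can be written in a few lines. The horizontal PSF and the test image below are stand-ins for the blur direction and length the method above estimates from the spectrum.

```python
import numpy as np
from scipy.signal import fftconvolve

def motion_psf(length, size=15):
    """Horizontal linear-motion PSF of the given length in pixels (illustrative;
    in practice the direction and length come from the spectrum analysis)."""
    psf = np.zeros((size, size))
    psf[size // 2, size // 2 - length // 2: size // 2 - length // 2 + length] = 1.0
    return psf / psf.sum()

def richardson_lucy(observed, psf, iters=30):
    # Multiplicative update: est *= conv(observed / conv(est, psf), psf_mirror)
    est = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iters):
        conv = fftconvolve(est, psf, mode='same')
        est *= fftconvolve(observed / np.maximum(conv, 1e-12), psf_mirror, mode='same')
    return est

truth = np.zeros((64, 64))
truth[20:44, 20:44] = 1.0
psf = motion_psf(7)
blurred = fftconvolve(truth, psf, mode='same')
restored = richardson_lucy(blurred, psf)
# The restoration error should be smaller than the blur error.
print(np.abs(restored - truth).mean(), np.abs(blurred - truth).mean())
```

The quality of the restoration hinges on the PSF, which is why the paper invests in robust direction and length estimation before deconvolving.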
Characterization of a signal recording system for accurate velocity estimation using a VISAR
Rav, Amit; Joshi, K. D.; Singh, Kulbhushan; Kaushik, T. C.
2018-02-01
The linearity of a signal recording system (SRS) in time as well as in amplitude is important for the accurate estimation of the free-surface velocity history of a moving target during shock loading and unloading when measured using optical interferometers such as a velocity interferometer system for any reflector (VISAR). Signal recording being the first step in a long sequence of signal processes, the incorporation of errors due to nonlinearity and a low signal-to-noise ratio (SNR) affects the overall accuracy and precision of the estimation of the velocity history. In shock experiments, the small duration (a few µs) of loading/unloading, the reflectivity of the moving target surface, and the properties of the optical components control the amount of light input to the SRS of a VISAR, and this in turn affects the linearity and SNR of the overall measurement. These factors make it essential to develop in situ procedures for (i) minimizing the effect of signal-induced noise and (ii) determining the linear region of operation of the SRS. Here we report on a procedure for the optimization of SRS parameters such as photodetector gain, optical power, and aperture, so as to achieve a linear region of operation with a high SNR. The linear region of operation so determined has been utilized successfully to estimate the temporal history of the free-surface velocity of the moving target in shock experiments.
Are rapid population estimates accurate? A field trial of two different assessment methods.
Grais, Rebecca F; Coulombier, Denis; Ampuero, Julia; Lucas, Marcelino E S; Barretto, Avertino T; Jacquier, Guy; Diaz, Francisco; Balandine, Serge; Mahoudeau, Claude; Brown, Vincent
2006-09-01
Emergencies resulting in large-scale displacement often lead to populations resettling in areas where basic health services and sanitation are unavailable. To plan relief-related activities quickly, rapid population size estimates are needed. The currently recommended Quadrat method estimates total population by extrapolating the average population size living in square blocks of known area to the total site surface. An alternative approach, the T-Square, provides a population estimate based on analysis of the spatial distribution of housing units taken throughout a site. We field tested both methods and validated the results against a census in Esturro Bairro, Beira, Mozambique. Compared to the census (population: 9,479), the T-Square yielded a better population estimate (9,523) than the Quadrat method (7,681; 95% confidence interval: 6,160-9,201), but was more difficult for field survey teams to implement. Although applicable only to similar sites, several general conclusions can be drawn for emergency planning.
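The Quadrat extrapolation described above (mean population per sampled block, scaled to the site area) can be sketched directly. The counts, block size, site area, and the normal-approximation confidence interval below are hypothetical simplifications of field practice; the T-Square estimator is not reproduced here since its formula is not given in the abstract.

```python
import numpy as np

def quadrat_estimate(counts, block_area_m2, site_area_m2):
    """Extrapolate the mean population per sampled block to the whole site,
    with a normal-approximation 95% CI (a simplification of field practice)."""
    counts = np.asarray(counts, dtype=float)
    n_blocks = site_area_m2 / block_area_m2
    total = counts.mean() * n_blocks
    se = counts.std(ddof=1) / np.sqrt(len(counts)) * n_blocks
    return total, (total - 1.96 * se, total + 1.96 * se)

# Hypothetical survey: ~25 people per 625 m^2 block, 30 blocks sampled,
# site of 250,000 m^2.
rng = np.random.default_rng(6)
counts = rng.poisson(25, size=30)
total, ci = quadrat_estimate(counts, 625.0, 250_000.0)
print(total, ci)  # ~10,000 people
```

The width of the interval shows why block-to-block variability in settlement density drives the method's precision, consistent with the wide confidence interval reported in the field trial.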
Accurate and fast methods to estimate the population mutation rate from error prone sequences
Directory of Open Access Journals (Sweden)
Miyamoto Michael M
2009-08-01
Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent full maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
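The singleton-exclusion idea yields a simple correction to the Watterson estimator: under the infinite sites model, E[S] = θ·a_n (with a_n the harmonic number over n-1) and singletons contribute E[ξ1] = θ, so the expectation of the singleton-free count is θ(a_n - 1). A sketch follows; whether this matches the cited estimators exactly is an assumption, and the site counts are hypothetical.

```python
import numpy as np

def theta_w(n_seqs, n_segregating):
    """Classical Watterson estimator: S / a_n, with a_n = sum_{i=1}^{n-1} 1/i."""
    return n_segregating / np.sum(1.0 / np.arange(1, n_seqs))

def theta_w_no_singletons(n_seqs, n_segregating, n_singletons):
    """Watterson-type estimate ignoring singletons (candidate sequencing
    errors): divide the singleton-free count by a_n - 1 instead of a_n."""
    a_n = np.sum(1.0 / np.arange(1, n_seqs))
    return (n_segregating - n_singletons) / (a_n - 1.0)

# Hypothetical sample: 10 sequences, 40 segregating sites, 5 of them singletons.
print(theta_w(10, 40), theta_w_no_singletons(10, 40, 5))
```

The cost of discarding singletons is a loss of information (the denominator shrinks from a_n to a_n - 1), which is the trade made to gain robustness against random sequencing errors.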
Dibble, Kimberly L.; Yard, Micheal D.; Ward, David L.; Yackulic, Charles B.
2017-01-01
Bioelectrical impedance analysis (BIA) is a nonlethal tool with which to estimate the physiological condition of animals that has potential value in research on endangered species. However, the effectiveness of BIA varies by species, the methodology continues to be refined, and incidental mortality rates are unknown. Under laboratory conditions we tested the value of using BIA in addition to morphological measurements such as total length and wet mass to estimate proximate composition (lipid, protein, ash, water, dry mass, energy density) in the endangered Humpback Chub Gila cypha and Bonytail G. elegans and the species of concern Roundtail Chub G. robusta and conducted separate trials to estimate the mortality rates of these sensitive species. Although Humpback and Roundtail Chub exhibited no or low mortality in response to taking BIA measurements versus handling for length and wet-mass measurements, Bonytails exhibited 14% and 47% mortality in the BIA and handling experiments, respectively, indicating that survival following stress is species specific. Derived BIA measurements were included in the best models for most proximate components; however, the added value of BIA as a predictor was marginal except in the absence of accurate wet-mass data. Bioelectrical impedance analysis improved the R2 of the best percentage-based models by no more than 4% relative to models based on morphology. Simulated field conditions indicated that BIA models became increasingly better than morphometric models at estimating proximate composition as the observation error around wet-mass measurements increased. However, since the overall proportion of variance explained by percentage-based models was low and BIA was mostly a redundant predictor, we caution against the use of BIA in field applications for these sensitive fish species.
Directory of Open Access Journals (Sweden)
Marion Hoehn
Full Text Available The effective population size (Ne) is proportional to the loss of genetic diversity and the rate of inbreeding, and its accurate estimation is crucial for the monitoring of small populations. Here, we integrate temporal studies of the gecko Oedura reticulata to compare genetic and demographic estimators of Ne. Because geckos have overlapping generations, our goal was to demographically estimate NbI, the inbreeding effective number of breeders, and to calculate the NbI/Na ratio (Na = number of adults) for four populations. Demographically estimated NbI ranged from 1 to 65 individuals. The mean reduction in the effective number of breeders relative to census size (NbI/Na) was 0.1 to 1.1. We identified the variance in reproductive success as the most important variable contributing to reduction of this ratio. We used four methods to estimate the genetic-based inbreeding effective number of breeders NbI(gen) and the variance effective population size NeV(gen) from the genotype data. Two of these methods - a temporal moment-based (MBT) and a likelihood-based approach (TM3) - require at least two samples in time, while the other two were single-sample estimators: the linkage disequilibrium method with bias correction (LDNe) and the program ONeSAMP. The genetic-based estimates were fairly similar across methods and also similar to the demographic estimates, excluding those estimates in which upper confidence interval boundaries were uninformative. For example, LDNe and ONeSAMP estimates ranged from 14-55 and 24-48 individuals, respectively. However, temporal methods suffered from a large variation in confidence intervals and concerns about the prior information. We conclude that the single-sample estimators are an acceptable short-cut to estimate NbI for species such as geckos and will be of great importance for the monitoring of species in fragmented landscapes.
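For readers unfamiliar with the temporal moment-based (MBT) family of estimators mentioned above, a generic textbook version (Nei & Tajima's Fc statistic with Waples' sampling correction, not the study's exact implementation) can be sketched as follows; all names are illustrative:

```python
def temporal_ne(freqs_t0, freqs_t1, s0, s1, generations):
    """Moment-based temporal estimate of effective size, in the spirit
    of Nei & Tajima (1981) / Waples (1989), plan II sampling.

    freqs_t0, freqs_t1: per-locus allele frequencies at the two samples.
    s0, s1: numbers of individuals sampled at each time point.
    generations: generations elapsed between the samples.
    """
    fc_terms = []
    for x, y in zip(freqs_t0, freqs_t1):
        denom = (x + y) / 2.0 - x * y
        if denom > 0:
            fc_terms.append((x - y) ** 2 / denom)
    fc = sum(fc_terms) / len(fc_terms)
    # subtract the sampling contribution before converting drift to Ne
    f_drift = fc - 1.0 / (2 * s0) - 1.0 / (2 * s1)
    return generations / (2.0 * f_drift) if f_drift > 0 else float("inf")
```

When the observed frequency change is no larger than expected from sampling alone, the point estimate is unbounded (returned here as infinity), which is one source of the uninformative upper confidence limits the abstract mentions.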
Fast and Accurate Video PQoS Estimation over Wireless Networks
Directory of Open Access Journals (Sweden)
Emanuele Viterbo
2008-06-01
Full Text Available This paper proposes a curve fitting technique for fast and accurate estimation of the perceived quality of streaming media contents, delivered within a wireless network. The model accounts for the effects of various network parameters such as congestion, radio link power, and video transmission bit rate. The evaluation of the perceived quality of service (PQoS is based on the well-known VQM objective metric, a powerful technique which is highly correlated to the more expensive and time consuming subjective metrics. Currently, PQoS is used only for offline analysis after delivery of the entire video content. Thanks to the proposed simple model, we can estimate in real time the video PQoS and we can rapidly adapt the content transmission through scalable video coding and bit rates in order to offer the best perceived quality to the end users. The designed model has been validated through many different measurements in realistic wireless environments using an ad hoc WiFi test bed.
Do wavelet filters provide more accurate estimates of reverberation times at low frequencies?
DEFF Research Database (Denmark)
Sobreira Seoane, Manuel A.; Pérez Cabo, David; Agerkvist, Finn T.
2016-01-01
It has been amply demonstrated in the literature that it is not possible to measure acoustic decays without significant errors for low BT values (narrow filters and or low reverberation times). Recently, it has been shown how the main source of distortion in the time envelope of the acoustic deca...
McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F
2017-08-01
The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. In addition, to assess the accuracy of each method in estimating
Accurate halo-galaxy mocks from automatic bias estimation and particle mesh gravity solvers
Vakili, Mohammadjavad; Kitaura, Francisco-Shu; Feng, Yu; Yepes, Gustavo; Zhao, Cheng; Chuang, Chia-Hsun; Hahn, ChangHoon
2017-12-01
Reliable extraction of cosmological information from clustering measurements of galaxy surveys requires estimation of the error covariance matrices of observables. The accuracy of covariance matrices is limited by our ability to generate a sufficiently large number of independent mock catalogues that can describe the physics of galaxy clustering across a wide range of scales. Furthermore, galaxy mock catalogues are required to study systematics in galaxy surveys and to test analysis tools. In this investigation, we present a fast and accurate approach for the generation of mock catalogues for upcoming galaxy surveys. Our method relies on low-resolution approximate gravity solvers to simulate the large-scale dark matter field, which we then populate with haloes according to a flexible non-linear and stochastic bias model. In particular, we extend the PATCHY code with an efficient particle mesh algorithm to simulate the dark matter field (the FASTPM code), and with a robust MCMC method relying on the EMCEE code for constraining the parameters of the bias model. Using the haloes in the BigMultiDark high-resolution N-body simulation as a reference catalogue, we demonstrate that our technique can model the bivariate probability distribution function (counts-in-cells), power spectrum and bispectrum of haloes in the reference catalogue. Specifically, we show that the new ingredients permit us to reach percentage accuracy in the power spectrum up to k ∼ 0.4 h Mpc-1 (within 5 per cent up to k ∼ 0.6 h Mpc-1) with accurate bispectra, improving previous results based on Lagrangian perturbation theory.
Zavitsas, Andreas A
2012-08-23
Viscosities of aqueous solutions of many highly soluble hydrophilic solutes with hydroxyl and amino groups are examined, with a focus on improving the concentration range over which Einstein's relationship between solution viscosity and solute volume, V, is applicable accurately. V is the hydrodynamic effective volume of the solute, including any water strongly bound to it and acting as a single entity with it. The widespread practice is to relate the relative viscosity of solution to solvent, η/η0, to V/Vtot, where Vtot is the total volume of the solution. For solutions that are not infinitely dilute, it is shown that the volume ratio must be expressed as V/V0, where V0 = Vtot - V. V0 is the volume of water not bound to the solute, the "free" water solvent. At infinite dilution, V/V0 = V/Vtot. For the solutions examined, the proportionality constant between the relative viscosity and the volume ratio is shown to be 2.9, rather than the 2.5 commonly used. To understand the phenomena relating to viscosity, the hydrodynamic effective volume of water is important. It is estimated to be between 54 and 85 cm³. With the above interpretations of Einstein's equation, which are consistent with his stated reasoning, the relation between viscosity and volume ratio remains accurate to much higher concentrations than those attainable with any of the other relations examined that express the volume ratio as V/Vtot.
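The modified relation can be computed directly, assuming it takes the classical Einstein form η/η0 = 1 + c·(V/V0) with the free-solvent reference volume V0 = Vtot - V and the paper's proportionality constant c = 2.9 (the exact functional form is an assumption of this sketch):

```python
def relative_viscosity(v_solute, v_total, c=2.9):
    """Relative viscosity eta/eta0 from an Einstein-type relation,
    using the free-solvent volume V0 = Vtot - V as the reference
    volume and c = 2.9 as reported here (2.5 is the classical
    infinite-dilution coefficient)."""
    v0 = v_total - v_solute          # volume of water not bound to solute
    return 1.0 + c * (v_solute / v0)
```

At high dilution (e.g. V = 10 in Vtot = 110) the two volume ratios nearly coincide and η/η0 ≈ 1.29; as V grows toward Vtot, V/V0 rises much faster than V/Vtot, which is what extends the valid concentration range.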
How accurately can we estimate energetic costs in a marine top predator, the king penguin?
Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J
2007-01-01
King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate to rate of oxygen consumption (fH-VO2) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present fH-VO2 equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the fH-VO2 technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of VO2 from published field fH data. The major conclusions from the present study are: (1) in contrast to that for walking, the fH-VO2 relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(VO2) = -0.279 + 1.24·log(fH) + 0.0237·t - 0.0157·log(fH)·t, derived in a previous study, is the most suitable equation presently available for estimating VO2 in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an fH-VO2 relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of fH-VO2 prediction equations, is explained.
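Prediction equation (1) quoted in the abstract can be turned into a one-line calculator. Base-10 logarithms and the meaning of the covariate t (and the units of fH and VO2) are assumptions of this sketch; consult the original paper before using it quantitatively:

```python
from math import log10

def predict_vo2(f_h, t):
    """Estimate oxygen consumption from heart rate f_h via the quoted
    prediction equation (1):
        log(VO2) = -0.279 + 1.24*log(fH) + 0.0237*t - 0.0157*log(fH)*t
    Base-10 logs and the interpretation/units of t are assumed here.
    """
    log_vo2 = -0.279 + 1.24 * log10(f_h) + 0.0237 * t - 0.0157 * log10(f_h) * t
    return 10 ** log_vo2
```

Note the interaction term: the slope of log(VO2) against log(fH) is 1.24 - 0.0157·t, so the covariate t modulates how steeply VO2 rises with heart rate.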
Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle
Timinis, Constantinos; Pitris, Costas
2016-03-01
The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study, the age was predicted with a mean error of ~ 1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology has resulted in prediction of the sample age far more accurate than any report in the literature.
Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul
2015-01-01
In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
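The mark-re-sight (transect) coverage estimate described above boils down to the proportion of dogs sighted that carry a collar. A simplified sketch with a normal-approximation confidence interval; the study's own estimator may differ in detail, and the function name is illustrative:

```python
from math import sqrt

def resight_coverage(marked_seen, total_seen, z=1.96):
    """Vaccination coverage from a mark-re-sight transect survey:
    the proportion of sighted dogs wearing a collar, with a
    normal-approximation 95% confidence interval (a simplified
    stand-in for the survey estimator discussed in the abstract)."""
    p = marked_seen / total_seen
    se = sqrt(p * (1 - p) / total_seen)
    return p, (max(0.0, p - z * se), min(1.0, p + z * se))
```

For instance, sighting 60 collared dogs out of 100 gives a point estimate of 60% coverage, below the 70% threshold cited for rabies elimination, and the interval shows whether the shortfall could plausibly be sampling noise.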
Weiss, N; Venot, M; Verdonk, F; Chardon, A; Le Guennec, L; Llerena, M C; Raimbourg, Q; Taldir, G; Luque, Y; Fagon, J-Y; Guerot, E; Diehl, J-L
2015-05-01
The accurate prediction of outcome after out-of-hospital cardiac arrest (OHCA) is of major importance. The recently described Full Outline of UnResponsiveness (FOUR) score is well adapted to mechanically ventilated patients and does not depend on verbal response. Our aim was to evaluate the ability of the FOUR score, assessed by intensivists, to accurately predict outcome in OHCA. We prospectively identified patients admitted for OHCA with a Glasgow Coma Scale below 8. Neurological assessment was performed daily. Outcome was evaluated at 6 months using the Glasgow-Pittsburgh Cerebral Performance Categories (GP-CPC). Eighty-five patients were included. At 6 months, 19 patients (22%) had a favorable outcome, GP-CPC 1-2, and 66 (78%) had an unfavorable outcome, GP-CPC 3-5. Compared to both brainstem responses at day 3 and the evolution of the Glasgow Coma Scale, the evolution of the FOUR score over the first three days was able to predict unfavorable outcome more precisely. Thus, absence of improvement or worsening from day 1 to day 3 of the FOUR score had 0.88 (0.79-0.97) specificity, 0.71 (0.66-0.76) sensitivity, 0.94 (0.84-1.00) PPV and 0.54 (0.49-0.59) NPV to predict unfavorable outcome. Similarly, a brainstem response component of the FOUR score of 0 at day 3 had 0.94 (0.89-0.99) specificity, 0.60 (0.50-0.70) sensitivity, 0.96 (0.92-1.00) PPV and 0.47 (0.37-0.57) NPV to predict unfavorable outcome. The absence of improvement or worsening from day 1 to day 3 of the FOUR score evaluated by intensivists provides an accurate prognosis of poor neurological outcome in OHCA. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
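The four performance figures reported above (specificity, sensitivity, PPV, NPV) all derive from a 2x2 confusion matrix. A generic helper, using made-up counts rather than the study's data:

```python
def prognostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV for a binary predictor of
    unfavorable outcome (generic computation; counts in any example
    are illustrative, not taken from the study)."""
    return {
        "sensitivity": tp / (tp + fn),   # detected among unfavorable outcomes
        "specificity": tn / (tn + fp),   # cleared among favorable outcomes
        "ppv": tp / (tp + fp),           # predictions of unfavorable that were right
        "npv": tn / (tn + fn),           # predictions of favorable that were right
    }
```

The asymmetry in the reported values (high PPV, modest NPV) reflects the 78% base rate of unfavorable outcome: with a common outcome, a positive prediction is easy to get right, while a reassuring negative prediction is not.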
Frazee, Richard C; Matejicka, Anthony V; Abernathy, Stephen W; Davis, Matthew; Isbell, Travis S; Regner, Justin L; Smith, Randall W; Jupiter, Daniel C; Papaconstantinou, Harry T
2015-04-01
Case mix index (CMI) is calculated to determine the relative value assigned to a Diagnosis-Related Group. Accurate documentation of patient complications and comorbidities and major complications and comorbidities changes CMI and can affect hospital reimbursement and future pay for performance metrics. Starting in 2010, a physician panel concurrently reviewed the documentation of the trauma/acute care surgeons. Clarifications of the Centers for Medicare and Medicaid Services term-specific documentation were made by the panel, and the surgeon could incorporate or decline the clinical queries. A retrospective review of trauma/acute care inpatients was performed. The mean severity of illness, risk of mortality, and CMI from 2009 were compared with the 3 subsequent years. Mean length of stay and mean Injury Severity Score by year were listed as measures of patient acuity. Statistical analysis was performed using ANOVA and t-test, with p reimbursement and more accurately stratify outcomes measures for care providers. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Lin, Y; Rajan, V; Moret, B M E
2011-09-01
The rapid accumulation of whole-genome data has renewed interest in the study of genomic rearrangements. Comparative genomics, evolutionary biology, and cancer research all require models and algorithms to elucidate the mechanisms, history, and consequences of these rearrangements. However, even simple models lead to NP-hard problems, particularly in the area of phylogenetic analysis. Current approaches are limited to small collections of genomes and low-resolution data (typically a few hundred syntenic blocks). Moreover, whereas phylogenetic analyses from sequence data are deemed incomplete unless bootstrapping scores (a measure of confidence) are given for each tree edge, no equivalent to bootstrapping exists for rearrangement-based phylogenetic analysis. We describe a fast and accurate algorithm for rearrangement analysis that scales up, in both time and accuracy, to modern high-resolution genomic data. We also describe a novel approach to estimate the robustness of results-an equivalent to the bootstrapping analysis used in sequence-based phylogenetic reconstruction. We present the results of extensive testing on both simulated and real data showing that our algorithm returns very accurate results, while scaling linearly with the size of the genomes and cubically with their number. We also present extensive experimental results showing that our approach to robustness testing provides excellent estimates of confidence, which, moreover, can be tuned to trade off thresholds between false positives and false negatives. Together, these two novel approaches enable us to attack heretofore intractable problems, such as phylogenetic inference for high-resolution vertebrate genomes, as we demonstrate on a set of six vertebrate genomes with 8,380 syntenic blocks. A copy of the software is available on demand.
Estimating the Cost of Providing Foundational Public Health Services.
Mamaril, Cezar Brian C; Mays, Glen P; Branham, Douglas Keith; Bekemeier, Betty; Marlowe, Justin; Timsina, Lava
2017-12-28
To estimate the cost of resources required to implement a set of Foundational Public Health Services (FPHS) as recommended by the Institute of Medicine. A stochastic simulation model was used to generate probability distributions of input and output costs across 11 FPHS domains. We used an implementation attainment scale to estimate costs of fully implementing FPHS. We use data collected from a diverse cohort of 19 public health agencies located in three states that implemented the FPHS cost estimation methodology in their agencies during 2014-2015. The average agency incurred costs of $48 per capita implementing FPHS at their current attainment levels with a coefficient of variation (CV) of 16 percent. Achieving full FPHS implementation would require $82 per capita (CV=19 percent), indicating an estimated resource gap of $34 per capita. Substantial variation in costs exists across communities in resources currently devoted to implementing FPHS, with even larger variation in resources needed for full attainment. Reducing geographic inequities in FPHS may require novel financing mechanisms and delivery models that allow health agencies to have robust roles within the health system and realize a minimum package of public health services for the nation. © Health Research and Educational Trust.
An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.
2017-01-01
The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse in space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
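The "standard nudging" baseline that the abstract generalizes simply relaxes the model state toward each incoming observation. A toy stand-in for the shallow-water setting, on the Lorenz-63 system with only the x component observed (gain, step size and initial states are illustrative choices):

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 equations."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def nudge_forecast(obs_x, dt=0.005, k=20.0, s0=(1.0, -1.0, 20.0)):
    """Standard nudging: integrate the model while relaxing the
    observed x component toward the measurements with gain k.
    The unobserved y and z are recovered through the dynamics."""
    s = np.array(s0, dtype=float)
    for x_obs in obs_x:
        ds = lorenz_rhs(s)
        ds[0] += k * (x_obs - s[0])      # nudging term on the observed variable
        s = s + dt * ds
    return s

# truth run supplying the observations (Euler steps matching the model)
truth = np.array([8.0, 9.0, 25.0])
xs = []
for _ in range(5000):
    xs.append(truth[0])
    truth = truth + 0.005 * lorenz_rhs(truth)

est = nudge_forecast(xs)
```

Here one of three state variables is observed, and the estimate still locks onto the truth; the time-delay generalization discussed above aims to push that observed fraction lower still.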
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
Shuai, Wei; Wang, Xi-Xing; Hong, Kui; Peng, Qiang; Li, Ju-Xiang; Li, Ping; Chen, Jing; Cheng, Xiao-Shu; Su, Hai
2016-07-15
At present, the estimation of rest heart rate (HR) in atrial fibrillation (AF) is obtained by apical auscultation for 1 min or on the surface electrocardiogram (ECG) by multiplying the number of RR intervals on a 10-second recording by six. But the reasonability of the 10-second ECG recording is controversial. ECG was continuously recorded at rest for 60 s to calculate the real rest HR (HR60s). Meanwhile, the first 10 s and 30 s ECG recordings were used for calculating HR10s (sixfold) and HR30s (twofold). The differences of HR10s or HR30s from HR60s were compared. The patients were divided into three sub-groups on the HR60s 100 bpm. No significant difference among the mean HR10s, HR30s and HR60s was found. A positive correlation existed between HR10s and HR60s, and between HR30s and HR60s. Bland-Altman plots showed that the 95% reference limits were as high as -11.0 to 16.0 bpm for HR10s, but for HR30s these values were only -4.5 to 5.2 bpm. Among the three subgroups with HR60s 100 bpm, the 95% reference limits with HR60s were -8.9 to 10.6, -10.5 to 14.0 and -11.3 to 21.7 bpm for HR10s, but these values were -3.9 to 4.3, -4.1 to 4.6 and -5.3 to 6.7 bpm for HR30s. As a 10 s ECG recording could not provide clinically acceptable estimation of HR, the ECG should be recorded for at least 30 s in patients with AF. It is better to record the ECG for 60 s when the HR is rapid. Copyright © 2016. Published by Elsevier Ireland Ltd.
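The three window-based HR estimates compared above can be reproduced from a list of RR intervals: count the R waves falling inside the first 10 s, 30 s and 60 s of the strip and scale each count to beats per minute. A small sketch with an illustrative helper name:

```python
def hr_estimates(rr_intervals_s):
    """HR estimates for an irregular (AF) rhythm from RR intervals in
    seconds: count beats within the first 10 s, 30 s and 60 s of a
    60 s strip and scale to bpm (sixfold, twofold, onefold)."""
    beats = []
    t = 0.0
    for rr in rr_intervals_s:
        t += rr
        beats.append(t)                      # cumulative beat times

    def count_within(window):
        return sum(1 for b in beats if b <= window)

    return {
        "HR10s": 6 * count_within(10.0),
        "HR30s": 2 * count_within(30.0),
        "HR60s": count_within(60.0),
    }
```

With a perfectly regular rhythm all three agree; the wide Bland-Altman limits for HR10s arise because, in AF, a short window samples too few of the irregular RR intervals.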
Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging
DEFF Research Database (Denmark)
Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer
2016-01-01
This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using...
Fast and accurate estimation of the covariance between pairwise maximum likelihood distances
Directory of Open Access Journals (Sweden)
Manuel Gil
2014-09-01
Full Text Available Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.
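The covariance being estimated can be seen empirically with a brute-force bootstrap over alignment columns; this is a slow baseline for intuition, not the fast path-length estimator the paper introduces, and it uses Jukes-Cantor rather than a general Markov model:

```python
import random
from math import log

def jc_distance(a, b):
    """Jukes-Cantor distance between two aligned sequences."""
    p = sum(x != y for x, y in zip(a, b)) / len(a)
    return -0.75 * log(1.0 - 4.0 * p / 3.0)

def bootstrap_cov(seq1, seq2, seq3, reps=200, rng=random.Random(1)):
    """Empirical covariance of d(seq1, seq2) and d(seq1, seq3) by
    resampling alignment columns; mutations on the path segment shared
    by the two leaf-to-leaf paths make the estimates covary."""
    n = len(seq1)
    d12, d13 = [], []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        s1 = "".join(seq1[i] for i in idx)
        s2 = "".join(seq2[i] for i in idx)
        s3 = "".join(seq3[i] for i in idx)
        d12.append(jc_distance(s1, s2))
        d13.append(jc_distance(s1, s3))
    m12 = sum(d12) / reps
    m13 = sum(d13) / reps
    return sum((x - m12) * (y - m13) for x, y in zip(d12, d13)) / (reps - 1)
```

When the two distances share mutation events, the bootstrap covariance is positive; the paper's contribution is obtaining this quantity analytically from path lengths instead of by resampling.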
An accurate estimation and optimization of bottom hole back pressure in managed pressure drilling
Directory of Open Access Journals (Sweden)
Boniface Aleruchi ORIJI
2017-06-01
Full Text Available Managed Pressure Drilling (MPD) utilizes a method of applying back pressure to compensate for wellbore pressure losses during drilling. Using a single rheological (Annular Frictional Pressure Losses, AFPL) model to estimate the backpressure in MPD operations for all sections of the well may not yield the best result. Each section of the hole was therefore treated independently in this study, as data from a case-study well were used. As the backpressure is a function of hydrostatic pressure, pore pressure and AFPL, three AFPL models (Bingham plastic, power law and Herschel-Bulkley) were utilized in estimating the backpressure. The estimated backpressure values were compared to the actual field backpressure values in order to obtain the optimum backpressure at the various well depths. The backpressure values estimated by utilizing the power law AFPL model gave the best result for the 12 1/4" hole section (average error of 1.855%), while the backpressures estimated by utilizing the Herschel-Bulkley AFPL model gave the best result for the 8 1/2" hole section (average error of 12.3%). The study showed that for hole sections with turbulent annular flow, the power law AFPL model fits best for estimating the required backpressure, while for hole sections with laminar annular flow, the Herschel-Bulkley AFPL model fits best.
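The pressure balance underlying the abstract is that circulating bottom-hole pressure equals hydrostatic pressure plus AFPL plus applied surface backpressure. A minimal sketch in oilfield units (psi, ppg, ft, with the standard 0.052 conversion factor); the AFPL input would come from whichever rheological model fits the hole section, and the function name is illustrative:

```python
def required_backpressure(target_bhp, mud_density, tvd, afpl):
    """Surface back pressure needed so that the circulating bottom-hole
    pressure meets the target:
        BHP = hydrostatic + AFPL + backpressure
    Hydrostatic (psi) from mud density (ppg) and true vertical depth
    (ft) via the standard 0.052 factor; afpl (psi) is supplied by a
    rheological model (Bingham plastic, power law or Herschel-Bulkley).
    """
    hydrostatic = 0.052 * mud_density * tvd
    return max(0.0, target_bhp - hydrostatic - afpl)
```

This makes the abstract's point concrete: the only model-dependent input is the AFPL term, so choosing the wrong rheological model for a hole section's flow regime directly miscalibrates the applied backpressure.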
Yang, Yanfu; Xiang, Qian; Zhang, Qun; Zhou, Zhongqing; Jiang, Wen; He, Qianwen; Yao, Yong
2017-09-01
We propose a joint estimation scheme for fast, accurate, and robust frequency offset (FO) estimation along with phase estimation, based on a modified adaptive Kalman filter (MAKF). The scheme consists of three key modules: an extended Kalman filter (EKF), a lock detector, and FO cycle slip recovery. The EKF module estimates the time-varying phase induced by both FO and laser phase noise. The lock detector module decides between acquisition mode and tracking mode and sets the EKF tuning parameter accordingly, in an adaptive manner. The third module detects possible cycle slips in the case of large FO and makes the proper correction. Simulation and experimental results show that the proposed MAKF achieves excellent estimation performance, featuring high accuracy and fast convergence as well as the capability of cycle slip recovery.
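The phase-tracking core of such a scheme can be sketched with a scalar Kalman filter on the wrapped phase error; the random-walk state model and the tuning values `q` and `r` are illustrative assumptions, not the MAKF design itself.

```python
import numpy as np

def track_phase(samples, q=1e-4, r=1e-2):
    """Scalar Kalman tracker for the slowly varying phase of samples ~ exp(j*theta_k).
    The innovation is the wrapped phase error, i.e. the EKF-style linearization."""
    theta, p = 0.0, 1.0
    estimates = np.empty(len(samples))
    for k, z in enumerate(samples):
        p += q                                       # predict: random-walk phase model
        innovation = np.angle(z * np.exp(-1j * theta))  # wrapped phase error
        gain = p / (p + r)
        theta += gain * innovation                   # update
        p *= 1.0 - gain
        estimates[k] = theta
    return estimates
```

A constant frequency offset appears as a linear phase ramp, which the tracker follows with a small steady-state lag; the lock-detector idea in the abstract amounts to raising `q` (larger gain) during acquisition and lowering it during tracking.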
Gender differences in pension wealth: estimates using provider data.
Johnson, R W; Sambamoorthi, U; Crystal, S
1999-06-01
Information from pension providers was examined to investigate gender differences in pension wealth at midlife. For full-time wage and salary workers approaching retirement age who had pension coverage, median pension wealth on the current job was 76% greater for men than women. Differences in wages, years of job tenure, and industry between men and women accounted for most of the gender gap in pension wealth on the current job. Less than one third of the wealth difference could not be explained by gender differences in education, demographics, or job characteristics. The less-advantaged employment situation of working women currently in midlife carries over into worse retirement income prospects. However, the gender gap in pensions is likely to narrow in the future as married women's employment experiences increasingly resemble those of men.
Directory of Open Access Journals (Sweden)
Rahmann Sven
2004-06-01
Background: In phylogenetic analysis we face the problem that several subclade topologies are known or easily inferred and well supported by bootstrap analysis, but basal branching patterns cannot be unambiguously estimated by the usual methods (maximum parsimony (MP), neighbor-joining (NJ), or maximum likelihood (ML)), nor are they well supported. We represent each subclade by a sequence profile and estimate evolutionary distances between profiles to obtain a matrix of distances between subclades. Results: Our estimator of profile distances generalizes the maximum likelihood estimator of sequence distances. The basal branching pattern can then be estimated by any distance-based method, such as neighbor-joining. Our method (profile neighbor-joining, PNJ) thus inherits the accuracy and robustness of profiles and the time efficiency of neighbor-joining. Conclusions: Phylogenetic analysis of Chlorophyceae with traditional methods (MP, NJ, ML, and MrBayes) reveals seven well-supported subclades, but the methods disagree on the basal branching pattern. The tree reconstructed by our method is better supported and can be confirmed by known morphological characters. Moreover, the accuracy is significantly improved, as shown by parametric bootstrap.
DEFF Research Database (Denmark)
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre...
How to efficiently obtain accurate estimates of flower visitation rates by pollinators
Fijen, Thijs P.M.; Kleijn, David
2017-01-01
Regional declines in insect pollinators have raised concerns about crop pollination. Many pollinator studies use visitation rate (pollinators/time) as a proxy for the quality of crop pollination. Visitation rate estimates are based on observation durations that vary significantly between studies.
Accurate estimation of influenza epidemics using Google search data via ARGO.
Yang, Shihao; Santillana, Mauricio; Kou, S C
2015-11-24
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
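The modeling idea, an autoregression on past incidence augmented with contemporaneous search-volume regressors, can be sketched as below; the ridge penalty, the three lags, and the synthetic data are stand-ins for ARGO's actual regularized specification.

```python
import numpy as np

def fit_argo(y, X, p=3, lam=0.1):
    """Ridge-regularized autoregression of y on its own p lags plus exogenous
    search-volume features X[t] (ARGO itself uses L1 regularization; ridge keeps
    this sketch dependency-free)."""
    rows, targets = [], []
    for t in range(p, len(y)):
        rows.append(np.concatenate([y[t - p:t][::-1], X[t]]))  # lags newest-first + regressors
        targets.append(y[t])
    A = np.hstack([np.ones((len(rows), 1)), np.asarray(rows)])  # intercept column
    b = np.asarray(targets)
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def predict_next(w, y_history, x_next, p=3):
    # Nowcast the next value from recent incidence plus current search volumes
    feats = np.concatenate([[1.0], y_history[-p:][::-1], x_next])
    return feats @ w
```

Refitting the weights on a sliding window at each time step would mimic the self-correcting behavior described in the abstract.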
A Time-Independent Born-Oppenheimer Approximation with Exponentially Accurate Error Estimates
Hagedorn, G A
2004-01-01
We consider a simple molecular-type quantum system in which the nuclei have one degree of freedom and the electrons have two levels. The Hamiltonian has the form \[ H(\epsilon) = -\frac{\epsilon^4}{2}\,\frac{\partial^2}{\partial y^2} + h(y), \] where $h(y)$ is a $2\times 2$ real symmetric matrix. Near a local minimum of an electron level ${\cal E}(y)$ that is not at a level crossing, we construct quasimodes that are exponentially accurate in the square of the Born-Oppenheimer parameter $\epsilon$ by optimal truncation of the Rayleigh-Schr\"odinger series. That is, we construct $E_\epsilon$ and $\Psi_\epsilon$ such that $\|\Psi_\epsilon\| = O(1)$ and \[ \|\,(H(\epsilon) - E_\epsilon)\,\Psi_\epsilon\,\| \;\le\; \Lambda\, e^{-\Gamma/\epsilon^2} \] for some $\Gamma > 0$.
Accurate estimation of dose distributions inside an eye irradiated with 106Ru plaques
Energy Technology Data Exchange (ETDEWEB)
Brualla, L.; Sauerwein, W. [Universitaetsklinikum Essen (Germany). NCTeam, Strahlenklinik; Sempau, J.; Zaragoza, F.J. [Universitat Politecnica de Catalunya, Barcelona (Spain). Inst. de Tecniques Energetiques; Wittig, A. [Marburg Univ. (Germany). Klinik fuer Strahlentherapie und Radioonkologie
2013-01-15
Background: Irradiation of intraocular tumors requires dedicated techniques, such as brachytherapy with 106Ru plaques. The currently available treatment planning system relies on the assumption that the eye is a homogeneous water sphere and on simplified radiation transport physics. However, accurate dose distributions and their assessment demand better models for both the eye and the physics. Methods: The Monte Carlo code PENELOPE, conveniently adapted to simulate the beta decay of 106Ru over 106Rh into 106Pd, was used to simulate radiation transport based on a computerized tomography scan of a patient's eye. A detailed geometrical description of two plaques (models CCA and CCB) from the manufacturer BEBIG was embedded in the computerized tomography scan. Results: The simulations were first validated by comparison with experimental results in a water phantom. Dose maps were computed for three plaque locations on the eyeball. From these maps, isodose curves and cumulative dose-volume histograms in the eye and for the structures at risk were assessed. For example, it was observed that a 4-mm anterior displacement with respect to a posterior placement of a CCA plaque for treating a posterior tumor would reduce from 40 to 0% the volume of the optic disc receiving more than 80 Gy. Such a small difference in anatomical position leads to a change in the dose that is crucial for side effects, especially with respect to visual acuity. The radiation oncologist has to bring these large changes in absorbed dose in the structures at risk to the attention of the surgeon, especially when the plaque has to be positioned close to relevant tissues. Conclusion: The detailed geometry of an eye plaque in computerized and segmented tomography of a realistic patient phantom was simulated accurately. Dose-volume histograms for relevant anatomical structures of the eye and the orbit were obtained with unprecedented accuracy. This represents an important step
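The dose-volume histogram readout used in the Results can be sketched generically; the dose map and structure mask below are synthetic stand-ins, not PENELOPE output.

```python
import numpy as np

def cumulative_dvh(dose, mask, bins=100):
    """Cumulative dose-volume histogram: fraction of the structure's volume
    receiving at least each dose level. `dose` is a 3-D dose map (Gy),
    `mask` a boolean array selecting the structure (e.g. the optic disc)."""
    d = dose[mask]
    levels = np.linspace(0.0, d.max(), bins)
    volume_frac = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_frac
```

The 80 Gy figure quoted above corresponds to evaluating `(dose[mask] >= 80).mean()` for the optic-disc mask.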
CUFID-query: accurate network querying through random walk based network flow estimation.
Jeong, Hyundoo; Qian, Xiaoning; Yoon, Byung-Jun
2017-12-28
Functional modules in biological networks consist of numerous biomolecules and their complicated interactions. Recent studies have shown that biomolecules in a functional module tend to have similar interaction patterns and that such modules are often conserved across biological networks of different species. As a result, such conserved functional modules can be identified through comparative analysis of biological networks. In this work, we propose a novel network querying algorithm based on the CUFID (Comparative network analysis Using the steady-state network Flow to IDentify orthologous proteins) framework combined with an efficient seed-and-extension approach. The proposed algorithm, CUFID-query, can accurately detect conserved functional modules as small subnetworks in the target network that are expected to perform similar functions to the given query functional module. The CUFID framework was recently developed for probabilistic pairwise global comparison of biological networks, and it has been applied to pairwise global network alignment, where the framework was shown to yield accurate network alignment results. In the proposed CUFID-query algorithm, we adopt the CUFID framework and extend it for local network alignment, specifically to solve network querying problems. First, in the seed selection phase, the proposed method utilizes the CUFID framework to compare the query and the target networks and to predict the probabilistic node-to-node correspondence between the networks. Next, the algorithm selects and greedily extends the seed in the target network by iteratively adding nodes that have frequent interactions with other nodes in the seed network, in a way that the conductance of the extended network is maximally reduced. Finally, CUFID-query removes irrelevant nodes from the querying results based on the personalized PageRank vector for the induced network that includes the fully extended network and its neighboring nodes. Through extensive
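The greedy, conductance-guided extension step can be illustrated with a generic routine that grows a seed while the conductance keeps dropping; this is a simplified stand-in for CUFID-query's actual criterion and omits the network-flow seeding and PageRank pruning stages.

```python
import numpy as np

def conductance(adj, nodes):
    """phi(S) = cut(S, rest) / min(vol(S), vol(rest)) for a symmetric adjacency matrix."""
    mask = np.zeros(len(adj), dtype=bool)
    mask[np.asarray(sorted(nodes))] = True
    cut = adj[mask][:, ~mask].sum()
    denom = min(adj[mask].sum(), adj[~mask].sum())  # volumes = summed degrees
    return cut / denom if denom else 0.0

def greedy_extend(adj, seed, max_size):
    """Greedily add the frontier node that lowers conductance most; stop when no
    addition improves it."""
    current = set(seed)
    while len(current) < max_size:
        mask = np.zeros(len(adj), dtype=bool)
        mask[list(current)] = True
        frontier = set(np.nonzero(adj[mask].sum(axis=0) > 0)[0]) - current
        if not frontier:
            break
        best = min(frontier, key=lambda v: conductance(adj, current | {v}))
        if conductance(adj, current | {best}) >= conductance(adj, current):
            break
        current.add(best)
    return current
```

On a graph of two 4-cliques joined by a single edge, a two-node seed in one clique extends to exactly that clique, since absorbing the bridge node would raise the conductance again.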
Directory of Open Access Journals (Sweden)
Simon Fullerton
2014-01-01
Background. Measures of screen time are often used to assess sedentary behaviour. Participation in activity-based video games (exergames) can contribute to estimates of screen time, yet current measurement practices do not consider the growing evidence that playing exergames can provide light to moderate levels of physical activity. This study aimed to determine what proportion of time spent playing video games was actually spent playing exergames. Methods. Data were collected via a cross-sectional telephone survey in South Australia. Participants aged 18 years and above (n=2026) were asked about their video game habits, as well as demographic and socioeconomic factors. In households with children, the video game habits of a randomly selected child were also questioned. Results. Overall, 31.3% of adults and 79.9% of children spend at least some time playing video games. Of these, 24.1% of adults and 42.1% of children play exergames, with these games accounting for a third of all time that adults spend playing video games and nearly 20% of children's video game time. Conclusions. A substantial proportion of time that would usually be classified as "sedentary" may actually be spent participating in light to moderate physical activity.
Lumme, E.; Pomoell, J.; Kilpua, E. K. J.
2017-12-01
Estimates of the photospheric magnetic, electric, and plasma velocity fields are essential for studying the dynamics of the solar atmosphere, for example through the derived quantities of Poynting and relative helicity flux, and for using the fields to obtain the lower boundary condition for data-driven coronal simulations. In this paper we study the performance of a data processing and electric field inversion approach that requires only high-resolution and high-cadence line-of-sight or vector magnetograms, which we obtain from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). The approach does not require any photospheric velocity estimates, and the missing velocity information is compensated for using ad hoc assumptions. We show that the free parameters of these assumptions can be optimized to reproduce the time evolution of the total magnetic energy injection through the photosphere in NOAA AR 11158, when compared to recent state-of-the-art estimates for this active region. However, we find that the relative magnetic helicity injection is reproduced poorly, reaching at best a modest underestimation. We also discuss the effect of some of the data processing details on the results, including the masking of noise-dominated pixels and the tracking method of the active region, neither of which has received much attention in the literature so far. In most cases the effect of these details is small, but when the optimization of the free parameters of the ad hoc assumptions is considered, a consistent use of the noise mask is required. The results found in this paper imply that the data processing and electric field inversion approach that uses only the photospheric magnetic field information offers a flexible and straightforward way to obtain photospheric magnetic and electric field estimates suitable for practical applications such as coronal modeling studies.
Accurate estimation of camera shot noise in the real-time
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.
2017-10-01
Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology, and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise includes the random component, while spatial noise includes the pattern component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. The most widely used methods for measuring camera noise characteristics follow standards such as EMVA Standard 1288, which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measuring the temporal noise of photo- and video cameras based on the automatic segmentation of nonuniform targets (ASNT); only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noises of cameras consistently in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10-bit ADC), and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds only. Also the
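The two-frame idea can be sketched in a photon-transfer style; this generic routine is an assumption-laden stand-in (uniform binning by mean signal, an arbitrary minimum pixel count) rather than the ASNT segmentation itself.

```python
import numpy as np

def temporal_noise_from_two_frames(f1, f2, n_bins=40, min_pixels=2000):
    """Signal-dependent temporal noise from two frames of the same static scene:
    within groups of pixels of similar mean signal, the variance of the frame
    difference equals twice the temporal noise variance."""
    mean = 0.5 * (f1.astype(float) + f2.astype(float))
    diff = f1.astype(float) - f2.astype(float)
    edges = np.linspace(mean.min(), mean.max(), n_bins + 1)
    signal, variance = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (mean >= lo) & (mean < hi)
        if sel.sum() >= min_pixels:   # arbitrary minimum for a stable variance estimate
            signal.append(mean[sel].mean())
            variance.append(diff[sel].var() / 2.0)  # differencing doubles the variance
    return np.asarray(signal), np.asarray(variance)
```

For a shot-noise-limited sensor the returned variance grows linearly with the signal, which is the Poisson behavior reported in the abstract.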
Plant DNA barcodes can accurately estimate species richness in poorly known floras.
Costion, Craig; Ford, Andrew; Cross, Hugh; Crayn, Darren; Harrington, Mark; Lowe, Andrew
2011-01-01
Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies, but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation observed in some angiosperm families, occurring as an inversion that obscures the monophyly of species. We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways.
A practical way to estimate retail tobacco sales violation rates more accurately.
Levinson, Arnold H; Patnaik, Jennifer L
2013-11-01
U.S. states annually estimate retailer propensity to sell adolescents cigarettes, which is a violation of law, by staging a single purchase attempt among a random sample of tobacco businesses. The accuracy of single-visit estimates is unknown. We examined this question using a novel test-retest protocol. Supervised minors attempted to purchase cigarettes at all retail tobacco businesses located in 3 Colorado counties. The attempts followed federal standards: minors were aged 15-16 years, were nonsmokers, and were free of visible tattoos and piercings; they were allowed to enter stores alone or in pairs, to purchase a small item while asking for cigarettes, and to show or not show genuine identification (ID, e.g., a driver's license). Unlike the federal standard, stores received a second purchase attempt within a few days unless the minors were firmly told not to return. Separate violation rates were calculated for first visits, second visits, and either visit. Eleven minors attempted to purchase cigarettes 1,079 times from 671 retail businesses. One sixth of first visits (16.8%) resulted in a violation; the rate was similar for second visits (15.7%). Considering either visit, 25.3% of businesses failed the test. Factors predictive of violation were whether clerks asked for ID, whether clerks closely examined IDs, and whether minors included snacks or soft drinks in cigarette purchase attempts. A test-retest protocol for estimating underage cigarette sales detected half again as many businesses in violation as the federally approved one-test protocol. Federal policy makers should consider using the test-retest protocol to increase accuracy and awareness of widespread adolescent access to cigarettes through retail businesses.
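As a quick consistency check on these figures: if first and second visits failed independently at the observed single-visit rates, the either-visit rate would be about 29.9%, so the observed 25.3% suggests that violations cluster within particular stores.

```python
p1, p2 = 0.168, 0.157               # observed first- and second-visit violation rates
either_if_independent = 1 - (1 - p1) * (1 - p2)
print(round(either_if_independent, 3))   # 0.299, vs. the observed 25.3%
```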
Tsujimura, Kazuma; Ota, Morihito; Chinen, Kiyoshi; Adachi, Takayuki; Nagayama, Kiyomitsu; Oroku, Masato; Nishihira, Morikuni; Shiohira, Yoshiki; Iseki, Kunitoshi; Ishida, Hideki; Tanabe, Kazunari
2017-06-23
BACKGROUND: Precise evaluation of a living donor's renal function is necessary to ensure adequate residual kidney function after donor nephrectomy. Our aim was to evaluate the feasibility of estimating the glomerular filtration rate (GFR) using serum cystatin C prior to kidney transplantation. MATERIAL AND METHODS: Using the equations of the Japanese Society of Nephrology, we calculated the GFR from serum creatinine (eGFRcre) and cystatin C levels (eGFRcys) for 83 living kidney donors evaluated between March 2010 and March 2016. We compared eGFRcys and eGFRcre values against the creatinine clearance rate (CCr). RESULTS: The study population included 27 males and 56 females. The mean eGFRcys, eGFRcre, and CCr were 91.4±16.3 mL/min/1.73 m² (range, 59.9-128.9), 81.5±14.2 mL/min/1.73 m² (range, 55.4-117.5), and 108.4±21.6 mL/min/1.73 m² (range, 63.7-168.7), respectively. eGFRcys was significantly lower than CCr (p<0.001). The correlation coefficient between eGFRcys and CCr values was 0.466, and the mean difference between the two values was -17.0 (15.7%), with a root mean square error of 19.2. Similarly, eGFRcre was significantly lower than CCr (p<0.001). The correlation coefficient between eGFRcre and CCr values was 0.445, and the mean difference between the two values was -26.9 (24.8%), with a root mean square error of 19.5. CONCLUSIONS: Although eGFRcys provided a better estimation of GFR than eGFRcre, it still did not provide an accurate measure of kidney function in Japanese living kidney donors.
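The Japanese Society of Nephrology estimating equations referred to above take the following commonly published forms (coefficients reproduced here from memory and worth verifying against the original guideline; serum creatinine in mg/dL, cystatin C in mg/L, output in mL/min/1.73 m²):

```python
def egfr_cre(scr, age, female):
    """JSN creatinine-based eGFR; 0.739 is the commonly published female factor."""
    e = 194.0 * scr ** -1.094 * age ** -0.287
    return e * 0.739 if female else e

def egfr_cys(cys, age, female):
    """JSN cystatin C-based eGFR; 0.929 is the commonly published female factor."""
    e = 104.0 * cys ** -1.019 * 0.996 ** age
    if female:
        e *= 0.929
    return e - 8.0
```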
Accurate estimate of the relic density and the kinetic decoupling in nonthermal dark matter models
International Nuclear Information System (INIS)
Arcadi, Giorgio; Ullio, Piero
2011-01-01
Nonthermal dark matter generation is an appealing alternative to the standard paradigm of thermal WIMP dark matter. We reconsider nonthermal production mechanisms in a systematic way, and develop a numerical code for accurate computations of the dark matter relic density. We discuss, in particular, scenarios with long-lived massive states decaying into dark matter particles, appearing naturally in several beyond the standard model theories, such as supergravity and superstring frameworks. Since nonthermal production favors dark matter candidates with large pair annihilation rates, we analyze the possible connection with the anomalies detected in the lepton cosmic-ray flux by Pamela and Fermi. Concentrating on supersymmetric models, we consider the effect of these nonstandard cosmologies on selecting a preferred mass scale for the lightest supersymmetric particle as a dark matter candidate, and the consequent impact on the interpretation of new physics discovered or excluded at the LHC. Finally, we examine a rather predictive model, the G2-MSSM, investigating some of the standard assumptions usually implemented in the solution of the Boltzmann equation for the dark matter component, including coannihilations. We question the hypothesis that kinetic equilibrium holds along the whole phase of dark matter generation, and the validity of the factorization usually implemented to rewrite the system of coupled Boltzmann equations for each coannihilating species as a single equation for the sum of all the number densities. As a byproduct, we develop a formalism to compute the kinetic decoupling temperature in the case of coannihilating particles, which can also be applied to other particle physics frameworks, and to standard thermal relics within a standard cosmology.
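For orientation, the standard thermal freeze-out estimate that such nonthermal scenarios modify can be sketched with the textbook approximation below; the constants and the x_f iteration follow the usual Kolb-Turner-style forms and are not the paper's numerical code.

```python
import math

def relic_density(m_gev, sigma_v, g=2.0, g_star=90.0):
    """Textbook freeze-out estimate of a thermal relic: iterate for the freeze-out
    parameter x_f = m/T_f, then apply the standard relic abundance formula.
    sigma_v is the thermally averaged cross section in GeV^-2."""
    mpl = 1.22e19                                  # Planck mass, GeV
    a = 0.038 * g * mpl * m_gev * sigma_v / math.sqrt(g_star)
    xf = math.log(a)
    for _ in range(20):                            # solve x_f = ln(a / sqrt(x_f))
        xf = math.log(a / math.sqrt(xf))
    omega_h2 = 1.07e9 * xf / (math.sqrt(g_star) * mpl * sigma_v)
    return xf, omega_h2
```

With a canonical weak-scale cross section (~1e-9 GeV^-2 for a 100 GeV particle) this yields x_f around 20-25 and Omega h^2 of order 0.1, the benchmark that nonthermal production scenarios then distort.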
Accurate estimation of short read mapping quality for next-generation genome sequencing
Ruffalo, Matthew; Koyutürk, Mehmet; Ray, Soumya; LaFramboise, Thomas
2012-01-01
Motivation: Several software tools specialize in the alignment of short next-generation sequencing reads to a reference sequence. Some of these tools report a mapping quality score for each alignment; in principle, this quality score tells researchers the likelihood that the alignment is correct. However, the reported mapping quality often correlates weakly with actual accuracy, and the qualities of many mappings are underestimated, encouraging researchers to discard correct mappings. Further, these low-quality mappings tend to correlate with variations in the genome (both single nucleotide and structural), and such mappings are important in accurately identifying genomic variants. Approach: We develop a machine learning tool, LoQuM (LOgistic regression tool for calibrating the Quality of short read Mappings), to assign reliable mapping quality scores to mappings of Illumina reads returned by any alignment tool. LoQuM uses statistics on the read (base quality scores reported by the sequencer) and the alignment (number of matches, mismatches, and deletions; mapping quality score returned by the alignment tool, if available; and number of mappings) as features for classification, and uses simulated reads to learn a logistic regression model that relates these features to actual mapping quality. Results: We test the predictions of LoQuM on an independent dataset generated by the ART short read simulation software and observe that LoQuM can 'resurrect' many mappings that are assigned zero quality scores by the alignment tools and are therefore likely to be discarded by researchers. We also observe that the recalibration of mapping quality scores greatly enhances the precision of called single nucleotide polymorphisms. Availability: LoQuM is available as open source at http://compbio.case.edu/loqum/. Contact: matthew.ruffalo@case.edu. PMID:22962451
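The recalibration idea, learning P(mapping correct) from alignment features and re-expressing it on the Phred scale, can be sketched as follows; this is plain gradient-descent logistic regression on a synthetic feature, not LoQuM's actual feature set or training pipeline.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Gradient-descent logistic regression: y = 1 means the simulated read's
    mapping was correct, X holds alignment features."""
    X = np.hstack([np.ones((len(X), 1)), X])       # intercept column
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)           # ascend the log-likelihood
    return w

def mapping_quality(w, feats):
    """Phred-scaled recalibrated quality: -10 * log10 P(mapping incorrect)."""
    z = w[0] + feats @ w[1:]
    p_correct = 1.0 / (1.0 + np.exp(-z))
    return -10.0 * np.log10(max(1e-10, 1.0 - p_correct))
```

Because the model outputs a probability rather than a hard zero, mappings the aligner scored as quality 0 can receive a nonzero recalibrated score, which is the "resurrection" effect described above.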
49 CFR 375.409 - May household goods brokers provide estimates?
2010-10-01
Title 49 (Transportation), Part 375, Estimating Charges, § 375.409: May household goods brokers provide estimates? A household goods broker must not... there is a written agreement between the broker and you, the carrier, adopting the broker's estimate as...
Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.
2015-03-01
The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate, and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region of interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates
Carbogno, Christian; Scheffler, Matthias
In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, together with the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
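The Green-Kubo relation underlying such simulations obtains κ from the time integral of the heat-flux autocorrelation function, κ = V / (kB T²) ∫ ⟨J(0)J(t)⟩ dt. A minimal numerical sketch for one Cartesian flux component (finite time series, trapezoidal integral; SI units assumed):

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def green_kubo_kappa(J, dt, volume, T):
    """Estimate kappa = V / (kB T^2) * integral <J(0)J(t)> dt from a finite
    heat-flux time series J (one component)."""
    n = len(J)
    # unbiased autocorrelation estimate <J(0)J(t)> for lags 0..n-1
    acf = np.correlate(J, J, mode="full")[n - 1:] / np.arange(n, 0, -1)
    # trapezoidal integration over the available lags
    integral = dt * (acf[0] / 2 + acf[1:-1].sum() + acf[-1] / 2)
    return volume / (kB * T**2) * integral
```

In practice the truncation lag and the statistical noise of the ACF tail dominate the convergence problem that the authors' accelerated formalism addresses.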
Interbank Market Structure and Accurate Estimation of an Aggregate Liquidity Shock
Isakov, A.
2013-01-01
It is customary among money market analysts to blame interest rate deviations from the Bank of Russia's target band on market structure imperfections or segmentation. We isolate one form of such market imperfection and illustrate its potential impact on the efficiency of the central bank's open market operations in the current monetary policy framework. We then hypothesize that naive (market-structure-agnostic) liquidity gap aggregation will lead to market demand underestimation in so...
An automated A-value measurement tool for accurate cochlear duct length estimation.
Iyaniwura, John E; Elfarnawany, Mai; Ladak, Hanif M; Agrawal, Sumit K
2018-01-22
There has been renewed interest in the cochlear duct length (CDL) for preoperative cochlear implant electrode selection and postoperative generation of patient-specific frequency maps. The CDL can be estimated by measuring the A-value, which is defined as the length between the round window and the furthest point on the basal turn. Unfortunately, there is significant intra- and inter-observer variability when these measurements are made clinically. The objective of this study was to develop an automated A-value measurement algorithm to improve accuracy and eliminate observer variability. Clinical and micro-CT images of 20 cadaveric cochlea specimens were acquired. The micro-CT of one sample was chosen as the atlas, and A-value fiducials were placed onto that image. Image registration (rigid affine and non-rigid B-spline) was applied between the atlas and the 19 remaining clinical CT images. The registration transform was applied to the A-value fiducials, and the A-value was then automatically calculated for each specimen. High-resolution micro-CT images of the same 19 specimens were used to measure the gold-standard A-values for comparison against the manual and automated methods. The registration algorithm had excellent qualitative overlap between the atlas and target images. The automated method eliminated the observer variability and the systematic underestimation by experts. Manual measurement of the A-value on clinical CT had a mean error of 9.5 ± 4.3% compared to micro-CT, and this improved to an error of 2.7 ± 2.1% using the automated algorithm. Both the automated and manual methods correlated significantly with the gold-standard micro-CT A-values (r = 0.70). An automated A-value measurement tool using atlas-based registration methods was successfully developed and validated. The automated method eliminated the observer variability and improved accuracy as compared to manual measurements by experts. This open-source tool has the potential to benefit
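The final computation — propagating the atlas fiducials through the registration transform and measuring the A-value as the distance between them — can be sketched as follows (affine stage only; the B-spline refinement and any specific registration toolkit are omitted):

```python
import math

def apply_affine(T, p):
    """Map a 3D point p through a 3x4 affine matrix [R | t] (row-major lists)."""
    return tuple(sum(T[i][j] * p[j] for j in range(3)) + T[i][3] for i in range(3))

def a_value(round_window, basal_turn_far_point):
    """A-value: distance from the round window fiducial to the fiducial at the
    furthest point on the basal turn."""
    return math.dist(round_window, basal_turn_far_point)
```

With an identity transform the fiducials pass through unchanged, and a_value((0, 0, 0), (3, 4, 0)) gives 5.0.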
Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.
Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke
2013-04-01
temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules
Directory of Open Access Journals (Sweden)
Theodore D. Katsilieris
2017-03-01
The terrestrial optical wireless communication links have attracted significant research and commercial worldwide interest over the last few years due to the fact that they offer very high and secure data rate transmission with relatively low installation and operational costs, and without need of licensing. However, since the propagation path of the information signal, i.e., the laser beam, is the atmosphere, their effectiveness is strongly affected by the atmospheric conditions in the specific area. Thus, system performance depends significantly on the rain, the fog, the hail, the atmospheric turbulence, etc. Due to the influence of these effects, it is necessary to study them very carefully, theoretically and numerically, before the installation of such a communication system. In this work, we present exact and accurate approximate mathematical expressions for the estimation of the average capacity and the outage probability performance metrics, as functions of the link's parameters, the transmitted power, the attenuation due to the fog, the ambient noise and the atmospheric turbulence phenomenon. The latter causes the scintillation effect, which results in random and fast fluctuations of the irradiance at the receiver's end. These fluctuations can be studied accurately with statistical methods. Thus, in this work, we use either the lognormal or the gamma–gamma distribution for weak or moderate to strong turbulence conditions, respectively. Moreover, using the derived mathematical expressions, we design, accomplish and present a computational tool for the estimation of these systems' performance, while also taking into account the parameters of the link and the atmospheric conditions. Furthermore, in order to increase the accuracy of the presented tool, for the cases where the obtained analytical mathematical expressions are complex, the performance results are verified with the numerical estimation of the appropriate integrals. Finally, using
Omoniyi, Bayonle; Stow, Dorrik
2016-04-01
One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites. Within a thick-bedded succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity is critical for an accurate estimation of original oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
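HPT itself is a simple summation over net-pay intervals, which makes the sensitivity to omitted thin beds easy to see. A minimal sketch (the interval values are hypothetical, not the Brae Field data):

```python
def hydrocarbon_pore_thickness(intervals):
    """HPT = sum over intervals of thickness * porosity * oil saturation.
    intervals: iterable of (thickness_ft, porosity_frac, oil_saturation_frac)."""
    return sum(h * phi * so for h, phi, so in intervals)

# Thick beds only vs. thick beds plus thin-bedded turbidites (hypothetical values):
thick = [(120, 0.18, 0.75)]
thin = [(0.3, 0.15, 0.6)] * 40   # forty thin beds of 0.3 ft each
```

Here the thin beds contribute about 1.1 ft of HPT on top of 16.2 ft from the thick beds, i.e. several percent of OOIP that conventional analysis would bypass.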
Directory of Open Access Journals (Sweden)
Corina J. Logan
2015-06-01
There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder, for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex.
Directory of Open Access Journals (Sweden)
Andreas Tuerk
2017-05-01
Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (read "mix square"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We further observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.
Sousa, Tanara; Lunnen, Jeffrey C; Gonçalves, Veralice; Schmitz, Aurinez; Pasa, Graciela; Bastos, Tamires; Sripad, Pooja; Chandran, Aruna; Pechansky, Flavio
2013-12-01
Drunk driving is an important risk factor for road traffic crashes, injuries and deaths. After June 2008, all drivers in Brazil were subject to a "Zero Tolerance Law" with a breath alcohol concentration limit of 0.1 mg/L of air. However, a loophole in this law enabled drivers to refuse breath or blood alcohol testing on the grounds that it may be self-incriminating. The reported prevalence of drunk driving is therefore likely a gross underestimate in many cities. To compare the prevalence of drunk driving gathered from police reports to the prevalence gathered from self-reported questionnaires administered at police sobriety roadblocks in two Brazilian capital cities, and to estimate a more accurate prevalence of drunk driving utilizing three correction techniques based upon information from those questionnaires. In August 2011 and January-February 2012, researchers from the Centre for Drug and Alcohol Research at the Universidade Federal do Rio Grande do Sul administered a roadside interview on drunk driving practices to 805 voluntary participants in the Brazilian capital cities of Palmas and Teresina. Three techniques, which include measures such as the number of persons reporting alcohol consumption in the last six hours but who had refused breath testing, were used to estimate the prevalence of drunk driving. The prevalence of persons testing positive for alcohol on their breath was 8.8% and 5.0% in Palmas and Teresina respectively. Utilizing a correction technique we calculated that a more accurate prevalence in these sites may be as high as 28.2% and 28.7%. In both cities, about 60% of drivers who self-reported having drunk within six hours of being stopped by the police either refused to perform breathalyser testing, fled the sobriety roadblock, or were not offered the test, compared to about 30% of drivers that said they had not been drinking. Despite the reduction of the legal limit for drunk driving stipulated by the "Zero Tolerance Law," loopholes in the legislation permit many
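A correction of this kind — treating refusers who admitted recent drinking as presumptive positives — amounts to re-weighting the roadblock counts. A sketch with hypothetical counts (chosen so the naive rate matches the reported 8.8%, but not the study's raw data or its exact formula):

```python
def drunk_driving_prevalence(positive_tests, total_tested,
                             refusers_drank, refusers_total):
    """Naive prevalence among breath-tested drivers, and a corrected estimate
    that counts refusers who self-reported drinking in the last six hours
    as positives (one plausible correction, not the paper's exact method)."""
    naive = positive_tests / total_tested
    corrected = (positive_tests + refusers_drank) / (total_tested + refusers_total)
    return naive, corrected

naive, corrected = drunk_driving_prevalence(44, 500, 97, 100)
```

With these counts the naive estimate is 8.8% while the corrected estimate rises to 23.5%, illustrating how heavily refusals can bias the police-reported figure downward.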
ModFOLD6: an accurate web server for the global and local quality estimation of 3D protein models.
Maghrabi, Ali H A; McGuffin, Liam J
2017-07-03
Methods that reliably estimate the likely similarity between the predicted and native structures of proteins have become essential for driving the acceptance and adoption of three-dimensional protein models by life scientists. ModFOLD6 is the latest version of our leading resource for Estimates of Model Accuracy (EMA), which uses a pioneering hybrid quasi-single model approach. The ModFOLD6 server integrates scores from three pure-single model methods and three quasi-single model methods using a neural network to estimate local quality scores. Additionally, the server provides three options for producing global score estimates, depending on the requirements of the user: (i) ModFOLD6_rank, which is optimized for ranking/selection, (ii) ModFOLD6_cor, which is optimized for correlations of predicted and observed scores and (iii) ModFOLD6 global for balanced performance. The ModFOLD6 methods rank among the top few for EMA, according to independent blind testing by the CASP12 assessors. The ModFOLD6 server is also continuously automatically evaluated as part of the CAMEO project, where significant performance gains have been observed compared to our previous server and other publicly available servers. The ModFOLD6 server is freely available at: http://www.reading.ac.uk/bioinf/ModFOLD/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Moisey, Lesley L; Mourtzakis, Marina; Kozar, Rosemary A; Compher, Charlene; Heyland, Daren K
2017-12-01
Lean body mass (LBM), quantified using computed tomography (CT), is a significant predictor of clinical outcomes in the critically ill. While CT analysis is precise and accurate in measuring body composition, it may not be practical or readily accessible to all patients in the intensive care unit (ICU). Here, we assessed the agreement between LBM measured by CT and four previously developed equations that predict LBM using variables (i.e. age, sex, weight, height) commonly recorded in the ICU. LBM was calculated in 327 critically ill adults using CT scans, taken at ICU admission, and 4 predictive equations (E1-4) that were derived from non-critically ill adults, since there are no ICU-specific equations. Agreement was assessed using paired t-tests, Pearson's correlation coefficients and Bland-Altman plots. Median LBM calculated by CT was 45 kg (IQR 37-53 kg) and was significantly different from the LBM calculated by each of the predictive equations (error ranged from 7.5 to 9.9 kg), suggesting insufficient agreement. Our data indicate that a large bias is present between the calculation of LBM by CT imaging and the predictive equations that have been compared here. This underscores the need for future research toward the development of ICU-specific equations that reliably estimate LBM in a practical and cost-effective manner. Copyright © 2016 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
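Equations of this kind predict LBM from weight, height and sex alone. As an illustration, a James-type general-population formula (the coefficients below are the commonly cited ones and are shown for illustration only; they are not necessarily among the E1-4 evaluated in the study):

```python
def lbm_james(weight_kg, height_cm, sex):
    """James-type lean body mass estimate from weight, height and sex.
    Coefficients are commonly cited general-population values, shown for
    illustration; this is not an ICU-specific equation."""
    r = weight_kg / height_cm
    if sex == "male":
        return 1.10 * weight_kg - 128 * r * r
    return 1.07 * weight_kg - 148 * r * r

lbm = lbm_james(80, 180, "male")   # about 62.7 kg
```

The study's point is precisely that such general-population formulas disagree with CT-derived LBM in the critically ill by 7.5-9.9 kg on average.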
Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei
2018-01-01
Structural finite-element analysis (FEA) has been widely used to study the biomechanics of human tissues and organs, as well as tissue-medical device interactions, and treatment strategies. However, patient-specific FEA models usually require complex procedures to set up and long computing times to obtain final simulation results, preventing prompt feedback to clinicians in time-sensitive clinical applications. In this study, by using machine learning techniques, we developed a deep learning (DL) model to directly estimate the stress distributions of the aorta. The DL model was designed and trained to take the input of FEA and directly output the aortic wall stress distributions, bypassing the FEA calculation process. The trained DL model is capable of predicting the stress distributions with average errors of 0.492% and 0.891% in the Von Mises stress distribution and peak Von Mises stress, respectively. This study marks, to our knowledge, the first study that demonstrates the feasibility and great potential of using the DL technique as a fast and accurate surrogate of FEA for stress analysis. © 2018 The Author(s).
Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo
2017-09-01
Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis which belongs to the family of temporally weighted linear prediction (WLP) methods uses the conventional forward type of sample prediction. This may not be the best choice especially in computing WLP models with a hard-limiting weighting function. A sample selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
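The forward-backward idea can be illustrated with an unweighted least-squares linear predictor in which every sample contributes both a forward and a backward prediction equation, doubling the effective number of equations per frame (a generic sketch; the proposed QCP-FB method additionally applies temporal weighting and quasi-closed-phase constraints):

```python
import numpy as np

def fb_lpc(x, order):
    """Linear prediction coefficients a[1..p] minimizing the summed squared
    forward and backward prediction errors over the frame."""
    x = np.asarray(x, dtype=float)
    rows, targets = [], []
    for t in range(order, len(x)):          # forward: x[t] from its p past samples
        rows.append(x[t - order:t][::-1])
        targets.append(x[t])
    for t in range(len(x) - order):         # backward: x[t] from its p future samples
        rows.append(x[t + 1:t + order + 1])
        targets.append(x[t])
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return a
```

On a pure sinusoid, for example, the order-2 predictor recovers the exact recursion x[t] = 2 cos(ω) x[t-1] - x[t-2], which holds identically in both time directions.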
International Nuclear Information System (INIS)
Subramanian, Swetha; Mast, T Douglas
2015-01-01
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. (note)
Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea
2010-01-01
Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amou...
Chen, H.; Chandra, C. V.; Tan, H.; Cifelli, R.; Xie, P.
2016-12-01
Rainfall estimation based on onboard satellite measurements has been an important topic in satellite meteorology for decades. A number of precipitation products at multiple time and space scales have been developed based upon satellite observations. For example, the NOAA Climate Prediction Center has developed a morphing technique (i.e., CMORPH) to produce global precipitation products by combining existing space-based rainfall estimates. The CMORPH products are essentially derived from geostationary satellite IR brightness temperature information and retrievals from passive microwave measurements (Joyce et al. 2004). Although the space-based precipitation products provide an excellent tool for regional and global hydrologic and climate studies as well as improved situational awareness for operational forecasts, their accuracy is limited by sampling limitations, particularly for extreme events such as very light and/or heavy rain. On the other hand, ground-based radar is a more mature technology for quantitative precipitation estimation (QPE), especially after the implementation of the dual-polarization technique, further enhanced by urban-scale radar networks. Therefore, ground radars are often critical for providing local-scale rainfall estimation and a "heads-up" for operational forecasters to issue watches and warnings, as well as for validation of various space measurements and products. The CASA DFW QPE system, which is based on dual-polarization X-band CASA radars and a local S-band WSR-88DP radar, has demonstrated excellent performance during several years of operation in a variety of precipitation regimes. The real-time CASA DFW QPE products are used extensively for localized hydrometeorological applications such as urban flash flood forecasting. In this paper, a neural network based data fusion mechanism is introduced to improve the satellite-based CMORPH precipitation product by taking into account the ground radar measurements. A deep learning system is
Do group-specific equations provide the best estimates of stature?
Albanese, John; Osley, Stephanie E; Tuck, Andrew
2016-04-01
An estimate of stature can be used by a forensic anthropologist to aid in the preliminary identification of an unknown individual when human skeletal remains are recovered. Fordisc is a computer application that can be used to estimate stature; like many other methods it requires the user to assign an unknown individual to a specific group defined by sex, race/ancestry, and century of birth before an equation is applied. The assumption is that a group-specific equation controls for group differences and should provide the best results most often. In this paper we assess the utility and benefits of using group-specific equations to estimate stature using Fordisc. Using the maximum length of the humerus and the maximum length of the femur from individuals with documented stature, we address the question: Do sex-, race/ancestry- and century-specific stature equations provide the best results when estimating stature? The data for our sample of 19th Century White males (n=28) were entered into Fordisc and stature was estimated using 22 different equation options for a total of 616 trials: 19th and 20th Century Black males, 19th and 20th Century Black females, 19th and 20th Century White females, 19th and 20th Century White males, 19th and 20th Century any, and 20th Century Hispanic males. The equations were assessed for utility in any one case (how many times the estimated range bracketed the documented stature) and in aggregate using one-way ANOVA and other approaches. The group-specific equation that should have provided the best results was outperformed by several other equations for both the femur and humerus. These results suggest that group-specific equations do not provide better results for estimating stature, while at the same time they are more difficult to apply because an unknown must be allocated to a given group before stature can be estimated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
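A stature equation of the kind Fordisc applies is a linear regression with a prediction interval, and "utility in any one case" is then a one-line bracketing check. A sketch with generic Trotter-Gleser-style coefficients (the slope, intercept and standard error below are illustrative values, not Fordisc's):

```python
def estimate_stature(femur_cm, slope=2.38, intercept=61.41, see_cm=3.27):
    """Point estimate and approximate 95% prediction interval (cm) for stature
    from maximum femur length. Coefficients and SEE are illustrative only."""
    est = slope * femur_cm + intercept
    half = 1.96 * see_cm                  # normal approximation to the PI
    return est, (est - half, est + half)

def brackets(documented_cm, interval):
    """Did the estimated range bracket the documented stature?"""
    lo, hi = interval
    return lo <= documented_cm <= hi

est, pi = estimate_stature(45.0)          # 45 cm femur -> about 168.5 cm
```

Counting how often brackets() returns True across trials reproduces the paper's per-case utility measure.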
Abran, Alain
2015-01-01
Software projects are often late and over budget, and this leads to major problems for software customers. Clearly, there is a serious issue in estimating a realistic software project budget. Furthermore, generic estimation models cannot be trusted to provide credible estimates for projects as complex as software projects. This book presents a number of examples using data collected over the years from various organizations building software. It also presents an overview of the not-for-profit organization which collects data on software projects, the International Software Benchmarking Stan
Del Valle, José C; Gallardo-López, Antonio; Buide, Mª Luisa; Whittall, Justen B; Narbona, Eduardo
2018-03-01
Anthocyanin pigments have become a model trait for evolutionary ecology as they often provide adaptive benefits for plants. Anthocyanins have been traditionally quantified biochemically or, more recently, using spectral reflectance. However, both methods require destructive sampling and can be labor intensive and challenging with small samples. Recent advances in digital photography and image processing make it the method of choice for measuring color in the wild. Here, we use digital images as a quick, noninvasive method to estimate relative anthocyanin concentrations in species exhibiting color variation. Using a consumer-level digital camera and a free image processing toolbox, we extracted RGB values from digital images to generate color indices. We tested petals, stems, pedicels, and calyces of six species, which contain different types of anthocyanin pigments and exhibit different pigmentation patterns. Color indices were assessed by their correlation to biochemically determined anthocyanin concentrations. For comparison, we also calculated color indices from spectral reflectance and tested the correlation with anthocyanin concentration. Indices perform differently depending on the nature of the color variation. For both digital images and spectral reflectance, the most accurate estimates of anthocyanin concentration emerge from the anthocyanin content-chroma ratio, anthocyanin content-chroma basic, and strength-of-green indices. Color indices derived from both digital images and spectral reflectance strongly correlate with biochemically determined anthocyanin concentration; however, the estimates from digital images performed better than spectral reflectance in terms of r2 and normalized root-mean-square error. This was particularly noticeable in a species with striped petals, but in the case of striped calyces, both methods showed a comparable relationship with anthocyanin concentration. Using digital images brings new opportunities to accurately quantify the
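From mean RGB values, ratio-type color indices can be computed directly. A sketch with generic index definitions (illustrative formulas, not the study's exact indices):

```python
def color_indices(r, g, b):
    """Simple anthocyanin-related color indices from mean RGB values (0-255).
    Definitions are generic illustrations, not the study's exact formulas."""
    total = r + g + b
    chroma = max(r, g, b) - min(r, g, b)   # rough chroma proxy in RGB space
    return {
        "strength_of_green": g / total,     # lower green -> redder, more pigment
        "red_to_green": r / g,
        "red_chroma_ratio": r / chroma if chroma else float("nan"),
    }

idx = color_indices(200, 50, 60)            # a reddish petal pixel average
```

Such indices are then correlated against biochemically determined anthocyanin concentrations to pick the best-performing one.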
Lee, Way Seah; Poo, Muhammad Izzuddin; Nagaraj, Shyamala
2007-12-01
To estimate the cost of an episode of inpatient care and the economic burden of hospitalisation for childhood rotavirus gastroenteritis (GE) in Malaysia. A 12-month prospective, hospital-based study on children less than 14 years of age with rotavirus GE, admitted to University of Malaya Medical Centre, Kuala Lumpur, was conducted in 2002. Data on human resource expenditure, costs of investigations, treatment and consumables were collected. Published estimates of rotavirus disease incidence in Malaysia were searched. The economic burden of hospital care for rotavirus GE in Malaysia was estimated by multiplying the cost of each episode of hospital admission for rotavirus GE by the national rotavirus incidence in Malaysia. In 2002, the per capita health expenditure by the Malaysian Government was US$71.47. Rotavirus was positive in 85 (22%) of the 393 patients with acute GE admitted during the study period. The median cost of providing inpatient care for an episode of rotavirus GE was US$211.91 (range US$68.50-880.60). The estimated average number of children hospitalised for rotavirus GE in Malaysia (1999-2000) was 8571 annually. The financial burden of providing inpatient care for rotavirus GE in Malaysian children was estimated to be US$1.8 million (range US$0.6 million-7.5 million) annually. The cost of providing inpatient care for childhood rotavirus GE in Malaysia was estimated to be US$1.8 million annually. The financial burden of rotavirus disease would be higher if costs of outpatient visits, non-medical and societal costs were included.
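The burden estimate is simply the product of the per-episode cost and the annual number of hospitalisations, which reproduces the headline figure:

```python
cost_per_episode = 211.91   # median inpatient cost per rotavirus GE episode, US$
annual_cases = 8571         # estimated annual hospitalisations (1999-2000)

annual_burden = cost_per_episode * annual_cases
# about US$1.8 million per year, matching the reported estimate
```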
Directory of Open Access Journals (Sweden)
Po-Jen Hsiao
Estimated glomerular filtration rate (eGFR) is used for the diagnosis of chronic kidney disease (CKD). eGFR models based on serum creatinine or cystatin C are the ones most used in clinical practice. Albuminuria and neck circumference are associated with CKD and may correlate with eGFR. We explored the correlations and modelling formulas among various indicators, such as serum creatinine, cystatin C, albuminuria, and neck circumference, for eGFR. Cross-sectional study. We reviewed the records of patients with high cardiovascular risk from 2010 to 2011 in Taiwan. 24-hour urine creatinine clearance was used as the standard. We utilized a decision tree to select variables and adopted a stepwise regression method to generate five models. Model 1 was based on serum creatinine only and was adjusted for age and gender. Model 2 added serum cystatin C; models 3 and 4 added albuminuria and neck circumference, respectively. Model 5 simultaneously added both albuminuria and neck circumference. A total of 177 patients were recruited in this study. In model 1, the bias was 2.01 and its precision was 14.04. In model 2, the bias was reduced to 1.86 with a precision of 13.48. The bias of model 3 was 1.49 with a precision of 12.89, and the bias for model 4 was 1.74 with a precision of 12.97. In model 5, the bias was further reduced to 1.40 with a precision of 12.53. In this study, the predictive ability for eGFR was improved by the addition of serum cystatin C compared to serum creatinine alone. The bias was more markedly reduced by the inclusion of albuminuria. Furthermore, the model combining albuminuria and neck circumference provided the best eGFR predictions among these five models. Neck circumference can potentially be investigated in further studies.
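The bias and precision figures quoted for each model are the mean and spread of the prediction errors against the 24-hour urine creatinine clearance standard. A minimal sketch (illustrative data; precision is taken here as the standard deviation of the errors, one common convention that may differ from the study's exact definition):

```python
import statistics

def bias_precision(predicted, measured):
    """Bias = mean prediction error; precision = SD of the errors."""
    errors = [p - m for p, m in zip(predicted, measured)]
    return statistics.mean(errors), statistics.stdev(errors)

# hypothetical eGFR predictions vs. measured creatinine clearance (mL/min)
bias, precision = bias_precision([62, 55, 80, 47], [60, 52, 75, 50])
```

Comparing these two numbers across candidate models is exactly how models 1-5 are ranked in the abstract.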
Psychological impact of providing women with personalised 10-year breast cancer risk estimates.
French, David P; Southworth, Jake; Howell, Anthony; Harvie, Michelle; Stavrinos, Paula; Watterson, Donna; Sampson, Sarah; Evans, D Gareth; Donnelly, Louise S
2018-05-08
The Predicting Risk of Cancer at Screening (PROCAS) study estimated 10-year breast cancer risk for 53,596 women attending the NHS Breast Screening Programme. The present study, nested within PROCAS, aimed to assess the psychological impact of receiving breast cancer risk estimates based on: (a) the Tyrer-Cuzick (T-C) algorithm including breast density, or (b) T-C including breast density plus single-nucleotide polymorphisms (SNPs), versus (c) comparison women awaiting results. A sample of 2138 women from the PROCAS study was stratified by testing group: T-C only, T-C(+SNPs) and comparison women; and by 10-year risk estimate received: 'moderate' (5-7.99%), 'average' (2-4.99%) or 'below average' (<2%) risk. Postal questionnaires were returned by 765 (36%) women. Overall state anxiety and cancer worry were low, and similar for women in the T-C only and T-C(+SNPs) groups. Women in both the T-C only and T-C(+SNPs) groups showed lower state anxiety but slightly higher cancer worry than comparison women awaiting results. Risk information had no consistent effects on intentions to change behaviour. Most women were satisfied with the information provided, but there was considerable variation in understanding. No major harms of providing women with 10-year breast cancer risk estimates were detected. Research to establish the feasibility of risk-stratified breast screening is warranted.
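The risk banding used for stratification above can be sketched as a simple threshold function; the band labels and cut-offs are taken from the abstract ('moderate' 5-7.99%, 'average' 2-4.99%, 'below average' <2%), and risks of 8% or more are not covered by the abstract's bands:

```python
def risk_band(ten_year_risk_pct):
    # Bands per the PROCAS stratification quoted above; >=8% is outside them.
    if ten_year_risk_pct >= 5.0:
        return "moderate"
    elif ten_year_risk_pct >= 2.0:
        return "average"
    return "below average"

print(risk_band(6.2))   # moderate
print(risk_band(3.0))   # average
print(risk_band(1.5))   # below average
```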
Li, Qingquan; Fang, Zhixiang; Li, Hanwu; Xiao, Hui
2005-10-01
The global positioning system (GPS) has become the most extensively used positioning and navigation tool in the world, with applications in surveying, mapping, transportation, agriculture, military planning, GIS, and the geosciences. However, the positional and elevation accuracy of any given GPS fix is prone to error from a number of sources. GPS positioning is increasingly popular, and intelligent navigation systems relying on GPS and dead-reckoning (DR) technology are developing quickly for a large future market in China. This paper puts forward a practical combined positioning model, GPS/DR/MM, which integrates GPS, a gyroscope, a vehicle speed sensor (VSS) and digital navigation maps to provide accurate, real-time position for intelligent navigation systems. The model, designed for automotive navigation, uses a Kalman filter to improve positioning and map-matching accuracy by filtering raw GPS and DR signals; map-matching technology then provides map coordinates for display. Several experiments with integrated GPS/DR positioning in an intelligent navigation system illustrate the validity of the model and support the conclusion that Kalman-filter-based GPS/DR integrated positioning is necessary, feasible and efficient for intelligent navigation applications. This combined positioning model, like any other, cannot resolve every situation, and suggestions are given for further improving the integrated GPS/DR/MM application.
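A toy illustration of the kind of Kalman filtering used to fuse noisy GPS fixes with dead-reckoning predictions, as described above. A scalar position state and the noise values are simplifications for illustration, not the paper's actual model:

```python
def kalman_step(x, P, z, u, Q=0.5, R=4.0):
    # Predict: advance the position estimate by the DR displacement u,
    # inflating the variance by the process noise Q.
    x_pred = x + u
    P_pred = P + Q
    # Update: correct the prediction with the GPS measurement z (noise R).
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                        # initial position estimate and variance
for z, u in [(1.1, 1.0), (2.3, 1.0), (2.9, 1.0)]:
    x, P = kalman_step(x, P, z, u)
# x now blends the DR track with the GPS fixes; P quantifies its uncertainty.
```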
Accurate Evaluation of Quantum Integrals
Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)
1995-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Importantly, error estimates are provided, and one can extrapolate expectation values rather than wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to high accuracy.
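The core idea of Richardson extrapolation can be sketched in a few lines: two estimates computed with steps h and h/2 by a method with a known leading error order are combined to cancel that error term (shown here for a central-difference derivative, not the abstract's Schrödinger solver):

```python
import math

def richardson(f_h, f_h2, order=2):
    """Extrapolate from values at step h and h/2 for a method of given error order."""
    return (2**order * f_h2 - f_h) / (2**order - 1)

def central_diff(f, x, h):
    # O(h^2) central-difference approximation to f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# Derivative of sin at 0 (true value 1.0): the extrapolated estimate is
# markedly closer to 1.0 than either input.
d_h = central_diff(math.sin, 0.0, 0.2)
d_h2 = central_diff(math.sin, 0.0, 0.1)
d_extrap = richardson(d_h, d_h2)
```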
Directory of Open Access Journals (Sweden)
Cletah Shoko
2018-04-01
While satellite data have proved a powerful tool for estimating the Aboveground Biomass (AGB) of C3 and C4 grass species, finding a sensor that can accurately characterize their inherent variations remains a challenge. This limitation has hampered the remote sensing community from continuously and precisely monitoring their productivity. This study assessed the potential of the Sentinel 2 MultiSpectral Instrument, Landsat 8 Operational Land Imager, and WorldView-2 sensors, with their improved earth-imaging characteristics, for estimating C3 and C4 grass AGB in the Cathedral Peak area, South Africa. Overall, all sensors showed considerable potential for estimating species AGB, with different combinations of the derived spectral bands and vegetation indices producing better accuracies. However, WorldView-2-derived variables yielded the best predictive accuracies (R2 between 0.71 and 0.83; RMSEs between 6.92% and 9.84%), followed by Sentinel 2 (R2 between 0.60 and 0.79; RMSEs between 7.66% and 14.66%). Comparatively, Landsat 8 yielded weaker estimates, with R2 between 0.52 and 0.71 and high RMSEs ranging between 9.07% and 19.88%. In addition, spectral bands located within the red edge (e.g., centered at 0.705 and 0.745 µm for Sentinel 2), SWIR, and NIR, as well as the derived indices, were found to be very important in predicting C3 and C4 AGB from the three sensors. The competence of these bands, especially for the freely available Landsat 8 and Sentinel 2 datasets, was also confirmed by fusing the datasets. Most importantly, the three sensors captured the spatial variations in AGB for the target C3 and C4 grassland area. This work therefore provides a new horizon and a fundamental step towards monitoring C3 and C4 grass productivity for carbon accounting, forage mapping, and modelling the influence of environmental changes on their productivity.
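A sketch of the two accuracy metrics reported above, assuming RMSE is expressed as a percentage of the mean observed value (a common convention, but an assumption here); the biomass values are made up:

```python
import numpy as np

def r2_rmse_pct(observed, predicted):
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    ss_res = np.sum((observed - predicted) ** 2)          # residual sum of squares
    ss_tot = np.sum((observed - observed.mean()) ** 2)    # total sum of squares
    r2 = 1 - ss_res / ss_tot
    rmse_pct = np.sqrt(ss_res / observed.size) / observed.mean() * 100
    return float(r2), float(rmse_pct)

obs = [120.0, 150.0, 90.0, 200.0, 170.0]    # hypothetical AGB observations (g/m^2)
pred = [110.0, 160.0, 100.0, 190.0, 165.0]  # hypothetical model predictions
r2, rmse = r2_rmse_pct(obs, pred)
```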
International Nuclear Information System (INIS)
Lloyd, Colin R; Rebelo, Lisa-Maria; Max Finlayson, C
2013-01-01
The conversion of wetlands to agriculture through drainage and flooding, and the burning of wetland areas for agriculture, have important implications for greenhouse gas (GHG) production and changing carbon stocks. However, estimating net GHG changes from mitigation practices in agricultural wetlands is complex compared to dryland crops. Agricultural wetlands have more complicated carbon and nitrogen cycles, with both above- and below-ground processes and export of carbon via vertical and horizontal movement of water through the wetland. This letter reviews current research methodologies for estimating greenhouse gas production and provides guidance on producing robust estimates of carbon sequestration and greenhouse gas emissions in agricultural wetlands through low-cost, reliable and sustainable measurement, modelling and remote sensing applications. The guidance is highly applicable to, and aimed at, wetlands such as those in the tropics and sub-tropics, where complex research infrastructure may not exist, or agricultural wetlands in remote regions, where frequent visits by monitoring scientists are difficult. In conclusion, the proposed measurement-modelling approach offers an affordable solution for mitigation and for investigating the consequences of wetland agricultural practice on GHG production, ecological resilience and possible changes to agricultural yields, variety choice and farming practice.
Estimating the development assistance for health provided to faith-based organizations, 1990-2013.
Haakenstad, Annie; Johnson, Elizabeth; Graves, Casey; Olivier, Jill; Duff, Jean; Dieleman, Joseph L
2015-01-01
Faith-based organizations (FBOs) have been active in the health sector for decades, and their role in global health has recently attracted increased interest. However, little is known about the magnitude of, and trends in, development assistance for health (DAH) channeled through these organizations. Data were collected from the 21 most recent editions of the Report of Voluntary Agencies (VolAg), which provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three sources. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of total DAH provided by private voluntary organizations reported in the VolAg; in 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund's contributions to NGOs. In 2011, the Gates Foundation's contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health.
Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.
2018-02-01
In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.
Kim, Minsoo; Jung, Na Young; Park, Chang Kyu; Chang, Won Seok; Jung, Hyun Ho; Chang, Jin Woo
2018-06-01
Stereotactic procedures are image guided, often using magnetic resonance (MR) images, which are subject to distortion that may shift targets for stereotactic procedures. The aim of this work was to assess methods of identifying target coordinates for stereotactic procedures with MR acquired in multiple phase-encoding directions. In 30 patients undergoing deep brain stimulation, we acquired 5 image sets: stereotactic brain computed tomography (CT), and T2-weighted (T2WI) and T1-weighted images (T1WI) in both right-to-left (RL) and anterior-to-posterior (AP) phase-encoding directions. Using CT coordinates as a reference, we analyzed anterior commissure and posterior commissure coordinates to identify any distortion related to phase-encoding direction. Compared with CT coordinates, RL-encoded images had more positive x-axis values (0.51 mm in T1WI, 0.58 mm in T2WI), and AP-encoded images had more negative y-axis values (0.44 mm in T1WI, 0.59 mm in T2WI). We adopted 2 methods of predicting CT coordinates from the MR image sets: parallel translation, and selective choice of axes according to phase-encoding direction. Both were equally effective at predicting CT coordinates using MR alone; however, the latter may be easier to use in clinical settings. Acquiring MR in multiple phase-encoding directions and selecting axes according to the phase-encoding direction allows identification of more accurate coordinates for stereotactic procedures.
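The 'selective choice of axes' method described above can be sketched as follows: since RL-encoded scans are distorted mainly along x and AP-encoded scans mainly along y, the x coordinate is taken from the AP scan and the y coordinate from the RL scan. The function and variable names, and the averaging of z, are hypothetical simplifications:

```python
def combine_coordinates(rl_coord, ap_coord):
    # rl_coord: (x, y, z) from the RL phase-encoded scan (x shifted by distortion)
    # ap_coord: (x, y, z) from the AP phase-encoded scan (y shifted by distortion)
    x_rl, y_rl, z_rl = rl_coord
    x_ap, y_ap, z_ap = ap_coord
    x = x_ap                    # x is unaffected by AP phase encoding
    y = y_rl                    # y is unaffected by RL phase encoding
    z = (z_rl + z_ap) / 2       # z taken as the average (illustrative choice)
    return (x, y, z)

# Hypothetical anterior-commissure coordinates (mm) from the two scans:
corrected = combine_coordinates((10.51, 2.00, 1.00), (10.00, 1.41, 1.00))
# corrected keeps the undistorted axis from each scan: (10.00, 2.00, 1.00)
```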
International Nuclear Information System (INIS)
Waag, Wladislaw; Sauer, Dirk Uwe
2013-01-01
Highlights: • New adaptive approach for EMF estimation. • The EMF is estimated by observing the voltage change after current interruption. • The approach enables accurate SoC and capacity determination. • Real-time capable algorithm. - Abstract: The online estimation of battery states and parameters is one of the challenging tasks when a battery is used as part of a pure electric or hybrid energy system. Determining the available energy stored in the battery requires knowledge of its present state-of-charge (SOC) and capacity, and SOC and capacity determination often employ an estimate of the battery electromotive force (EMF). The EMF can be measured as the open circuit voltage (OCV) of the battery once a significant time has elapsed since current interruption; for lithium-ion batteries this can take several hours, needed to eliminate the influence of diffusion overvoltages. This paper proposes a new approach that estimates the EMF from the OCV relaxation process within only the first few minutes after current interruption. The approach is based on online fitting of an OCV relaxation model to the measured OCV relaxation curve. The model is an equivalent circuit consisting of a voltage source (representing the EMF) in series with a parallel connection of a resistance and a constant phase element (CPE). From this fit the model parameters are determined and the EMF is estimated. The application of the method is demonstrated for state-of-charge and capacity estimation of a lithium-ion battery in an electric vehicle. In the presented example, the battery capacity is determined with a maximum inaccuracy of 2% using the EMF estimated at two different states-of-charge. The real-time capability of the proposed algorithm is proven by its implementation on a low-cost 16-bit microcontroller (Infineon XC2287).
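A heavily simplified sketch of the idea above: fit a relaxation model to only the first minutes of the OCV curve and read off the asymptotic EMF. A single exponential stands in for the R||CPE element here, and the synthetic data, time constants, and grid-search fit are all illustrative, not the paper's actual model:

```python
import numpy as np

def estimate_emf(t, v):
    # Fit v(t) = EMF + a*exp(-t/tau) by linear least squares over a grid of taus;
    # return the EMF of the best-fitting candidate.
    best = None
    for tau in np.linspace(10, 600, 60):                 # candidate time constants [s]
        A = np.column_stack([np.ones_like(t), np.exp(-t / tau)])
        coef, res, *_ = np.linalg.lstsq(A, v, rcond=None)
        sse = float(res[0]) if res.size else 0.0
        if best is None or sse < best[0]:
            best = (sse, coef[0])                        # coef[0] is the fitted EMF
    return best[1]

# Synthetic relaxation curve: EMF 3.70 V, 50 mV diffusion overvoltage, tau 120 s,
# observed for only the first 5 minutes after current interruption.
t = np.arange(0, 300, 5.0)
v = 3.70 - 0.05 * np.exp(-t / 120.0)
emf = estimate_emf(t, v)   # recovers ~3.70 V without waiting hours for full relaxation
```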
Chughtai, A A; Qadeer, E; Khan, W; Hadi, H; Memon, I A
2013-03-01
To improve the involvement of the private sector in the national tuberculosis (TB) programme in Pakistan, various public-private mix projects were set up between 2004 and 2009. A retrospective analysis of data was performed to study 6 different public-private mix models for TB control in Pakistan and to estimate the contribution of the various private providers to TB case notification and treatment outcomes. The number of TB cases notified through the private sector increased significantly, from 77 cases in 2004 to 37,656 in 2009. Among the models, the nongovernmental organization model made the greatest contribution to case notification (58.3%), followed by the hospital-based model (18.9%). Treatment success was highest for the district-led model (94.1%) and lowest for the hospital-based model (74.2%). The private sector made an important contribution to the national data through the various public-private mix projects. Issues of sustainability and the lack of treatment supporters are discussed as reasons for the lack of success of some projects.
Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea
2010-06-01
Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amounts of ten commonly consumed foods (breakfast cereals, French fries, pasta, rice, apple sauce, carrots and peas, crisps, creamy velouté, red cabbage, and peas). Two procedures were followed: (1) short-term recall: adolescents (n 73) self-served their usual portions of the ten foods and estimated the amounts later the same day; (2) real-time perception: adolescents (n 128) estimated two sets (different portions) of pre-weighed portions displayed near the computer. Self-served portions were, on average, 8 % underestimated; significant underestimates were found for breakfast cereals, French fries, peas, and carrots and peas. Spearman's correlations between the self-served and estimated weights varied between 0.51 and 0.84, with an average of 0.72. The kappa statistics were moderate (>0.4) for all but one item. Pre-weighed portions were, on average, 15 % underestimated, with significant underestimates for fourteen of the twenty portions. Photographs of food items can serve as a good aid in ranking subjects; however, to assess the actual intake at a group level, underestimation must be considered.
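The two accuracy measures used above can be sketched directly: the mean percentage error of estimates against weighed portions (negative values indicating underestimation), and Spearman's rank correlation between them. The portion data below are made up for illustration:

```python
import numpy as np

def pct_error(estimated, actual):
    # Mean percentage error; negative => underestimation on average.
    estimated = np.asarray(estimated, float)
    actual = np.asarray(actual, float)
    return float(np.mean((estimated - actual) / actual) * 100)

def spearman(x, y):
    # Spearman's rho as Pearson correlation of ranks (no ties assumed).
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

served = [150, 80, 200, 120, 60]       # hypothetical weighed portions (g)
estimated = [130, 75, 190, 100, 55]    # hypothetical adolescents' estimates (g)
err = pct_error(estimated, served)     # negative: portions underestimated
rho = spearman(served, estimated)      # 1.0 here: identical ranking of portions
```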
A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation
Kim, Ji Chul
2017-01-01
Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework. PMID:28522983
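A heavily simplified sketch of a gradient frequency neural network of the kind described above: a bank of damped nonlinear oscillators, each tuned to a different frequency, driven by a sinusoidal input, with the oscillator nearest the input frequency resonating most strongly. The parameters, low frequencies, and Euler integration are illustrative only; the paper's model additionally includes lateral inhibition and nonlinear resonance between oscillators:

```python
import cmath
import math

def gfnn_response(freqs, input_freq, alpha=-1.0, beta=-1.0, dt=0.001, steps=20000):
    # Each oscillator obeys dz/dt = z*(alpha + i*2*pi*f + beta*|z|^2) + stimulus.
    z = [complex(0.01, 0.0)] * len(freqs)
    for n in range(steps):
        s = 0.5 * cmath.exp(2j * math.pi * input_freq * n * dt)   # sinusoidal drive
        for i, f in enumerate(freqs):
            dz = z[i] * (alpha + 2j * math.pi * f + beta * abs(z[i]) ** 2) + s
            z[i] = z[i] + dt * dz                                 # Euler step
    return [abs(v) for v in z]                                    # oscillatory amplitudes

# Oscillators at 1, 2 and 4 Hz driven at 2 Hz: the matching oscillator dominates,
# analogous to chord tones being selectively enhanced.
amps = gfnn_response([1.0, 2.0, 4.0], input_freq=2.0)
```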
Darvesh, Nazia; Das, Jai K; Vaivada, Tyler; Gaffey, Michelle F; Rasanathan, Kumanan; Bhutta, Zulfiqar A
2017-11-07
In the Sustainable Development Goals (SDGs) era, there is growing recognition of the responsibilities of non-health sectors in improving the health of children. Interventions to improve access to clean water, sanitation facilities, and hygiene behaviours (WASH) represent key opportunities to improve child health and well-being by preventing the spread of infectious diseases and improving nutritional status. We conducted a systematic review of studies evaluating the effects of WASH interventions on childhood diarrhea in children 0-5 years old. Searches were run up to September 2016. We screened the titles and abstracts of retrieved articles, followed by screening of the full-text reports of relevant studies. We abstracted study characteristics and quantitative data, and assessed study quality. Meta-analyses were performed for similar intervention and outcome pairs. Pooled analyses showed diarrhea risk reductions from the following interventions: point-of-use water filtration (pooled risk ratio (RR): 0.47, 95% confidence interval (CI): 0.36-0.62), point-of-use water disinfection (pooled RR: 0.69, 95% CI: 0.60-0.79), and hygiene education with soap provision (pooled RR: 0.73, 95% CI: 0.57-0.94). Quality ratings were low or very low for most studies, and heterogeneity was high in pooled analyses. Improvements to the water supply and water disinfection at source did not show significant effects on diarrhea risk, nor did the one eligible study examining the effect of latrine construction. Various WASH interventions show diarrhea risk reductions between 27% and 53% in children 0-5 years old, depending on intervention type, providing ample evidence to support the scale-up of WASH in low and middle-income countries (LMICs). Due to the overall low quality of the evidence and high heterogeneity, further research is required to accurately estimate the magnitude of the effects of these interventions in different contexts.
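Pooled risk ratios like those reported above are typically obtained by weighting each study's log risk ratio by its inverse variance; the sketch below uses that fixed-effect approach (the review's actual pooling method, and the study values, are assumptions for illustration):

```python
import math

def pooled_rr(rrs, ses):
    """Inverse-variance fixed-effect pooling of risk ratios on the log scale.

    rrs: per-study risk ratios; ses: standard errors of the log risk ratios.
    """
    weights = [1 / se**2 for se in ses]
    log_pooled = sum(w * math.log(rr) for w, rr in zip(weights, rrs)) / sum(weights)
    return math.exp(log_pooled)

# Three hypothetical point-of-use water-filtration studies:
rr = pooled_rr([0.45, 0.50, 0.48], [0.10, 0.15, 0.12])
# rr lands between the study estimates, pulled toward the most precise study.
```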
Leon, Piera; Rivellini, Roberta; Giudici, Fabiola; Sciuto, Antonio; Pirozzi, Felice; Corcione, Francesco
2017-04-01
The aim of this study was to evaluate whether 3-dimensional (3D) high-definition vision in laparoscopy confers advantages over conventional 2-dimensional (2D) high-definition vision in hiatal hernia (HH) repair. Between September 2012 and September 2015, we randomized 36 patients with symptomatic HH to undergo surgery: 17 patients underwent 2D laparoscopic HH repair, whereas 19 patients underwent the same operation with 3D vision. No conversion to open surgery occurred. Overall operative time was significantly reduced in the 3D laparoscopic group compared with the 2D group (69.9 vs 90.1 minutes, P = .006). Operative time for laparoscopic crura closure did not differ significantly between the 2 groups. We observed a tendency towards faster crura closure in the 3D group in the subgroup of patients with mesh positioning (7.5 vs 8.9 minutes, P = .09), and Nissen fundoplication was faster in the 3D group without mesh positioning (P = .07). 3D vision in laparoscopic HH repair aids the surgeon's visualization and appears to reduce operative time, with advantages likely resulting from enhanced spatial perception of narrow spaces. Shorter operative time and more accurate surgery translate into benefits for patients and cost savings, compensating for the high cost of the 3D technology. However, more data from larger series are needed to firmly establish the advantages of 3D over 2D vision in laparoscopic HH repair.
Energy Technology Data Exchange (ETDEWEB)
Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)
2007-10-15
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean the prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through invasive techniques, and despite many attempts there is to date no widely accepted and readily available non-invasive monitoring technique. This paper tackles one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that enables accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation demonstrates that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
Ben Issaid, Chaouki; Park, Kihong; Alouini, Mohamed-Slim
2017-01-01
When assessing the performance of free-space optical (FSO) communication systems, the outage probabilities encountered are generally very small, so naive Monte Carlo simulation becomes prohibitively expensive. To estimate these rare-event probabilities, we propose an importance sampling approach based on the exponential twisting technique that offers fast and accurate results. We consider a variety of turbulence regimes and investigate the outage probability of FSO communication systems, under a generalized pointing error model based on the Beckmann distribution, for both single-hop and multihop scenarios. Selected numerical simulations are presented to show the accuracy and efficiency of our approach compared to naive Monte Carlo.
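The exponential-twisting idea can be sketched in the simplest rare-event setting, a Gaussian tail probability. The distribution, tilt parameter, and sample sizes below are illustrative assumptions for exposition only, not the paper's Beckmann pointing-error or turbulence model:

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_mc(a, n):
    # Plain Monte Carlo: when a is large, almost no samples land in the tail,
    # so the estimate is usually 0 unless n is enormous.
    x = rng.standard_normal(n)
    return np.mean(x > a)

def twisted_is(a, n):
    # Exponential twisting for X ~ N(0,1): tilting the density by exp(theta*x)
    # shifts the mean to theta; the classic tilt choice for P(X > a) is theta = a.
    theta = a
    y = rng.standard_normal(n) + theta           # sample from the tilted law N(theta, 1)
    w = np.exp(-theta * y + 0.5 * theta**2)      # likelihood ratio f(y) / f_theta(y)
    return np.mean((y > a) * w)

a, n = 5.0, 200_000
print(naive_mc(a, n))    # usually 0.0 -- the event is too rare at this sample size
print(twisted_is(a, n))  # close to the true tail probability, about 2.87e-7
```

The importance-sampling estimator concentrates samples where the rare event happens and reweights them, giving a small relative error with a sample size at which naive Monte Carlo typically observes no events at all.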
Directory of Open Access Journals (Sweden)
Ariel E. Marcy
2018-06-01
Background Advances in 3D shape capture technology have made powerful shape analyses, such as geometric morphometrics, more feasible. While highly accurate micro-computed tomography (µCT) scanners have been the “gold standard,” recent improvements in 3D surface scanners may make this technology a faster, portable, and cost-effective alternative. Several studies have already compared the two devices, but all used relatively large specimens such as human crania. Here we perform shape analyses on Australia’s smallest rodent to test whether a 3D scanner produces results similar to those of a µCT scanner. Methods We captured 19 delicate mouse (Pseudomys delicatulus) crania with a µCT scanner and a 3D scanner for geometric morphometrics. We ran multiple Procrustes ANOVAs to test how variation due to scan device compared to other sources, such as biologically relevant variation and operator error. We quantified operator error as levels of variation and repeatability. Further, we tested whether the two devices performed differently at classifying individuals based on sexual dimorphism. Finally, we inspected scatterplots of principal component analysis (PCA) scores for non-random patterns. Results In all Procrustes ANOVAs, regardless of factors included, differences between individuals contributed the most to total variation. The PCA plots reflect this in how the individuals are dispersed. Including only the symmetric component of shape increased the biological signal relative to variation due to device and due to error. 3D scans showed a higher level of operator error, as evidenced by a greater spread of their replicates on the PCA, a higher level of multivariate variation, and a lower repeatability score. However, the 3D scan and µCT scan datasets performed identically in classifying individuals based on intra-specific patterns of sexual dimorphism. Discussion Compared to µCT scans, we find that even low-resolution 3D scans of very small specimens are
Manigart, Olivier; Boeras, Debrah I; Karita, Etienne; Hawkins, Paulina A; Vwalika, Cheswa; Makombe, Nathan; Mulenga, Joseph; Derdeyn, Cynthia A; Allen, Susan; Hunter, Eric
2012-12-01
A critical step in HIV-1 transmission studies is the rapid and accurate identification of epidemiologically linked transmission pairs. To date, this has been accomplished by comparison of polymerase chain reaction (PCR)-amplified nucleotide sequences from potential transmission pairs, which can be cost-prohibitive for use in resource-limited settings. Here we describe a rapid, cost-effective approach to determine transmission linkage based on the heteroduplex mobility assay (HMA), and validate this approach by comparison to nucleotide sequencing. A total of 102 HIV-1-infected Zambian and Rwandan couples, with known linkage, were analyzed by gp41-HMA. A 400-base pair fragment within the envelope gp41 region of the HIV proviral genome was PCR amplified and HMA was applied to both partners' amplicons separately (autologous) and as a mixture (heterologous). If the diversity between gp41 sequences was low (<5%), a homoduplex was observed upon gel electrophoresis and the transmission was characterized as having occurred between partners (linked). If a new heteroduplex formed, within the heterologous migration, the transmission was determined to be unlinked. Initial blind validation of gp-41 HMA demonstrated 90% concordance between HMA and sequencing with 100% concordance in the case of linked transmissions. Following validation, 25 newly infected partners in Kigali and 12 in Lusaka were evaluated prospectively using both HMA and nucleotide sequences. Concordant results were obtained in all but one case (97.3%). The gp41-HMA technique is a reliable and feasible tool to detect linked transmissions in the field. All identified unlinked results should be confirmed by sequence analyses.
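The linkage rule described above — a pair is classified as linked when pairwise gp41 diversity falls below 5% — can be sketched as follows. The toy sequences and helper names are hypothetical; a real analysis would operate on aligned ~400-bp amplicons:

```python
def pairwise_diversity(seq_a: str, seq_b: str) -> float:
    """Fraction of mismatched positions between two aligned sequences of equal length."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return mismatches / len(seq_a)

def classify_transmission(seq_a: str, seq_b: str, threshold: float = 0.05) -> str:
    # Diversity < 5% predicts a homoduplex on the gel, i.e. a linked transmission.
    return "linked" if pairwise_diversity(seq_a, seq_b) < threshold else "unlinked"

# Toy 40-nt 'amplicons' with 1 mismatch (2.5% diversity), well under the 5% cutoff.
donor     = "ATGCATGCATGCATGCATGCATGCATGCATGCATGCATGC"
recipient = "ATGCATGCATGCATGCATGAATGCATGCATGCATGCATGC"
print(classify_transmission(donor, recipient))  # prints "linked"
```

HMA itself reads this diversity off gel mobility rather than computing it from sequence, which is what makes the assay cheap enough for field use.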
Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan. Huang
2015-01-01
We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...
Khader, A. I.; Rosenberg, D. E.; McKee, M.
2013-05-01
Groundwater contaminated with nitrate poses a serious health risk to infants when this contaminated water is used for culinary purposes. To avoid this health risk, people need to know whether their culinary water is contaminated or not. Therefore, there is a need to design an effective groundwater monitoring network, acquire information on groundwater conditions, and use acquired information to inform management options. These actions require time, money, and effort. This paper presents a method to estimate the value of information (VOI) provided by a groundwater quality monitoring network located in an aquifer whose water poses a spatially heterogeneous and uncertain health risk. A decision tree model describes the structure of the decision alternatives facing the decision-maker and the expected outcomes from these alternatives. The alternatives include (i) ignore the health risk of nitrate-contaminated water, (ii) switch to alternative water sources such as bottled water, or (iii) implement a previously designed groundwater quality monitoring network that takes into account uncertainties in aquifer properties, contaminant transport processes, and climate (Khader, 2012). The VOI is estimated as the difference between the expected costs of implementing the monitoring network and the lowest-cost uninformed alternative. We illustrate the method for the Eocene Aquifer, West Bank, Palestine, where methemoglobinemia (blue baby syndrome) is the main health problem associated with the principal contaminant nitrate. The expected cost of each alternative is estimated as the weighted sum of the costs and probabilities (likelihoods) associated with the uncertain outcomes resulting from the alternative. Uncertain outcomes include actual nitrate concentrations in the aquifer, concentrations reported by the monitoring system, whether people abide by manager recommendations to use/not use aquifer water, and whether people get sick from drinking contaminated water. Outcome costs
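The expected-cost and VOI arithmetic described above can be sketched with explicitly hypothetical numbers — the probabilities and costs below are illustrative placeholders, not values from the Eocene Aquifer study:

```python
def expected_cost(outcomes):
    """Expected cost of an alternative: probability-weighted sum of outcome costs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * c for p, c in outcomes)

# Hypothetical (probability, cost) pairs in $/household/yr -- illustration only.
alternatives = {
    "ignore_risk":   [(0.7, 0), (0.3, 900)],         # 30% chance of illness costs
    "bottled_water": [(1.0, 250)],                   # certain substitution cost
    "monitoring":    [(0.8, 60), (0.2, 60 + 250)],   # network cost; switch only if alerted
}
costs = {name: expected_cost(o) for name, o in alternatives.items()}

# VOI = cost of the cheapest uninformed alternative minus expected cost with monitoring.
best_uninformed = min(costs["ignore_risk"], costs["bottled_water"])
voi = best_uninformed - costs["monitoring"]
print(voi)  # prints 140.0 for these illustrative inputs
```

A positive VOI means the monitoring network is worth implementing; with these placeholder inputs, monitoring dominates both uninformed choices.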
Nakano, Kikuo; Kitahara, Yoshihiro; Mito, Mineyo; Seno, Misato; Sunada, Shoji
2018-02-27
Without explicit prognostic information, patients may overestimate their life expectancy and make poor choices at the end of life. We sought to design the Japanese version of an information aid (IA) to provide accurate information on prognosis to patients with advanced non-small-cell lung cancer (NSCLC) and to assess the effects of the IA on hope, psychosocial status, and perception of curability. We developed the Japanese version of an IA, which provided information on survival and cure rates as well as numerical survival estimates for patients with metastatic NSCLC receiving first-line chemotherapy. We then assessed the pre- and post-intervention effects of the IA on hope, anxiety, and perception of curability and treatment benefits. A total of 20 (95%) of 21 patients (65% male; median age, 72 years) completed the IA pilot test. Based on the results, scores on the Distress and Impact Thermometer screening tool for adjustment disorders and major depression tended to decrease (from 4.5 to 2.5; P = 0.204), whereas no significant changes were seen in scores for anxiety on the Japanese version of the Support Team Assessment Schedule or in scores on the Herth Hope Index (from 41.9 to 41.5; P = 0.204). The majority of the patients (16/20, 80%) had high expectations regarding the curative effects of chemotherapy. The Japanese version of the IA appeared to help patients with NSCLC maintain hope, and did not increase their anxiety when they were given explicit prognostic information; however, the IA did not appear to help such patients understand the goal of chemotherapy. Further research is needed to test the findings in a larger sample and measure the outcomes of explicit prognostic information on hope, psychological status, and perception of curability.
International Nuclear Information System (INIS)
Belge, Benedicte; Pasquet, Agnes; Vanoverschelde, Jean-Louis J.; Coche, Emmanuel; Gerber, Bernhard L.
2006-01-01
Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134±51 and 67±56 ml) were similar to those by MR (137±57 and 70±60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55±21 vs. 56±21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3±1.8 vs. 8.8±1.9 mm and 12.7±3.4 vs. 13.3±3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54±30 vs. 51±31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR. (orig.)
Estimating the cost of skin cancer detection by dermatology providers in a large health care system.
Matsumoto, Martha; Secrest, Aaron; Anderson, Alyce; Saul, Melissa I; Ho, Jonhan; Kirkwood, John M; Ferris, Laura K
2018-04-01
Data on the cost and efficiency of skin cancer detection through total body skin examination are scarce. To determine the number needed to screen (NNS) and biopsy (NNB) and cost per skin cancer diagnosed in a large dermatology practice in patients undergoing total body skin examination. This is a retrospective observational study. During 2011-2015, a total of 20,270 patients underwent 33,647 visits for total body skin examination; 9956 lesion biopsies were performed yielding 2763 skin cancers, including 155 melanomas. The NNS to detect 1 skin cancer was 12.2 (95% confidence interval [CI] 11.7-12.6) and 1 melanoma was 215 (95% CI 185-252). The NNB to detect 1 skin cancer was 3.0 (95% CI 2.9-3.1) and 1 melanoma was 27.8 (95% CI 23.3-33.3). In a multivariable model for NNS, age and personal history of melanoma were significant factors. Age switched from a protective factor to a risk factor at 51 years of age. The estimated cost per melanoma detected was $32,594 (95% CI $27,326-$37,475). Data are from a single health care system and based on physician coding. Melanoma detection through total body skin examination is most efficient in patients ≥50 years of age and those with a personal history of melanoma. Our findings will be helpful in modeling the cost effectiveness of melanoma screening by dermatologists. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
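The NNS figures are simple ratios of the reported totals; the sketch below approximately reproduces them from the counts in the abstract (small differences from the published values likely reflect the study's exact denominators and rounding):

```python
def number_needed(screens: int, detections: int) -> float:
    """Screening events (visits or biopsies) performed per case detected."""
    return screens / detections

# Totals reported in the abstract.
visits, skin_cancers, melanomas = 33_647, 2_763, 155

nns_skin_cancer = number_needed(visits, skin_cancers)  # ~12.2, matching the reported NNS
nns_melanoma = number_needed(visits, melanomas)        # ~217, near the reported 215
```

The same ratio applied to biopsies rather than visits gives the NNB; the melanoma NNS being ~18 times the skin-cancer NNS is what drives the high cost per melanoma detected.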
Estimated Nutritive Value of Low-Price Model Lunch Sets Provided to Garment Workers in Cambodia
Directory of Open Access Journals (Sweden)
Jan Makurat
2017-07-01
Background: The establishment of staff canteens is expected to improve the nutritional situation of Cambodian garment workers. The objective of this study is to assess the nutritive value of low-price model lunch sets provided at a garment factory in Phnom Penh, Cambodia. Methods: Exemplary lunch sets were served to female workers through a temporary canteen at a garment factory in Phnom Penh. Dish samples were collected repeatedly to examine mean serving sizes of individual ingredients. Food composition tables and NutriSurvey software were used to assess mean amounts and contributions to recommended dietary allowances (RDAs or adequate intake of energy, macronutrients, dietary fiber, vitamin C (VitC, iron, vitamin A (VitA, folate and vitamin B12 (VitB12. Results: On average, lunch sets provided roughly one third of RDA or adequate intake of energy, carbohydrates, fat and dietary fiber. Contribution to RDA of protein was high (46% RDA. The sets contained a high mean share of VitC (159% RDA, VitA (66% RDA, and folate (44% RDA, but were low in VitB12 (29% RDA and iron (20% RDA. Conclusions: Overall, lunches satisfied recommendations of caloric content and macronutrient composition. Sets on average contained a beneficial amount of VitC, VitA and folate. Adjustments are needed for a higher iron content. Alternative iron-rich foods are expected to be better suited, compared to increasing portions of costly meat/fish components. Lunch provision at Cambodian garment factories holds the potential to improve food security of workers, approximately at costs of <1 USD/person/day at large scale. Data on quantitative total dietary intake as well as physical activity among workers are needed to further optimize the concept of staff canteens.
Guy, S Z Y; Li, L; Thomson, P C; Hermesch, S
2017-12-01
Environmental descriptors derived from mean performances of contemporary groups (CGs) are assumed to capture any known and unknown environmental challenges. The objective of this paper was to obtain a finer definition of the unknown challenges, by adjusting CG estimates for the known climatic effects of monthly maximum air temperature (MaxT), minimum air temperature (MinT) and monthly rainfall (Rain). As the unknown component could include infection challenges, these refined descriptors may help to better model varying responses of sire progeny to environmental infection challenges for the definition of disease resilience. Data were recorded from 1999 to 2013 at a piggery in south-east Queensland, Australia (n = 31,230). Firstly, CG estimates of average daily gain (ADG) and backfat (BF) were adjusted for MaxT, MinT and Rain, which were fitted as splines. In the models used to derive CG estimates for ADG, MaxT and MinT were significant variables. The models that contained these significant climatic variables had CG estimates with a lower variance compared to models without significant climatic variables. Variance component estimates were similar across all models, suggesting that these significant climatic variables accounted for some known environmental variation captured in CG estimates. No climatic variables were significant in the models used to derive the CG estimates for BF. These CG estimates were used to categorize environments. There was no observable sire by environment interaction (Sire×E) for ADG when using the environmental descriptors based on CG estimates on BF. For the environmental descriptors based on CG estimates of ADG, there was significant Sire×E only when MinT was included in the model (p = .01). Therefore, this new definition of the environment, preadjusted by MinT, increased the ability to detect Sire×E. While the unknown challenges captured in refined CG estimates need verification for infection challenges, this may provide a
Kamphuis, Claudia; Dela Rue, B.; Turner, S.A.; Petch, S.
2015-01-01
Information on accuracy of milk-sampling devices used on farms with automated milking systems (AMS) is essential for development of milk recording protocols. The hypotheses of this study were (1) devices used by AMS units are similarly accurate in estimating milk yield and in collecting
Directory of Open Access Journals (Sweden)
Arun Ravindran
2012-02-01
Despite the promising performance improvement observed in emerging many-core architectures in high performance processors, high power consumption prohibitively affects their use and marketability in the low-energy sectors, such as embedded processors, network processors and application specific instruction processors (ASIPs). While most chip architects design power-efficient processors by finding an optimal power-performance balance in their design, some use sophisticated on-chip autonomous power management units, which dynamically reduce the voltage or frequencies of idle cores and hence extend battery life and reduce operating costs. For large scale designs of many-core processors, a holistic approach integrating both these techniques at different levels of abstraction can potentially achieve maximal power savings. In this paper we present CASPER, a robust instruction-trace-driven, cycle-accurate, many-core, multi-threading micro-architecture simulation platform in which we have incorporated power estimation models of a wide variety of tunable many-core micro-architectural design parameters, thus enabling processor architects to explore a sufficiently large design space and achieve power-efficient designs. Additionally, CASPER is designed to accommodate cycle-accurate models of hardware-controlled power management units, enabling architects to experiment with and evaluate different autonomous power-saving mechanisms to study the run-time power-performance trade-offs in embedded many-core processors. We have implemented two such techniques in CASPER: Chipwide Dynamic Voltage and Frequency Scaling, and Performance-Aware Core-Specific Frequency Scaling, which show average power savings of 35.9% and 26.2% on a baseline 4-core SPARC based architecture respectively. This power saving data accounts for the power consumption of the power management units themselves. The CASPER simulation platform also provides users with complete support of SPARCV9
Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.
2017-01-01
Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of
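The three-step procedure described above (jitter the bounded counts, logit-transform, fit linear quantile regression, then average and back-transform) can be sketched as below. The simulated data, bounds, and pinball-loss optimizer are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def logit(p):
    return np.log(p / (1.0 - p))

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_linear_quantile(X, y, tau):
    """Linear quantile regression by minimizing the pinball (check) loss."""
    def check_loss(beta):
        r = y - X @ beta
        return np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))
    res = minimize(check_loss, np.zeros(X.shape[1]), method="Nelder-Mead",
                   options={"maxiter": 10_000, "fatol": 1e-10})
    return res.x

def logistic_quantile_counts(x, counts, tau, lower=0.0, upper=4.0, n_jitter=20):
    # (1) jitter counts in {0..3} to a continuous variable on (lower, upper);
    # (2) logit-transform to map the bounded interval onto the real line;
    # (3) fit linear quantile regression; repeat over jitters and average.
    X = np.column_stack([np.ones_like(x), x])
    betas = np.zeros(2)
    for _ in range(n_jitter):
        z = counts + rng.uniform(size=counts.shape)               # jittered counts
        u = np.clip((z - lower) / (upper - lower), 1e-9, 1 - 1e-9)
        betas += fit_linear_quantile(X, logit(u), tau)
    betas /= n_jitter
    def predict(x_new):
        # Back-transform: quantiles are equivariant to monotone maps,
        # so this is the continuous-scale conditional quantile.
        return lower + (upper - lower) * expit(betas[0] + betas[1] * x_new)
    return predict

# Simulated territories: fledgling counts 0-3 that rise with a covariate.
x = rng.uniform(0.0, 1.0, 400)
counts = rng.binomial(3, expit(-1.0 + 2.0 * x))
median_fn = logistic_quantile_counts(x, counts, tau=0.5)
```

Flooring the continuous-scale quantile recovers an integer count-scale quantile; the continuous scale is kept here for illustration.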
Energy Technology Data Exchange (ETDEWEB)
Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children's Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)
2014-07-15
previously published pediatric patient doses that accounted for patient size in their dose calculation, and was found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusions: For organs fully covered within the scan volume, the average correlation of SSDE and organ absolute dose was found to be better than ±10%. In addition, this study provides a complete list of organ dose correlation factors (CF_SSDE^organ) for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE.
Wang, Jian; Shete, Sanjay
2011-11-01
We recently proposed a bias correction approach to evaluate accurate estimation of the odds ratio (OR) of genetic variants associated with a secondary phenotype, in which the secondary phenotype is associated with the primary disease, based on the original case-control data collected for the purpose of studying the primary disease. As reported in this communication, we further investigated the type I error probabilities and powers of the proposed approach, and compared the results to those obtained from logistic regression analysis (with or without adjustment for the primary disease status). We performed a simulation study based on a frequency-matching case-control study with respect to the secondary phenotype of interest. We examined the empirical distribution of the natural logarithm of the corrected OR obtained from the bias correction approach and found it to be normally distributed under the null hypothesis. On the basis of the simulation study results, we found that the logistic regression approaches that adjust or do not adjust for the primary disease status had low power for detecting secondary phenotype associated variants and highly inflated type I error probabilities, whereas our approach was more powerful for identifying the SNP-secondary phenotype associations and had better-controlled type I error probabilities. © 2011 Wiley Periodicals, Inc.
Viers, J. H.
2013-12-01
Integrating citizen scientists into ecological informatics research can be difficult due to limited opportunities for meaningful engagement given vast data streams. This is particularly true for analysis of remotely sensed data, which are increasingly being used to quantify ecosystem services over space and time, and to understand how land uses deliver differing values to humans and thus inform choices about future human actions. Carbon storage and sequestration are such ecosystem services, and recent environmental policy advances in California (i.e., AB 32) have resulted in a nascent carbon market that is helping fuel the restoration of riparian forests in agricultural landscapes. Methods to inventory and monitor aboveground carbon for market accounting are increasingly relying on hyperspatial remotely sensed data, particularly the use of light detection and ranging (LiDAR) technologies, to estimate biomass. Because airborne discrete return LiDAR can inexpensively capture vegetation structural differences at high spatial resolution ( 1000 ha), its use is rapidly increasing, resulting in vast stores of point cloud and derived surface raster data. While established algorithms can quantify forest canopy structure efficiently, the highly complex nature of native riparian forests can result in highly uncertain estimates of biomass due to differences in composition (e.g., species richness, age class) and structure (e.g., stem density). This study presents the comparative results of standing carbon estimates refined with field data collected by citizen scientists at three different sites, each capturing a range of agricultural, remnant forest, and restored forest cover types. These citizen science data resolve uncertainty in composition and structure, and improve allometric scaling models of biomass and thus estimates of aboveground carbon. Results indicate that agricultural land and horticulturally restored riparian forests store similar amounts of aboveground carbon
Warne, Russell T.
2016-01-01
Recently Kim (2016) published a meta-analysis on the effects of enrichment programs for gifted students. She found that these programs produced substantial effects for academic achievement (g = 0.96) and socioemotional outcomes (g = 0.55). However, given current theory and empirical research these estimates of the benefits of enrichment programs…
Engel, Fabian; Farrell, Kaitlin J.; McCullough, Ian M.; Scordo, Facundo; Denfeld, Blaize A.; Dugan, Hilary A.; de Eyto, Elvira; Hanson, Paul C.; McClure, Ryan P.; Nõges, Peeter; Nõges, Tiina; Ryder, Elizabeth; Weathers, Kathleen C.; Weyhenmeyer, Gesa A.
2018-04-01
The magnitude of lateral dissolved inorganic carbon (DIC) export from terrestrial ecosystems to inland waters strongly influences the estimate of the global terrestrial carbon dioxide (CO2) sink. At present, no reliable estimate of this export is available, and the few studies estimating the lateral DIC export assume that all lakes on Earth function similarly. However, lakes can function along a continuum from passive carbon transporters (passive open channels) to highly active carbon transformers with efficient in-lake CO2 production and loss. We developed and applied a conceptual model to demonstrate how the assumed function of lakes in carbon cycling can affect calculations of the global lateral DIC export from terrestrial ecosystems to inland waters. Using global data on in-lake CO2 production by mineralization as well as CO2 loss by emission, primary production, and carbonate precipitation in lakes, we estimated that the global lateral DIC export lies within the range of 0.70 (+0.27/−0.31) to 1.52 (+1.09/−0.90) Pg C yr−1, depending on the assumed function of lakes. Thus, the considered lake function has a large effect on the calculated lateral DIC export from terrestrial ecosystems to inland waters. We conclude that more robust estimates of CO2 sinks and sources will require the classification of lakes into their predominant function. This functional lake classification concept becomes particularly important for the estimation of future CO2 sinks and sources, since in-lake carbon transformation is predicted to be altered with climate change.
Bilotta, Federico; Titi, Luca; Lanni, Fabiana; Stazi, Elisabetta; Rosa, Giovanni
2013-08-01
To measure the learning curves of residents in anesthesiology in providing anesthesia for awake craniotomy, and to estimate the case load needed to achieve a "good-excellent" level of competence. Prospective study. Operating room of a university hospital. 7 volunteer residents in anesthesiology. Residents underwent a dedicated training program on the clinical characteristics of anesthesia for awake craniotomy. The program was divided into three tasks: local anesthesia, sedation-analgesia, and intraoperative hemodynamic management. The learning curve for each resident for each task was recorded over 10 procedures. Quantitative assessment of the individual's ability was based on the resident's self-assessment score and the attending anesthesiologist's judgment, rated on a modified 12-point Likert-type ability visual analog scale (VAS). This ability VAS score ranged from 1 to 12 (ie, very poor, mild, moderate, sufficient, good, excellent). The number of requests for advice also was recorded (ie, resident requests for practical help and theoretical notions to accomplish the procedures). Each task had a specific learning rate; the numbers of procedures necessary to achieve "good-excellent" ability with confidence, as determined by the recorded results, were 10 procedures for local anesthesia, 15 to 25 procedures for sedation-analgesia, and 20 to 30 procedures for intraoperative hemodynamic management. Awake craniotomy is an approach used increasingly in neuroanesthesia. A dedicated training program based on learning specific tasks and building confidence with essential features provides "good-excellent" ability. © 2013 Elsevier Inc. All rights reserved.
Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M
2016-08-01
Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of polydimethylsiloxane (PDMS), values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter free energy relationship (pp-FLER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-FLER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than the pp-LFER model, indicating the anticipated better performance of the pp-LFER model than COSMO-RS. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼ 500 years for tris (4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of GC-RT method for rapid and precise measurements of KPDMS-Air. Copyright © 2016. Published by Elsevier Ltd.
International Nuclear Information System (INIS)
Khattab, K.
2007-01-01
The modified 135Xe equilibrium reactivity in the Syrian Miniature Neutron Source Reactor (MNSR) was calculated first by using the WIMSD4 and CITATION codes to estimate the four-factor product (ε p Pfnl Ptnl). Then, precise calculations of 135Xe and 149Sm concentrations and reactivities were carried out and compared during the reactor operation time and after shutdown. It was found that the 135Xe and 149Sm reactivities did not reach their equilibrium reactivities during the daily operating time of the reactor. The 149Sm reactivities could be neglected compared to 135Xe reactivities during the reactor operating time and after shutdown. (author)
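The 135Xe behaviour described here follows the standard 135I → 135Xe balance equations. A sketch showing why a short daily run never reaches the equilibrium concentration, using textbook 235U constants but an illustrative flux and macroscopic fission cross-section (not MNSR design data):

```python
# Textbook constants for 235U fission products
LAM_I = 2.93e-5    # 135I decay constant, 1/s (half-life ~6.6 h)
LAM_X = 2.11e-5    # 135Xe decay constant, 1/s (half-life ~9.1 h)
G_I, G_X = 0.0639, 0.00237  # cumulative/direct fission yields
SIG_X = 2.65e-18   # 135Xe microscopic absorption cross-section, cm^2 (~2.65 Mb)

def xenon_vs_equilibrium(phi, sigma_f_macro, hours, dt=60.0):
    """Integrate the 135I/135Xe balance over `hours` of operation at flux phi
    (n/cm^2/s) and return (Xe(t), Xe_equilibrium). phi and sigma_f_macro are
    illustrative placeholders, not MNSR design data."""
    fission_rate = sigma_f_macro * phi   # fissions per cm^3 per s
    i_conc = x_conc = 0.0
    for _ in range(int(hours * 3600 / dt)):
        di = G_I * fission_rate - LAM_I * i_conc
        dx = G_X * fission_rate + LAM_I * i_conc - (LAM_X + SIG_X * phi) * x_conc
        i_conc += di * dt
        x_conc += dx * dt
    x_eq = (G_I + G_X) * fission_rate / (LAM_X + SIG_X * phi)
    return x_conc, x_eq
```

After a typical few-hour daily run the integrated 135Xe concentration remains well below its equilibrium value, consistent with the paper's finding.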
Settumba, Stella Nalukwago; Sweeney, Sedona; Seeley, Janet; Biraro, Samuel; Mutungi, Gerald; Munderi, Paula; Grosskurth, Heiner; Vassall, Anna
2015-06-01
To explore the chronic disease services in Uganda: their level of utilisation, the total service costs and unit costs per visit. Full financial and economic cost data were collected from 12 facilities in two districts, from the provider's perspective. A combination of ingredients-based and step-down allocation costing approaches was used. The diseases under study were diabetes, hypertension, chronic obstructive pulmonary disease (COPD), epilepsy and HIV infection. Data were collected through a review of facility records, direct observation and structured interviews with health workers. Provision of chronic care services was concentrated at higher-level facilities. Excluding drugs, the total costs for NCD care fell below 2% of total facility costs. Unit costs per visit varied widely, both across different levels of the health system, and between facilities of the same level. This variability was driven by differences in clinical and drug prescribing practices. Most patients reported directly to higher-level facilities, bypassing nearby peripheral facilities. NCD services in Uganda are underfunded particularly at peripheral facilities. There is a need to estimate the budget impact of improving NCD care and to standardise treatment guidelines. © 2015 The Authors. Tropical Medicine & International Health Published by John Wiley & Sons Ltd.
Wycherley, Thomas; Ferguson, Megan; O'Dea, Kerin; McMahon, Emma; Liberato, Selma; Brimblecombe, Julie
2016-12-01
Determine how very-remote Indigenous community (RIC) food and beverage (F&B) turnover quantities and associated dietary intake estimates derived from stores only compare with values derived from all community F&B providers. F&B turnover quantity and associated dietary intake estimates (energy, micro/macronutrients and major contributing food types) were derived from 12 months of transaction data from all F&B providers in three RICs (NT, Australia). F&B turnover quantities and dietary intake estimates from stores only (plus only the primary store in multiple-store communities) were expressed as a proportion of complete F&B provider turnover values. Food types and macronutrient distribution (%E) estimates were quantitatively compared. Combined-stores F&B turnover accounted for the majority of F&B quantity (98.1%) and absolute dietary intake estimates (energy [97.8%], macronutrients [≥96.7%] and micronutrients [≥83.8%]). Macronutrient distribution estimates from combined stores and from only the primary store closely aligned with complete-provider estimates (≤0.9% absolute). Food types were similar using combined-stores, primary-store or complete-provider turnover. Evaluating combined-stores F&B turnover represents an efficient method to estimate total F&B turnover quantity and associated dietary intake in RICs. In multiple-store communities, evaluating only primary-store F&B turnover provides an efficient estimate of macronutrient distribution and major food types. © 2016 Public Health Association of Australia.
Oosterveld, Michiel J. S.; Gemke, Reinoud J. B. J.; Dainty, Jack R.; Kulik, Willem; Jakobs, Cornelis; de Meer, Kees
2005-01-01
An oral [13C]urea protocol may provide a simple method for measurement of urea production. The validity of single pool calculations in relation to a reduced sampling protocol was assessed. In eight fed and five fasted piglets, plasma urea enrichments from a 10 h sampling protocol were measured
International Nuclear Information System (INIS)
Lear, J.L.; Feyerabend, A.; Gregory, C.
1989-01-01
Discordance between effective renal plasma flow (ERPF) measurements from radionuclide techniques that use single versus multiple plasma samples was investigated. In particular, the authors determined whether effects of variations in distribution volume (Vd) of iodine-131 iodohippurate on measurement of ERPF could be ignored, an assumption implicit in the single-sample technique. The influence of Vd on ERPF was found to be significant, a factor indicating an important and previously unappreciated source of error in the single-sample technique. Therefore, a new two-compartment, two-plasma-sample technique was developed on the basis of the observations that while variations in Vd occur from patient to patient, the relationship between intravascular and extravascular components of Vd and the rate of iodohippurate exchange between the components are stable throughout a wide range of physiologic and pathologic conditions. The new technique was applied in a series of 30 studies in 19 patients. Results were compared with those achieved with the reference, single-sample, and slope-intercept techniques. The new two-compartment, two-sample technique yielded estimates of ERPF that more closely agreed with the reference multiple-sample method than either the single-sample or slope-intercept techniques.
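For context, the slope-intercept comparator fits a single exponential to two plasma samples and takes clearance as dose divided by the extrapolated area under the curve. A sketch of that generic one-compartment algebra (not the authors' two-compartment correction):

```python
import math

def slope_intercept_clearance(dose, t1, c1, t2, c2):
    """One-compartment (slope-intercept) plasma clearance from two samples.
    Assumes monoexponential decay C(t) = C0*exp(-k*t); times in minutes,
    concentration units cancel against the dose units."""
    k = math.log(c1 / c2) / (t2 - t1)  # elimination rate constant, 1/min
    c0 = c1 * math.exp(k * t1)         # back-extrapolated intercept at t = 0
    return dose * k / c0               # clearance = dose / AUC = dose*k/C0
```

The new technique in the abstract replaces this single-compartment assumption with a stable intravascular/extravascular exchange relationship, which is what corrects the Vd-related bias.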
Matsui, Toru; Baba, Takeshi; Kamiya, Katsumasa; Shigeta, Yasuteru
2012-03-28
We report a scheme for estimating the acid dissociation constant (pKa) based on quantum-chemical calculations combined with a polarizable continuum model, where a parameter is determined for small reference molecules. We calculated the pKa values of variously sized molecules ranging from an amino acid to a protein consisting of 300 atoms. This scheme enabled us to derive a semiquantitative pKa value of specific chemical groups and discuss the influence of the surroundings on the pKa values. As applications, we have derived the pKa value of the side chain of an amino acid and almost reproduced the experimental value. By using our computing schemes, we showed the influence of hydrogen bonds on the pKa values in the case of tripeptides, which decreases the pKa value by 3.0 units for serine in comparison with those of the corresponding monopeptides. Finally, with some assumptions, we derived the pKa values of tyrosines and serines in chignolin and a tryptophan cage. We obtained quite different pKa values of adjacent serines in the tryptophan cage; the pKa value of the OH group of Ser13 exposed to bulk water is 14.69, whereas that of Ser14 not exposed to bulk water is 20.80 because of the internal hydrogen bonds.
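The thermodynamic core of such a scheme is pKa = ΔG_deprot / (RT ln 10), usually followed by a linear correction calibrated on small reference molecules. A minimal sketch (the default slope/intercept are placeholders, not the paper's fitted parameters):

```python
import math

R = 8.31446e-3   # gas constant, kJ/(mol*K)
T = 298.15       # temperature, K
LN10 = math.log(10.0)

def pka_from_free_energy(dg_deprot_kj, slope=1.0, intercept=0.0):
    """pKa from a computed deprotonation free energy (HA -> A- + H+, kJ/mol),
    with a linear correction (slope, intercept) fitted to reference molecules
    as in the paper's scheme. Defaults are placeholders, not fitted values."""
    raw = dg_deprot_kj / (R * T * LN10)
    return slope * raw + intercept
```

In practice the slope and intercept absorb systematic errors of the continuum solvation model, which is what makes the scheme semiquantitative across molecule sizes.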
Eftekhari, Behzad; Marder, M.; Patzek, Tadeusz
2018-01-01
the external unstimulated reservoir. This allows us to estimate for the first time the effective permeability of the unstimulated shale and the spacing of fractures in the stimulated region. From an analysis of wells in the Barnett shale, we find
International Nuclear Information System (INIS)
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-01-01
Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear–quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18–30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8–30.9 Gy) and 22.0 Gy (range, 20.2–26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models traditionally used to estimate spinal cord NTCP
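The LKB model reduces a dose distribution to a generalized equivalent uniform dose (gEUD) and maps it through a probit function. A sketch of that calculation with generic parameter values (the fractionation correction and the paper's fitted n = 0.023, α/β = 17.8 Gy are not reproduced):

```python
import math

def lkb_ntcp(dose_bins, vol_fractions, n, m, td50):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.

    dose_bins: dose per bin (Gy); vol_fractions: fractional volume per bin
    (sums to 1); n: volume parameter; m: inverse slope; td50: 50% dose (Gy).
    gEUD = (sum v_i * D_i^(1/n))^n, NTCP = Phi((gEUD - TD50)/(m*TD50)).
    Parameter values used in examples are generic, not the paper's fits."""
    geud = sum(v * d ** (1.0 / n) for d, v in zip(dose_bins, vol_fractions)) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # standard normal CDF
```

A small n makes the gEUD approach the maximum dose (serial organ); the study's finding that a larger n fits better corresponds to more parallel behavior than conventionally assumed for cord.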
Directory of Open Access Journals (Sweden)
Chow Clara
2011-08-01
a cause of death did not substantively influence the pattern of mortality estimated. Substantially abbreviated and simplified verbal autopsy questionnaires might provide robust information about high-level mortality patterns.
Pedler, Ashley; Kamper, Steven J; Sterling, Michele
2016-08-01
The fear avoidance model (FAM) has been proposed to explain the development of chronic disability in a variety of conditions including whiplash-associated disorders (WADs). The FAM does not account for symptoms of posttraumatic stress disorder (PTSD) and sensory hypersensitivity, which are associated with poor recovery from whiplash injury. The aim of this study was to explore a model for the maintenance of pain and related disability in people with WAD including symptoms of PTSD, sensory hypersensitivity, and FAM components. The relationship between individual components in the model and disability and how these relationships changed over the first 12 weeks after injury were investigated. We performed a longitudinal study of 103 (74 female) patients with WAD. Measures of pain intensity, cold and mechanical pain thresholds, symptoms of posttraumatic stress, pain catastrophising, kinesiophobia, and fear of cervical spine movement were collected within 6 weeks of injury and at 12 weeks after injury. Mixed-model analysis using Neck Disability Index (NDI) scores and average 24-hour pain intensity as the dependent variables revealed that overall model fit was greatest when measures of fear of movement, posttraumatic stress, and sensory hypersensitivity were included. The interactive effects of time with catastrophising and time with fear of activity of the cervical spine were also included in the best model for disability. These results provide preliminary support for the addition of neurobiological and stress system components to the FAM to explain poor outcome in patients with WAD.
Eftekhari, Behzad
2018-05-23
About half of US natural gas comes from gas shales. It is valuable to study field production well by well. We present a field data-driven solution for long-term shale gas production from a horizontal, hydrofractured well far from other wells and reservoir boundaries. Our approach is a hybrid between an unstructured big-data approach and physics-based models. We extend a previous two-parameter scaling theory of shale gas production by adding a third parameter that incorporates gas inflow from the external unstimulated reservoir. This allows us to estimate for the first time the effective permeability of the unstimulated shale and the spacing of fractures in the stimulated region. From an analysis of wells in the Barnett shale, we find that on average stimulation fractures are spaced every 20 m, and the effective permeability of the unstimulated region is 100 nanodarcy. We estimate that over 30 years of production the Barnett wells will produce on average about 20% more gas because of inflow from the outside of the stimulated volume. There is a clear tradeoff between production rate and ultimate recovery in shale gas development. In particular, our work has strong implications for well spacing in infill drilling programs.
Directory of Open Access Journals (Sweden)
David H Williamson
No-take marine reserves (NTMRs) are increasingly being established to conserve or restore biodiversity and to enhance the sustainability of fisheries. Although effectively designed and protected NTMR networks can yield conservation and fishery benefits, reserve effects often fail to manifest in systems where there are high levels of non-compliance by fishers (poaching). Obtaining reliable estimates of NTMR non-compliance can be expensive and logistically challenging, particularly in areas with limited or non-existent resources for conducting surveillance and enforcement. Here we assess the utility of density estimates and re-accumulation rates of derelict (lost and abandoned) fishing line as a proxy for fishing effort and NTMR non-compliance on fringing coral reefs in three island groups of the Great Barrier Reef Marine Park (GBRMP), Australia. Densities of derelict fishing line were consistently lower on reefs within old (>20 years) NTMRs than on non-NTMR reefs (significantly so in the Palm and Whitsunday Islands), whereas line densities did not differ significantly between reefs in new NTMRs (5 years of protection) and non-NTMR reefs. A manipulative experiment in which derelict fishing lines were removed from a subset of the monitoring sites demonstrated that lines re-accumulated on NTMR reefs at approximately one third (32.4%) of the rate observed on non-NTMR reefs over a thirty-two month period. Although these inshore NTMRs have long been considered some of the best protected within the GBRMP, evidence presented here suggests that the level of non-compliance with NTMR regulations is higher than previously assumed.
Robust and accurate vectorization of line drawings.
Hilaire, Xavier; Tombre, Karl
2006-06-01
This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
Accurate quantum chemical calculations
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Bartlett, D L; Ezzati-Rice, T M; Stokley, S; Zhao, Z
2001-05-01
The National Immunization Survey (NIS) and the National Health Interview Survey (NHIS) produce national coverage estimates for children aged 19 months to 35 months. The NIS is a cost-effective, random-digit-dialing telephone survey that produces national and state-level vaccination coverage estimates. The National Immunization Provider Record Check Study (NIPRCS) is conducted in conjunction with the annual NHIS, which is a face-to-face household survey. As the NIS is a telephone survey, potential coverage bias exists as the survey excludes children living in nontelephone households. To assess the validity of estimates of vaccine coverage from the NIS, we compared 1995 and 1996 NIS national estimates with results from the NHIS/NIPRCS for the same years. Both the NIS and the NHIS/NIPRCS produce similar results. The NHIS/NIPRCS supports the findings of the NIS.
Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane
2017-01-01
Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...
Amano, Nobuko; Nakamura, Tomiyo
2018-02-01
The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake through estimation of plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under the routine conditions of a hospital setting. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease. The patients were provided different therapeutic diets presented in various food types. This study was performed between February and April 2013, and 82 patients with Alzheimer's disease were included. Plate waste was evaluated for the 3 main daily meals, for a total of 21 days, 7 consecutive days during each of the 3 months, yielding a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between both methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered for analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement consisted of 3.8% underestimation and 17.8% overestimation. Cronbach's α (0.60) indicated that the reliability of the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. The successful implementation and use of the method depends upon adequate training and motivation of the nurses and care staff involved. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.
A multiple regression analysis for accurate background subtraction in 99Tcm-DTPA renography
International Nuclear Information System (INIS)
Middleton, G.W.; Thomson, W.H.; Davies, I.H.; Morgan, A.
1989-01-01
A technique for accurate background subtraction in 99Tcm-DTPA renography is described. The technique is based on a multiple regression analysis of the renal curves and separate heart and soft tissue curves which together represent background activity. It is compared, in over 100 renograms, with a previously described linear regression technique. Results show that the method provides accurate background subtraction, even in very poorly functioning kidneys, thus enabling relative renal filtration and excretion to be accurately estimated. (author)
Towards accurate emergency response behavior
International Nuclear Information System (INIS)
Sargent, T.O.
1981-01-01
Nuclear reactor operator emergency response behavior has persisted as a training problem through a lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge-based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge-based behavior, are described in detail.
When Is Network Lasso Accurate?
Directory of Open Access Journals (Sweden)
Alexander Jung
2018-01-01
The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method makes it possible to learn graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure the network Lasso to be accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee the network Lasso for a particular loss function to deliver an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
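To make the objective concrete: the network Lasso minimizes a squared loss on the sampled nodes plus the total variation over edges. A toy subgradient solver for that objective (real implementations use ADMM or proximal methods; this sketch is only illustrative and the parameters are arbitrary):

```python
def network_lasso(n, edges, samples, lam=0.5, steps=4000, lr=0.02):
    """Toy subgradient descent on the network Lasso objective
        sum_{i in S} (x_i - y_i)^2  +  lam * sum_{(i,j) in E} |x_i - x_j|.
    `samples` maps each sampled node index to its noisy observation y_i.
    Suitable only for small demonstration graphs."""
    def sign(v):
        return (v > 0) - (v < 0)
    x = [0.0] * n
    for _ in range(steps):
        g = [0.0] * n
        for i, y in samples.items():          # data-fit subgradient
            g[i] += 2.0 * (x[i] - y)
        for i, j in edges:                    # total-variation subgradient
            s = sign(x[i] - x[j])
            g[i] += lam * s
            g[j] -= lam * s
        for i in range(n):
            x[i] -= lr * g[i]
    return x
```

On a graph with two disconnected clusters and one sample per cluster, the TV term propagates each sampled value across its cluster, which is the clustered-signal recovery behavior the paper's conditions formalize.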
International Nuclear Information System (INIS)
Komatsu, Sei; Imai, Atsuko; Kodama, Kazuhisa
2011-01-01
Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time course. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. (author)
Spectrally accurate contour dynamics
International Nuclear Information System (INIS)
Van Buskirk, R.D.; Marcus, P.S.
1994-01-01
We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piece-wise constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use.
Fast and accurate methods for phylogenomic analyses
Directory of Open Access Journals (Sweden)
Warnow Tandy
2011-10-01
Background: Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results: We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions: Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.
Accurate determination of antenna directivity
DEFF Research Database (Denmark)
Dich, Mikael
1997-01-01
The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence.
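As a baseline for what the paper improves on, directivity can be computed by brute-force quadrature of the sampled power pattern, D = 4π U_max / ∫ U dΩ. A sketch of that plain trapezoidal approach (the paper's contribution is a spherical-wave-expansion formula with a sampling criterion, which this does not implement):

```python
import math

def directivity(pattern, n_theta=181, n_phi=361):
    """Directivity from a power pattern U(theta, phi) sampled on a regular
    spherical grid, via trapezoidal integration: D = 4*pi*Umax / integral."""
    dth = math.pi / (n_theta - 1)
    dph = 2.0 * math.pi / (n_phi - 1)
    total = 0.0
    umax = 0.0
    for it in range(n_theta):
        th = it * dth
        wt = 0.5 if it in (0, n_theta - 1) else 1.0   # trapezoid endpoint weight
        for ip in range(n_phi):
            wp = 0.5 if ip in (0, n_phi - 1) else 1.0
            u = pattern(th, ip * dph)
            umax = max(umax, u)
            total += wt * wp * u * math.sin(th) * dth * dph
    return 4.0 * math.pi * umax / total
```

For a Hertzian dipole, U ∝ sin²θ, this recovers the textbook directivity of 1.5; the point of the paper's formula is to reach such accuracy from far fewer, principled sample points.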
International Nuclear Information System (INIS)
Deslattes, R.D.
1987-01-01
Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. These having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in measurement of these spectra. This report aims to provide background about spectroscopic limitations and discuss how accelerator operations may be selected to permit attaining intrinsically limited data
Directory of Open Access Journals (Sweden)
Denna Michael
2014-01-01
Introduction: Spectrum epidemiological models are used by UNAIDS to provide global, regional and national HIV estimates and projections, which are then used for evidence-based health planning for HIV services. However, there are no validations of the Spectrum model against empirical serological and mortality data from populations in sub-Saharan Africa. Methods: Serologic, demographic and verbal autopsy data have been regularly collected among over 30,000 residents in north-western Tanzania since 1994. Five-year age-specific mortality rates (ASMRs) per 1,000 person-years and the probability of dying between 15 and 60 years of age (45Q15) were calculated and compared with the Spectrum model outputs. Mortality trends by HIV status are shown for periods before the introduction of antiretroviral therapy (1994–1999 and 2000–2004) and the first 5 years afterwards (2005–2009). Results: Among 30–34 year olds of both sexes, observed ASMRs per 1,000 person-years were 13.33 (95% CI: 10.75–16.52) in the period 1994–1999, 11.03 (95% CI: 8.84–13.77) in 2000–2004, and 6.22 (95% CI: 4.75–8.15) in 2005–2009. Among the same age group, the ASMRs estimated by the Spectrum model were 10.55, 11.13 and 8.15 for the periods 1994–1999, 2000–2004 and 2005–2009, respectively. The cohort data, for both sexes combined, showed that the 45Q15 declined from 39% (95% CI: 27–55%) in 1994 to 22% (95% CI: 17–29%) in 2009, whereas the Spectrum model predicted a decline from 43% in 1994 to 37% in 2009. Conclusion: From 1994 to 2009, the observed decrease in ASMRs was steeper in younger age groups than that predicted by the Spectrum model, perhaps because the Spectrum model under-estimated the ASMRs in 30–34 year olds in 1994–1999. However, the Spectrum model predicted a greater decrease in 45Q15 mortality than observed in the cohort, although the reasons for this over-estimate are unclear.
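The 45Q15 summary measure is derived from five-year ASMRs by converting each rate to a probability of dying and chaining survival across the nine age groups from 15–19 to 55–59. A sketch of that standard life-table arithmetic (illustrative rates, not the cohort's data):

```python
def prob_dying_15_to_60(asmr_per_1000):
    """45Q15 from five-year age-specific mortality rates (per 1,000 person-
    years) for ages 15-19 ... 55-59, using the standard actuarial conversion
    nqx = n*m / (1 + (n/2)*m) and chained survival: 45q15 = 1 - prod(1-5qx)."""
    survival = 1.0
    for m_per_1000 in asmr_per_1000:
        m = m_per_1000 / 1000.0
        q5 = 5.0 * m / (1.0 + 2.5 * m)   # 5-year probability of dying
        survival *= 1.0 - q5
    return 1.0 - survival
```

For example, a flat rate of 10 per 1,000 person-years across all nine groups implies a 45Q15 of roughly 36%, on the order of the cohort's 1994 estimate.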
International Nuclear Information System (INIS)
Moore, B; Brady, S; Kaufman, R; Mirro, A
2014-01-01
Purpose: Investigate the correlation of SSDE with organ dose in a pediatric population. Methods: Four anthropomorphic phantoms, representing a range of pediatric body habitus, were scanned with MOSFET dosimeters placed at 23 organ locations to determine absolute organ dosimetry. Phantom organ dosimetry was divided by phantom SSDE to determine correlation between organ dose and SSDE. Correlation factors were then multiplied by patient SSDE to estimate patient organ dose. Patient demographics consisted of 352 chest and 241 abdominopelvic CT examinations, 22 ± 15 kg (range 5−55 kg) mean weight, and 6 ± 5 years (range 4 months to 23 years) mean age. Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm. 23 organ correlation factors were determined in the chest and abdominopelvic region across nine pediatric weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7−1.4) and abdominopelvic (average 0.9; range 0.7−1.3) was near unity. For organs that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range: 0.1−0.4) for both the chest and abdominopelvic regions, respectively. Pediatric organ dosimetry was compared to published values and was found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusion: Average correlation of SSDE and organ dosimetry was found to be better than ± 10% for fully covered organs within the scan volume. This study provides a list of organ dose correlation factors for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE.
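The methodology reduces to two multiplications: convert CTDIvol to SSDE with a size-dependent factor, then scale by an organ-specific correlation factor. A sketch using the AAPM Report 204 exponential fit for the 32-cm reference phantom and placeholder correlation factors (the study's 23 fitted factors are not reproduced here):

```python
import math

# Placeholder correlation factors (organ dose / SSDE); the study derives
# 23 such factors per region -- these values are illustrative only.
CHEST_FACTORS = {"lung": 1.1, "breast": 1.2, "skin": 0.3}

def ssde_from_ctdivol(ctdivol_mgy, eff_diameter_cm):
    """SSDE = f(d) * CTDIvol, using the AAPM Report 204 exponential fit of
    the size conversion factor for the 32-cm reference phantom."""
    f = 3.704369 * math.exp(-0.03671937 * eff_diameter_cm)
    return f * ctdivol_mgy

def estimate_organ_dose(ctdivol_mgy, eff_diameter_cm, organ):
    """Organ dose estimate ~= correlation_factor * patient SSDE (mGy)."""
    return CHEST_FACTORS[organ] * ssde_from_ctdivol(ctdivol_mgy, eff_diameter_cm)
```

Because the conversion factor falls off with effective diameter, a smaller patient receives a higher SSDE (and organ dose) for the same displayed CTDIvol, which is the size dependence the correlation factors build on.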
Energy Technology Data Exchange (ETDEWEB)
Moore, B; Brady, S; Kaufman, R [St Jude Children's Research Hospital, Memphis, TN (United States); Mirro, A [Washington University, St. Louis, MO (United States)
2014-06-15
Baldini, Christopher G; Culley, Eric J
2011-01-01
A large managed care organization (MCO) in western Pennsylvania initiated a Medical Injectable Drug (MID) program in 2002 that transferred a specific subset of specialty drugs from physician reimbursement under the traditional "buy-and-bill" model in the medical benefit to MCO purchase from a specialty pharmacy provider (SPP) that supplied physician offices with the MIDs. The MID program was initiated with 4 drugs in 2002 (palivizumab and 3 hyaluronate products/derivatives), growing to more than 50 drugs by 2007-2008. The objectives were to (a) describe the MID program as a method to manage the cost and delivery of this subset of specialty drugs and (b) estimate the MID program cost savings in 2007 and 2008 in an MCO with approximately 4.6 million members. Cost savings generated by the MID program were calculated by comparing the total actual expenditure (plan cost plus member cost) on medications included in the MID program for calendar years 2007 and 2008 with the total estimated expenditure that would have been paid to physicians during the same time period for the same medication if reimbursement had been made using HCPCS (J code) billing under the physician "buy-and-bill" reimbursement rates. For the approximately 50 drugs in the MID program in 2007 and 2008, the drug cost savings in 2007 were estimated to be $15.5 million (18.2%) or $290 per claim ($0.28 per member per month [PMPM]) and about $13 million (12.7%) or $201 per claim ($0.23 PMPM) in 2008. Although 28% of MID claims continued to be billed by physicians using J codes in 2007 and 22% in 2008, all claims for MIDs were limited to the SPP reimbursement rates. This MID program was associated with health plan cost savings of approximately $28.5 million over 2 years, achieved by the transfer of about 50 physician-administered injectable pharmaceuticals from reimbursement to physicians to reimbursement to a single SPP and payment of physician claims for MIDs at the SPP reimbursement rates.
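The per-member-per-month (PMPM) figures quoted above follow from simple division of annual savings by member-months; a sketch using the plan size and 2007 savings from the text:

```python
def pmpm(total_savings_usd, members, months=12):
    """Per-member-per-month value of a total annual savings figure."""
    return total_savings_usd / (members * months)

# ~$15.5M savings across ~4.6M members in 2007
print(round(pmpm(15_500_000, 4_600_000), 2))  # 0.28, matching the reported $0.28 PMPM
```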
Steffensen Schmidt, Louise; Aðalgeirsdóttir, Guðfinna; Guðmundsson, Sverrir; Langen, Peter L.; Pálsson, Finnur; Mottram, Ruth; Gascoin, Simon; Björnsson, Helgi
2017-07-01
A simulation of the surface climate of Vatnajökull ice cap, Iceland, carried out with the regional climate model HIRHAM5 for the period 1980-2014, is used to estimate the evolution of the glacier surface mass balance (SMB). This simulation uses a new snow albedo parameterization that allows the albedo to decay exponentially with time, with a decay rate that depends on surface temperature. The albedo scheme utilizes a new background map of the ice albedo created from observed MODIS data. The simulation is evaluated against observed daily values of weather parameters from five automatic weather stations (AWSs) from the period 2001-2014, as well as in situ SMB measurements from the period 1995-2014. The model agrees well with observations at the AWS sites, albeit with a general underestimation of the net radiation. This is due to an underestimation of the incoming radiation and a general overestimation of the albedo. The average modelled albedo is overestimated in the ablation zone, which we attribute to an overestimation of the thickness of the snow layer and not taking the surface darkening from dirt and volcanic ash deposition during dust storms and volcanic eruptions into account. A comparison with the specific summer, winter, and net mass balance for the whole of Vatnajökull (1995-2014) shows a good overall fit during the summer, with a small mass balance underestimation of 0.04 m w.e. on average, whereas the winter mass balance is overestimated by on average 0.5 m w.e. due to excessive precipitation over the highest areas of the ice cap. A simple correction of the accumulation at the highest points of the glacier reduces this to 0.15 m w.e. Here, we use HIRHAM5 to simulate the evolution of the SMB of Vatnajökull for the period 1981-2014 and show that the model provides a reasonable representation of the SMB for this period. However, a major source of uncertainty in the representation of the SMB is the representation of the albedo, and processes currently not accounted for in RCMs.
Directory of Open Access Journals (Sweden)
L. S. Schmidt
2017-07-01
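A minimal sketch of an exponentially decaying snow albedo of the kind the parameterization above uses; all constants here are illustrative placeholders, not HIRHAM5's, and the surface-temperature dependence of the decay is omitted:

```python
import math

def snow_albedo(age_days, fresh=0.85, ice_background=0.4, tau_days=21.9):
    """Albedo decays exponentially with snow age toward a background ice value.
    (The scheme described above also makes the decay surface-temperature
    dependent; that dependence is omitted in this sketch.)"""
    return ice_background + (fresh - ice_background) * math.exp(-age_days / tau_days)

print(round(snow_albedo(0), 2))   # 0.85 (fresh snow)
print(round(snow_albedo(60), 2))  # decayed most of the way to the ice background
```

The "background map" in the paper corresponds to making `ice_background` spatially varying from MODIS observations rather than a single constant.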
Equipment upgrade - Accurate positioning of ion chambers
International Nuclear Information System (INIS)
Doane, Harry J.; Nelson, George W.
1990-01-01
Five adjustable clamps were made to firmly support and accurately position the ion chambers that provide signals to the power channels of the University of Arizona TRIGA reactor. The design requirements, fabrication procedure, and installation are described.
Directory of Open Access Journals (Sweden)
T ŽNIDARŠIČ
2003-10-01
The aim of this work was to examine the necessity of using the standard sample in the Hohenheim gas test. Over a three-year period, 24 runs of forage samples were incubated with rumen liquor in vitro. Besides the forage samples, the standard hay sample provided by the Hohenheim University (HFT-99) was included in the experiment. Half of the runs were incubated with rumen liquor of cattle and half with rumen liquor of sheep. Gas produced during the 24 h incubation of the standard sample was measured and compared to the declared value of sample HFT-99. Besides HFT-99, 25 test samples with known digestibility coefficients determined in vivo were included in the experiment. Based on the gas production of HFT-99, it was found that the donor animal (cattle or sheep) did not significantly affect the activity of rumen liquor (41.4 vs. 42.2 ml of gas per 200 mg dry matter, P>0.1). Nor were differences between years (41.9, 41.2 and 42.3 ml of gas per 200 mg dry matter, P>0.1) significant. However, a variability of about 10% (from 38.9 to 43.7 ml of gas per 200 mg dry matter) was observed between runs. In the present experiment, the gas production of HFT-99 was about 6% lower than the value obtained by the Hohenheim University (41.8 vs. 44.43 ml per 200 mg dry matter). This indicates a systematic error between the laboratories. In the case of the twenty-five test samples, correction on the basis of the standard sample reduced the average difference between the in vitro estimates of net energy for lactation (NEL) and the in vivo determined values. It was concluded that, due to variation between runs and systematic differences in rumen liquor activity between the two laboratories, the results of the Hohenheim gas test have to be corrected on the basis of the standard sample.
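One plausible form of the correction recommended in the conclusion is to scale each run's readings by the ratio of the declared to the measured gas production of the standard sample; the abstract does not give the exact formula, so the ratio form below is an assumption:

```python
def correct_gas(sample_ml, hft99_measured_ml, hft99_declared_ml=44.43):
    """Scale a run's gas reading by declared/measured HFT-99 production
    (assumed ratio-style correction; declared value from the text)."""
    return sample_ml * hft99_declared_ml / hft99_measured_ml

# a run in which HFT-99 produced 41.8 ml instead of the declared 44.43 ml
print(round(correct_gas(40.0, 41.8), 1))  # 42.5
```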
Rogawski, Elizabeth T; Platts-Mills, James A; Colgate, E Ross; Haque, Rashidul; Zaman, K; Petri, William A; Kirkpatrick, Beth D
2018-03-05
The low efficacy of rotavirus vaccines in clinical trials performed in low-resource settings may be partially explained by acquired immunity from natural exposure, especially in settings with high disease incidence. In a clinical trial of monovalent rotavirus vaccine in Bangladesh, we compared the original per-protocol efficacy estimate to efficacy derived from a recurrent events survival model in which children were considered naturally exposed and potentially immune after their first rotavirus diarrhea (RVD) episode. We then simulated trial cohorts to estimate the expected impact of prior exposure on efficacy estimates for varying rotavirus incidence rates and vaccine efficacies. Accounting for natural immunity increased the per-protocol vaccine efficacy estimate against severe RVD from 63.1% (95% confidence interval [CI], 33.0%-79.7%) to 70.2% (95% CI, 44.5%-84.0%) in the postvaccination period, and original year 2 efficacy was underestimated by 14%. The simulations demonstrated that this expected impact increases linearly with RVD incidence, will be greatest for vaccine efficacies near 50%, and can reach 20% in settings with high incidence and low efficacy. High rotavirus incidence leads to predictably lower vaccine efficacy estimates due to the acquisition of natural immunity in unvaccinated children, and this phenomenon should be considered when comparing efficacy estimates across settings. Clinical trials registration: NCT01375647.
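The efficacy percentages above are, in essence, one minus a rate (or hazard) ratio between vaccinated and unvaccinated children; a toy calculation with invented incidence rates:

```python
def vaccine_efficacy(rate_vaccinated, rate_unvaccinated):
    """Efficacy as 1 - rate ratio (the survival-model estimates above are the
    analogous 1 - hazard ratio). Rates here are invented for illustration."""
    return 1.0 - rate_vaccinated / rate_unvaccinated

print(round(100 * vaccine_efficacy(0.037, 0.100), 1))  # 63.0 (% efficacy)
```

The paper's point is that natural infection lowers `rate_unvaccinated` over time, which mechanically deflates this estimate.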
Carey, Mary G; Luisi, Andrew J; Baldwa, Sunil; Al-Zaiti, Salah; Veneziano, Marc J; deKemp, Robert A; Canty, John M; Fallavollita, James A
2010-01-01
Infarct volume independently predicts cardiovascular events. Fragmented QRS complexes (fQRS) may complement Q waves for identifying infarction; however, their utility in advanced coronary disease is unknown. We tested whether fQRS could improve the electrocardiographic prediction of infarct volume by positron emission tomography in 138 patients with ischemic cardiomyopathy (ejection fraction, 0.27 +/- 0.09). Indices of infarction (pathologic Q waves, fQRS, and Selvester QRS Score) were analyzed by blinded observers. In patients with QRS duration less than 120 milliseconds, the number of leads with pathologic Q waves (mean, 1.6 +/- 1.7) correlated weakly with infarct volume (r = 0.30). fQRS improved the Q-wave prediction of infarct volume, but the Selvester Score was more accurate. Published by Elsevier Inc.
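The r value reported above is a Pearson correlation between an ECG index and infarct volume; for reference, the statistic computed on toy data (not the study's):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g. leads-with-Q-waves counts vs. infarct volumes (invented numbers)
print(round(pearson_r([0, 1, 2, 3], [1, 2, 2, 4]), 2))  # 0.92
```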
Earle, P.S.; Wald, D.J.; Allen, T.I.; Jaiswal, K.S.; Porter, K.A.; Hearne, M.G.
2008-01-01
One half-hour after the May 12th Mw 7.9 Wenchuan, China earthquake, the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) system distributed an automatically generated alert stating that 1.2 million people were exposed to severe-to-extreme shaking (Modified Mercalli Intensity VIII or greater). It was immediately clear that a large-scale disaster had occurred. These alerts were widely distributed and referenced by the major media outlets and used by governments, scientific, and relief agencies to guide their responses. The PAGER alerts and Web pages included predictive ShakeMaps showing estimates of ground shaking, maps of population density, and a list of estimated intensities at impacted cities. Manual, revised alerts were issued in the following hours that included the dimensions of the fault rupture. Within a half-day, PAGER’s estimates of the population exposed to strong shaking levels stabilized at 5.2 million people. A coordinated research effort is underway to extend PAGER’s capability to include estimates of the number of casualties. We are pursuing loss models that will allow PAGER the flexibility to use detailed inventory and engineering results in regions where these data are available while also calculating loss estimates in regions where little is known about the type and strength of the built infrastructure. Prototype PAGER fatality estimates are currently implemented and can be manually triggered. In the hours following the Wenchuan earthquake, these models predicted fatalities in the tens of thousands.
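An exposure alert of the kind described above reduces to summing gridded population over shaking-intensity bins; a toy sketch with an invented grid:

```python
def population_exposed(cells, min_mmi=8.0):
    """Total population in grid cells at or above a shaking-intensity threshold.
    cells: iterable of (population, modified_mercalli_intensity) pairs."""
    return sum(pop for pop, mmi in cells if mmi >= min_mmi)

grid = [(500_000, 9.1), (700_000, 8.2), (2_000_000, 6.5)]  # invented cells
print(population_exposed(grid))  # 1200000 exposed to MMI VIII or greater
```

PAGER's real pipeline derives the per-cell intensities from a ShakeMap and uses gridded population databases; this sketch only shows the final tally.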
Can blind persons accurately assess body size from the voice?
Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka
2016-04-01
Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, congenitally blind, or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. © 2016 The Author(s).
Accurate thickness measurement of graphene
International Nuclear Information System (INIS)
Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T
2016-01-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
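Converting a measured AFM step height into a layer count illustrates why the adsorbate layer matters; the offset value and the simple model below are illustrative assumptions, not the paper's calibration:

```python
def layer_count(step_height_nm, adsorbate_offset_nm=0.4, spacing_nm=0.335):
    """Estimate graphene layer number from an AFM step height by subtracting a
    substrate-adsorbate offset (hypothetical value) and dividing by the
    commonly cited ~0.335 nm interlayer spacing."""
    return max(1, round((step_height_nm - adsorbate_offset_nm) / spacing_nm) + 1)

print(layer_count(0.5))  # 1: a 'thick' single layer once the offset is removed
print(layer_count(1.1))  # 3
```

This is why raw single-layer heights of 0.4-1.7 nm, as cited above, can all correspond to one layer once the adsorbate contribution is accounted for.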
ASTRAL, DRAGON and SEDAN scores predict stroke outcome more accurately than physicians.
Ntaios, G; Gioulekas, F; Papavasileiou, V; Strbian, D; Michel, P
2016-11-01
ASTRAL, SEDAN and DRAGON are three well-validated scores for stroke outcome prediction. We investigated whether these scores predict stroke outcome more accurately than physicians interested in stroke. Physicians interested in stroke were invited to an online anonymous survey to provide outcome estimates in randomly allocated structured scenarios of recent real-life stroke patients. Their estimates were compared to the scores' predictions in the same scenarios. An estimate was considered accurate if it was within the 95% confidence interval of the actual outcome. In all, 244 participants from 32 different countries responded, assessing 720 real scenarios and 2636 outcomes. The majority of physicians' estimates were inaccurate (1422/2636, 53.9%). Of physicians' estimates of the percentage probability of a 3-month modified Rankin score (mRS) > 2, 400 (56.8%) were accurate, compared with 609 (86.5%) of ASTRAL score estimates. ASTRAL, DRAGON and SEDAN scores predict the outcome of acute ischaemic stroke patients with higher accuracy than physicians interested in stroke. © 2016 EAN.
Zhou, Huimin; Xiao, Qiaoling; Tan, Wen; Zhan, Yiyi; Pistolozzi, Marco
2017-09-10
Several molecules containing carbamate groups are metabolized by cholinesterases. This metabolism includes a time-dependent catalytic step which temporarily inhibits the enzymes. In this paper we demonstrate that the analysis of the area under the inhibition versus time curve (AUIC) can be used to obtain a quantitative estimation of the amount of carbamate metabolized by the enzyme. (R)-bambuterol monocarbamate and plasma butyrylcholinesterase were used as the model carbamate-cholinesterase system. The inhibition of different concentrations of the enzyme was monitored for 5 h upon incubation with different concentrations of carbamate and the resulting AUICs were analyzed. The amount of carbamate metabolized could be estimated with cholinesterases in a selected compartment in which the cholinesterase is confined (e.g. in vitro solutions, tissues or body fluids), either in vitro or in vivo. Copyright © 2017 Elsevier B.V. All rights reserved.
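The AUIC itself is just the integral of the inhibition time course; computing it from sampled data with the trapezoidal rule (data points invented):

```python
def auic(times_h, inhibition):
    """Area under the inhibition-versus-time curve by the trapezoidal rule.
    inhibition: fraction of enzyme inhibited at each sampling time."""
    area = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        area += 0.5 * (inhibition[i] + inhibition[i - 1]) * dt
    return area

t = [0, 1, 2, 3, 4, 5]                 # hours
inh = [0.0, 0.6, 0.8, 0.6, 0.3, 0.1]   # invented inhibition time course
print(round(auic(t, inh), 2))  # 2.35 (fraction x h)
```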
Bontemps, Jean-Daniel; Esper, Jan
2011-01-01
Dendrochronological methods have greatly contributed to the documentation of past long-term trends in forest growth. These methods primarily focus on the high-frequency signals of tree ring chronologies. They require the removal of the ageing trend in tree growth, known as 'standardisation' or 'detrending', as a prerequisite to the estimation of such trends. Because the approach is sequential, it may however absorb part of the low-frequency historical signal. In this s...
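The 'standardisation' step described above amounts to fitting an ageing curve and dividing it out; a log-linear fit of a negative-exponential stand-in (one of several conventional curve choices) sketches the idea:

```python
import math

def detrend(widths):
    """Divide ring widths by a fitted negative-exponential ageing curve,
    returning dimensionless ring-width indices (a common standardisation)."""
    n = len(widths)
    xs = range(n)
    ys = [math.log(w) for w in widths]
    xbar, ybar = (n - 1) / 2, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    intercept = ybar - slope * xbar
    trend = [math.exp(intercept + slope * x) for x in xs]
    return [w / t for w, t in zip(widths, trend)]

# a purely exponential ageing series detrends to indices of ~1
print([round(i, 3) for i in detrend([math.exp(2 - 0.1 * k) for k in range(5)])])
```

Any long-term growth trend shared with the ageing curve is absorbed by the fit, which is exactly the low-frequency loss the abstract warns about.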
Directory of Open Access Journals (Sweden)
Manuel G Scotto
2003-12-01
This essay reviews some statistical concepts frequently used in public health research that are commonly misinterpreted. These include point estimates, confidence intervals, and hypothesis tests. By comparing them using the classical and the Bayesian perspectives, their interpretation becomes clearer.
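The distinction between a point estimate and its confidence interval can be made concrete with the normal-approximation interval for a proportion:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate and 95% CI for a proportion (normal approximation)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, (p - z * se, p + z * se)

p, (lo, hi) = proportion_ci(30, 100)
print(p, round(lo, 2), round(hi, 2))  # 0.3 0.21 0.39
```

The point estimate 0.3 says nothing about precision; the interval does, and neither is a Bayesian probability statement about the parameter, which is the misreading the essay targets.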
Liu, Xiaowei; Saydah, Benjamin; Eranki, Pragnya; Colosi, Lisa M; Greg Mitchell, B; Rhodes, James; Clarens, Andres F
2013-11-01
Life cycle assessment (LCA) has been used widely to estimate the environmental implications of deploying algae-to-energy systems even though no full-scale facilities have yet been built. Here, data from a pilot-scale facility using hydrothermal liquefaction (HTL) are used to estimate the life cycle profiles at full scale. Three scenarios (lab-, pilot-, and full-scale) were defined to understand how development in the industry could impact its life cycle burdens. HTL-derived algae fuels were found to have lower greenhouse gas (GHG) emissions than petroleum fuels. Algae-derived gasoline had significantly lower GHG emissions than corn ethanol. Most algae-based fuels have an energy return on investment between 1 and 3, which is lower than petroleum fuels. Sensitivity analyses reveal several areas in which improvements by algae bioenergy companies (e.g., biocrude yields, nutrient recycle) and by supporting industries (e.g., CO2 supply chains) could reduce the burdens of the industry. Copyright © 2013 Elsevier Ltd. All rights reserved.
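The energy return on investment (EROI) cited above is the ratio of energy delivered to energy invested across the life cycle; a trivial evaluation with invented energies:

```python
def eroi(energy_delivered_mj, energy_invested_mj):
    """Energy return on investment: fuel energy out per unit energy in."""
    return energy_delivered_mj / energy_invested_mj

print(eroi(120.0, 60.0))  # 2.0, inside the 1-3 range reported for algae fuels
```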
The accurate particle tracer code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun
2017-11-01
The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can significantly disperse the pitch-angle distribution and at the same time improve the confinement of the energetic runaway beam.
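APT's own pushers are not shown here; as a stand-in, the classic Boris rotation illustrates the kind of volume-preserving, long-term-stable velocity update that geometric particle codes rely on:

```python
def boris_push(v, e, b, q_over_m, dt):
    """One Boris velocity update in fields E and B (3-vectors as lists).
    A standard volume-preserving pusher, not APT's actual implementation."""
    def add(a, c): return [x + y for x, y in zip(a, c)]
    def scale(a, s): return [x * s for x in a]
    def cross(a, c):
        return [a[1]*c[2] - a[2]*c[1],
                a[2]*c[0] - a[0]*c[2],
                a[0]*c[1] - a[1]*c[0]]
    half_e = scale(e, 0.5 * q_over_m * dt)
    v_minus = add(v, half_e)                    # half electric kick
    t = scale(b, 0.5 * q_over_m * dt)
    v_prime = add(v_minus, cross(v_minus, t))   # magnetic rotation, part 1
    s = scale(t, 2.0 / (1.0 + sum(x * x for x in t)))
    v_plus = add(v_minus, cross(v_prime, s))    # magnetic rotation, part 2
    return add(v_plus, half_e)                  # second half electric kick

# pure magnetic field: the rotation leaves the speed unchanged
v1 = boris_push([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 1.0, 0.1)
print(round(sum(x * x for x in v1), 12))  # 1.0
```

Exact preservation of invariants like this speed (and, for symplectic pushers, phase-space volume) is what gives such codes their long-term stability.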
Schlittenhardt, J.
A comparison of regional and teleseismic log rms (root-mean-square) Lg amplitude measurements has been made for 14 underground nuclear explosions from the East Kazakh test site recorded both by the BRV (Borovoye) station in Kazakhstan and the GRF (Gräfenberg) array in Germany. The log rms Lg amplitudes observed at the BRV regional station at a distance of 690 km and at the teleseismic GRF array at a distance exceeding 4700 km show very similar relative values (standard deviation 0.048 magnitude units) for underground explosions of different sizes at the Shagan River test site. This result, as well as the comparison of BRV rms Lg magnitudes (which were calculated from the log rms amplitudes using an appropriate calibration) with magnitude determinations for P waves of global seismic networks (standard deviation 0.054 magnitude units), points to a high precision in estimating the relative source sizes of explosions from Lg-based single-station data. Similar results were also obtained by other investigators (Patton, 1988; Ringdal et al., 1992) using Lg data from different stations at different distances. Additionally, GRF log rms Lg and P-coda amplitude measurements were made for a larger data set from Novaya Zemlya and East Kazakh explosions, which were supplemented with mb(Lg) amplitude measurements using a modified version of Nuttli's (1973, 1986a) method. From this test of the relative performance of the three different magnitude scales, it was found that the Lg- and P-coda-based magnitudes performed equally well, whereas the modified Nuttli mb(Lg) magnitudes show greater scatter when compared to the worldwide mb reference magnitudes. Whether this result indicates that the rms amplitude measurements are superior to the zero-to-peak amplitude measurement of a single cycle used for the modified Nuttli method, however, cannot be finally assessed, since the calculated mb(Lg) magnitudes are only preliminary until appropriate attenuation corrections are available for the
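A single-station rms Lg magnitude of the kind discussed above is essentially a log amplitude plus a station calibration term; a sketch with a hypothetical calibration constant:

```python
import math

def rms_lg_magnitude(rms_amplitude, station_calibration=5.0):
    """Magnitude from an rms Lg amplitude: log10(amplitude) plus a
    station-specific calibration (the constant here is hypothetical)."""
    return math.log10(rms_amplitude) + station_calibration

# doubling the rms amplitude raises the magnitude by log10(2) ~ 0.3
print(round(rms_lg_magnitude(2.0) - rms_lg_magnitude(1.0), 3))  # 0.301
```

Because the calibration cancels in differences, relative source sizes can be compared precisely even from one station, which is the result the abstract emphasizes.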
Roszkiewicz, Malgorzata
2004-01-01
The results of studies conducted over the last 5 years in Poland formed the basis for the assumption that, among the many needs a Polish individual or household seeks to satisfy, the need to provide for security in old age takes a prominent position. Determining the position of this need among other needs as defined in Schrab's classification…
Generalized estimating equations
Hardin, James W
2002-01-01
Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields. Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th
NUMATH: a nuclear material holdup estimator for unit operations and chemical processes
International Nuclear Information System (INIS)
Krichinsky, A.M.
1982-01-01
NUMATH provides inventory estimation by utilizing previous inventory measurements, operating data, and, where available, on-line process measurements. For the present, NUMATH's purpose is to provide a reasonable, near-real-time estimate of material inventory until an accurate inventory determination can be obtained from chemical analysis. Ultimately, it is intended that NUMATH will further utilize on-line analyzers and more advanced calculational techniques to provide more accurate inventory determinations and estimates.
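At its simplest, the near-real-time book estimate described above rolls the last measured inventory forward with operating transfer records; a sketch with invented figures:

```python
def estimate_inventory(last_measured_kg, receipts_kg, removals_kg):
    """Near-real-time book inventory: last measured value plus recorded
    receipts, minus recorded removals (all figures invented)."""
    return last_measured_kg + sum(receipts_kg) - sum(removals_kg)

print(estimate_inventory(12.0, [1.5, 0.5], [0.8]))  # ~13.2 kg until the next assay
```

The estimate's uncertainty grows between chemical analyses, which is why the abstract frames it as a stopgap until an accurate determination is available.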
Energy Technology Data Exchange (ETDEWEB)
Noël, Marie, E-mail: marie.noel@stantec.com [Stantec Consulting Ltd. 2042 Mills Road, Unit 11, Sidney BC V8L 4X2 (Canada); Christensen, Jennie R., E-mail: jennie.christensen@stantec.com [Stantec Consulting Ltd. 2042 Mills Road, Unit 11, Sidney BC V8L 4X2 (Canada); Spence, Jody, E-mail: jodys@uvic.ca [School of Earth and Ocean Sciences, Bob Wright Centre A405, University of Victoria, PO BOX 3065 STN CSC, Victoria, BC V8W 3V6 (Canada); Robbins, Charles T., E-mail: ctrobbins@wsu.edu [School of the Environment and School of Biological Sciences, Washington State University, Pullman, WA 99164-4236 (United States)
2015-10-01
We enhanced an existing technique, laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), to function as a non-lethal tool in the temporal characterization of trace element exposure in wild mammals. Mercury (Hg), copper (Cu), cadmium (Cd), lead (Pb), iron (Fe) and zinc (Zn) were analyzed along the hair of captive and wild grizzly bears (Ursus arctos horribilis). Laser parameters were optimized (consecutive 2000 μm line scans along the middle line of the hair at a speed of 50 μm/s; spot size = 30 μm) for consistent ablation of the hair. A pressed pellet of reference material DOLT-2 and sulfur were used as external and internal standards, respectively. Our newly adapted method passed the quality control tests with strong correlations between trace element concentrations obtained using LA-ICP-MS and those obtained with regular solution-ICP-MS (r{sup 2} = 0.92, 0.98, 0.63, 0.57, 0.99 and 0.90 for Hg, Fe, Cu, Zn, Cd and Pb, respectively). Cross-correlation analyses revealed good reproducibility between trace element patterns obtained from hair collected from the same bear. One exception was Cd for which external contamination was observed resulting in poor reproducibility. In order to validate the method, we used LA-ICP-MS on the hair of five captive grizzly bears fed known and varying amounts of cutthroat trout over a period of 33 days. Trace element patterns along the hair revealed strong Hg, Cu and Zn signals coinciding with fish consumption. Accordingly, significant correlations between Hg, Cu, and Zn in the hair and Hg, Cu, and Zn intake were evident and we were able to develop accumulation models for each of these elements. While the use of LA-ICP-MS for the monitoring of trace elements in wildlife is in its infancy, this study highlights the robustness and applicability of this newly adapted method. - Highlights: • LA-ICP-MS provides temporal trace metal exposure information for wild grizzly bears. • Cu and Zn temporal exposures provide
International Nuclear Information System (INIS)
Noël, Marie; Christensen, Jennie R.; Spence, Jody; Robbins, Charles T.
2015-01-01
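Internal-standard quantification of the kind used above ratios the analyte signal to the sulfur internal standard and scales by a factor derived from the reference material; the numbers below are invented:

```python
def quantify(analyte_counts, internal_std_counts, response_factor):
    """Internal-standard quantification: analyte signal ratioed to the internal
    standard (sulfur in the study), scaled by a response factor calibrated on
    a reference material (DOLT-2 in the study). Values here are invented."""
    return response_factor * analyte_counts / internal_std_counts

print(quantify(5000.0, 20000.0, 4.0))  # 1.0 (e.g. ug/g)
```

Ratioing to the internal standard corrects for shot-to-shot differences in how much hair each laser line actually ablates.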
Fast and accurate determination of modularity and its effect size
International Nuclear Information System (INIS)
Treviño, Santiago III; Nyberg, Amy; Bassler, Kevin E; Del Genio, Charo I
2015-01-01
We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős–Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links. (paper)
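The z-score effect-size idea can be sketched in a few lines of numpy: compute the modularity of a fixed partition and compare it to an Erdős–Rényi ensemble with the same numbers of nodes and links. This is a simplified stand-in, since the paper re-maximizes modularity in each null network with its own spectral algorithm and derives the ensemble mean and variance analytically, with finite-size corrections:

```python
import numpy as np

def modularity(adj, labels):
    """Newman-Girvan modularity of a given partition:
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    adj = np.asarray(adj, float)
    labels = np.asarray(labels)
    k = adj.sum(axis=1)
    two_m = k.sum()
    same = labels[:, None] == labels[None, :]
    return float(((adj - np.outer(k, k) / two_m) * same).sum() / two_m)

def modularity_z(adj, labels, n_null=100, seed=0):
    """z-score of Q against an Erdos-Renyi G(n, m) ensemble with the same
    numbers of nodes and links, holding the partition fixed. (Simplified
    effect-size sketch only; the paper re-optimizes modularity in every
    null network rather than reusing a fixed partition.)"""
    adj = np.asarray(adj, float)
    n = adj.shape[0]
    m = int(adj.sum() // 2)
    iu, ju = np.triu_indices(n, 1)
    rng = np.random.default_rng(seed)
    null = []
    for _ in range(n_null):
        picks = rng.choice(iu.size, size=m, replace=False)  # m distinct edges
        A = np.zeros((n, n))
        A[iu[picks], ju[picks]] = 1.0
        A += A.T
        null.append(modularity(A, labels))
    return (modularity(adj, labels) - np.mean(null)) / np.std(null)
```

For a network of two 4-cliques joined by a single edge, the half/half partition gives Q ≈ 0.42 and a clearly positive z-score against the random ensemble.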
A Modified Proportional Navigation Guidance for Accurate Target Hitting
Directory of Open Access Journals (Sweden)
A. Moharampour
2010-03-01
First, pure proportional navigation guidance (PPNG) in the 3-dimensional state is explained from a new point of view. The main idea is based on the distinction between the angular rate vector and rotation vector conceptions. The current innovation is based on the selection of line-of-sight (LOS) coordinates, and a comparison between the two available choices of LOS coordinate system is proposed. An improvement is made by adding two additional terms. The first term includes a cross-range compensator, which is used to provide and enhance path observability and to obtain convergent estimates of the state variables. The second term is a new concept, the lead bias term, calculated by assuming an equivalent acceleration along the target's longitudinal axis. Simulation results indicate that the lead bias term properly provides terminal conditions for accurate target interception.
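In the planar special case, pure PN reduces to commanding a heading rate equal to N times the LOS rate, which is enough to illustrate the interception behavior the abstract describes. A minimal sketch; all numeric values below are assumptions, and the paper's 3-D machinery (LOS coordinate choice, cross-range compensator, lead bias term) is not modeled:

```python
import numpy as np

def pn_intercept(N=3.0, dt=0.01, t_max=30.0):
    """Planar pure proportional navigation: the pursuer's heading rate is
    commanded to N times the line-of-sight (LOS) rate. Positions, speeds,
    and N are illustrative assumptions, not values from the paper."""
    p = np.array([0.0, 0.0])            # pursuer position [m]
    t = np.array([1000.0, 1000.0])      # target position [m]
    vt = np.array([-30.0, 0.0])         # constant target velocity [m/s]
    speed = 150.0                       # pursuer speed [m/s]
    heading = np.arctan2(t[1] - p[1], t[0] - p[0])   # start along the LOS
    los_prev = heading
    miss = np.inf
    for _ in range(int(t_max / dt)):
        p = p + speed * np.array([np.cos(heading), np.sin(heading)]) * dt
        t = t + vt * dt
        r = t - p
        miss = min(miss, float(np.hypot(r[0], r[1])))
        los = np.arctan2(r[1], r[0])
        heading += N * (los - los_prev)  # heading rate = N * LOS rate
        los_prev = los
    return miss
```

With a 5:1 speed advantage the pursuer passes within a few meters of the moving target, i.e. the LOS rate is driven to zero and a near-collision course results.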
A practical method for accurate quantification of large fault trees
International Nuclear Information System (INIS)
Choi, Jong Soo; Cho, Nam Zin
2007-01-01
This paper describes a practical method to accurately quantify the top event probability and importance measures from incomplete minimal cut sets (MCS) of a large fault tree. The MCS-based fault tree method is extensively used in probabilistic safety assessments. Several sources of uncertainty exist in MCS-based fault tree analysis. The paper focuses on quantification of the following two sources of uncertainty: (1) the truncation neglecting low-probability cut sets and (2) the approximation in quantifying MCSs. The method proposed in this paper is based on a Monte Carlo simulation technique to estimate the probability of the discarded MCSs and on the sum of disjoint products (SDP) approach complemented by the correction factor approach (CFA). The method provides the capability to accurately quantify the two uncertainties and to estimate the top event probability and importance measures of large coherent fault trees. The proposed fault tree quantification method has been implemented in the CUTREE code package and is tested on two example fault trees.
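The Monte Carlo side of the method, estimating the top-event probability by sampling basic-event failures against the minimal cut sets, can be sketched as follows. This is the general idea only, not the paper's CUTREE implementation or its SDP/CFA quantification:

```python
import math
import random

def mc_top_event(cut_sets, p, n_samples=100_000, seed=7):
    """Monte Carlo estimate of the top-event probability from minimal cut
    sets: sample independent basic-event failures and count the trials in
    which at least one cut set has all of its events failed. (Sketch of
    the general idea; the paper combines Monte Carlo for the truncated
    cut sets with a sum-of-disjoint-products quantification and a
    correction-factor approach, which are not reproduced here.)"""
    rng = random.Random(seed)
    events = sorted({e for cs in cut_sets for e in cs})
    hits = 0
    for _ in range(n_samples):
        failed = {e for e in events if rng.random() < p[e]}
        if any(cs <= failed for cs in cut_sets):
            hits += 1
    return hits / n_samples

def rare_event_bound(cut_sets, p):
    """First-order (rare-event) approximation: the sum of cut-set
    probabilities, which overestimates the exact top-event probability."""
    return sum(math.prod(p[e] for e in cs) for cs in cut_sets)
```

For two independent cut sets {A,B} and {C} with P(A)=P(B)=0.1 and P(C)=0.01, the exact top-event probability is 1 − 0.99² = 0.0199, which the sampler recovers while the rare-event sum gives the slightly larger 0.02.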
Accurate and efficient calculation of response times for groundwater flow
Carr, Elliot J.; Simpson, Matthew J.
2018-03-01
We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L²/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
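The L²/D scaling the abstract refers to is easy to demonstrate by brute force: solve the transient 1-D diffusion problem for two domain lengths and compare the times needed to come within a tolerance of steady state. This is precisely the expensive transient computation the moment-based method avoids; the sketch below only illustrates the scaling, not the paper's method:

```python
import numpy as np

def response_time(L, D=1.0, nx=51, tol=0.01):
    """Time for the 1-D diffusion equation (fixed values at both ends,
    zero initial condition) to come within `tol` of its linear steady
    state, via an explicit FTCS scheme. (Brute-force illustration of the
    L^2/D response-time scaling; the paper's moment-based approach avoids
    transient solves entirely.)"""
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D                 # within the explicit stability limit
    u = np.zeros(nx)
    u[0] = 1.0                           # fixed head at x = 0
    x = np.linspace(0.0, L, nx)
    steady = 1.0 - x / L                 # linear steady-state profile
    t = 0.0
    while np.max(np.abs(u - steady)) > tol:
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        t += dt
    return t

# Doubling the domain length quadruples the response time (t ∝ L²/D):
ratio = response_time(2.0) / response_time(1.0)
```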
Medicare Provider Data - Hospice Providers
U.S. Department of Health & Human Services — The Hospice Utilization and Payment Public Use File provides information on services provided to Medicare beneficiaries by hospice providers. The Hospice PUF...
Highly Accurate Prediction of Jobs Runtime Classes
Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi
2016-01-01
Separating the short jobs from the long is a known technique to improve scheduling performance. In this paper we describe a method we developed for accurately predicting the runtimes classes of the jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions, in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
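The mixture-of-Gaussians step can be sketched with a small EM fit in numpy: fit two components to (log-)runtimes and take the crossing point of the weighted densities as the short/long threshold. The CART classifier the paper trains on job features is not reproduced, and the initialization below is an assumption:

```python
import numpy as np

def short_long_threshold(log_runtimes, iters=300):
    """Fit a two-component 1-D Gaussian mixture by EM and return the point
    between the two means where the weighted component densities cross,
    i.e. a short/long runtime decision threshold. (numpy sketch of the
    mixture idea only; the paper then trains a CART classifier, which is
    not shown, and its initialization may differ.)"""
    x = np.asarray(log_runtimes, float)
    mu = np.percentile(x, [25.0, 75.0])              # crude initialization
    sd = np.array([x.std(), x.std()]) + 1e-9
    w = np.array([0.5, 0.5])
    norm = lambda t, m, s: np.exp(-0.5 * ((t - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    for _ in range(iters):
        dens = w * norm(x[:, None], mu, sd)          # E-step
        r = dens / dens.sum(axis=1, keepdims=True)   # responsibilities
        w = r.mean(axis=0)                           # M-step
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0)
        sd = np.sqrt(var) + 1e-9
    lo, hi = np.argsort(mu)
    grid = np.linspace(mu[lo], mu[hi], 2001)
    diff = np.abs(w[lo] * norm(grid, mu[lo], sd[lo])
                  - w[hi] * norm(grid, mu[hi], sd[hi]))
    return float(grid[np.argmin(diff)])
```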
The Accurate Particle Tracer Code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi
2016-01-01
The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...
Accurate e⁻–He cross sections below 19 eV
Energy Technology Data Exchange (ETDEWEB)
Nesbet, R K [International Business Machines Corp., San Jose, CA (USA). Research Lab.]
1979-04-14
Variational calculations of e⁻–He s- and p-wave phase shifts, together with the Born formula for higher partial waves, are used to give the scattering amplitude to within one per cent estimated accuracy for energies less than 19 eV. Coefficients are given for cubic spline fits to auxiliary functions that provide smooth interpolation of the estimated accurate phase shifts. The data given here make it possible to obtain the differential scattering cross section over the energy range considered from simple formulae.
Directory of Open Access Journals (Sweden)
Wendy Furlan
2006-12-01
Full Text Available Objective – To determine how successful the link resolver, SFX, is in meeting the expectations of library users and librarians. Design – Analysis of an online user survey, library staff focus groups, retrospective analysis of system statistics, and test searches. Setting – Two California State University campus libraries in the United States: Northridge, with over 31,000 students on campus, and San Marcos, with over 7,300 students on campus. Subjects – A total of 453 online survey responses were submitted by library users, 421 from Northridge and 32 from San Marcos. Twenty librarians took part in the focus groups conducted with library staff, consisting of 14 of the 23 librarians from Northridge (2 from technical services and 12 from public services) and 6 of the 10 San Marcos librarians (3 from technical services and 3 from public services). No further information was provided on the characteristics of the subjects. Methods – An online survey was offered to users of the two campus libraries for a two-week period in May 2004. The survey consisted of 8 questions, 7 fixed-response and 1 free-text. Survey distribution was enabled via a different mechanism at each campus: the Northridge library offered the survey to users via a pop-up window each time the SFX service was clicked on, while the San Marcos library presented the survey as a link from the library's home page. Survey responses from both campuses were combined and analysed together. Focus groups were conducted with librarians from each campus library on April 20th, 21st, and 29th, 2004. Librarians attended focus groups only with others from their own campus. Statistics were gathered from each campus' local SFX system for the 3-month period from September 14, 2004, to December 14, 2004. Statistics from each campus were combined for analysis. The authors also conducted 224 test searches over the 3-month period from July to September, 2004. Main results – Analysis of the
How accurately can 21cm tomography constrain cosmology?
Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver
2008-07-01
There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from 6 ≲ z ≲ 20: the sensitivity to noise, to uncertainties in the reionization history, and to the level of contamination from astrophysical foregrounds. We derive simple analytic estimates for how various assumptions affect an experiment's sensitivity, and we find that the modeling of reionization is the most important, followed by the array layout. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk≈0.0002 and Δmν≈0.007eV, and give a 4σ detection of the spectral index running predicted by the simplest inflation models.
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies provide accurate information to travelers, yet our simulation results show that accurate information brings negative effects, especially when it is delayed: travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
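A toy two-route day-to-day model makes the mechanism concrete: with delayed feedback and no bounded rationality, the whole population herds onto the reported-faster route and the flows oscillate wildly, while a boundedly rational threshold BR damps the oscillation. The linear cost function and all parameters below are assumptions, not the paper's model:

```python
import numpy as np

def day_to_day_flows(br=0.0, n_agents=1000, days=200, seed=3):
    """Two-route day-to-day simulation with delayed feedback: agents react
    to yesterday's travel times. If the reported time difference is below
    the boundedly rational threshold `br`, each agent picks a route at
    random; otherwise all agents pick the reported-faster route. Returns
    the standard deviation of route-1 flow after a warm-up period.
    (Assumed linear cost t = 10 + flow/100; purely illustrative.)"""
    rng = np.random.default_rng(seed)
    flow1 = n_agents // 2
    history = []
    for _ in range(days):
        t1 = 10.0 + flow1 / 100.0                   # yesterday's times
        t2 = 10.0 + (n_agents - flow1) / 100.0
        if abs(t1 - t2) < br:
            flow1 = int(rng.binomial(n_agents, 0.5))  # indifferent: coin flips
        else:
            flow1 = n_agents if t1 < t2 else 0        # herd to the faster route
        history.append(flow1)
    return float(np.std(history[50:]))
```

With br = 0 the flows flip between 0 and 1000 each day; a generous threshold leaves only binomial noise around the even split.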
Accurate Modeling of Advanced Reflectarrays
DEFF Research Database (Denmark)
Zhou, Min
to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...
Accurate predictions for the LHC made easy
CERN. Geneva
2014-01-01
The data recorded by the LHC experiments are of very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce as much as possible the theoretical bias in the experimental analyses. Recently, significant progress has been made in Next-to-Leading Order (NLO) computations, including matching to the parton shower, that allow for these accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, which aims at the complete automation of predictions at NLO accuracy within the SM as well as New Physics theories. I will illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, and describe the future plans.
Velocity Estimation of the Main Portal Vein with Transverse Oscillation
DEFF Research Database (Denmark)
Brandt, Andreas Hjelm; Hansen, Kristoffer Lindskov; Nielsen, Michael Bachmann
2015-01-01
This study evaluates whether Transverse Oscillation (TO) can provide reliable and accurate peak velocity estimates of blood flow in the main portal vein. TO was evaluated against the recommended and most widely used technique for portal flow estimation, Spectral Doppler Ultrasound (SDU). The main portal...
Tridimensional pose estimation of a person head
International Nuclear Information System (INIS)
Perez Berenguer, Elisa; Soria, Carlos; Nasisi, Oscar; Mut, Vicente
2007-01-01
In this work, we present a method for estimating 3-D motion parameters, providing an alternative to current computer-vision approaches for 3-D head pose estimation from image sequences. The method is robust over extended sequences and large head motions, and accurately extracts the orientation angles of the head from a single view. Experimental results show that this tracking system works well for developing a human-computer interface for people with severe motor impairments.
Accurate Recovery of H i Velocity Dispersion from Radio Interferometers
Energy Technology Data Exchange (ETDEWEB)
Ianjamasimanana, R. [Max-Planck Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg (Germany); Blok, W. J. G. de [Netherlands Institute for Radio Astronomy (ASTRON), Postbus 2, 7990 AA Dwingeloo (Netherlands); Heald, George H., E-mail: roger@mpia.de, E-mail: blok@astron.nl, E-mail: George.Heald@csiro.au [Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV, Groningen (Netherlands)
2017-05-01
Gas velocity dispersion measures the amount of disordered motion of a rotating disk. Accurate estimates of this parameter are of the utmost importance because the parameter is directly linked to disk stability and star formation. A global measure of the gas velocity dispersion can be inferred from the width of the atomic hydrogen (H i) 21 cm line. We explore how several systematic effects involved in the production of H i cubes affect the estimate of H i velocity dispersion. We do so by comparing the H i velocity dispersion derived from different types of data cubes provided by The H i Nearby Galaxy Survey. We find that residual-scaled cubes best recover the H i velocity dispersion, independent of the weighting scheme used and for a large range of signal-to-noise ratio. For H i observations, where the dirty beam is substantially different from a Gaussian, the velocity dispersion values are overestimated unless the cubes are cleaned close to (e.g., ∼1.5 times) the noise level.
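The underlying estimator, an intensity-weighted second moment of the line profile, can be sketched directly; the paper's point about residual-scaled cubes concerns what happens to the data before this step, which is not modeled here:

```python
import numpy as np

def velocity_dispersion(spectrum, velocities):
    """Moment-2 estimate of gas velocity dispersion: the intensity-weighted
    standard deviation of the line profile along the velocity axis.
    (Standard estimator only; clipping negatives is a simple stand-in for
    the masking a real pipeline would apply, and the cube-construction
    effects the paper studies are not modeled.)"""
    s = np.clip(np.asarray(spectrum, float), 0.0, None)   # ignore negative noise
    v = np.asarray(velocities, float)
    vbar = np.sum(s * v) / np.sum(s)                      # moment 1: mean velocity
    return float(np.sqrt(np.sum(s * (v - vbar) ** 2) / np.sum(s)))
```

Applied to a clean Gaussian line of width σ, the estimator recovers σ; systematic effects such as non-Gaussian dirty beams bias it upward, as the abstract notes.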
Dual states estimation of a subsurface flow-transport coupled model using ensemble Kalman filtering
El Gharamti, Mohamad; Hoteit, Ibrahim; Valstar, Johan R.
2013-01-01
Modeling the spread of subsurface contaminants requires coupling a groundwater flow model with a contaminant transport model. Such coupling may provide accurate estimates of future subsurface hydrologic states if essential flow and contaminant data
Establishing Accurate and Sustainable Geospatial Reference Layers in Developing Countries
Seaman, V. Y.
2017-12-01
Accurate geospatial reference layers (settlement names & locations, administrative boundaries, and population) are not readily available for most developing countries. This critical information gap makes it challenging for governments to efficiently plan, allocate resources, and provide basic services. It also hampers international agencies' response to natural disasters, humanitarian crises, and other emergencies. The current work involves a recent successful effort, led by the Bill & Melinda Gates Foundation and the Government of Nigeria, to obtain such data. The data collection began in 2013, with local teams collecting names, coordinates, and administrative attributes for over 100,000 settlements using ODK-enabled smartphones. A settlement feature layer extracted from satellite imagery was used to ensure all settlements were included. Administrative boundaries (Ward, LGA) were created using the settlement attributes. These "new" boundary layers were much more accurate than existing shapefiles used by the government and international organizations. The resulting data sets helped Nigeria eradicate polio from all areas except in the extreme northeast, where security issues limited access and vaccination activities. In addition to the settlement and boundary layers, a GIS-based population model was developed (in partnership with Oak Ridge National Laboratory and Flowminder) that used the extracted settlement areas and characteristics, along with targeted microcensus data. This model provides population and demographics estimates independent of census or other administrative data, at a resolution of 90 meters. These robust geospatial data layers found many other uses, including establishing catchment area settlements and populations for health facilities, validating denominators for population-based surveys, and applications across a variety of government sectors. Based on the success of the Nigeria effort, a partnership between DfID and the Bill & Melinda Gates
Two-Step Time of Arrival Estimation for Pulse-Based Ultra-Wideband Systems
Directory of Open Access Journals (Sweden)
H. Vincent Poor
2008-05-01
Full Text Available In cooperative localization systems, wireless nodes need to exchange accurate position-related information such as time-of-arrival (TOA and angle-of-arrival (AOA, in order to obtain accurate location information. One alternative for providing accurate position-related information is to use ultra-wideband (UWB signals. The high time resolution of UWB signals presents a potential for very accurate positioning based on TOA estimation. However, it is challenging to realize very accurate positioning systems in practical scenarios, due to both complexity/cost constraints and adverse channel conditions such as multipath propagation. In this paper, a two-step TOA estimation algorithm is proposed for UWB systems in order to provide accurate TOA estimation under practical constraints. In order to speed up the estimation process, the first step estimates a coarse TOA of the received signal based on received signal energy. Then, in the second step, the arrival time of the first signal path is estimated by considering a hypothesis testing approach. The proposed scheme uses low-rate correlation outputs and is able to perform accurate TOA estimation in reasonable time intervals. The simulation results are presented to analyze the performance of the estimator.
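The two-step structure can be sketched with a simple energy detector: a coarse block-energy search followed by a fine first-crossing search near the chosen block. The paper's fine step is a hypothesis test on low-rate correlation outputs, which this sketch does not reproduce; the block size and threshold fraction are assumptions:

```python
import numpy as np

def two_step_toa(received, fs, block=64):
    """Two-step TOA sketch: (1) the coarse step picks the block of samples
    with maximum energy; (2) the fine step searches from just before that
    block for the first sample whose energy exceeds a fraction of the
    peak. (Illustrative energy detector only; the paper's fine step is a
    hypothesis test on low-rate correlation outputs, and its thresholds
    are derived statistically rather than assumed.)"""
    e = np.asarray(received, float) ** 2
    nb = e.size // block
    block_energy = e[: nb * block].reshape(nb, block).sum(axis=1)
    k = int(np.argmax(block_energy))             # coarse TOA: best block
    start = max(0, (k - 1) * block)              # fine search window start
    thresh = 0.1 * e[start:].max()               # assumed threshold fraction
    idx = start + int(np.argmax(e[start:] > thresh))  # first crossing
    return idx / fs                              # TOA in seconds
```

On a simulated pulse at 1500 samples in mild noise, the fine step lands on the first path sample, matching the high time resolution claim for UWB signals.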
Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations
International Nuclear Information System (INIS)
Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim
2011-01-01
A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.
Gould, Ian R; Wosinska, Zofia M; Farid, Samir
2006-01-01
Accurate oxidation potentials for organic compounds are critical for the evaluation of thermodynamic and kinetic properties of their radical cations. Except when using a specialized apparatus, electrochemical oxidation of molecules with reactive radical cations is usually an irreversible process, providing peak potentials, E(p), rather than thermodynamically meaningful oxidation potentials, E(ox). In a previous study on amines with radical cations that underwent rapid decarboxylation, we estimated E(ox) by correcting the E(p) from cyclic voltammetry with rate constants for decarboxylation obtained using laser flash photolysis. Here we use redox equilibration experiments to determine accurate relative oxidation potentials for the same amines. We also describe an extension of these experiments to show how relative oxidation potentials can be obtained in the absence of equilibrium, from a complete kinetic analysis of the reversible redox kinetics. The results provide support for the previous cyclic voltammetry/laser flash photolysis method for determining oxidation potentials.
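The quantity extracted from a redox equilibration measurement follows from the Nernst relation ΔE = (RT/nF) ln K, a textbook identity applied generically here; the specific amines and the reversible-kinetics analysis of the paper are not modeled:

```python
import math

def delta_eox_from_keq(keq, temperature_k=298.15, n=1):
    """Difference in oxidation potentials (in volts) implied by a measured
    redox equilibrium constant, via the Nernst relation
    dE = (R*T / (n*F)) * ln(K). (Generic textbook relation; the choice of
    a one-electron process, n=1, is an assumption.)"""
    R = 8.314462618      # gas constant, J mol^-1 K^-1
    F = 96485.33212      # Faraday constant, C mol^-1
    return (R * temperature_k) / (n * F) * math.log(keq)
```

A measured equilibrium constant of 10 at 25 °C corresponds to the familiar 59 mV per decade, i.e. a 0.059 V offset in oxidation potential.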
Accurate determination of light elements by charged particle activation analysis
International Nuclear Information System (INIS)
Shikano, K.; Shigematsu, T.
1989-01-01
To develop accurate determination of light elements by CPAA, accurate and practical standardization methods and uniform chemical etching are studied, based on the determination of carbon in gallium arsenide using the ¹²C(d,n)¹³N reaction. The following results are obtained: (1) the average stopping power method with thick-target yield is useful as an accurate and practical standardization method; (2) the front surface of the sample has to be etched for an accurate estimate of the incident energy; (3) CPAA can be utilized for calibration of light-element analysis by physical methods; (4) the calibration factor of carbon analysis in gallium arsenide using the IR method is determined to be (9.2±0.3) × 10¹⁵ cm⁻¹. (author)
Using an eye tracker for accurate eye movement artifact correction
Kierkels, J.J.M.; Riani, J.; Bergmans, J.W.M.; Boxtel, van G.J.M.
2007-01-01
We present a new method to correct eye movement artifacts in electroencephalogram (EEG) data. By using an eye tracker, whose data cannot be corrupted by any electrophysiological signals, an accurate method for correction is developed. The eye-tracker data is used in a Kalman filter to estimate which
Accurate Holdup Calculations with Predictive Modeling & Data Integration
Energy Technology Data Exchange (ETDEWEB)
Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering
2017-04-03
In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
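The probabilistic step can be sketched generically: given a forward transport model and a discrete set of candidate source configurations, Bayes' theorem weights each configuration by its likelihood of producing the measured detector readings. The `forward` model below is a caller-supplied placeholder, not one of the project's transport codes, and the Gaussian measurement noise is an assumption:

```python
import numpy as np

def posterior_over_configs(forward, configs, measured, sigma, prior=None):
    """Bayes' theorem over a discrete set of candidate holdup source
    configurations: posterior weight = prior * Gaussian likelihood of the
    measured readings given each configuration's predicted readings.
    (Generic sketch; `forward` stands in for a radiation transport model,
    and real holdup assessments involve far richer data assimilation.)"""
    configs = list(configs)
    if prior is None:
        prior = np.full(len(configs), 1.0 / len(configs))
    prior = np.asarray(prior, float)
    loglike = np.array([
        -0.5 * np.sum(((measured - forward(c)) / sigma) ** 2) for c in configs
    ])
    w = prior * np.exp(loglike - loglike.max())   # stabilize before normalizing
    return w / w.sum()
```

With a toy linear forward model, the posterior concentrates on the configuration whose predicted readings best match the measurements, which is exactly how the multiple-solution ambiguity described above is ranked.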
Accurate line intensities of methane from first-principles calculations
Nikitin, Andrei V.; Rey, Michael; Tyuterev, Vladimir G.
2017-10-01
In this work, we report first-principle theoretical predictions of methane spectral line intensities that are competitive with (and complementary to) the best laboratory measurements. A detailed comparison with the most accurate data shows that discrepancies in integrated polyad intensities are in the range of 0.4%-2.3%. This corresponds to estimations of the best available accuracy in laboratory Fourier Transform spectra measurements for this quantity. For relatively isolated strong lines the individual intensity deviations are in the same range. A comparison with the most precise laser measurements of the multiplet intensities in the 2ν3 band gives an agreement within the experimental error margins (about 1%). This is achieved for the first time for five-atomic molecules. In the Supplementary Material we provide the lists of theoretical intensities at 269 K for over 5000 strongest transitions in the range below 6166 cm-1. The advantage of the described method is that this offers a possibility to generate fully assigned exhaustive line lists at various temperature conditions. Extensive calculations up to 12,000 cm-1 including high-T predictions will be made freely available through the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru) that contains ab initio born line lists and provides a user-friendly graphical interface for a fast simulation of the absorption cross-sections and radiance.
Accurate control testing for clay liner permeability
Energy Technology Data Exchange (ETDEWEB)
Mitchell, R J
1991-08-01
Two series of centrifuge tests were carried out to evaluate the use of centrifuge modelling as a method of accurate control testing of clay liner permeability. The first series used a large 3 m radius geotechnical centrifuge and the second series a small 0.5 m radius machine built specifically for research on clay liners. Two permeability cells were fabricated in order to provide direct data comparisons between the two methods of permeability testing. In both cases, the centrifuge method proved to be effective and efficient, and was found to be free of both the technical difficulties and leakage risks normally associated with laboratory permeability testing of fine grained soils. Two materials were tested, a consolidated kaolin clay having an average permeability coefficient of 1.2×10⁻⁹ m/s and a compacted illite clay having a permeability coefficient of 2.0×10⁻¹¹ m/s. Four additional tests were carried out to demonstrate that the 0.5 m radius centrifuge could be used for liner performance modelling to evaluate factors such as volumetric water content, compaction method and density, leachate compatibility and other construction effects on liner leakage. The main advantages of centrifuge testing of clay liners are rapid and accurate evaluation of hydraulic properties and realistic stress modelling for performance evaluations. 8 refs., 12 figs., 7 tabs.
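For reference, the bench-top quantity the centrifuge results are compared against is the falling-head permeability coefficient, k = (aL/At)·ln(h0/h1). The function below applies this standard soil-mechanics formula; the centrifuge scaling itself (seepage gradients amplified N-fold at N g) is not modeled, and the example numbers are arbitrary:

```python
import math

def falling_head_k(a, L, A, t, h0, h1):
    """Coefficient of permeability from a falling-head test:
    k = (a*L / (A*t)) * ln(h0/h1), where `a` is the standpipe area, `L`
    the specimen length, `A` the specimen area, `t` the elapsed time, and
    h0/h1 the initial/final heads. (Standard soil-mechanics formula with
    SI units assumed; illustrative only.)"""
    return (a * L) / (A * t) * math.log(h0 / h1)
```

A standpipe of 1 cm² draining from 1.0 m to 0.5 m head in an hour through a 10 cm specimen of 80 cm² area gives k ≈ 2.4×10⁻⁷ m/s, i.e. orders of magnitude more permeable than the clays quoted above.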
Star-sensor-based predictive Kalman filter for satelliteattitude estimation
Institute of Scientific and Technical Information of China (English)
林玉荣; 邓正隆
2002-01-01
A real-time attitude estimation algorithm, namely the predictive Kalman filter, is presented. This algorithm can accurately estimate the three-axis attitude of a satellite using only star sensor measurements. The implementation of the filter includes two steps: first, predicting the torque modeling error, and then estimating the attitude. Simulation results indicate that the predictive Kalman filter provides robust performance in the presence of both significant errors in the assumed model and in the initial conditions.
Easy Leaf Area: Automated digital image analysis for rapid and accurate measurement of leaf area.
Easlon, Hsien Ming; Bloom, Arnold J
2014-07-01
Measurement of leaf areas from digital photographs has traditionally required significant user input unless backgrounds are carefully masked. Easy Leaf Area was developed to batch process hundreds of Arabidopsis rosette images in minutes, removing background artifacts and saving results to a spreadsheet-ready CSV file. • Easy Leaf Area uses the color ratios of each pixel to distinguish leaves and calibration areas from their background and compares leaf pixel counts to a red calibration area to eliminate the need for camera distance calculations or manual ruler scale measurement that other software methods typically require. Leaf areas estimated by this software from images taken with a camera phone were more accurate than ImageJ estimates from flatbed scanner images. • Easy Leaf Area provides an easy-to-use method for rapid measurement of leaf area and nondestructive estimation of canopy area from digital images.
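The color-ratio classification can be sketched in numpy: label a pixel "leaf" when green dominates its red and blue channels, label it "calibration" when red dominates, then scale the leaf pixel count by the known calibration area. The 1.2 thresholds and the 4 cm² calibration square are assumptions for this sketch, not Easy Leaf Area's actual values:

```python
import numpy as np

def leaf_area_cm2(rgb, red_area_cm2=4.0):
    """Estimate leaf area from an RGB array using per-pixel color ratios:
    'leaf' where green dominates, 'calibration' where red dominates, with
    leaf pixel count scaled by the known red-square area. (Sketch of the
    color-ratio idea; thresholds and the default calibration area are
    assumptions, not the software's values.)"""
    img = np.asarray(rgb, float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-9                                   # avoid division by zero
    leaf = (g / (r + eps) > 1.2) & (g / (b + eps) > 1.2)
    cal = (r / (g + eps) > 1.2) & (r / (b + eps) > 1.2)
    return float(leaf.sum()) / max(int(cal.sum()), 1) * red_area_cm2
```

Because only the ratio of pixel counts enters, no camera-distance calibration or ruler measurement is needed, which is the point the abstract makes.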
Easy Leaf Area: Automated Digital Image Analysis for Rapid and Accurate Measurement of Leaf Area
Directory of Open Access Journals (Sweden)
Hsien Ming Easlon
2014-07-01
Full Text Available Premise of the study: Measurement of leaf areas from digital photographs has traditionally required significant user input unless backgrounds are carefully masked. Easy Leaf Area was developed to batch process hundreds of Arabidopsis rosette images in minutes, removing background artifacts and saving results to a spreadsheet-ready CSV file. Methods and Results: Easy Leaf Area uses the color ratios of each pixel to distinguish leaves and calibration areas from their background and compares leaf pixel counts to a red calibration area to eliminate the need for camera distance calculations or manual ruler scale measurement that other software methods typically require. Leaf areas estimated by this software from images taken with a camera phone were more accurate than ImageJ estimates from flatbed scanner images. Conclusions: Easy Leaf Area provides an easy-to-use method for rapid measurement of leaf area and nondestructive estimation of canopy area from digital images.
Accurate computation of Mathieu functions
Bibby, Malcolm M
2013-01-01
This lecture presents a modern approach for the computation of Mathieu functions. These functions find application in boundary value analysis such as electromagnetic scattering from elliptic cylinders and flat strips, as well as the analogous acoustic and optical problems, and many other applications in science and engineering. The authors review the traditional approach used for these functions, show its limitations, and provide an alternative "tuned" approach enabling improved accuracy and convergence. The performance of this approach is investigated for a wide range of parameters and mach
Accurate fluid force measurement based on control surface integration
Lentink, David
2018-01-01
Nonintrusive 3D fluid force measurements are still challenging to conduct accurately for freely moving animals, vehicles, and deforming objects. Two techniques, 3D particle image velocimetry (PIV) and a new technique, the aerodynamic force platform (AFP), address this. Both rely on the control volume integral for momentum; whereas PIV requires numerical integration of flow fields, the AFP performs the integration mechanically based on rigid walls that form the control surface. The accuracy of both PIV and AFP measurements based on the control surface integration is thought to hinge on determining the unsteady body force associated with the acceleration of the volume of displaced fluid. Here, I introduce a set of non-dimensional error ratios to show which fluid and body parameters make the error negligible. The unsteady body force is insignificant in all conditions where the average density of the body is much greater than the density of the fluid, e.g., in gas. Whenever a strongly deforming body experiences significant buoyancy and acceleration, the error is significant. Remarkably, this error can be entirely corrected for with an exact factor provided that the body has a sufficiently homogeneous density or acceleration distribution, which is common in liquids. The correction factor for omitting the unsteady body force, 1 − ρf/(ρb + ρf), depends only on the fluid, ρf, and body, ρb, density. Whereas these straightforward solutions work even at the liquid-gas interface in a significant number of cases, they do not work for generalized bodies undergoing buoyancy in combination with appreciable body density inhomogeneity, volume change (PIV), or volume rate-of-change (PIV and AFP). In these less common cases, the 3D body shape needs to be measured and resolved in time and space to estimate the unsteady body force. The analysis shows that accounting for the unsteady body force is straightforward to non
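Reading the correction factor as 1 − ρf/(ρb + ρf) (a reconstruction from the garbled typesetting, to be checked against the original paper), both the factor and the regimes where it matters are easy to compute:

```python
def unsteady_body_force_correction(rho_body, rho_fluid):
    """Correction factor 1 - rho_f / (rho_b + rho_f) for omitting the
    unsteady body force; this form is an assumed reconstruction."""
    return 1.0 - rho_fluid / (rho_body + rho_fluid)

# A bird in air (body density >> fluid density): factor ~ 1, so the
# unsteady body force is negligible.
bird = unsteady_body_force_correction(1000.0, 1.2)

# A neutrally buoyant swimmer in water: factor ~ 0.5, so the omitted
# term is significant and must be corrected for.
fish = unsteady_body_force_correction(1000.0, 1000.0)
```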
Kim, Eric H; Weaver, John K; Shetty, Anup S; Vetter, Joel M; Andriole, Gerald L; Strope, Seth A
2017-04-01
To determine the added value of prostate magnetic resonance imaging (MRI) to the Prostate Cancer Prevention Trial risk calculator. Between January 2012 and December 2015, 339 patients underwent prostate MRI prior to biopsy at our institution. MRI was considered positive if there was at least 1 Prostate Imaging Reporting and Data System 4 or 5 MRI suspicious region. Logistic regression was used to develop 2 models: biopsy outcome as a function of the (1) Prostate Cancer Prevention Trial risk calculator alone and (2) combined with MRI findings. When including all patients, the Prostate Cancer Prevention Trial with and without MRI models performed similarly (area under the curve [AUC] = 0.74 and 0.78, P = .06). When restricting the cohort to patients with estimated risk of high-grade (Gleason ≥7) prostate cancer ≤10%, the model with MRI outperformed the Prostate Cancer Prevention Trial alone model (AUC = 0.69 and 0.60, P = .01). Within this cohort of patients, there was no significant difference in discrimination between models for those with previous negative biopsy (AUC = 0.61 vs 0.63, P = .76), whereas there was a significant improvement in discrimination with the MRI model for biopsy-naïve patients (AUC = 0.72 vs 0.60, P = .01). The use of prostate MRI in addition to the Prostate Cancer Prevention Trial risk calculator provides a significant improvement in clinical risk discrimination for patients with estimated risk of high-grade (Gleason ≥7) prostate cancer ≤10%. Prebiopsy prostate MRI should be strongly considered for these patients.
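The AUC comparisons above rest on the standard rank interpretation of the ROC area; a minimal self-contained version (illustrative, not the study's analysis code) is:

```python
def auc(scores, labels):
    """ROC area via the Mann-Whitney statistic: the probability that a
    randomly chosen positive case outscores a randomly chosen negative
    case, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A model that ranks every high-grade case above every benign case scores 1.0; chance performance scores 0.5.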
An accurate bound on tensor-to-scalar ratio and the scale of inflation
International Nuclear Information System (INIS)
Choudhury, Sayantan; Mazumdar, Anupam
2014-01-01
In this paper we provide an accurate bound on primordial gravitational waves, i.e. tensor-to-scalar ratio (r) for a general class of single-field models of inflation where inflation occurs always below the Planck scale, and the field displacement during inflation remains sub-Planckian. If inflation has to make connection with the real particle physics framework then it must be explained within an effective field theory description where it can be trustable below the UV cut-off of the scale of gravity. We provide an analytical estimation and estimate the largest possible r, i.e. r⩽0.12, for the field displacement less than the Planck cut-off
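Schematically, the link between sub-Planckian field displacement and an upper bound on r runs through the Lyth-type relation (the paper derives a refined, model-dependent version; the form below is the textbook one, quoted here only as context):

```latex
\frac{\Delta\phi}{M_{\rm p}} \simeq \int_{0}^{N_{e}} \sqrt{\frac{r(N)}{8}}\,\mathrm{d}N ,
\qquad \Delta\phi < M_{\rm p} \;\Rightarrow\; \text{an upper bound on } r .
```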
Directory of Open Access Journals (Sweden)
Dobrislav Dobrev
2017-02-01
Full Text Available We provide an accurate closed-form expression for the expected shortfall of linear portfolios with elliptically distributed risk factors. Our results aim to correct inaccuracies that originate in Kamdem (2005) and are present also in at least thirty other papers referencing it, including the recent survey by Nadarajah et al. (2014) on estimation methods for expected shortfall. In particular, we show that the correction we provide in the popular multivariate Student t setting eliminates understatement of expected shortfall by a factor varying from at least four to more than 100 across different tail quantiles and degrees of freedom. As such, the resulting economic impact in financial risk management applications could be significant. We further correct such errors encountered also in closely related results in Kamdem (2007 and 2009) for mixtures of elliptical distributions. More generally, our findings point to the extra scrutiny required when deploying new methods for expected shortfall estimation in practice.
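A quick way to sanity-check any closed-form expected-shortfall expression of this kind is against the empirical tail average; the following generic estimator (not the authors' corrected formula) does that:

```python
def expected_shortfall(losses, alpha=0.05):
    """Empirical expected shortfall: the mean of the worst
    alpha-fraction of losses (losses given as positive numbers)."""
    n = len(losses)
    k = max(1, int(n * alpha))           # number of tail observations
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / k
```

Simulating losses from a fitted Student t and comparing this estimator with a closed-form value would expose an understatement factor of the size the abstract reports.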
McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.
2012-01-01
Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
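The three catch-rate estimators compared in the study are simple to state; a sketch (variable names assumed) is:

```python
def ratio_of_means(catches, hours):
    """ROM estimator: total catch divided by total effort."""
    return sum(catches) / sum(hours)

def mean_of_ratios(catches, hours, min_hours=0.0):
    """MOR estimator: mean of per-trip catch rates; pass min_hours=0.5
    to exclude short-duration trips, the third variant in the study."""
    rates = [c / h for c, h in zip(catches, hours) if h > min_hours]
    return sum(rates) / len(rates)
```

Total catch is then the chosen rate estimate multiplied by an independent estimate of total angler effort.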
Sampling designs matching species biology produce accurate and affordable abundance indices.
Harris, Grant; Farley, Sean; Russell, Gareth J; Butler, Matthew J; Selinger, Jeff
2013-01-01
Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps by sampling only where resources attract animals (i.e., targeted sampling), it would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and traps stationary or moved between capture sessions. We began by identifying when to sample, and if bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km(2) cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions
QUESP and QUEST revisited - fast and accurate quantitative CEST experiments.
Zaiss, Moritz; Angelovski, Goran; Demetriou, Eleni; McMahon, Michael T; Golay, Xavier; Scheffler, Klaus
2018-03-01
Chemical exchange saturation transfer (CEST) NMR or MRI experiments allow detection of low concentrated molecules with enhanced sensitivity via their proton exchange with the abundant water pool. Be it endogenous metabolites or exogenous contrast agents, an exact quantification of the actual exchange rate is required to design optimal pulse sequences and/or specific sensitive agents. Refined analytical expressions allow deeper insight and improvement of accuracy for common quantification techniques. The accuracy of standard quantification methodologies, such as quantification of exchange rate using varying saturation power or varying saturation time, is improved especially for the case of nonequilibrium initial conditions and weak labeling conditions, meaning the saturation amplitude is smaller than the exchange rate. The refined 'quantification of exchange rate using varying saturation power/time' (QUESP/QUEST) equations allow for more accurate exchange rate determination, and provide clear insights on the general principles to execute the experiments and to perform numerical evaluation. The proposed methodology was evaluated on the large-shift regime of paramagnetic chemical-exchange-saturation-transfer agents using simulated data and data of the paramagnetic Eu(III) complex of DOTA-tetraglycineamide. The refined formulas yield improved exchange rate estimation. General convergence intervals of the methods that would apply for smaller shift agents are also discussed. Magn Reson Med 79:1708-1721, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
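The intuition that observing where uncertainty is largest reduces error fastest can be seen already in the scalar Kalman update (a toy sketch, not the LETKF):

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: prior mean x with variance P,
    observation z with noise variance R."""
    K = P / (P + R)              # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

# Updating a high-variance state (P = 4) with a single observation:
_, P_targeted = kalman_update(0.0, 4.0, 0.0, R=1.0)   # well-placed, accurate
_, P_random = kalman_update(0.0, 4.0, 0.0, R=4.0)     # noisier placement
# P_targeted < P_random: the targeted observation cuts the posterior
# variance far more than the poorly placed one.
```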
How accurate are forecasts of costs of energy? A methodological contribution
International Nuclear Information System (INIS)
Siddons, Craig; Allan, Grant; McIntyre, Stuart
2015-01-01
Forecasts of the cost of energy are typically presented as point estimates; however forecasts are seldom accurate, which makes it important to understand the uncertainty around these point estimates. The scale of the differences between forecasts and outturns (i.e. contemporary estimates) of costs may have important implications for government decisions on the appropriate form (and level) of support, modelling energy scenarios or industry investment appraisal. This paper proposes a methodology to assess the accuracy of cost forecasts. We apply this to levelised costs of energy for different generation technologies due to the availability of comparable forecasts and contemporary estimates, however the same methodology could be applied to the components of levelised costs, such as capital costs. The estimated “forecast errors” capture the accuracy of previous forecasts and can provide objective bounds to the range around current forecasts for such costs. The results from applying this method are illustrated using publicly available data for on- and off-shore wind, Nuclear and CCGT technologies, revealing the possible scale of “forecast errors” for these technologies. - Highlights: • A methodology to assess the accuracy of forecasts of costs of energy is outlined. • Method applied to illustrative data for four electricity generation technologies. • Results give an objective basis for sensitivity analysis around point estimates.
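The core of the proposed methodology, computing historical relative forecast errors and using their range to bound a current point estimate, can be sketched as follows (function and variable names assumed):

```python
def forecast_errors(forecasts, outturns):
    """Relative 'forecast errors': (outturn - forecast) / forecast for
    each matched pair of past forecast and contemporary estimate."""
    return [(o - f) / f for f, o in zip(forecasts, outturns)]

def error_bounds(errors, point_forecast):
    """Objective range around a current point forecast implied by the
    worst historical under- and over-estimates."""
    return (point_forecast * (1.0 + min(errors)),
            point_forecast * (1.0 + max(errors)))
```

E.g. past levelised-cost forecasts of 100 and 80 with outturns of 120 and 72 give errors of +20% and -10%, so a current point estimate of 50 would carry a range of roughly 45 to 60.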
Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture
Directory of Open Access Journals (Sweden)
Zhiquan Gao
2015-09-01
Full Text Available Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally constant, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain.
Cost Calculation Model for Logistics Service Providers
Directory of Open Access Journals (Sweden)
Zoltán Bokor
2012-11-01
Full Text Available The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within the logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in case of complex and heterogeneous logistics service structures. So this paper intends to explore the ways of improving the cost calculation regimes of logistics service providers and show how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested by using estimated input data. Based on the theoretical findings and the experiences of the pilot project it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which enhances the effectiveness of logistics planning and controlling significantly.
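The building block of multi-level full cost allocation, distributing an indirect cost pool in proportion to performance drivers, can be sketched as a minimal illustration (not the paper's full scheme; names are assumed):

```python
def allocate(indirect_cost, drivers):
    """Allocate one indirect cost pool to cost objects in proportion to
    their driver quantities (e.g. handled tonnes, vehicle-kilometres)."""
    total = sum(drivers.values())
    return {obj: indirect_cost * q / total for obj, q in drivers.items()}

# In a multi-level scheme this step is applied repeatedly: first from
# company-level pools down to departments, then from departments to services.
warehouse_costs = allocate(100.0, {"road_freight": 1.0, "rail_freight": 3.0})
```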
Accurate deuterium spectroscopy for fundamental studies
Wcisło, P.; Thibault, F.; Zaborowski, M.; Wójtewicz, S.; Cygan, A.; Kowzan, G.; Masłowski, P.; Komasa, J.; Puchalski, M.; Pachucki, K.; Ciuryło, R.; Lisak, D.
2018-07-01
We present an accurate measurement of the weak quadrupole S(2) 2-0 line in self-perturbed D2 and theoretical ab initio calculations of both collisional line-shape effects and energy of this rovibrational transition. The spectra were collected at the 247-984 Torr pressure range with a frequency-stabilized cavity ring-down spectrometer linked to an optical frequency comb (OFC) referenced to a primary time standard. Our line-shape modeling employed quantum calculations of molecular scattering (the pressure broadening and shift and their speed dependencies were calculated, while the complex frequency of optical velocity-changing collisions was fitted to experimental spectra). The velocity-changing collisions are handled with the hard-sphere collisional kernel. The experimental and theoretical pressure broadening and shift are consistent within 5% and 27%, respectively (the discrepancy for shift is 8% when referred not to the speed averaged value, which is close to zero, but to the range of variability of the speed-dependent shift). We use our high pressure measurement to determine the energy, ν0, of the S(2) 2-0 transition. The ab initio line-shape calculations allowed us to mitigate the expected collisional systematics reaching the 410 kHz accuracy of ν0. We report theoretical determination of ν0 taking into account relativistic and QED corrections up to α5. Our estimation of the accuracy of the theoretical ν0 is 1.3 MHz. We observe 3.4σ discrepancy between experimental and theoretical ν0.
Risk estimation using probability machines
2014-01-01
Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
Accurate Online Full Charge Capacity Modeling of Smartphone Batteries
Hoque, Mohammad A.; Siekkinen, Matti; Koo, Jonghoe; Tarkoma, Sasu
2016-01-01
Full charge capacity (FCC) refers to the amount of energy a battery can hold. It is the fundamental property of smartphone batteries that diminishes as the battery ages and is charged/discharged. We investigate the behavior of smartphone batteries while charging and demonstrate that the battery voltage and charging rate information can together characterize the FCC of a battery. We propose a new method for accurately estimating FCC without exposing low-level system details or introducing new ...
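A simple baseline for FCC estimation (coulomb counting over a partial charge, not the voltage/charging-rate method the abstract proposes) helps make the quantity concrete:

```python
def estimate_fcc_mAh(charge_delivered_mAh, soc_start_pct, soc_end_pct):
    """Estimate full charge capacity from a partial charge: the charge
    delivered divided by the fraction of capacity it filled."""
    soc_fraction = (soc_end_pct - soc_start_pct) / 100.0
    if soc_fraction <= 0:
        raise ValueError("state of charge must increase while charging")
    return charge_delivered_mAh / soc_fraction
```

Delivering 600 mAh while the battery climbs from 20% to 40% implies an FCC of about 3000 mAh; as the battery ages, the same SOC climb needs less charge, so the estimate shrinks.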
Mitigating Provider Uncertainty in Service Provision Contracts
Smith, Chris; van Moorsel, Aad
Uncertainty is an inherent property of open, distributed and multiparty systems. The viability of the mutually beneficial relationships which motivate these systems relies on rational decision-making by each constituent party under uncertainty. Service provision in distributed systems is one such relationship. Uncertainty is experienced by the service provider in his ability to deliver a service with selected quality level guarantees due to inherent non-determinism, such as load fluctuations and hardware failures. Statistical estimators utilized to model this non-determinism introduce additional uncertainty through sampling error. Inability of the provider to accurately model and analyze uncertainty in the quality level guarantees can result in the formation of sub-optimal service provision contracts. Emblematic consequences include loss of revenue, inefficient resource utilization and erosion of reputation and consumer trust. We propose a utility model for contract-based service provision to provide a systematic approach to optimal service provision contract formation under uncertainty. Performance prediction methods to enable the derivation of statistical estimators for quality level are introduced, with analysis of their resultant accuracy and cost.
Photogrammetry and Laser Imagery Tests for Tank Waste Volume Estimates: Summary Report
Energy Technology Data Exchange (ETDEWEB)
Field, Jim G. [Washington River Protection Solutions, LLC, Richland, WA (United States)
2013-03-27
Feasibility tests were conducted using photogrammetry and laser technologies to estimate the volume of waste in a tank. These technologies were compared with video Camera/CAD Modeling System (CCMS) estimates; the current method used for post-retrieval waste volume estimates. This report summarizes test results and presents recommendations for further development and deployment of technologies to provide more accurate and faster waste volume estimates in support of tank retrieval and closure.
Scribbans, T D; Berg, K; Narazaki, K; Janssen, I; Gurd, B J
2015-09-01
There is currently little information regarding the ability of metabolic prediction equations to accurately predict oxygen uptake and exercise intensity from heart rate (HR) during intermittent sport. The purpose of the present study was to develop and cross-validate equations appropriate for accurately predicting oxygen cost (VO2) and energy expenditure from HR during intermittent sport participation. Eleven healthy adult males (19.9±1.1 yrs) were recruited to establish the relationship between %VO2peak and %HRmax during low-intensity steady-state endurance (END), moderate-intensity interval (MOD) and high-intensity interval exercise (HI), as performed on a cycle ergometer. Three equations (END, MOD, and HI) for predicting %VO2peak based on %HRmax were developed. HR and VO2 were directly measured during basketball games (6 male, 20.8±1.0 yrs; 6 female, 20.0±1.3 yrs) and volleyball drills (12 female; 20.8±1.0 yrs). Comparisons were made between measured and predicted VO2 and energy expenditure using the 3 equations developed and 2 previously published equations. The END and MOD equations accurately predicted VO2 and energy expenditure, while the HI equation underestimated, and the previously published equations systematically overestimated, VO2 and energy expenditure. Intermittent sport VO2 and energy expenditure can be accurately predicted from heart rate data using either the END (%VO2peak=%HRmax x 1.008-17.17) or MOD (%VO2peak=%HRmax x 1.2-32) equations. These 2 simple equations provide an accessible and cost-effective method for accurate estimation of exercise intensity and energy expenditure during intermittent sport.
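The two recommended equations are stated explicitly in the abstract, so they can be applied directly; note that converting %VO2peak to an absolute oxygen cost additionally requires the individual's measured VO2peak (the `vo2` helper below is an assumption on my part, not part of the study):

```python
def pct_vo2peak_end(pct_hrmax):
    """END equation from the abstract: %VO2peak = %HRmax * 1.008 - 17.17."""
    return pct_hrmax * 1.008 - 17.17

def pct_vo2peak_mod(pct_hrmax):
    """MOD equation from the abstract: %VO2peak = %HRmax * 1.2 - 32."""
    return pct_hrmax * 1.2 - 32.0

def vo2(pct_hrmax, vo2peak, equation=pct_vo2peak_end):
    """Absolute oxygen cost given a measured VO2peak (same units as VO2peak)."""
    return equation(pct_hrmax) / 100.0 * vo2peak
```

At 80% of HRmax the END equation predicts about 63.5% of VO2peak and the MOD equation 64%.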
Dingari, Narahara Chari; Kang, Jeon Woong; Dasari, Ramachandra R.; Barman, Ishan; Horowitz, Gary Leigh
2012-01-01
We present the first demonstration of glycated albumin detection and quantification using Raman spectroscopy without the addition of reagents. Glycated albumin is an important marker for monitoring the long-term glycemic history of diabetics, especially as its concentrations, in contrast to glycated hemoglobin levels, are unaffected by changes in erythrocyte life times. Clinically, glycated albumin concentrations show a strong correlation with the development of serious diabetes complications...
Fiber diffraction of skin and nails provides an accurate diagnosis of malignancies
International Nuclear Information System (INIS)
James, Veronica J.
2009-01-01
An early diagnosis of malignancies correlates directly with a better prognosis. Yet for many malignancies there are no readily available, noninvasive, cost-effective diagnostic tests with patients often presenting too late for effective treatment. This article describes for the first time the use of fiber diffraction patterns of skin or fingernails, using X-ray sources, as a biometric diagnostic method for detecting neoplastic disorders including but not limited to melanoma, breast, colon and prostate cancers. With suitable further development, an early low-cost, totally noninvasive yet reliable diagnostic test could be conducted on a regular basis in local radiology facilities, as a confirmatory test for other diagnostic procedures or as a mass screening test using suitable small angle X-ray beam-lines at synchrotrons.
Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio
2015-12-01
This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to compression-force waveforms, and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. To validate the accuracy of the calculated waveform, displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform, and intraclass correlation coefficients (ICCs) between the real displacement from the laser system and the estimated displacement waveforms were calculated. The estimated error of the compression depth was within 2 mm; that is, the compression gauge, based on the new calculation method, provides an accurate compression depth (estimation error < 2 mm).
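The estimation step described above (fit a weight factor linking the second derivative of the magnetic waveform to the measured acceleration, then scale the magnetic waveform by that factor) can be sketched as a simple least-squares fit. This is a simplified reading of the method; the function name and the use of np.gradient are assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_depth(magnetic: np.ndarray, accel: np.ndarray, dt: float) -> np.ndarray:
    """Estimate a compression-displacement waveform from a coil waveform.

    The magnetic waveform is proportional to compression force (and, through
    the spring, to displacement), so its second derivative is proportional to
    acceleration. A least-squares weight factor w is fitted between that
    second derivative and the measured acceleration; the depth estimate is
    then w times the magnetic waveform.
    """
    d2m = np.gradient(np.gradient(magnetic, dt), dt)   # second time-derivative
    w = np.dot(d2m, accel) / np.dot(d2m, d2m)          # least-squares weight factor
    return w * magnetic
```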
A Simple and Accurate Method for Measuring Enzyme Activity.
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
Novel multi-beam radiometers for accurate ocean surveillance
DEFF Research Database (Denmark)
Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.
2014-01-01
Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions...
The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.
Kaskowitz, Gary S.; De Ayala, R. J.
2001-01-01
Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…
Battery electric vehicle energy consumption modelling for range estimation
Wang, J.; Besselink, I.J.M.; Nijmeijer, H.
2017-01-01
Range anxiety is considered as one of the major barriers to the mass adoption of battery electric vehicles (BEVs). One method to solve this problem is to provide accurate range estimation to the driver. This paper describes a vehicle energy consumption model considering the influence of weather
Hirschmann, J; Schoffelen, J M; Schnitzler, A; van Gerven, M A J
2017-10-01
To investigate the possibility of tremor detection based on deep brain activity. We re-analyzed recordings of local field potentials (LFPs) from the subthalamic nucleus in 10 PD patients (12 body sides) with spontaneously fluctuating rest tremor. Power in several frequency bands was estimated and used as input to Hidden Markov Models (HMMs) which classified short data segments as either tremor-free rest or rest tremor. HMMs were compared to direct threshold application to individual power features. Applying a threshold directly to band-limited power was insufficient for tremor detection (mean area under the curve [AUC] of receiver operating characteristic: 0.64, STD: 0.19). Multi-feature HMMs, in contrast, allowed for accurate detection (mean AUC: 0.82, STD: 0.15), using four power features obtained from a single contact pair. Within-patient training yielded better accuracy than across-patient training (0.84vs. 0.78, p=0.03), yet tremor could often be detected accurately with either approach. High frequency oscillations (>200Hz) were the best performing individual feature. LFP-based markers of tremor are robust enough to allow for accurate tremor detection in short data segments, provided that appropriate statistical models are used. LFP-based markers of tremor could be useful control signals for closed-loop deep brain stimulation. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
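As a rough illustration of why a multi-feature HMM can outperform a fixed threshold, a forward-filtered two-state Gaussian-emission HMM fits in a few lines of NumPy. This is a generic sketch, not the authors' model; the state means, variances, and transition matrix are placeholders that would be learned from labeled rest/tremor segments:

```python
import numpy as np

def hmm_filter(features, means, variances, trans, prior):
    """Forward-filtered state probabilities for a 2-state Gaussian HMM.

    features  : (T, d) array of band-power features per data segment
    means     : (2, d) per-state feature means (diagonal covariance)
    variances : (2, d) per-state feature variances
    trans     : (2, 2) state transition matrix
    prior     : (2,) initial state probabilities
    Returns P(state 1 | observations so far) for each segment.
    """
    T = len(features)
    post = np.zeros((T, 2))
    for t in range(T):
        # diagonal-Gaussian log-likelihood of the feature vector per state
        ll = -0.5 * np.sum((features[t] - means) ** 2 / variances
                           + np.log(2.0 * np.pi * variances), axis=1)
        lik = np.exp(ll - ll.max())          # rescale for numerical stability
        pred = prior if t == 0 else post[t - 1] @ trans
        post[t] = pred * lik
        post[t] /= post[t].sum()
    return post[:, 1]
```

Thresholding the returned probabilities at 0.5 yields a tremor/no-tremor label per segment; the transition matrix smooths out spurious single-segment flips that a raw power threshold would produce.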
Fast and accurate computation of projected two-point functions
Grasshorn Gebhardt, Henry S.; Jeong, Donghui
2018-01-01
We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto configuration space, ξℓν(r), or spherical harmonic space, Cℓ(χ, χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
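For contrast, the brute-force evaluation that FFTLog-based methods such as 2-FAST avoid is a direct quadrature of the oscillatory integrand. A minimal NumPy sketch for the monopole (ℓ = 0, where j0(x) = sin(x)/x is available as np.sinc); the function name and grid choices are illustrative, not part of the 2-FAST code:

```python
import numpy as np

def xi0_direct(r: float, k: np.ndarray, pk: np.ndarray) -> float:
    """Brute-force monopole xi_0(r) = (1/2 pi^2) * Integral dk k^2 P(k) j_0(kr).

    j_0(x) = sin(x)/x, i.e. np.sinc(x/pi) in NumPy conventions. Direct
    trapezoidal quadrature of this oscillatory integrand is the slow
    baseline that the FFTLog split in 2-FAST is designed to replace.
    """
    integrand = k**2 * pk * np.sinc(k * r / np.pi)
    # trapezoidal rule, written out so it runs on any NumPy version
    integral = 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(k))
    return integral / (2.0 * np.pi**2)
```

For a Gaussian toy spectrum P(k) = exp(-k^2), the integral has a closed form, which makes the quadrature easy to check; for a realistic P(k) and large r the integrand oscillates rapidly, and the cost of resolving those oscillations is exactly what motivates the analytic-recursion approach.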
Adaptive vehicle motion estimation and prediction
Zhao, Liang; Thorpe, Chuck E.
1999-01-01
Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.
Improved management of radiotherapy departments through accurate cost data
International Nuclear Information System (INIS)
Kesteloot, K.; Lievens, Y.; Schueren, E. van der
2000-01-01
Escalating health care expenses urge Governments towards cost containment. More accurate data on the precise costs of health care interventions are needed. We performed an aggregate cost calculation of radiation therapy departments and treatments and discussed the different cost components. The costs of a radiotherapy department were estimated, based on accreditation norms for radiotherapy departments set forth in the Belgian legislation. The major cost components of radiotherapy are the cost of buildings and facilities, equipment, medical and non-medical staff, materials and overhead. They respectively represent around 3, 30, 50, 4 and 13% of the total costs, irrespective of the department size. The average cost per patient lowers with increasing department size and optimal utilization of resources. Radiotherapy treatment costs vary in a stepwise fashion: minor variations of patient load do not affect the cost picture significantly due to a small impact of variable costs. With larger increases in patient load however, additional equipment and/or staff will become necessary, resulting in additional semi-fixed costs and an important increase in costs. A sensitivity analysis of these two major cost inputs shows that a decrease in total costs of 12-13% can be obtained by assuming a 20% less than full time availability of personnel; that due to evolving seniority levels, the annual increase in wage costs is estimated to be more than 1%; that by changing the clinical life-time of buildings and equipment with unchanged interest rate, a 5% reduction of total costs and cost per patient can be calculated. More sophisticated equipment will not have a very large impact on the cost (±4000 BEF/patient), provided that the additional equipment is adapted to the size of the department. That the recommendations we used, based on the Belgian legislation, are not outrageous is shown by replacing them by the USA Blue book recommendations. Depending on the department size, costs in
Bonicelli, Andrea; Xhemali, Bledar; Kranioti, Elena F.
2017-01-01
Age estimation remains one of the most challenging tasks in forensic practice when establishing a biological profile of unknown skeletonised remains. Morphological methods based on developmental markers of bones can provide accurate age estimates at a young age, but become highly unreliable for ages over 35, when all developmental markers disappear. This study explores the changes in the biomechanical properties of bone tissue and matrix, which continue to change with age even after skeletal maturity, and their potential value for age estimation. As a proof of concept we investigated the relationship of 28 variables at the macroscopic and microscopic level in rib autopsy samples from 24 individuals. Stepwise regression analysis produced a number of equations, one of which, with seven variables, showed an R2 = 0.949, a mean residual error of 2.13 yrs ±0.4 (SD), and a maximum residual error of 2.88 yrs. For forensic purposes, using only bench-top machines in tests that can be carried out within 36 hrs, a set of just 3 variables produced an equation with an R2 = 0.902, a mean residual error of 3.38 yrs ±2.6 (SD), and a maximum observed residual error of 9.26 yrs. This method outstrips all existing age-at-death methods based on ribs, thus providing a novel lab-based accurate tool in the forensic investigation of human remains. The present application is optimised for fresh remains (uncompromised by taphonomic conditions), but the potential of the principle and method is vast once the trends of the biomechanical variables are established for other environmental conditions and circumstances. PMID:28520764
Exploring the relationship between sequence similarity and accurate phylogenetic trees.
Cantarel, Brandi L; Morrison, Hilary G; Pearson, William
2006-11-01
significantly decrease phylogenetic accuracy. In general, although less-divergent sequence families produce more accurate trees, the likelihood of estimating an accurate tree is most dependent on whether radiation in the family was ancient or recent. Accuracy can be improved by combining genes from the same organism when creating species trees or by selecting protein families with the best bootstrap values in comprehensive studies.
More accurate picture of human body organs
International Nuclear Information System (INIS)
Kolar, J.
1985-01-01
Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they make it possible to obtain a more accurate image of human body organs. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)
Accurate overlaying for mobile augmented reality
Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.
1999-01-01
Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency
Accurate activity recognition in a home setting
van Kasteren, T.; Noulas, A.; Englebienne, G.; Kröse, B.
2008-01-01
A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its
Highly accurate surface maps from profilometer measurements
Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.
2013-04-01
Many aspheres and free-form optical surfaces are measured using a single line trace profilometer, which is limiting because accurate 3D corrections are not possible with the single trace. We show a method to produce an accurate fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and to a small effect, choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.
Heskes, Tom; Eisinga, Rob; Breitling, Rainer
2014-11-21
The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Exact calculation as well as permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks, as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy over existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis on transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip .
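The gamma approximation mentioned above is easy to state concretely: under the null, each rank divided by the number of molecules behaves like a Uniform(0,1) variable, so minus the log of the product of these ratios across k replicates is approximately Gamma(k, 1) distributed. A stdlib-only sketch (function name mine; this is the classical approximation the paper improves on, not the paper's bounds):

```python
import math

def rank_product_pvalue(ranks, n):
    """Gamma approximation to the rank-product p-value (illustrative).

    ranks : ranks of one molecule across k replicate experiments (1 = best)
    n     : number of molecules ranked in each experiment
    Under the null each rank/n is ~Uniform(0,1), so -sum(log(rank/n)) is
    ~Gamma(k, 1); for integer k its upper tail has a closed form.
    """
    k = len(ranks)
    x = -sum(math.log(r / n) for r in ranks)
    # Upper tail of Gamma(k, 1) at x: exp(-x) * sum_{i<k} x^i / i!
    return math.exp(-x) * sum(x**i / math.factorial(i) for i in range(k))
```

For a single experiment (k = 1) this reduces exactly to rank/n; the inaccuracy the paper addresses arises in the far tail for larger k, where such continuous approximations drift from the exact discrete p-value.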
How many standard area diagram sets are needed for accurate disease severity assessment
Standard area diagram sets (SADs) are widely used in plant pathology: a rater estimates disease severity by comparing an unknown sample to actual severities in the SADs and interpolates an estimate as accurately as possible (although some SADs have been developed for categorizing disease too). Most ...
Burke, Gary; Nesheiwat, Jeffrey; Su, Ling
1994-01-01
Verification is important aspect of process of designing application-specific integrated circuit (ASIC). Design must not only be functionally accurate, but must also maintain correct timing. IFA, Intelligent Front Annotation program, assists in verifying timing of ASIC early in design process. This program speeds design-and-verification cycle by estimating delays before layouts completed. Written in C language.
Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement
1999-06-01
estimate of the age of the universe. In order to do this, you need an unambiguous, absolute distance to another galaxy. We are pleased that the NSF's VLBA has for the first time determined such a distance, and thus provided the calibration standard astronomers have always sought in their quest for accurate distances beyond the Milky Way," said Morris Aizenman, Executive Officer of the National Science Foundation's (NSF) Division of Astronomical Sciences. "For astronomers, this measurement is the golden meter stick in the glass case," Aizenman added. The international team of astronomers used the VLBA to measure directly the motion of gas orbiting what is generally agreed to be a supermassive black hole at the heart of NGC 4258. The orbiting gas forms a warped disk, nearly two light-years in diameter, surrounding the black hole. The gas in the disk includes water vapor, which, in parts of the disk, acts as a natural amplifier of microwave radio emission. The regions that amplify radio emission are called masers, and work in a manner similar to the way a laser amplifies light emission. Determining the distance to NGC 4258 required measuring motions of extremely small shifts in position of these masers as they rotate around the black hole. This is equivalent to measuring an angle one ten-thousandth the width of a human hair held at arm's length. "The VLBA is the only instrument in the world that could do this," said Moran. "This work is the culmination of a 20-year effort at the Harvard Smithsonian Center for Astrophysics to measure distances to cosmic masers," said Irwin Shapiro, Director of that institution. Collection of the data for the NGC 4258 project was begun in 1994 and was part of Herrnstein's Ph.D dissertation at Harvard University. Previous observations with the VLBA allowed the scientists to measure the speed at which the gas is orbiting the black hole, some 39 million times more massive than the Sun. They did this by observing the amount of change in the
Accurate guitar tuning by cochlear implant musicians.
Directory of Open Access Journals (Sweden)
Thomas Lu
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
On accurate determination of contact angle
Concus, P.; Finn, R.
1992-01-01
Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.
Accurate multiplicity scaling in isotopically conjugate reactions
International Nuclear Information System (INIS)
Golokhvastov, A.I.
1989-01-01
The generation of accurate scaling of multiplicity distributions is presented. The distributions of π - mesons (negative particles) and π + mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter which determines the stretching factor for the unit function Ψ(z) to obtain the desired multiplicity distribution. 29 refs.; 6 figs
PET motion correction using PRESTO with ITK motion estimation
Energy Technology Data Exchange (ETDEWEB)
Botelho, Melissa [Institute of Biophysics and Biomedical Engineering, Science Faculty of University of Lisbon (Portugal); Caldeira, Liliana; Scheins, Juergen [Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich (Germany); Matela, Nuno [Institute of Biophysics and Biomedical Engineering, Science Faculty of University of Lisbon (Portugal); Kops, Elena Rota; Shah, N Jon [Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich (Germany)
2014-07-29
The Siemens BrainPET scanner is a hybrid MRI/PET system. PET images are prone to motion artefacts which degrade the image quality. Therefore, motion correction is essential. The library PRESTO converts motion-corrected LORs into highly accurate generic projection data [1], providing high-resolution PET images. ITK is an open-source software used for registering multidimensional data []. ITK provides motion estimation necessary to PRESTO.
PET motion correction using PRESTO with ITK motion estimation
International Nuclear Information System (INIS)
Botelho, Melissa; Caldeira, Liliana; Scheins, Juergen; Matela, Nuno; Kops, Elena Rota; Shah, N Jon
2014-01-01
The Siemens BrainPET scanner is a hybrid MRI/PET system. PET images are prone to motion artefacts which degrade the image quality. Therefore, motion correction is essential. The library PRESTO converts motion-corrected LORs into highly accurate generic projection data [1], providing high-resolution PET images. ITK is an open-source software used for registering multidimensional data []. ITK provides motion estimation necessary to PRESTO.
Mental models accurately predict emotion transitions.
Thornton, Mark A; Tamir, Diana I
2017-06-06
Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
Mental models accurately predict emotion transitions
Thornton, Mark A.; Tamir, Diana I.
2017-01-01
Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373
Accurate millimetre and submillimetre rest frequencies for cis- and trans-dithioformic acid, HCSSH
Prudenzano, D.; Laas, J.; Bizzocchi, L.; Lattanzi, V.; Endres, C.; Giuliano, B. M.; Spezzano, S.; Palumbo, M. E.; Caselli, P.
2018-04-01
Context. A better understanding of sulphur chemistry is needed to solve the interstellar sulphur depletion problem. A way to achieve this goal is to study new S-bearing molecules in the laboratory, obtaining accurate rest frequencies for an astronomical search. We focus on dithioformic acid, HCSSH, which is the sulphur analogue of formic acid. Aims: The aim of this study is to provide an accurate line list of the two HCSSH trans and cis isomers in their electronic ground state and a comprehensive centrifugal distortion analysis with an extension of measurements in the millimetre and submillimetre range. Methods: We studied the two isomers in the laboratory using an absorption spectrometer employing the frequency-modulation technique. The molecules were produced directly within a free-space cell by glow discharge of a gas mixture. We measured lines belonging to the electronic ground state up to 478 GHz, with a total number of 204 and 139 new rotational transitions, respectively, for trans and cis isomers. The final dataset also includes lines in the centimetre range available from literature. Results: The extension of the measurements in the mm and submm range lead to an accurate set of rotational and centrifugal distortion parameters. This allows us to predict frequencies with estimated uncertainties as low as 5 kHz at 1 mm wavelength. Hence, the new dataset provided by this study can be used for astronomical search. Frequency lists are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/612/A56
Veronika Leitold; Michael Keller; Douglas C Morton; Bruce D Cook; Yosio E Shimabukuro
2015-01-01
Background: Carbon stocks and fluxes in tropical forests remain large sources of uncertainty in the global carbon budget. Airborne lidar remote sensing is a powerful tool for estimating aboveground biomass, provided that lidar measurements penetrate dense forest vegetation to generate accurate estimates of surface topography and canopy heights. Tropical forest areas...
Cost Estimating Handbook for Environmental Restoration
International Nuclear Information System (INIS)
1993-01-01
Environmental restoration (ER) projects have presented the DOE and cost estimators with a number of properties that are not comparable to the normal estimating climate within DOE. These properties include: An entirely new set of specialized expressions and terminology. A higher than normal exposure to cost and schedule risk, as compared to most other DOE projects, due to changing regulations, public involvement, resource shortages, and scope of work. A higher than normal percentage of indirect costs to the total estimated cost due primarily to record keeping, special training, liability, and indemnification. More than one estimate for a project, particularly in the assessment phase, in order to provide input into the evaluation of alternatives for the cleanup action. While some aspects of existing guidance for cost estimators will be applicable to environmental restoration projects, some components of the present guidelines will have to be modified to reflect the unique elements of these projects. The purpose of this Handbook is to assist cost estimators in the preparation of environmental restoration estimates for Environmental Restoration and Waste Management (EM) projects undertaken by DOE. The DOE has, in recent years, seen a significant increase in the number, size, and frequency of environmental restoration projects that must be costed by the various DOE offices. The coming years will show the EM program to be the largest non-weapons program undertaken by DOE. These projects create new and unique estimating requirements since historical cost and estimating precedents are meager at best. It is anticipated that this Handbook will enhance the quality of cost data within DOE in several ways by providing: The basis for accurate, consistent, and traceable baselines. Sound methodologies, guidelines, and estimating formats. Sources of cost data/databases and estimating tools and techniques available to DOE cost professionals
AtomDB: Expanding an Accessible and Accurate Atomic Database for X-ray Astronomy
Smith, Randall
Since its inception in 2001, the AtomDB has become the standard repository of accurate and accessible atomic data for the X-ray astrophysics community, including laboratory astrophysicists, observers, and modelers. Modern calculations of collisional excitation rates now exist - and are in AtomDB - for all abundant ions in a hot plasma. AtomDB has expanded beyond providing just a collisional model, and now also contains photoionization data from XSTAR as well as a charge exchange model, amongst others. However, building and maintaining an accurate and complete database that can fully exploit the diagnostic potential of high-resolution X-ray spectra requires further work. The Hitomi results, sadly limited as they were, demonstrated the urgent need for the best possible wavelength and rate data, not merely for the strongest lines but for the diagnostic features that may have 1% or less of the flux of the strong lines. In particular, incorporation of weak but powerfully diagnostic satellite lines will be crucial to understanding the spectra expected from upcoming deep observations with Chandra and XMM-Newton, as well as the XARM and Athena satellites. Beyond incorporating this new data, a number of groups, both experimental and theoretical, have begun to produce data with errors and/or sensitivity estimates. We plan to use this to create statistically meaningful spectral errors on collisional plasmas, providing practical uncertainties together with model spectra. We propose to continue to (1) engage the X-ray astrophysics community regarding their issues and needs, notably by a critical comparison with other related databases and tools, (2) enhance AtomDB to incorporate a large number of satellite lines as well as updated wavelengths with error estimates, (3) continue to update the AtomDB with the latest calculations and laboratory measurements, in particular velocity-dependent charge exchange rates, and (4) enhance existing tools, and create new ones as needed to
A practical method of estimating stature of bedridden female nursing home patients.
Muncie, H L; Sobal, J; Hoopes, J M; Tenney, J H; Warren, J W
1987-04-01
Accurate measurement of stature is important for the determination of several nutritional indices as well as body surface area (BSA) for the normalization of creatinine clearances. Direct standing measurement of stature of bedridden elderly nursing home patients is impossible, and stature as recorded in the chart may not be valid. An accurate stature obtained by summing five segmental measurements was compared to the stature recorded in the patient's chart and calculated estimates of stature from measurement of a long bone (humerus, tibia, knee height). Estimation of stature from measurement of knee height was highly correlated (r = 0.93) to the segmental measurement of stature while estimates from other long-bone measurements were less highly correlated (r = 0.71 to 0.81). Recorded chart stature was poorly correlated (r = 0.37). Measurement of knee height provides a simple, quick, and accurate means of estimating stature for bedridden females in nursing homes.
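The knee-height approach lends itself to a simple calculation. The sketch below uses the widely cited Chumlea-type regression for elderly women together with the Du Bois body-surface-area formula; the coefficients used in this particular study may differ, so treat these as illustrative assumptions rather than the paper's own equations.

```python
def stature_from_knee_height(knee_height_cm, age_years):
    """Estimate stature (cm) for elderly women from knee height.

    Coefficients follow the commonly cited Chumlea-style regression for
    elderly women (assumed here for illustration):
        stature = 84.88 - 0.24*age + 1.83*knee_height
    """
    return 84.88 - 0.24 * age_years + 1.83 * knee_height_cm


def body_surface_area_m2(weight_kg, height_cm):
    """Du Bois BSA formula, used to normalize creatinine clearance."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)
```

An estimated stature can then feed directly into the BSA calculation, which is the downstream use the abstract describes.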
An accurate determination of the flux within a slab
International Nuclear Information System (INIS)
Ganapol, B.D.; Lapenta, G.
1993-01-01
During the past decade, several articles have been written concerning accurate solutions to the monoenergetic neutron transport equation in infinite and semi-infinite geometries. The numerical formulations found in these articles were based primarily on the extensive theoretical investigations performed by the "transport greats" such as Chandrasekhar, Busbridge, Sobolev, and Ivanov, to name a few. The development of numerical solutions in infinite and semi-infinite geometries represents an example of how mathematical transport theory can be utilized to provide highly accurate and efficient numerical transport solutions. These solutions, or analytical benchmarks, are useful as "industry standards," which provide guidance to code developers and promote learning in the classroom. The high accuracy of these benchmarks is directly attributable to the rapid advancement of the state of computing and computational methods. Transport calculations that were beyond the capability of the "supercomputers" of just a few years ago are now possible at one's desk. In this paper, we again build upon the past to tackle the slab problem, which is of the next level of difficulty in comparison to infinite media problems. The formulation is based on the monoenergetic Green's function, which is the most fundamental transport solution. This method of solution requires a fast and accurate evaluation of the Green's function, which, with today's computational power, is now readily available
International Nuclear Information System (INIS)
Nakata, Kotaro; Hasegawa, Takuma
2010-01-01
36Cl is one of the most powerful tools for estimating the residence time of groundwater in the range of roughly 300-1800 thousand years. Accelerator mass spectrometry (AMS) can provide accurate estimation of 36Cl. However, estimation of 36Cl by AMS is usually disturbed by isobars such as 36S. Thus, separation of Cl (usually present as Cl- in groundwater) and S (usually present as SO4(2-) in groundwater) is required for accurate estimation of 36Cl. In previous studies, a methodology (the BaSO4 method) that uses the difference in solubility between BaSO4 and BaCl2 had been applied as the pretreatment for 36Cl estimation by AMS. However, the BaSO4 method has the following disadvantages: (1) Cl and SO4 cannot be separated completely, (2) the accuracy of separation depends on the skill of the operator, (3) treatment takes a long time, and (4) it cannot be applied to dilute solutions. Therefore, a new methodology that overcomes these disadvantages is required for more accurate estimation of 36Cl. In this study, a column method based on column chromatography was investigated as the pretreatment for 36Cl estimation by AMS to separate Cl and SO4 ions. The conditions for the column method were determined and adjusted so that Cl and SO4 ions were separated completely and a sufficient amount of Cl for 36Cl estimation could be treated. The results of AMS measurement showed that the column method removes SO4 from Cl more effectively than the BaSO4 method. Furthermore, the column method was found to have the following advantages over the BaSO4 method: (1) the dependence of separation accuracy on the skill of the operator is quite low, (2) treatment can be completed within 6 h, (3) it can be applied to dilute solutions. (author)
Stokes, Elizabeth A; Wordsworth, Sarah; Staves, Julie; Mundy, Nicola; Skelly, Jane; Radford, Kelly; Stanworth, Simon J
2018-04-01
In an environment of limited health care resources, it is crucial for health care systems which provide blood transfusion to have accurate and comprehensive information on the costs of transfusion, incorporating not only the costs of blood products, but also their administration. Unfortunately, in many countries accurate costs for administering blood are not available. Our study aimed to generate comprehensive estimates of the costs of administering transfusions for the UK National Health Service. A detailed microcosting study was used to cost two key inputs into transfusion: transfusion laboratory and nursing inputs. For each input, data collection forms were developed to capture staff time, equipment, and consumables associated with each step in the transfusion process. Costing results were combined with costs of blood product wastage to calculate the cost per unit transfused, separately for different blood products. Data were collected in 2014/15 British pounds and converted to US dollars. A total of 438 data collection forms were completed by 74 staff. The cost of administering blood was $71 (£49) per unit for red blood cells, $84 (£58) for platelets, $55 (£38) for fresh-frozen plasma, and $72 (£49) for cryoprecipitate. Blood administration costs add substantially to the costs of the blood products themselves. These are frequently incurred costs; applying estimates to the blood components supplied to UK hospitals in 2015, the annual cost of blood administration, excluding blood products, exceeds $175 (£120) million. These results provide more accurate estimates of the total costs of transfusion than those previously available. © 2018 AABB.
The first accurate description of an aurora
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
Accurate Charge Densities from Powder Diffraction
DEFF Research Database (Denmark)
Bindzus, Niels; Wahlberg, Nanna; Becker, Jacob
Synchrotron powder X-ray diffraction has in recent years advanced to a level, where it has become realistic to probe extremely subtle electronic features. Compared to single-crystal diffraction, it may be superior for simple, high-symmetry crystals owing to negligible extinction effects and minimal peak overlap. Additionally, it offers the opportunity for collecting data on a single scale. For charge density studies, the critical task is to recover accurate and bias-free structure factors from the diffraction pattern. This is the focal point of the present study, scrutinizing the performance...
Arbitrarily accurate twin composite π-pulse sequences
Torosov, Boyan T.; Vitanov, Nikolay V.
2018-04-01
We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas and the order of pulse area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform equally well as or better than previously published ones. Moreover, the current sequences are more flexible as they allow total pulse areas of arbitrary integer multiples of π.
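The error-compensation property can be checked numerically with 2×2 propagators. The sketch below uses Levitt's classic three-pulse broadband sequence 90°(x) 180°(y) 90°(x) rather than the sequences of this paper (whose phases are given in the article itself), purely to illustrate how a composite π pulse flattens the excitation profile against pulse-area errors.

```python
import numpy as np


def pulse(area, phase):
    """Propagator of a resonant pulse with a given temporal area and phase."""
    c, s = np.cos(area / 2), np.sin(area / 2)
    return np.array([[c, -1j * np.exp(1j * phase) * s],
                     [-1j * np.exp(-1j * phase) * s, c]])


def transition_probability(areas, phases, eps):
    """|<1|U|0>|^2 for a pulse train with relative pulse-area error eps."""
    U = np.eye(2, dtype=complex)
    for a, p in zip(areas, phases):
        U = pulse(a * (1 + eps), p) @ U
    return abs(U[1, 0]) ** 2


# Single pi pulse vs Levitt's composite 90x-180y-90x, both with 10% area error
single = transition_probability([np.pi], [0.0], 0.1)
composite = transition_probability([np.pi / 2, np.pi, np.pi / 2],
                                   [0.0, np.pi / 2, 0.0], 0.1)
```

At 10% area error the single π pulse loses a few percent of transfer probability, while the composite stays much closer to unity, which is the qualitative behavior the abstract's higher-order sequences push to arbitrary order.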
Systematization of Accurate Discrete Optimization Methods
Directory of Open Access Journals (Sweden)
V. A. Ovchinnikov
2015-01-01
The object of study of this paper is exact methods for solving combinatorial optimization problems of structural synthesis. The aim of the work is to systematize the exact methods of discrete optimization and define their applicability to solving practical problems. The article presents the analysis, generalization and systematization of classical methods and algorithms described in the educational and scientific literature. As a result, a systematic presentation of combinatorial methods for discrete optimization described in various sources is given, their capabilities are described, and the properties of the tasks that can be solved using the appropriate methods are specified.
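As a concrete instance of the exact methods being systematized, the sketch below implements branch-and-bound with a linear-relaxation bound for the 0/1 knapsack problem. The problem instance and helper names are illustrative and not taken from the article.

```python
def knapsack_bb(values, weights, cap):
    """Exact 0/1 knapsack via branch-and-bound with a fractional bound."""
    # Branch in order of decreasing value density for tighter bounds.
    items = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(idx, cap_left, val):
        # Upper bound from the fractional (LP) relaxation of the remainder.
        b = val
        for i in items[idx:]:
            if weights[i] <= cap_left:
                cap_left -= weights[i]
                b += values[i]
            else:
                b += values[i] * cap_left / weights[i]
                break
        return b

    def rec(idx, cap_left, val):
        nonlocal best
        if val > best:
            best = val
        if idx == len(items) or bound(idx, cap_left, val) <= best:
            return  # prune: this subtree cannot beat the incumbent
        i = items[idx]
        if weights[i] <= cap_left:
            rec(idx + 1, cap_left - weights[i], val + values[i])
        rec(idx + 1, cap_left, val)

    rec(0, cap, 0)
    return best
```

Unlike a heuristic, the pruning never discards the optimum, so the returned value is provably exact, which is the defining property of the methods the paper systematizes.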
A Highly Accurate Approach for Aeroelastic System with Hysteresis Nonlinearity
Directory of Open Access Journals (Sweden)
C. C. Cui
2017-01-01
We propose an accurate approach, based on the precise integration method, to solve the aeroelastic system of an airfoil with a pitch hysteresis. A major procedure for achieving high precision is to design a predictor-corrector algorithm. This algorithm enables accurate determination of switching points resulting from the hysteresis. Numerical examples show that the results obtained by the presented method are in excellent agreement with exact solutions. In addition, the high accuracy can be maintained as the time step increases in a reasonable range. It is also found that the Runge-Kutta method may sometimes provide quite different and even fallacious results, though the step length is much less than that adopted in the presented method. With such high computational accuracy, the presented method could be applicable in dynamical systems with hysteresis nonlinearities.
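The key step, accurately locating the switching points that the hysteresis introduces, can be sketched as a step-size bisection wrapped around a standard RK4 integrator. This is a simplification of the paper's precise-integration predictor-corrector, applied to an invented scalar test system.

```python
def rk4_step(f, t, x, h):
    """One classical Runge-Kutta step for x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)


def integrate_with_switching(f, g, t0, x0, t_end, h=0.1, tol=1e-10):
    """March with RK4; when the switching function g changes sign inside a
    step, bisect the step size to land on the switching point accurately."""
    t, x, switches = t0, x0, []
    while t < t_end - 1e-12:
        h_step = min(h, t_end - t)
        x_new = rk4_step(f, t, x, h_step)
        if g(x) * g(x_new) < 0:
            lo, hi = 0.0, h_step
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if g(x) * g(rk4_step(f, t, x, mid)) < 0:
                    hi = mid
                else:
                    lo = mid
            h_step = hi
            x_new = rk4_step(f, t, x, h_step)
            switches.append(t + h_step)
        t, x = t + h_step, x_new
    return x, switches
```

For x' = -1 starting at x = 1, the switching surface x = 0 is located at t = 1 to bisection tolerance even with a coarse base step, illustrating why accurate switching-point detection decouples accuracy from step size.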
Cross-property relations and permeability estimation in model porous media
International Nuclear Information System (INIS)
Schwartz, L.M.; Martys, N.; Bentz, D.P.; Garboczi, E.J.; Torquato, S.
1993-01-01
Results from a numerical study examining cross-property relations linking fluid permeability to diffusive and electrical properties are presented. Numerical solutions of the Stokes equations in three-dimensional consolidated granular packings are employed to provide a basis of comparison between different permeability estimates. Estimates based on the Λ parameter (a length derived from electrical conduction) and on d_c (a length derived from immiscible displacement) are found to be considerably more reliable than estimates based on rigorous permeability bounds related to pore space diffusion. We propose two hybrid relations based on diffusion which provide more accurate estimates than either of the rigorous permeability bounds
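Two of the length-scale estimates discussed can be written down directly. The sketch below gives the standard Johnson-Koplik-Schwartz (Λ-based) and Katz-Thompson (d_c-based) forms; the exact prefactors used in the article may differ, so these serve only as illustrative cross-property relations.

```python
def jks_permeability(lambda_length_m, formation_factor):
    """Johnson-Koplik-Schwartz estimate: k ~ Lambda^2 / (8 F),
    with F the electrical formation factor."""
    return lambda_length_m ** 2 / (8.0 * formation_factor)


def katz_thompson_permeability(d_c_m, formation_factor):
    """Katz-Thompson estimate from the critical pore diameter d_c
    (from immiscible displacement): k ~ d_c^2 / (226 F)."""
    return d_c_m ** 2 / (226.0 * formation_factor)
```

Both relations convert an easily measured length and an electrical property into a permeability, which is exactly the kind of cross-property shortcut the study benchmarks against direct Stokes-flow solutions.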
Can Selforganizing Maps Accurately Predict Photometric Redshifts?
Way, Michael J.; Klose, Christian
2012-01-01
We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization called the self-organizing-map (SOM) approach. A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = z_phot - z_spec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods
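A minimal SOM regressor can be sketched in a few dozen lines. The synthetic data below stand in for the photometric inputs (the real work used SDSS magnitudes); the map size, learning schedule, and cell-calibration step are illustrative choices, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 4 "magnitudes" whose values correlate with redshift z.
n = 2000
z = rng.uniform(0.0, 1.0, n)
X = np.column_stack([z + 0.05 * rng.standard_normal(n) for _ in range(4)])


def train_som(X, grid=10, iters=5000, lr0=0.5, sigma0=3.0):
    """Train a 2-D SOM: pull the best-matching unit and its neighbourhood
    toward each sampled input, shrinking learning rate and radius."""
    d = X.shape[1]
    W = rng.uniform(X.min(), X.max(), size=(grid, grid, d))
    gy, gx = np.mgrid[0:grid, 0:grid]
    for t in range(iters):
        x = X[rng.integers(len(X))]
        frac = t / iters
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        dist = ((W - x) ** 2).sum(axis=2)
        by, bx = np.unravel_index(dist.argmin(), dist.shape)
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, :, None] * (x - W)
    return W


def bmu(W, x):
    dist = ((W - x) ** 2).sum(axis=2)
    return np.unravel_index(dist.argmin(), dist.shape)


W = train_som(X)

# Calibrate each map cell with the mean "spectroscopic" z of its members.
sums = np.zeros(W.shape[:2])
counts = np.zeros(W.shape[:2])
for xi, zi in zip(X, z):
    c = bmu(W, xi)
    sums[c] += zi
    counts[c] += 1
cell_z = np.where(counts > 0, sums / np.maximum(counts, 1), z.mean())

z_pred = np.array([cell_z[bmu(W, xi)] for xi in X])
rmse = np.sqrt(np.mean((z_pred - z) ** 2))
```

The RMSE here plays the same role as the Δz statistics quoted in the abstract: the map quantizes photometric space, and each cell carries the calibrated redshift of its training members.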
Onboard Autonomous Corrections for Accurate IRF Pointing.
Jorgensen, J. L.; Betto, M.; Denver, T.
2002-05-01
Over the past decade, the Noise Equivalent Angle (NEA) of onboard attitude reference instruments, has decreased from tens-of-arcseconds to the sub-arcsecond level. This improved performance is partly due to improved sensor-technology with enhanced signal to noise ratios, partly due to improved processing electronics which allows for more sophisticated and faster signal processing. However, the main reason for the increased precision, is the application of onboard autonomy, which apart from simple outlier rejection also allows for removal of "false positive" answers, and other "unexpected" noise sources, that otherwise would degrade the quality of the measurements (e.g. discrimination between signals caused by starlight and ionizing radiation). The utilization of autonomous signal processing has also provided the means for another onboard processing step, namely the autonomous recovery from lost in space, where the attitude instrument without a priori knowledge derives the absolute attitude, i.e. in IRF coordinates, within fractions of a second. Combined with precise orbital state or position data, the absolute attitude information opens for multiple ways to improve the mission performance, either by reducing operations costs, by increasing pointing accuracy, by reducing mission expendables, or by providing backup decision information in case of anomalies. The Advanced Stellar Compass (ASC) is a miniature, high-accuracy attitude instrument which features fully autonomous operations. The autonomy encompasses all direct steps from automatic health checkout at power-on, over fully automatic SEU and SEL handling and proton-induced sparkle removal, to recovery from "lost in space", and optical disturbance detection and handling. But apart from these more obvious autonomy functions, the ASC also features functions to handle and remove the aforementioned residuals. These functions encompass diverse operators such as a full orbital state vector model with automatic cloud
Russo, Laura; Park, Mia; Gibbs, Jason; Danforth, Bryan
2015-01-01
Bees are important pollinators of agricultural crops, and bee diversity has been shown to be closely associated with pollination, a valuable ecosystem service. Higher functional diversity and species richness of bees have been shown to lead to higher crop yield. Bees simultaneously represent a mega-diverse taxon that is extremely challenging to sample thoroughly and an important group to understand because of pollination services. We sampled bees visiting apple blossoms in 28 orchards over 6 years. We used species rarefaction analyses to test for the completeness of sampling and the relationship between species richness and sampling effort, orchard size, and percent agriculture in the surrounding landscape. We performed more than 190 h of sampling, collecting 11,219 specimens representing 104 species. Despite this sampling intensity, our sampling did not fully capture the species richness present. Honeybee abundance did not appear to be a factor, as we found no correlation between honeybee and wild bee abundance. Our study shows that the pollinator fauna of agroecosystems can be diverse and challenging to thoroughly sample. We demonstrate that there is high temporal variation in community composition and that sites vary widely in the sampling effort required to fully describe their diversity. In order to maximize pollination services provided by wild bee species, we must first accurately estimate species richness. For researchers interested in providing this estimate, we recommend multiyear studies and rarefaction analyses to quantify the gap between observed and expected species richness. PMID:26380684
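The recommended rarefaction analysis can be sketched with individual-based subsampling. The synthetic community below is invented for illustration; dedicated packages (e.g. vegan in R, or iNEXT) implement richer estimators.

```python
import random


def rarefaction_curve(specimens, depths, reps=200, seed=1):
    """Expected species richness when m individuals are drawn at random
    (without replacement) from the pooled list of specimens."""
    rng = random.Random(seed)
    curve = []
    for m in depths:
        richness = [len(set(rng.sample(specimens, m))) for _ in range(reps)]
        curve.append(sum(richness) / reps)
    return curve


# Toy community: two dominant species plus 20 singletons (22 species total)
community = ["sp_a"] * 50 + ["sp_b"] * 30 + [f"sp_{i}" for i in range(20)]
curve = rarefaction_curve(community, [10, 50, 100])
```

A curve that is still climbing at the full sample size is exactly the "gap between observed and expected species richness" the authors recommend quantifying.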
Accurate shear measurement with faint sources
International Nuclear Information System (INIS)
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work in this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys
How Accurately can we Calculate Thermal Systems?
International Nuclear Information System (INIS)
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-01-01
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors
How accurately can the peak skin dose in fluoroscopy be determined using indirect dose metrics?
International Nuclear Information System (INIS)
Jones, A. Kyle; Ensor, Joe E.; Pasciak, Alexander S.
2014-01-01
beam computed tomography or acquisition runs acquired at large primary gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±35% for embolization procedures. Reference air kerma can be used without modification to set notification limits and substantial radiation dose levels, provided the displayed reference air kerma is accurate. These results can reasonably be extended to similar procedures, including vascular and interventional oncology. Considering these results, film dosimetry is likely an unnecessary effort for these types of procedures when indirect dose metrics are available
Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen
2016-04-11
Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data for increasing the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps, not only for simulated data but also for real data from Chang'E-1, compared to the existing space resection model.
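The second, linear phase can be illustrated with a tiny least-squares sketch: once the rotation from phase one is known, each GCP constrains the projection centre C through the collinearity condition d × (P − C) = 0. The setup below is synthetic and is not the authors' implementation.

```python
import numpy as np


def cross_matrix(d):
    """Skew-symmetric matrix [d]_x such that [d]_x @ v == cross(d, v)."""
    return np.array([[0.0, -d[2], d[1]],
                     [d[2], 0.0, -d[0]],
                     [-d[1], d[0], 0.0]])


def camera_center(R, image_rays, ground_pts):
    """Phase two of the split model: with rotation R fixed, the conditions
    d x (P - C) = 0 are linear in C; stack them and solve by least squares."""
    A_blocks, b_blocks = [], []
    for u, P in zip(image_rays, ground_pts):
        D = cross_matrix(R @ u)
        A_blocks.append(D)          # [d]_x C = [d]_x P
        b_blocks.append(D @ P)
    A = np.vstack(A_blocks)
    b = np.concatenate(b_blocks)
    C, *_ = np.linalg.lstsq(A, b, rcond=None)
    return C


# Synthetic check: build rays from a known centre, then recover it.
rng = np.random.default_rng(0)
C_true = np.array([100.0, -50.0, 900.0])
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta), np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
ground = rng.uniform(-500, 500, size=(6, 3))
rays = [R.T @ ((P - C_true) / np.linalg.norm(P - C_true)) for P in ground]
C_est = camera_center(R, rays, ground)
```

Because the system is linear once R is known, the least-squares solution is a global optimum, which is the property the abstract highlights.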
Improving Estimation Accuracy of Aggregate Queries on Data Cubes
Energy Technology Data Exchange (ETDEWEB)
Pourabbas, Elaheh; Shoshani, Arie
2008-08-15
In this paper, we investigate the problem of estimation of a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we are addressing is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases, that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach for determining the steps to select or compute the source databases from multiple summary databases, which makes use of the principles of information entropy. We show that the source databases with the largest number of cells in common provide the most accurate estimates. We prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.
Highway travel time estimation with data fusion
Soriguera Martí, Francesc
2016-01-01
This monograph presents a simple, innovative approach for the measurement and short-term prediction of highway travel times based on the fusion of inductive loop detector and toll ticket data. The methodology is generic and not technologically captive, allowing it to be easily generalized for other equivalent types of data. The book shows how Bayesian analysis can be used to obtain fused estimates that are more reliable than the original inputs, overcoming some of the drawbacks of travel-time estimations based on unique data sources. The developed methodology adds value and obtains the maximum (in terms of travel time estimation) from the available data, without recurrent and costly requirements for additional data. The application of the algorithms to empirical testing in the AP-7 toll highway in Barcelona proves that it is possible to develop an accurate real-time, travel-time information system on closed-toll highways with the existing surveillance equipment, suggesting that highway operators might provide...
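The core Bayesian fusion step for two independent Gaussian travel-time estimates (say, one from loop detectors and one from toll tickets) reduces to precision weighting. This is a minimal sketch of that step, not the monograph's full model.

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Posterior for a Gaussian quantity given two independent Gaussian
    measurements: precision-weighted mean, and summed precision."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    return mu, 1.0 / (w1 + w2)
```

The fused variance never exceeds that of either input, which is the formal sense in which the fused estimate is "more reliable than the original inputs".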
Accurate determination of rates from non-uniformly sampled relaxation data
Energy Technology Data Exchange (ETDEWEB)
Stetz, Matthew A.; Wand, A. Joshua, E-mail: wand@upenn.edu [University of Pennsylvania Perelman School of Medicine, Johnson Research Foundation and Department of Biochemistry and Biophysics (United States)
2016-08-15
The application of non-uniform sampling (NUS) to relaxation experiments traditionally used to characterize the fast internal motion of proteins is quantitatively examined. Experimentally acquired Poisson-gap sampled data reconstructed with iterative soft thresholding are compared to regular sequentially sampled (RSS) data. Using ubiquitin as a model system, it is shown that 25 % sampling is sufficient for the determination of quantitatively accurate relaxation rates. When the sampling density is fixed at 25 %, the accuracy of rates is shown to increase sharply with the total number of sampled points until eventually converging near the inherent reproducibility of the experiment. Perhaps contrary to some expectations, it is found that accurate peak height reconstruction is not required for the determination of accurate rates. Instead, inaccuracies in rates arise from inconsistencies in reconstruction across the relaxation series that primarily manifest as a non-linearity in the recovered peak height. This indicates that the performance of an NUS relaxation experiment cannot be predicted from comparison of peak heights using a single RSS reference spectrum. The generality of these findings was assessed using three alternative reconstruction algorithms, eight different relaxation measurements, and three additional proteins that exhibit varying degrees of spectral complexity. From these data, it is revealed that non-linearity in peak height reconstruction across the relaxation series is strongly correlated with errors in NUS-derived relaxation rates. Importantly, it is shown that this correlation can be exploited to reliably predict the performance of an NUS-relaxation experiment by using three or more RSS reference planes from the relaxation series. The RSS reference time points can also serve to provide estimates of the uncertainty of the sampled intensity, which for a typical relaxation time series incurs no penalty in total acquisition time.
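The central observation, that a consistent scaling of peak heights leaves rates untouched while non-linearity biases them, is easy to verify numerically. Below is a toy log-linear fit, with an invented power-law compression standing in for reconstruction non-linearity.

```python
import numpy as np


def fit_rate(t, heights):
    """Least-squares rate R for I(t) = I0 * exp(-R t), via a log-linear fit."""
    slope, _intercept = np.polyfit(t, np.log(heights), 1)
    return -slope


t = np.linspace(0.02, 1.0, 8)
heights = 100.0 * np.exp(-2.0 * t)           # true rate R = 2

r_exact = fit_rate(t, heights)
r_scaled = fit_rate(t, 0.5 * heights)        # uniform scale error: harmless
r_nonlin = fit_rate(t, heights ** 0.9)       # toy non-linearity: biased rate
```

The uniform scale factor only shifts the intercept of the log-linear fit, so r_scaled equals the true rate, whereas the power-law compression multiplies the slope and drags the fitted rate to 0.9 of its true value, mirroring the bias mechanism described in the abstract.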
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O (100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions
Multistage feature extraction for accurate face alignment
Zuo, F.; With, de P.H.N.
2004-01-01
We propose a novel multistage facial feature extraction approach using a combination of 'global' and 'local' techniques. At the first stage, we use template matching, based on an Edge-Orientation-Map for fast feature position estimation. Using this result, a statistical framework applying the Active
DNA barcode data accurately assign higher spider taxa
Directory of Open Access Journals (Sweden)
Jonathan A. Coddington
2016-07-01
The quality of the underlying database impacts the accuracy of results; many outliers in our dataset could be attributed to taxonomic and/or sequencing errors in BOLD and GenBank. It seems that an accurate and complete reference library of the families and genera of life could provide accurate higher-level taxonomic identifications cheaply and accessibly, within years rather than decades.
Indexed variation graphs for efficient and accurate resistome profiling.
Rowe, Will P M; Winn, Martyn D
2018-05-14
Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
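The classification idea, sketching reads and querying by estimated similarity, can be illustrated with a plain MinHash toy. GROOT's actual variation-graph and LSH Forest machinery is far more involved, and every sequence below is invented.

```python
import hashlib


def kmers(seq, k=7):
    """Set of overlapping k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


def minhash_signature(kmer_set, num_perm=64):
    """One minimum per salted hash function approximates a random
    permutation; matching minima estimate the Jaccard similarity."""
    return [min(int(hashlib.md5(f"{p}:{km}".encode()).hexdigest(), 16)
                for km in kmer_set)
            for p in range(num_perm)]


def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)


refs = {
    "geneA": "ACGTTGCAACGGTCCATAGGCTTACGATCGATCGGATCCAATTGGCCTTAAGGCCTTGCA",
    "geneB": "TTGACACACGTGTGTGAGAGAGCTCTCTCGCGCGATATATATGCGCATATCCCCGGGGAT",
}
ref_sigs = {name: minhash_signature(kmers(seq)) for name, seq in refs.items()}


def classify(read):
    """Assign a read to the reference with the highest estimated Jaccard."""
    sig = minhash_signature(kmers(read))
    return max(ref_sigs, key=lambda name: estimated_jaccard(sig, ref_sigs[name]))
```

An LSH Forest replaces the exhaustive scan over references with a sub-linear index over such signatures, which is what makes the approach scale to full resistance-gene databases.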
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
An accurate nonlinear Monte Carlo collision operator
International Nuclear Information System (INIS)
Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.
1995-03-01
A three-dimensional nonlinear Monte Carlo collision model is developed based on Coulomb binary collisions, with emphasis on both accuracy and implementation efficiency. The operator has a simple form, fulfills the particle number, momentum, and energy conservation laws, and is equivalent to the exact Fokker-Planck operator in that it correctly reproduces the friction coefficient and diffusion tensor; in addition, it effectively ensures small-angle collisions, with the binary scattering angle distributed in a limited range near zero. Two highly vectorizable algorithms are designed for its fast implementation. Various test simulations of relaxation processes, electrical conductivity, etc. are carried out in velocity space. The test results, which are in good agreement with theory, and timing results on vector computers show that the operator is practically applicable. The operator may be used for accurately simulating collisional transport problems in magnetized and unmagnetized plasmas. (author)
Apparatus for accurately measuring high temperatures
Smith, D.D.
The present invention is a thermometer for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. Momentarily opening the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of airborne thermal radiation contaminants, permitting the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
Accurate Modeling Method for Cu Interconnect
Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko
This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for extracting the model parameters, along with an efficient extraction flow. We extracted the model parameters for 0.15 μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, the model is verified to apply to more advanced technologies (90 nm, 65 nm, and 55 nm CMOS). Since interconnect delay variations due to these processes constitute a significant part of what has conventionally been treated as random variation, the proposed model could enable designers to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
Estimating the re-identification risk of clinical data sets
Directory of Open Access Journals (Sweden)
Dankar Fida
2012-07-01
Full Text Available Abstract Background De-identification is a common way to protect patient privacy when disclosing clinical data for secondary purposes, such as research. One type of attack that de-identification protects against is linking the disclosed patient data with public and semi-public registries. Uniqueness is a commonly used measure of re-identification risk under this attack. If uniqueness can be measured accurately, then the risk from this kind of attack can be managed. In practice, it is often not possible to measure uniqueness directly, so it must be estimated. Methods We evaluated the accuracy of uniqueness estimators on clinically relevant data sets. Four candidate estimators were identified, either because they had been evaluated in the past and found to have good accuracy or because they were new and had not been evaluated comparatively before: the Zayatz estimator, the slide negative binomial estimator, Pitman's estimator, and mu-argus. A Monte Carlo simulation was performed to evaluate the uniqueness estimators on six clinically relevant data sets. We varied the sampling fraction and the uniqueness in the population (the value being estimated). The median relative error and inter-quartile range of the uniqueness estimates were measured across 1000 runs. Results No single estimator performed well across all of the conditions. We developed a decision rule that selects between the Pitman, slide negative binomial, and Zayatz estimators depending on the sampling fraction and the difference between estimates. This decision rule had the most consistent median relative error across multiple conditions and data sets. Conclusion This study identified an accurate decision rule that can be used by health privacy researchers and disclosure control professionals to estimate uniqueness in clinical data sets. The decision rule provides a reliable way to measure re-identification risk.
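The quantity all four estimators target is population uniqueness inferred from a sample. A minimal illustration of the raw ingredient, the count of sample uniques on a set of quasi-identifiers, can be sketched in Python; the records and quasi-identifiers below are invented, and real estimators (Pitman, Zayatz, etc.) apply bias corrections far beyond this simple count:

```python
from collections import Counter

def sample_uniques(records, quasi_identifiers):
    # Count records whose quasi-identifier combination appears exactly once
    # in the sample. Naively scaling this by the inverse sampling fraction
    # overestimates population uniqueness; Pitman/Zayatz-style estimators
    # exist precisely to correct that bias.
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    counts = Counter(keys)
    return sum(1 for k in keys if counts[k] == 1)

sample = [
    {"age": 34, "zip": "94110", "sex": "F"},
    {"age": 34, "zip": "94110", "sex": "F"},
    {"age": 61, "zip": "10003", "sex": "M"},
    {"age": 47, "zip": "60614", "sex": "F"},
]
u = sample_uniques(sample, ["age", "zip", "sex"])
print(u)  # → 2  (the last two records are unique on age/zip/sex)
```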
Using In-Service and Coaching to Increase Teachers' Accurate Use of Research-Based Strategies
Kretlow, Allison G.; Cooke, Nancy L.; Wood, Charles L.
2012-01-01
Increasing the accurate use of research-based practices in classrooms is a critical issue. Professional development is one of the most practical ways to provide practicing teachers with training related to research-based practices. This study examined the effects of in-service plus follow-up coaching on first grade teachers' accurate delivery of…
Geometric information provider platform
Directory of Open Access Journals (Sweden)
Meisam Yousefzadeh
2015-07-01
Renovation of existing buildings is an essential step in reducing energy loss. A considerable part of the renovation process depends on geometric reconstruction of the building based on semantic parameters. Following many research projects focused on parameterizing energy usage, various energy modelling methods were developed during the last decade. On the other hand, with the development of accurate measuring tools such as laser scanners, interest in accurate 3D building models is rapidly growing. But automated generation of 3D buildings from laser point clouds, or detection of specific objects within them, is still a challenge. The goal is to design a platform through which the required geometric information can be efficiently produced to support energy simulation software. Developing a reliable procedure that extracts the required information from measured data and delivers it to a standard energy modelling system is the main purpose of the project.
Funnel metadynamics as accurate binding free-energy method
Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele
2013-01-01
A detailed description of the events ruling ligand/protein interaction and an accurate estimation of a drug's affinity to its target are of great help in speeding up drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that allows the ligand to enhance the sampling of the target binding sites and its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein–ligand binding free energy. We illustrate our protocol in two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation was found to be the lowest free-energy pose, and the computed protein–ligand binding free energy is in good agreement with experiments. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results achieved at an affordable computational cost make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and materials science. PMID:23553839
AMID: Accurate Magnetic Indoor Localization Using Deep Learning
Directory of Open Access Journals (Sweden)
Namkyoung Lee
2018-05-01
Geomagnetic-based indoor positioning has drawn great attention from academia and industry due to its advantage of being operable without infrastructure support and its reliable signal characteristics. However, it must overcome the ambiguity problems that originate in the nature of geomagnetic data. Most studies manage this problem by incorporating particle filters along with inertial sensors. However, they cannot yield reliable positioning results because the inertial sensors in smartphones cannot precisely predict the movement of users. There have been attempts to recognize magnetic sequence patterns, but these have been proven only in a one-dimensional space, because magnetic intensity fluctuates severely with even a slight change of location. This paper proposes accurate magnetic indoor localization using deep learning (AMID), an indoor positioning system that recognizes magnetic sequence patterns using a deep neural network. Features are extracted from magnetic sequences, and the deep neural network is then used to classify the sequences by the patterns generated by nearby magnetic landmarks. Locations are estimated by detecting the landmarks. AMID demonstrated that the proposed features and deep learning make an outstanding classifier, revealing the potential of accurate magnetic positioning with smartphone sensors alone. The landmark detection accuracy was over 80% in a two-dimensional environment.
Accurate Classification of Chronic Migraine via Brain Magnetic Resonance Imaging
Schwedt, Todd J.; Chong, Catherine D.; Wu, Teresa; Gaw, Nathan; Fu, Yinlin; Li, Jing
2015-01-01
Background The International Classification of Headache Disorders provides criteria for the diagnosis and subclassification of migraine. Since there is no objective gold standard by which to test these diagnostic criteria, the criteria are based on the consensus opinion of content experts. Accurate migraine classifiers consisting of brain structural measures could serve as an objective gold standard by which to test and revise diagnostic criteria. The objectives of this study were to utilize magnetic resonance imaging measures of brain structure for constructing classifiers: 1) that accurately identify individuals as having chronic vs. episodic migraine vs. being a healthy control; and 2) that test the currently used threshold of 15 headache days/month for differentiating chronic migraine from episodic migraine. Methods Study participants underwent magnetic resonance imaging for determination of regional cortical thickness, cortical surface area, and volume. Principal components analysis combined structural measurements into principal components accounting for 85% of variability in brain structure. Models consisting of these principal components were developed to achieve the classification objectives. Ten-fold cross validation assessed classification accuracy within each of the ten runs, with data from 90% of participants randomly selected for classifier development and data from the remaining 10% of participants used to test classification performance. Headache frequency thresholds ranging from 5–15 headache days/month were evaluated to determine the threshold allowing for the most accurate subclassification of individuals into lower and higher frequency subgroups. Results Participants were 66 migraineurs and 54 healthy controls, 75.8% female, with an average age of 36 +/− 11 years. Average classifier accuracies were: a) 68% for migraine (episodic + chronic) vs. healthy controls; b) 67.2% for episodic migraine vs. healthy controls; c) 86.3% for chronic
A global algorithm for estimating Absolute Salinity
McDougall, T. J.; Jackett, D. R.; Millero, F. J.; Pawlowicz, R.; Barker, P. M.
2012-12-01
The International Thermodynamic Equation of Seawater - 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity. When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg-1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean. To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly however are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).
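The relationship described above can be written down compactly: TEOS-10 defines Reference Salinity as SR = (35.16504 g/kg / 35) · SP, and Absolute Salinity adds the location-dependent anomaly, SA = SR + δSA. A hedged numeric sketch follows; the Practical Salinity value and anomaly below are illustrative only, since in practice δSA comes from the paper's global atlas (e.g. via the GSW Oceanographic Toolbox):

```python
# Reference-composition scale factor from TEOS-10: S_R = (35.16504/35) * S_P.
UPS = 35.16504 / 35.0

def absolute_salinity(sp, delta_sa):
    """Absolute Salinity (g/kg) from Practical Salinity and a tabulated
    Absolute Salinity Anomaly (g/kg) for the sample's location."""
    return UPS * sp + delta_sa

# Illustrative North Pacific sample, using an anomaly near the quoted
# maximum of 0.025 g/kg.
sa = absolute_salinity(sp=34.5, delta_sa=0.025)
print(round(sa, 4))  # → 34.6877
```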
Accurate thermodynamic characterization of a synthetic coal mine methane mixture
International Nuclear Information System (INIS)
Hernández-Gómez, R.; Tuma, D.; Villamañán, M.A.; Mondéjar, M.E.; Chamorro, C.R.
2014-01-01
Highlights: • Accurate density data of a 10 components synthetic coal mine methane mixture are presented. • Experimental data are compared with the densities calculated from the GERG-2008 equation of state. • Relative deviations in density were within a 0.2% band at temperatures above 275 K. • Densities at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations. -- Abstract: In the last few years, coal mine methane (CMM) has gained significance as a potential non-conventional gas fuel. The progressive depletion of common fossil fuels reserves and, on the other hand, the positive estimates of CMM resources as a by-product of mining promote this fuel gas as a promising alternative fuel. The increasing importance of its exploitation makes it necessary to check the capability of the present-day models and equations of state for natural gas to predict the thermophysical properties of gases with a considerably different composition, like CMM. In this work, accurate density measurements of a synthetic CMM mixture are reported in the temperature range from (250 to 400) K and pressures up to 15 MPa, as part of the research project EMRP ENG01 of the European Metrology Research Program for the characterization of non-conventional energy gases. Experimental data were compared with the densities calculated with the GERG-2008 equation of state. Relative deviations between experimental and estimated densities were within a 0.2% band at temperatures above 275 K, while data at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations
Directory of Open Access Journals (Sweden)
Matheus Henrique Nunes
Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and, subsequently, merchantable volume. Artificial intelligence approaches can be useful techniques for minimizing estimation errors within the complex variation of vegetation. We evaluated the performance of Random Forest® regression trees and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest, and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals of the diameter prediction and provided the least precise and least accurate estimates for all forest types. This study provides insights into the superiority of a neural network, which offered advantages in handling local effects.
Maximum-likelihood estimation of recent shared ancestry (ERSA).
Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B
2011-05-01
Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.
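For intuition about what ERSA estimates, a crude moment-based alternative maps total shared IBD length to a degree of relationship, using the rule of thumb that degree-d relatives share on average about 2^-d of the genome. This is only a sketch for contrast, with an assumed map length; ERSA itself fits a maximum-likelihood model to the number and lengths of IBD segments, which is why it can reach seventh-degree relatives:

```python
import math

GENOME_CM = 3400.0   # approximate autosomal map length in centimorgans (assumed)

def degree_from_sharing(shared_cm):
    # Crude moment estimator: degree-d relatives share about 2^-d of the
    # genome on average (50% for first degree, 25% for second degree, ...).
    # ERSA instead fits a likelihood to segment counts and lengths, which is
    # far more powerful for distant relatives who share few segments.
    fraction = shared_cm / GENOME_CM
    return round(-math.log2(fraction))

print(degree_from_sharing(1700.0))  # → 1  (half the genome: first degree)
print(degree_from_sharing(212.5))   # → 4  (about 6% shared)
```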
Accurate measurements of neutron activation cross sections
International Nuclear Information System (INIS)
Semkova, V.
1999-01-01
Applications of some recent achievements of the neutron activation method on high-intensity neutron sources are considered from the viewpoint of the errors associated with cross-section data for neutron-induced reactions. The important corrections in γ-spectrometry ensuring precise determination of the induced radioactivity, methods for accurate determination of the energy and flux density of neutrons produced by different sources, and investigations of deuterium beam composition are considered as factors determining the precision of the experimental data. The influence of the ion beam composition on the mean neutron energy has been investigated by measuring the energy of neutrons induced by different magnetically analysed deuterium ion groups. The Zr/Nb method for experimental determination of the neutron energy in the 13-15 MeV range allows the energy of neutrons from the D-T reaction to be measured with an uncertainty of 50 keV. Flux density spectra from D(d,n) at E d = 9.53 MeV and Be(d,n) at E d = 9.72 MeV are measured by PHRS and the foil activation method. Future applications of the activation method on NG-12 are discussed. (author)
Implicit time accurate simulation of unsteady flow
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. A reference solution for comparison with the implicit second-order Crank-Nicolson scheme was determined with an explicit second-order Runge-Kutta scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted, and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient, time-accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. The focus is on the sensitivity of properties of the solution to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to the highly complex structure of the basins of attraction of the iterative method.
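The stability contrast this abstract relies on, that an A-stable implicit scheme tolerates much larger time steps than an explicit one, can be seen on the scalar stiff test equation y' = -λy. This sketch is not the paper's shock-boundary layer solver, just the standard model problem:

```python
# Stiff test problem y' = -lam * y, y(0) = 1, exact solution exp(-lam*t).
lam = 50.0
h = 0.1            # well above the explicit stability limit 2/lam = 0.04

def explicit_euler_step(y):
    return y + h * (-lam * y)          # amplification factor 1 - h*lam = -4

def crank_nicolson_step(y):
    # (y_new - y)/h = -lam * (y_new + y)/2  =>  solve for y_new
    return y * (1 - h * lam / 2) / (1 + h * lam / 2)

ye, yi = 1.0, 1.0
for _ in range(20):
    ye = explicit_euler_step(ye)
    yi = crank_nicolson_step(yi)

print(abs(ye) > 1e6)   # → True: the explicit scheme blows up
print(abs(yi) < 1.0)   # → True: the A-stable implicit scheme stays bounded
```

This is exactly why the paper can report accurate results at time steps 20 to 80 times the explicit stability limit: for an A-stable scheme, the step is chosen for accuracy, not stability.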
Spectrally accurate initial data in numerical relativity
Battista, Nicholas A.
Einstein's theory of general relativity has radically altered the way in which we perceive the universe. His breakthrough was to realize that the fabric of space is deformable in the presence of mass, and that space and time are linked into a continuum. Much evidence has been gathered in support of general relativity over the decades. Some of the indirect evidence for GR includes the phenomenon of gravitational lensing, the anomalous perihelion precession of Mercury, and the gravitational redshift. One of the most striking predictions of GR that has not yet been confirmed is the existence of gravitational waves. The primary source of gravitational waves in the universe is thought to be the merger of binary black hole systems or of binary neutron stars. The starting point for computer simulations of black hole mergers is highly accurate initial data for the space-time metric and for the curvature. The equations describing the initial space-time around the black hole(s) are non-linear, elliptic partial differential equations (PDEs). We will discuss how to use a pseudo-spectral (collocation) method to calculate the initial puncture data corresponding to single and binary black hole systems.
A stiffly accurate integrator for elastodynamic problems
Michels, Dominik L.
2017-07-21
We present a new integration algorithm for the accurate and efficient solution of stiff elastodynamic problems governed by the second-order ordinary differential equations of structural mechanics. Current methods have the shortcoming that their performance is highly dependent on the numerical stiffness of the underlying system that often leads to unrealistic behavior or a significant loss of efficiency. To overcome these limitations, we present a new integration method which is based on a mathematical reformulation of the underlying differential equations, an exponential treatment of the full nonlinear forcing operator as opposed to more standard partially implicit or exponential approaches, and the utilization of the concept of stiff accuracy which ensures that the efficiency of the simulations is significantly less sensitive to increased stiffness. As a consequence, we are able to tremendously accelerate the simulation of stiff systems compared to established integrators and significantly increase the overall accuracy. The advantageous behavior of this approach is demonstrated on a broad spectrum of complex examples like deformable bodies, textiles, bristles, and human hair. Our easily parallelizable integrator enables more complex and realistic models to be explored in visual computing without compromising efficiency.
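The exponential treatment described above can be illustrated on a scalar semilinear problem: the stiff linear part is integrated exactly, so the step size is governed by the slow forcing rather than the stiffness. A minimal exponential Euler sketch, not the paper's structural mechanics integrator:

```python
import math

# Stiff semilinear ODE: y' = -k*y + sin(t), with k >> 1.
# Exponential Euler treats the stiff linear part exactly:
#   y_{n+1} = exp(-k*h) * y_n + (1 - exp(-k*h)) / k * N(t_n)
k, h = 200.0, 0.05   # h far beyond the explicit Euler limit 2/k = 0.01

def exp_euler_step(t, y):
    e = math.exp(-k * h)
    return e * y + (1.0 - e) / k * math.sin(t)

t, y = 0.0, 1.0
for _ in range(100):
    y = exp_euler_step(t, y)
    t += h

# The solution relaxes onto the slow quasi-steady state sin(t)/k and the
# scheme remains stable and accurate despite the large step.
print(abs(y - math.sin(t) / k) < 1e-3)  # → True
```

The paper's method generalizes this idea: it applies an exponential treatment to the full nonlinear forcing operator of the second-order structural mechanics equations, combined with stiff accuracy, rather than the scalar splitting shown here.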
Geodetic analysis of disputed accurate qibla direction
Saksono, Tono; Fulazzaky, Mohamad Ali; Sari, Zamah
2018-04-01
Muslims performing the prayers facing the correct qibla direction is one of the practical issues linking theoretical studies with practice. The concept of facing the Kaaba in Mecca during prayer has long been a source of controversy among Muslim communities, not only in poor and developing countries but also in developed countries. The aim of this study was to analyse the geodetic azimuths of the qibla calculated using three different models of the Earth. The ellipsoidal model of the Earth may be the best basis for determining the accurate direction to the Kaaba from anywhere on the Earth's surface. A Muslim cannot orient himself towards the qibla exactly when he cannot see the Kaaba, and the process of setting out, as well as certain motions during prayer, can significantly shift the facing direction away from the actual position of the Kaaba. The requirement that Muslims pray facing the Kaaba is therefore more a spiritual prerequisite than a matter of physical exactness.
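For reference, the spherical-Earth azimuth that such a study compares against ellipsoidal models is the standard initial great-circle bearing. The sketch below uses commonly quoted Kaaba coordinates; an ellipsoidal geodesic (e.g. Vincenty's formulae) would shift the result slightly, which is precisely the kind of difference the study analyses:

```python
import math

KAABA_LAT, KAABA_LON = 21.4225, 39.8262   # degrees (commonly quoted values)

def qibla_bearing(lat, lon):
    # Initial great-circle bearing (spherical Earth) from (lat, lon) to the
    # Kaaba, in degrees clockwise from true north.
    p1, p2 = math.radians(lat), math.radians(KAABA_LAT)
    dlon = math.radians(KAABA_LON - lon)
    x = math.sin(dlon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

# Jakarta, Indonesia (approx. 6.2 S, 106.8 E): the qibla is west-northwest.
b = qibla_bearing(-6.2, 106.8)
print(round(b, 1))  # roughly 295 degrees
```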
Therapy Provider Phase Information
U.S. Department of Health & Human Services — The Therapy Provider Phase Information dataset is a tool for providers to search by their National Provider Identifier (NPI) number to determine their phase for...
Zhang, Shengzhi; Yu, Shuai; Liu, Chaojun; Liu, Sheng
2016-06-01
Tracking the position of pedestrians is in urgent demand when the most commonly used GPS (Global Positioning System) is unavailable. Benefiting from their small size, low power consumption, and relatively high reliability, micro-electro-mechanical system sensors are well suited for GPS-denied indoor pedestrian heading estimation. In this paper, a real-time miniature orientation determination system (MODS) was developed for indoor heading and trajectory tracking based on a novel dual-linear Kalman filter. The proposed filter precludes the impact of geomagnetic distortions on the pitch and roll that the heading is subject to. A robust calibration approach was designed to improve the accuracy of sensor measurements based on a unified sensor model. Online tests were performed on the MODS with an improved turntable. The results demonstrate that the average RMSE (root-mean-square error) of heading estimation is less than 1°. Indoor heading experiments were carried out with the MODS mounted on the shoe of a pedestrian. In addition, we integrated the MODS into an indoor pedestrian dead reckoning application as an example of its utility in realistic scenarios. A human attitude-based walking model was developed to calculate the walking distance. Test results indicate that the mean percentage error of indoor trajectory tracking reaches 2% of the total walking distance. This paper provides a feasible alternative for accurate indoor heading and trajectory tracking.
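The predict/correct structure behind such heading estimators can be reduced to a one-dimensional toy: integrate a noisy gyro rate for the prediction and correct it with noisy absolute-heading fixes. This is only a stand-in for the paper's dual-linear Kalman filter; every noise value below is invented for illustration.

```python
import random

# Toy 1-D heading Kalman filter: predict with a noisy gyro rate, correct
# with noisy absolute heading fixes (e.g. from a magnetometer).
dt = 0.01
q, r = 0.01, 4.0          # assumed process / measurement noise variances
heading, p = 0.0, 1.0     # state estimate (degrees) and its variance

random.seed(0)
true_heading = 0.0
for _ in range(1000):
    true_heading += 10.0 * dt                  # pedestrian turning at 10 deg/s
    gyro = 10.0 + random.gauss(0, 0.5)         # noisy rate measurement
    mag = true_heading + random.gauss(0, 2.0)  # noisy absolute heading fix
    heading += gyro * dt                       # predict
    p += q
    k = p / (p + r)                            # Kalman gain
    heading += k * (mag - heading)             # correct
    p *= 1 - k

print(abs(heading - true_heading) < 2.0)  # → True: tracks within a few degrees
```

The filter's value is visible in the gain: the gyro carries the estimate between fixes, while the small gain k keeps the noisy magnetometer from jittering the heading, which is the same division of labor the MODS exploits.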
Parameter Estimation in Stochastic Grey-Box Models
DEFF Research Database (Denmark)
Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay
2004-01-01
An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term.
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Can Wearable Devices Accurately Measure Heart Rate Variability? A Systematic Review.
Georgiou, Konstantinos; Larentzakis, Andreas V; Khamis, Nehal N; Alsuhaibani, Ghadah I; Alaska, Yasser A; Giallafos, Elias J
2018-03-01
A growing number of wearable devices claim to provide accurate, cheap, and easily applicable heart rate variability (HRV) indices. This is mainly accomplished by using wearable photoplethysmography (PPG) and/or electrocardiography (ECG), through simple and non-invasive techniques, as a substitute for the gold standard RR interval estimation through electrocardiogram. Although the agreement between pulse rate variability (PRV) and HRV has been evaluated in the literature, the reported results are still inconclusive, especially when wearable devices are used. The purpose of this systematic review is to investigate whether wearable devices provide a reliable and precise measurement of classic HRV parameters at rest as well as during exercise. A search strategy was implemented to retrieve relevant articles from the MEDLINE and SCOPUS databases, as well as through internet search. The 308 articles retrieved were reviewed for further evaluation according to the predetermined inclusion/exclusion criteria. Eighteen studies were included. Sixteen of them integrated ECG-HRV technology and two of them PPG-PRV technology. All of them examined wearable device accuracy in rate variability detection at rest, while only eight did so during exercise. The correlation between classic ECG-derived HRV and the wearable-derived variability ranged from very good to excellent at rest, yet it declined progressively as exercise level increased. Wearable devices may provide a promising alternative solution for measuring rate variability. However, more robust studies in non-stationary conditions are needed, using appropriate methodology in terms of the number of subjects involved and the acquisition and analysis techniques applied.
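The "classic HRV parameters" compared in such studies are largely time-domain statistics of the RR interval series. Two of the most common, SDNN and RMSSD, are simple to compute; the RR series below is invented for illustration:

```python
import math

def hrv_metrics(rr_ms):
    # Classic time-domain HRV indices from RR intervals in milliseconds:
    #   SDNN  - (sample) standard deviation of all RR intervals
    #   RMSSD - root mean square of successive RR differences
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

rr = [800, 810, 790, 805, 795]   # hypothetical RR series (75 bpm average)
sdnn, rmssd = hrv_metrics(rr)
print(round(sdnn, 2), round(rmssd, 2))  # → 7.91 14.36
```

A wearable that mislocates even a few beat onsets inflates the successive differences, which is one reason RMSSD-style indices degrade faster than means as exercise intensity rises.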
Accurately controlled sequential self-folding structures by polystyrene film
Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse
2017-08-01
Four-dimensional (4D) printing overcomes traditional fabrication limitations by designing heterogeneous materials that enable the printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing of self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature, pre-strained polystyrene films shrink along the XY plane. In our process, silver ink traces printed on the film are used to provide heat stimuli by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle-lock design. Theoretical analyses are done to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are fabricated to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under controlled stimuli (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method, using silver ink printed on polystyrene films, provides an easy way to 4D print self-folding structures for electrically induced sequential folding with angular control.
Quality metric for accurate overlay control in <20nm nodes
Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki
2013-04-01
The semiconductor industry is moving toward 20nm nodes and below. As the overlay (OVL) budget tightens at these advanced nodes, accuracy in each nanometer of OVL error becomes critical. When process owners select OVL targets and methods for their process, they must do so wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Toward the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named `Qmerit' for its imaging-based OVL (IBO) targets, obtained on the fly for each OVL measurement point in X and Y. This Qmerit score will enable process owners to select compatible targets that provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect asymmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.
DEFF Research Database (Denmark)
van Hellemond, Irene E. G.; Bouwmeester, Sjoerd; Olson, Charles W.
2011-01-01
a falsely low estimated total MaR if determined by using ST segment-based methods. The purpose of this study was to investigate if consideration of the abnormalities in the QRS complex, in addition to those in the ST segment, provides a more accurate estimated total MaR during anterior AMI than...
Dynamic state estimation and prediction for real-time control and operation
Nguyen, P.H.; Venayagamoorthy, G.K.; Kling, W.L.; Ribeiro, P.F.
2013-01-01
Real-time control and operation are crucial for dealing with the increasing complexity of modern power systems. To enable those functions effectively, a Dynamic State Estimation (DSE) function is required to provide accurate network state variables at the right moment and to predict their trends ahead. This
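The abstract does not specify which estimator the DSE function uses; a Kalman filter is a common building block for dynamic state estimation, so a minimal scalar predict/update step is sketched below. All model and noise parameters are assumed placeholders:

```python
def kalman_step(x, P, z, F=1.0, Q=1e-4, H=1.0, R=1e-2):
    """One scalar Kalman-filter step for tracking a state variable.

    x, P: prior state estimate and its variance.
    z: new measurement; F, Q, H, R: model and noise parameters (placeholders).
    """
    # Predict: propagate the state and its uncertainty through the model.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend the prediction with the measurement via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

Fed a stream of noisy measurements, the estimate converges toward the underlying state while P tracks its remaining uncertainty.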
Towards Accurate Application Characterization for Exascale (APEX)
Energy Technology Data Exchange (ETDEWEB)
Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.
How flatbed scanners upset accurate film dosimetry
van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.
2016-01-01
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% change for pixels in the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We conclude that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and per dose delivered to the film.
Anatomically accurate, finite model eye for optical modeling.
Liou, H L; Brennan, N A
1997-08-01
There is a need for a schematic eye that models vision accurately under various conditions such as refractive surgical procedures, contact lens and spectacle wear, and near vision. Here we propose a new model eye close to anatomical, biometric, and optical realities. This is a finite model with four aspheric refracting surfaces and a gradient-index lens. It has an equivalent power of 60.35 D and an axial length of 23.95 mm. The new model eye provides spherical aberration values within the limits of empirical results and predicts chromatic aberration for wavelengths between 380 and 750 nm. It provides a model for calculating optical transfer functions and predicting optical performance of the eye.
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
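As one concrete instance of the rate-constant estimation the chapter describes, a first-order reaction C(t) = C0·exp(−k·t) can be linearized to ln C = ln C0 − k·t and fitted by ordinary least squares. A sketch with synthetic data; this is not the chapter's own code:

```python
import math

def estimate_rate_constant(times, concs):
    """Least-squares estimate of a first-order rate constant k.

    Linearizes C(t) = C0*exp(-k*t) as ln C = ln C0 - k*t and
    returns minus the fitted slope.
    """
    y = [math.log(c) for c in concs]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(y) / n
    slope = (sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times, y))
             / sum((t - t_mean) ** 2 for t in times))
    return -slope
```

For noisy data or more complex kinetics, the optimisation-based and maximum-likelihood approaches the chapter covers replace this closed-form fit.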
DEFF Research Database (Denmark)
Jones, Mark Nicholas; Frutiger, Jerome; Abildskov, Jens
We present a new software tool called SAFEPROPS which is able to estimate major safety-related and environmental properties for organic compounds. SAFEPROPS provides accurate, reliable and fast predictions using the Marrero-Gani group contribution (MG-GC) method. It is implemented using Python as the main programming language, while the necessary parameters together with their correlation matrix are obtained from a SQLite database which has been populated using off-line parameter and error estimation routines (Eq. 3-8).
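A first-order group-contribution estimate of the kind MG-GC performs is a weighted sum of group occurrence counts. The sketch below is schematic only: the group names and contribution values are placeholders, not actual MG-GC parameters or the SAFEPROPS API:

```python
def gc_estimate(groups, contributions, bias=0.0):
    """First-order group-contribution property estimate.

    property = bias + sum over groups of (occurrence count * contribution).
    `groups` maps group name -> count; `contributions` maps group name -> value.
    All values here are illustrative placeholders.
    """
    return bias + sum(n * contributions[g] for g, n in groups.items())
```

Real MG-GC models add second- and third-order corrections and propagate parameter uncertainty via the correlation matrix mentioned in the abstract.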
Battery Management Systems: Accurate State-of-Charge Indication for Battery-Powered Applications
Pop, V.; Bergveld, H.J.; Danilov, D.; Regtien, Paulus P.L.; Notten, P.H.L.
2008-01-01
Battery Management Systems – Universal State-of-Charge indication for portable applications describes the field of State-of-Charge (SoC) indication for rechargeable batteries. With the emergence of battery-powered devices with an increasing number of power-hungry features, accurately estimating the
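One baseline SoC-indication method in this field is Coulomb counting, which integrates measured current against rated capacity. A minimal sketch, not the book's algorithm (practical BMS designs combine this with voltage-based corrections to limit drift):

```python
def coulomb_count(soc0, currents_a, dt_s, capacity_ah):
    """Update State-of-Charge by integrating current over time.

    soc0: initial SoC in [0, 1]; currents_a: per-step current in amperes
    (positive = discharge); dt_s: step length in seconds;
    capacity_ah: rated capacity in ampere-hours.
    """
    soc = soc0
    for i in currents_a:
        soc -= i * dt_s / (capacity_ah * 3600.0)
    return soc
```

Sensor offset errors accumulate in this integral, which is why accurate SoC indication requires periodic recalibration against voltage-based estimates.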
Twitter for travel medicine providers.
Mills, Deborah J; Kohl, Sarah E
2016-03-01
Travel medicine practitioners, perhaps more so than medical practitioners working in other areas of medicine, require a constant flow of information to stay up-to-date, and provide best practice information and care to their patients. Many travel medicine providers are unaware of the popularity and potential of the Twitter platform. Twitter use among our travellers, as well as by physicians and health providers, is growing exponentially. There is a rapidly expanding body of published literature on this information tool. This review provides a brief overview of the ways Twitter is being used by health practitioners, the advantages that are peculiar to Twitter as a platform of social media, and how the interested practitioner can get started. Some key points about the dark side of Twitter are highlighted, as well as the potential benefits of using Twitter as a way to disseminate accurate medical information to the public. This article will help readers develop an increased understanding of Twitter as a tool for extracting useful facts and insights from the ever-increasing volume of health information.
Bracken: estimating species abundance in metagenomics data
Directory of Open Access Journals (Sweden)
Jennifer Lu
2017-01-01
Full Text Available Metagenomic experiments attempt to characterize microbial communities using high-throughput DNA sequencing. Identification of the microorganisms in a sample provides information about the genetic profile, population structure, and role of microorganisms within an environment. Until recently, most metagenomics studies focused on high-level characterization at the level of phyla, or alternatively sequenced the 16S ribosomal RNA gene that is present in bacterial species. As the cost of sequencing has fallen, though, metagenomics experiments have increasingly used unbiased shotgun sequencing to capture all the organisms in a sample. This approach requires a method for estimating abundance directly from the raw read data. Here we describe a fast, accurate new method that computes the abundance at the species level using the reads collected in a metagenomics experiment. Bracken (Bayesian Reestimation of Abundance after Classification with KrakEN) uses the taxonomic assignments made by Kraken, a very fast read-level classifier, along with information about the genomes themselves to estimate abundance at the species level, the genus level, or above. We demonstrate that Bracken can produce accurate species- and genus-level abundance estimates even when a sample contains multiple near-identical species.
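Bracken's central move, pushing reads that the classifier left at a higher taxonomic level down to species in proportion to the reads already assigned directly to each species, can be sketched as below. This is a simplification of the published Bayesian reestimation, with made-up counts:

```python
def redistribute(genus_reads, species_reads):
    """Redistribute genus-level reads down to species.

    genus_reads: count of reads the classifier stopped at the genus node.
    species_reads: dict of species -> reads assigned directly to that species.
    Each species receives a share of the genus reads proportional to its
    directly assigned count (a simplified Bracken-style reestimation).
    """
    total = sum(species_reads.values())
    if total == 0:
        return dict(species_reads)  # nothing to apportion against
    return {sp: n + genus_reads * n / total for sp, n in species_reads.items()}
```

The real method additionally uses genome sizes and k-mer assignment probabilities derived from the Kraken database, which this sketch omits.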
Parametric cost estimation for space science missions
Lillie, Charles F.; Thompson, Bruce E.
2008-07-01
Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs for future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
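Mass-based CERs of the kind described are typically power laws fit to historical cost data. A hedged sketch; the coefficients are purely illustrative, not values from any actual cost model:

```python
def cer_cost(mass_kg, a=1000.0, b=0.8):
    """Power-law cost estimating relationship: cost = a * mass^b.

    a and b are illustrative placeholders; in practice they are fit by
    regression against the cost histories of previous missions.
    """
    return a * mass_kg ** b
```

An exponent b < 1 encodes the economy of scale usually observed: doubling spacecraft mass less than doubles cost.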
Directory of Open Access Journals (Sweden)
Shu Wing Ho
2011-12-01
Full Text Available The valuation of options and many other derivative instruments requires an estimation of ex-ante or forward-looking volatility. This paper adopts a Bayesian approach to estimate stock price volatility. We find evidence that overall Bayesian volatility estimates more closely approximate the implied volatility of stocks derived from traded call and put options prices compared to historical volatility estimates sourced from IVolatility.com ("IVolatility"). Our evidence suggests use of the Bayesian approach to estimate volatility can provide a more accurate measure of ex-ante stock price volatility and will be useful in the pricing of derivative securities where the implied stock price volatility cannot be observed.
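The abstract does not give the paper's exact model, but a standard conjugate Bayesian treatment of return variance (Normal likelihood with zero mean, Inverse-Gamma prior on the variance) illustrates the kind of volatility estimate involved. The prior hyperparameters below are placeholders:

```python
def posterior_vol(returns, alpha0=2.0, beta0=0.0002):
    """Posterior-mean volatility under a zero-mean Normal likelihood with an
    Inverse-Gamma(alpha0, beta0) prior on the variance.

    Hyperparameters are illustrative, not from the paper.
    """
    n = len(returns)
    alpha_n = alpha0 + n / 2.0
    beta_n = beta0 + 0.5 * sum(r * r for r in returns)
    var_mean = beta_n / (alpha_n - 1.0)  # mean of the Inverse-Gamma posterior
    return var_mean ** 0.5
```

With few observations the prior dominates; as the sample grows, the estimate approaches the classical sample volatility, which is one way a Bayesian estimate can stabilize short-window volatility measures.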
Directory of Open Access Journals (Sweden)
DOORSAMY, W.
2017-05-01
Full Text Available The secondary level control of stand-alone distributed energy systems requires accurate online state information for effective coordination of its components. State estimation is possible through several techniques depending on the system's architecture and control philosophy. A conceptual design of an online state estimation system to provide nodal autonomy on DC systems is presented. The proposed estimation system uses local measurements - at each node - to obtain an aggregation of the system's state required for nodal self-control without the need for external communication with other nodes or a central controller. The recursive least-squares technique is used in conjunction with stigmergic collaboration to implement the state estimation system. Numerical results are obtained using a Matlab/Simulink model and experimentally validated in a laboratory setting. Results indicate that the proposed system provides accurate estimation and fast updating during both quasi-static and transient states.
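The recursive least-squares technique named in the abstract updates a parameter estimate with each new local measurement. A minimal scalar sketch with a forgetting factor; the actual system applies it together with stigmergic collaboration across nodes, which is not reproduced here:

```python
def rls_update(theta, P, x, y, lam=0.99):
    """One scalar recursive-least-squares step for the model y ~ theta * x.

    theta, P: current estimate and its covariance (scalar case).
    x, y: new regressor/measurement pair; lam: forgetting factor in (0, 1].
    """
    K = P * x / (lam + x * P * x)      # gain for the new sample
    theta_new = theta + K * (y - theta * x)
    P_new = (P - K * x * P) / lam      # discount old information via lam
    return theta_new, P_new
```

The forgetting factor lets the estimator track slowly varying states, which matters for the quasi-static and transient conditions the abstract evaluates.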
Hartveit, Espen; Veruki, Margaret Lin
2010-03-15
Accurate measurement of the junctional conductance (G(j)) between electrically coupled cells can provide important information about the functional properties of coupling. With the development of tight-seal, whole-cell recording, it became possible to use dual, single-electrode voltage-clamp recording from pairs of small cells to measure G(j). Experiments that require reduced perturbation of the intracellular environment can be performed with high-resistance pipettes or the perforated-patch technique, but an accompanying increase in series resistance (R(s)) compromises voltage-clamp control and reduces the accuracy of G(j) measurements. Here, we present a detailed analysis of methodologies available for accurate determination of steady-state G(j) and related parameters under conditions of high R(s), using continuous or discontinuous single-electrode voltage-clamp (CSEVC or DSEVC) amplifiers to quantify the parameters of different equivalent electrical circuit model cells. Both types of amplifiers can provide accurate measurements of G(j), with errors less than 5% for a wide range of R(s) and G(j) values. However, CSEVC amplifiers need to be combined with R(s)-compensation or mathematical correction for the effects of nonzero R(s) and finite membrane resistance (R(m)). R(s)-compensation is difficult for higher values of R(s) and leads to instability that can damage the recorded cells. Mathematical correction for R(s) and R(m) yields highly accurate results, but depends on accurate estimates of R(s) throughout an experiment. DSEVC amplifiers display very accurate measurements over a larger range of R(s) values than CSEVC amplifiers and have the advantage that knowledge of R(s) is unnecessary, suggesting that they are preferable for long-duration experiments and/or recordings with high R(s).
Directory of Open Access Journals (Sweden)
Changgan SHU
2014-09-01
Full Text Available In the standard root multiple signal classification (root-MUSIC) algorithm, the performance of direction-of-arrival estimation degrades, and can even fail, under a low signal-to-noise ratio and a small angular separation between signals. By reconstructing and weighting the covariance matrix of the received signal, the modified algorithm can provide more accurate estimation results. Computer simulation and performance analysis show that, under conditions of lower signal-to-noise ratio and stronger correlation between signals, the proposed modified algorithm provides better azimuth estimation performance than the standard method.
Testing the hierarchical assembly of massive galaxies using accurate merger rates out to z ˜ 1.5
Rodrigues, Myriam; Puech, M.; Flores, H.; Hammer, F.; Pirzkal, N.
2018-04-01
We established an accurate comparison between observationally and theoretically estimated major merger rates over a large range of mass (log Mbar/M⊙ = 9.9-11.4) and redshift (z = 0.7-1.6). For this, we combined a new estimate of the merger rate from an exhaustive count of pairs within the virial radius of massive galaxies at z ˜ 1.265, cross-validated with their morphology, with estimates from the morpho-kinematic analysis of two other samples. Theoretical predictions were estimated using semi-empirical models with inputs matching the properties of the observed samples, while specific visibility time-scales scaled to the observed samples were used. Theory and observations are found to agree within 30 per cent of the observed value, which provides strong support to the hierarchical assembly of galaxies over the probed ranges of mass and redshift. We find that ˜60 per cent of the population of local massive (Mstellar = 10^10.3-11.6 M⊙) galaxies would have undergone a wet major merger since z = 1.5, consistent with previous studies. Such recent mergers are expected to result in the (re-)formation of a significant fraction of local disc galaxies.
Medical service provider networks.
Mougeot, Michel; Naegelen, Florence
2018-05-17
In many countries, health insurers or health plans choose to contract either with any willing providers or with preferred providers. We compare these mechanisms when two medical services are imperfect substitutes in demand and are supplied by two different firms. In both cases, the reimbursement is higher when patients select the in-network provider(s). We show that these mechanisms yield lower prices, lower providers' and insurer's profits, and lower expense than in the uniform-reimbursement case. Whatever the degree of product differentiation, a not-for-profit insurer should prefer selective contracting and select a reimbursement such that the out-of-pocket expense is null. Although all providers join the network under any-willing-provider contracting in the absence of third-party payment, an asymmetric equilibrium may exist when this billing arrangement is implemented.
Comparison of PIV with 4D-Flow in a physiological accurate flow phantom
Sansom, Kurt; Balu, Niranjan; Liu, Haining; Aliseda, Alberto; Yuan, Chun; Canton, Maria De Gador
2016-11-01
Validation of 4D MRI flow sequences with planar particle image velocimetry (PIV) is performed in a physiologically-accurate flow phantom. A patient-specific phantom of a carotid artery is connected to a pulsatile flow loop to simulate the 3D unsteady flow in the cardiovascular anatomy. Cardiac-cycle synchronized MRI provides time-resolved 3D blood velocity measurements, a promising clinical tool that still lacks a robust validation framework. PIV at three different Reynolds numbers (540, 680, and 815, chosen based on +/- 20% of the average velocity from the patient-specific CCA waveform) and four different Womersley numbers (3.30, 3.68, 4.03, and 4.35, chosen to reflect a physiological range of heart rates) is compared to 4D-MRI measurements. An accuracy assessment of raw velocity measurements and a comparison of estimated and measurable flow parameters, such as wall shear stress, fluctuating velocity rms, and Lagrangian particle residence time, will be presented, with justification for their biomechanical relevance to the pathophysiology of arterial disease: atherosclerosis and intimal hyperplasia. Lastly, the framework is applied to a new 4D-Flow MRI sequence and post-processing techniques to provide a quantitative assessment against the benchmarked data. Department of Education GAANN Fellowship.
Canadian consumer issues in accurate and fair electricity metering
International Nuclear Information System (INIS)
2000-07-01
The Public Interest Advocacy Centre (PIAC), located in Ottawa, participates in regulatory proceedings concerning electricity and natural gas to support public and consumer interests. PIAC provides legal representation, research and policy support, and public advocacy. A study aimed at determining the issues at stake for residential electricity consumers in the provision of fair and accurate electricity metering was commissioned by Measurement Canada in consultation with Industry Canada's Consumer Affairs. The metering of electricity must be carried out in a fair and efficient manner for all residential consumers. The Electricity and Gas Inspection Act was developed to ensure compliance with standards for measuring instrumentation. The accurate metering of electricity through Canada's electricity distribution systems is the main focus of this study and report. Of special interest are the role played by Measurement Canada, potential efficiencies in its service delivery, and changing electricity market conditions. The role of Measurement Canada was explained, as were the concerns of residential consumers. A comparison was then made between the interests of residential consumers and those of commercial and industrial electricity consumers in electricity metering. Selected American and Commonwealth jurisdictions were reviewed in light of their electricity metering practices. A section on compliance and conflict resolution was included, in addition to a section on the use of voluntary codes for compliance and conflict resolution.
Estimating Canopy Dark Respiration for Crop Models
Monje Mejia, Oscar Alberto
2014-01-01
Accurate estimates of daily carbon gain underpin predictions of crop production. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.
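The bookkeeping step described, daily net carbon gain as gross photosynthesis minus dark respiration integrated over the day, is a one-line computation. A sketch with illustrative units and values:

```python
def daily_net_gain(p_gross_daily, rd_hourly, hours=24):
    """Daily net canopy carbon gain (e.g. mol CO2 m^-2 d^-1).

    p_gross_daily: integrated gross photosynthesis over the day.
    rd_hourly: canopy dark respiration rate per hour, assumed constant here
    (a simplification; Rd typically varies with temperature).
    """
    return p_gross_daily - rd_hourly * hours
```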
DEFF Research Database (Denmark)
Pedersen, Marie; Siroux, Valérie; Pin, Isabelle
2013-01-01
BACKGROUND: Spatially-resolved air pollution models can be developed in large areas. The resulting increased exposure contrasts and population size offer opportunities to better characterize the effect of atmospheric pollutants on respiratory health. However, the heterogeneity of these areas may ... Simulations indicated that adjustment for area limited the bias due to unmeasured confounders varying with area at the cost of a slight decrease in statistical power. In our cohort, rural and urban areas differed in air pollution levels and in many factors associated with respiratory health and exposure. Area tended to modify effect measures of air pollution on respiratory health. CONCLUSIONS: Increasing the size of the study area also increases the potential for residual confounding. Our simulations suggest that adjusting for type of area is a good option to limit residual confounding due to area ...
A new family of Fisher-curves estimates Fisher's alpha more accurately
Schulte, R.P.O.; Lantinga, E.A.; Hawkins, M.J.
2005-01-01
Fisher's alpha is a satisfactory scale-independent indicator of biodiversity. However, alpha may be underestimated in communities in which the spatial arrangement of individuals is strongly clustered, or in which the total number of species does not tend to infinity. We have extended Fisher's curve
Directory of Open Access Journals (Sweden)
Yunpeng Song
2015-03-01
Full Text Available Measurement of force on a micro- or nano-Newton scale is important when exploring the mechanical properties of materials in the biophysics and nanomechanical fields. The atomic force microscope (AFM is widely used in microforce measurement. The cantilever probe works as an AFM force sensor, and the spring constant of the cantilever is of great significance to the accuracy of the measurement results. This paper presents a normal spring constant calibration method with the combined use of an electromagnetic balance and a homemade AFM head. When the cantilever presses the balance, its deflection is detected through an optical lever integrated in the AFM head. Meanwhile, the corresponding bending force is recorded by the balance. Then the spring constant can be simply calculated using Hooke’s law. During the calibration, a feedback loop is applied to control the deflection of the cantilever. Errors that may affect the stability of the cantilever could be compensated rapidly. Five types of commercial cantilevers with different shapes, stiffness, and operating modes were chosen to evaluate the performance of our system. Based on the uncertainty analysis, the expanded relative standard uncertainties of the normal spring constant of most measured cantilevers are believed to be better than 2%.
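The calibration reduces to Hooke's law: with paired force readings from the balance and deflection readings from the optical lever, the spring constant is the least-squares slope through the origin. A sketch with made-up readings:

```python
def spring_constant(forces_n, deflections_m):
    """Least-squares slope through the origin, per Hooke's law F = k * d.

    k = sum(F_i * d_i) / sum(d_i^2), with forces in newtons and
    deflections in metres (sample values below are illustrative).
    """
    num = sum(f * d for f, d in zip(forces_n, deflections_m))
    den = sum(d * d for d in deflections_m)
    return num / den
```

Fitting through the origin assumes zero force at zero deflection, which is what the feedback-controlled deflection loop described in the abstract helps enforce.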
DEFF Research Database (Denmark)
Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty
2018-01-01
OBJECTIVE: Motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes is recently adopted for animal studies. However, several technical limitations still remain. Test-re...
Directory of Open Access Journals (Sweden)
Mini Joseph
2017-01-01
Full Text Available Background: The accuracy of existing predictive equations for determining the resting energy expenditure (REE) of professional weightlifters remains scarcely studied. Our study aimed to assess the REE of male Asian Indian weightlifters with indirect calorimetry and to compare the measured REE (mREE) with published equations. A new equation using potential anthropometric variables to predict REE was also evaluated. Materials and Methods: REE was measured on 30 male professional weightlifters aged between 17 and 28 years using indirect calorimetry and compared with the predictions of eight formulas: Harris-Benedict, Mifflin-St. Jeor, FAO/WHO/UNU, ICMR, Cunningham, Owen, Katch-McArdle, and Nelson. Pearson correlation coefficient, intraclass correlation coefficient, and multiple linear regression analysis were carried out to study the agreement between the different methods and the association with anthropometric variables, and to formulate a new prediction equation for this population. Results: Pearson correlation coefficients between mREE and the anthropometric variables showed significant positive associations with suprailiac skinfold thickness, lean body mass (LBM), waist circumference, hip circumference, bone mineral mass, and body mass. All eight predictive equations underestimated the REE of the weightlifters when compared with the mREE. The highest mean difference was 636 kcal/day (Owen, 1986) and the lowest was 375 kcal/day (Cunningham, 1980). Stepwise multiple linear regression showed that LBM was the only significant determinant of REE in this group of sportspersons. A new equation using LBM as the independent variable for calculating REE was computed: REE for weightlifters = −164.065 + 0.039 × LBM (confidence interval: −1122.984 to 794.854). This new equation reduced the mean difference with mREE to 2.36 ± 369.15 kcal/day (standard error = 67.40).
Conclusion: The significant finding of this study was that all the prediction equations underestimated the REE. The LBM was the sole determinant of REE in this population. In the absence of indirect calorimetry, the REE equation developed by us using LBM is a better predictor for calculating REE of professional male weightlifters of this region.
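The study's regression can be applied directly. Note a unit assumption: the abstract does not state the units of LBM, and the 0.039 coefficient yields plausible kcal/day values only if LBM is expressed in grams, so that assumption is made explicit in the sketch below:

```python
def ree_weightlifters(lbm_g):
    """REE (kcal/day) from the study's regression on lean body mass.

    REE = -164.065 + 0.039 * LBM, with LBM assumed to be in grams
    (assumption: the abstract does not state the units).
    """
    return -164.065 + 0.039 * lbm_g
```

For example, a 60 kg lean body mass (60000 g) gives roughly 2176 kcal/day, in the range one would expect for this population.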
Accurate Estimation of Target amounts Using Expanded BASS Model for Demand-Side Management
Kim, Hyun-Woong; Park, Jong-Jin; Kim, Jin-O.
2008-10-01
The electricity demand in Korea has rapidly increased along with steady economic growth since the 1970s. Korea has therefore actively pursued not only SSM (Supply-Side Management) but also DSM (Demand-Side Management) activities to reduce the investment cost of generating units and to lower the supply cost of electricity by enhancing national energy utilization efficiency. However, research on rebates, which influence the success or failure of DSM programs, is insufficient. This paper mathematically models an expanded Bass model that incorporates rebates, which affect penetration amounts for DSM programs. To reflect the rebate effect more precisely, the pricing function used in the expanded Bass model directly reflects the response of potential participants to the rebate level.
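The structure described can be sketched as a generalized Bass diffusion model in which the adoption hazard is scaled by a time-varying pricing/rebate function. A minimal sketch (Python) under hypothetical parameter values; the paper's actual pricing function and coefficients are not given here:

```python
def expanded_bass(m, p, q, price_effect, T, dt=0.1):
    """Euler simulation of cumulative DSM-program adopters N(t) under a
    generalized Bass model: dN/dt = (p + q*N/m) * (m - N) * x(t),
    where x(t) is a pricing function reflecting the rebate level."""
    n, path = 0.0, []
    for k in range(int(T / dt)):
        t = k * dt
        n = min(m, n + (p + q * n / m) * (m - n) * price_effect(t) * dt)
        path.append(n)
    return path

# Hypothetical scenario: a rebate introduced in year 2 doubles responsiveness.
adopters = expanded_bass(m=10_000, p=0.03, q=0.38,
                         price_effect=lambda t: 2.0 if t >= 2 else 1.0, T=10)
```

With the rebate active, cumulative adoption approaches the market potential m well before the 10-year horizon.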
How to accurately estimate BH masses of AGN with double-peaked emission lines
Xue Guang Zhang; Deborah Dultzin; Ting Gui Wang
2008-01-01
We present a new relation for determining the virial mass of the central black hole in Active Galactic Nuclei with double-peaked profiles in the low-ionization broad emission lines. We discuss which parameter is appropriate for estimating the local velocity of the emitting regions and the relation for estimating the distance of these regions from the ionizing source. We selected 17 objects with double-peaked profiles from the SDSS with measurable absorption lines to determine the masses...
Automated and Accurate Estimation of Gene Family Abundance from Shotgun Metagenomes.
Directory of Open Access Journals (Sweden)
Stephen Nayfach
2015-11-01
Full Text Available Shotgun metagenomic DNA sequencing is a widely applicable tool for characterizing the functions that are encoded by microbial communities. Several bioinformatic tools can be used to functionally annotate metagenomes, allowing researchers to draw inferences about the functional potential of the community and to identify putative functional biomarkers. However, little is known about how decisions made during annotation affect the reliability of the results. Here, we use statistical simulations to rigorously assess how to optimize annotation accuracy and speed, given parameters of the input data like read length and library size. We identify best practices in metagenome annotation and use them to guide the development of the Shotgun Metagenome Annotation Pipeline (ShotMAP). ShotMAP is an analytically flexible, end-to-end annotation pipeline that can be implemented either on a local computer or a cloud compute cluster. We use ShotMAP to assess how different annotation databases impact the interpretation of how marine metagenome and metatranscriptome functional capacity changes across seasons. We also apply ShotMAP to data obtained from a clinical microbiome investigation of inflammatory bowel disease. This analysis finds that gut microbiota collected from Crohn's disease patients are functionally distinct from gut microbiota collected from either ulcerative colitis patients or healthy controls, with differential abundance of metabolic pathways related to host-microbiome interactions that may serve as putative biomarkers of disease.
Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.
In the absence of the perfect state, the number of nuclei in their vegetative hyphae is one of the anamorphic features that separates Rhizoctonia solani from other Rhizoctonia-like fungi. Anamorphs of Rhizoctonia solani are typically multinucleate, while the other Rhizoctonia species are binucleate. Howev...
Accurate Estimation of the Standard Binding Free Energy of Netropsin with DNA
Directory of Open Access Journals (Sweden)
Hong Zhang
2018-01-01
Full Text Available DNA is the target of chemical compounds (drugs, pollutants, photosensitizers, etc.), which bind through non-covalent interactions. Depending on their structure and their chemical properties, DNA binders can associate to the minor or to the major groove of double-stranded DNA. They can also intercalate between two adjacent base pairs, or even replace one or two base pairs within the DNA double helix. The subsequent biological effects are strongly dependent on the architecture of the binding motif. Discriminating between the different binding patterns is of paramount importance to predict and rationalize the effect of a given compound on DNA. The structural characterization of DNA complexes remains, however, cumbersome at the experimental level. In this contribution, we employed all-atom molecular dynamics simulations to determine the standard binding free energy of DNA with netropsin, a well-characterized antiviral and antimicrobial drug, which associates to the minor groove of double-stranded DNA. To overcome the sampling limitations of classical molecular dynamics simulations, which cannot capture the large change in configurational entropy that accompanies binding, we resort to a series of potentials of mean force calculations involving a set of geometrical restraints acting on collective variables.
Power system frequency estimation based on an orthogonal decomposition method
Lee, Chih-Hung; Tsai, Men-Shen
2018-06-01
In recent years, several frequency estimation techniques have been proposed by which to estimate the frequency variations in power systems. In order to properly identify power quality issues under asynchronously-sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator that is able to estimate the frequency as well as the rate of frequency changes precisely is needed. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which may maintain the required frequency characteristics of the orthogonal filters and improve the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
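The core idea, tracking frequency from the phase progression of DFT outputs, can be illustrated compactly. A minimal sketch (Python) using the phase advance of a single DFT bin between two windows offset by one sample; this is a simplified stand-in for the paper's two-stage sliding-DFT scheme with orthogonal filters, and the test signal here is a clean complex exponential rather than a contaminated power signal:

```python
import cmath
import math

def estimate_frequency(x, fs, f_guess, n=256):
    """Estimate the frequency (Hz) of the dominant component near f_guess
    from the phase advance per sample of one DFT bin."""
    k = round(f_guess * n / fs)  # nearest DFT bin index
    def dft_bin(start):
        return sum(x[start + i] * cmath.exp(-2j * math.pi * k * i / n)
                   for i in range(n))
    # Shifting the window by one sample multiplies the bin by exp(j*2*pi*f/fs).
    return cmath.phase(dft_bin(1) / dft_bin(0)) * fs / (2 * math.pi)

fs, f = 1000.0, 50.3  # sampling rate and true frequency (Hz)
x = [cmath.exp(2j * math.pi * f * i / fs) for i in range(300)]
print(round(estimate_frequency(x, fs, f_guess=50), 3))  # 50.3
```

Note that the estimate does not require f to fall exactly on a DFT bin, which is the asynchronous-sampling situation the paper addresses.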
Automatic estimation of pressure-dependent rate coefficients.
Allen, Joshua W; Goldsmith, C Franklin; Green, William H
2012-01-21
A general framework is presented for accurately and efficiently estimating the phenomenological pressure-dependent rate coefficients for reaction networks of arbitrary size and complexity using only high-pressure-limit information. Two aspects of this framework are discussed in detail. First, two methods of estimating the density of states of the species in the network are presented, including a new method based on characteristic functional group frequencies. Second, three methods of simplifying the full master equation model of the network to a single set of phenomenological rates are discussed, including a new method based on the reservoir state and pseudo-steady state approximations. Both sets of methods are evaluated in the context of the chemically-activated reaction of acetyl with oxygen. All three simplifications of the master equation are usually accurate, but each fails in certain situations, which are discussed. The new methods usually provide good accuracy at a computational cost appropriate for automated reaction mechanism generation.
A hybrid method for accurate star tracking using star sensor and gyros.
Lu, Jiazhen; Yang, Lie; Zhang, Hao
2017-10-01
Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.
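The vector-difference step can be illustrated with two successive measurements of one star. A minimal sketch (Python), assuming small rotation angles; note this recovers only the angular-velocity component perpendicular to the star's line of sight, which is one reason the actual method fuses multiple star vectors with gyro data:

```python
import math

def angular_velocity(v1, v2, dt):
    """Angular velocity (rad/s) from two successive unit star vectors:
    rotation axis along v1 x v2, rotation angle from acos(v1 . v2)."""
    cross = (v1[1]*v2[2] - v1[2]*v2[1],
             v1[2]*v2[0] - v1[0]*v2[2],
             v1[0]*v2[1] - v1[1]*v2[0])
    norm = math.sqrt(sum(c * c for c in cross))
    if norm == 0.0:
        return (0.0, 0.0, 0.0)  # no observable rotation between frames
    angle = math.acos(max(-1.0, min(1.0, sum(a*b for a, b in zip(v1, v2)))))
    return tuple(angle / dt * c / norm for c in cross)

# A star vector rotated 0.01 rad about z between frames 0.1 s apart:
w = angular_velocity((1.0, 0.0, 0.0),
                     (math.cos(0.01), math.sin(0.01), 0.0), dt=0.1)
# w is approximately (0, 0, 0.1) rad/s
```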
Fast and accurate automated cell boundary determination for fluorescence microscopy
Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider
2013-07-01
Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.
Providing free autopoweroff plugs
DEFF Research Database (Denmark)
Jensen, Carsten Lynge; Hansen, Lars Gårn; Fjordbak, Troels
2012-01-01
Experimental evidence of the effect of providing households with cheap energy saving technology is sparse. We present results from a field experiment in which autopoweroff plugs were provided free of charge to randomly selected households. We use propensity score matching to find treatment effects...
Hounsfield unit density accurately predicts ESWL success.
Magnuson, William J; Tomera, Kevin M; Lance, Raymond S
2005-01-01
Extracorporeal shockwave lithotripsy (ESWL) is a commonly used non-invasive treatment for urolithiasis. Helical CT scans provide much better and more detailed imaging of the patient with urolithiasis, including the ability to measure the density of urinary stones. In this study we tested the hypothesis that the density of urinary calculi as measured by CT can predict successful ESWL treatment. 198 patients were treated with ESWL at Alaska Urological Associates between January 2002 and April 2004. Of these, 101 met study inclusion criteria, with accessible CT scans and stones ranging from 5 to 15 mm. Follow-up imaging demonstrated stone freedom in 74.2%. The mean Hounsfield density values for the stone-free and residual-stone groups differed significantly (93.61 vs 122.80), indicating that Hounsfield unit density predicts the success of ESWL for upper tract calculi between 5 and 15 mm.
Enhancing e-waste estimates: Improving data quality by multivariate Input–Output Analysis
Energy Technology Data Exchange (ETDEWEB)
Wang, Feng, E-mail: fwang@unu.edu [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Huisman, Jaco [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Stevels, Ab [Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Baldé, Cornelis Peter [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Statistics Netherlands, Henri Faasdreef 312, 2492 JP Den Haag (Netherlands)
2013-11-15
Highlights: • A multivariate Input–Output Analysis method for e-waste estimates is proposed. • Applying multivariate analysis to consolidate data can enhance e-waste estimates. • We examine the influence of model selection and data quality on e-waste estimates. • Datasets of all e-waste related variables in a Dutch case study have been provided. • Accurate modeling of time-variant lifespan distributions is critical for estimates. - Abstract: Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, encompassing a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to a lack of high-quality data on market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible, and multivariate Input–Output Analysis (IOA) method is proposed. It links all three pillars in IOA (product sales, stock, and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from the available data pool. This can consequently increase the reliability of e-waste estimates compared to an approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time ever, complete datasets of all three variables for estimating all types of e-waste have been obtained. The results also demonstrate significant disparity between various estimation models, arising from the use of data under different conditions. This shows the importance of applying a multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e
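The sales-stock-lifespan link at the heart of the IOA model can be sketched as a discrete convolution of historical sales with a lifespan distribution. A minimal illustration (Python) with hypothetical numbers; the paper's actual model uses time-varying lifespan profiles and multivariate consolidation across data sources:

```python
def waste_from_sales(sales, lifespan_pmf):
    """Yearly e-waste flows from yearly unit sales, where lifespan_pmf[k]
    is the probability that a unit is discarded k years after sale."""
    waste = [0.0] * (len(sales) + len(lifespan_pmf) - 1)
    for t, sold in enumerate(sales):
        for k, prob in enumerate(lifespan_pmf):
            waste[t + k] += sold * prob
    return waste

# Hypothetical: 100 then 200 units sold; units die after 1 or 2 years.
flows = waste_from_sales([100, 200], [0.0, 0.5, 0.5])
# flows == [0.0, 50.0, 150.0, 100.0]; the total equals the 300 units sold
```

Mass balance (total waste equals total sales) is what makes the three IOA pillars mutually constraining, which is how the multivariate consolidation detects inconsistent input data.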
International Nuclear Information System (INIS)
Wei, Zhongbao; Lim, Tuti Mariana; Skyllas-Kazacos, Maria; Wai, Nyunt; Tseng, King Jet
2016-01-01
Highlights: • Battery model parameter and SOC co-estimation is investigated. • The model parameters and OCV are decoupled and estimated independently. • Multiple timescales are adopted to improve precision and stability. • SOC is estimated online without using the open-circuit cell. • The method is robust to aging levels, flow rates, and battery chemistries. - Abstract: A key function of a battery management system (BMS) is to provide accurate information on the state of charge (SOC) in real time, and this depends directly on precise model parameterization. In this paper, a novel multi-timescale estimator is proposed to estimate the model parameters and SOC for a vanadium redox flow battery (VRB) in real time. The model parameters and OCV are decoupled and estimated independently, effectively avoiding the possibility of cross interference between them. The analysis of model sensitivity, stability, and precision suggests the necessity of adopting different timescales for each estimator independently. Experiments are conducted to assess the performance of the proposed method. Results reveal that the model parameters are adapted online accurately, so that periodic calibration can be avoided. The online-estimated terminal voltage and SOC are both benchmarked against reference values. The proposed multi-timescale estimator has the merits of fast convergence, high precision, and good robustness against initialization uncertainty, aging states, flow rates, and battery chemistries.
On the degrees of freedom of reduced-rank estimators in multivariate regression.
Mukherjee, A; Chen, K; Wang, N; Zhu, J
We study the effective degrees of freedom of a general class of reduced-rank estimators for multivariate regression in the framework of Stein's unbiased risk estimation. A finite-sample exact unbiased estimator is derived that admits a closed-form expression in terms of the thresholded singular values of the least-squares solution and hence is readily computable. The results continue to hold in the high-dimensional setting where both the predictor and the response dimensions may be larger than the sample size. The derived analytical form facilitates the investigation of theoretical properties and provides new insights into the empirical behaviour of the degrees of freedom. In particular, we examine the differences and connections between the proposed estimator and a commonly-used naive estimator. The use of the proposed estimator leads to efficient and accurate prediction risk estimation and model selection, as demonstrated by simulation studies and a data example.
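Stein's unbiased risk estimation defines the effective degrees of freedom as the divergence of the fit with respect to the response. A brute-force numerical sketch (Python with NumPy) of that definition for a rank-r estimator built by truncating the SVD of the least-squares fit; the paper derives a closed form, which this finite-difference version merely approximates:

```python
import numpy as np

def reduced_rank_fit(X, Y, r):
    """Fitted values of rank-r reduced-rank regression: truncate the SVD
    of the ordinary least-squares fitted values."""
    B = np.linalg.lstsq(X, Y, rcond=None)[0]
    U, s, Vt = np.linalg.svd(X @ B, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def divergence_df(X, Y, r, eps=1e-5):
    """Effective degrees of freedom as the numerical divergence of the
    map Y -> fitted(Y), i.e. the sum of d(yhat_ij)/d(y_ij)."""
    base = reduced_rank_fit(X, Y, r)
    df = 0.0
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Yp = Y.copy()
            Yp[i, j] += eps
            df += (reduced_rank_fit(X, Yp, r)[i, j] - base[i, j]) / eps
    return df

# Sanity check: at full rank the fit is the linear projection P_X @ Y,
# so the degrees of freedom equal (number of predictors) x (responses).
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((8, 3)), rng.standard_normal((8, 2))
print(round(divergence_df(X, Y, r=2), 3))  # 6.0
```

At reduced rank the divergence generally exceeds the naive parameter count r(p + q − r), which is the gap the paper's exact estimator quantifies.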
Study of accurate volume measurement system for plutonium nitrate solution
Energy Technology Data Exchange (ETDEWEB)
Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works
1998-12-01
It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, through which air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure is subject to many sources of error, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the front of the dip-tubes, non-uniformity of the temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures involved in the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system is developed with a quartz-oscillation-type transducer that converts a differential pressure into a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
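The two differential pressures map to density and level through simple hydrostatics. A minimal sketch (Python) with a nominal cylindrical tank standing in for the real calibration curve; the variable names and geometry are illustrative assumptions:

```python
G = 9.80665  # standard gravity, m/s^2

def tank_volume(dp_density, dp_level, tube_gap, tank_area):
    """Recover solution density, surface level, and volume from the two
    dip-tube differential pressures (Pa).
    tube_gap: vertical separation of the two submerged dip tubes (m).
    tank_area: nominal tank cross-section (m^2), a stand-in for the
    tank's measured calibration curve."""
    rho = dp_density / (G * tube_gap)    # kg/m^3, from dP = rho * g * dh
    level = dp_level / (G * rho)         # m of solution above the deep tube
    return rho, level, tank_area * level

rho, level, vol = tank_volume(dp_density=2745.9, dp_level=13729.3,
                              tube_gap=0.2, tank_area=0.5)
# roughly rho ~ 1400 kg/m^3, level ~ 1.0 m, vol ~ 0.5 m^3
```

The error sources listed in the abstract (bubbles, back-pressure fluctuation, thermal non-uniformity) all enter through dp_density and dp_level, which is why the excess-pressure corrections matter.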
AN ACCURATE FLUX DENSITY SCALE FROM 1 TO 50 GHz
International Nuclear Information System (INIS)
Perley, R. A.; Butler, B. J.
2013-01-01
We develop an absolute flux density scale for centimeter-wavelength astronomy by combining accurate flux density ratios determined by the Very Large Array between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by the Wilkinson Microwave Anisotropy Probe. The radio sources 3C123, 3C196, 3C286, and 3C295 are found to be varying at a level of less than ∼5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1%-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC 7027, NGC 6542, and MWC 349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, 3C138, and 3C147 for each of the 17 observation dates, spanning 1983-2012. The planets Venus, Uranus, and Neptune are included in our observations, and we derive their brightness temperatures over the same frequency range.
Development of fast and accurate Monte Carlo code MVP
International Nuclear Information System (INIS)
Mori, Takamasa
2001-01-01
The development of the fast and accurate Monte Carlo code MVP started at JAERI in the late 1980s. From the beginning, the code was designed to utilize vector supercomputers and achieved a computation speed higher by a factor of 10 or more compared with conventional codes. In 1994, the first version of MVP was released together with cross section libraries based on JENDL-3.1 and JENDL-3.2. In 1996, a minor revision was made, adding several functions such as treatment of ENDF-B6 file 6 data, time-dependent problems, and so on. Since 1996, several works have been carried out for the next version of MVP. The main works are (1) the development of the continuous-energy Monte Carlo burn-up calculation code MVP-BURN, (2) the development of a system to generate cross section libraries at arbitrary temperature, and (3) the study of error estimations and their biases in Monte Carlo eigenvalue calculations. This paper summarizes the main features of MVP, results of recent studies, and future plans for MVP. (author)
DEFF Research Database (Denmark)
Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan
Process safety studies and assessments rely on accurate property data. Flammability data like the lower and upper flammability limits (LFL and UFL) play an important role in quantifying the risk of fire and explosion. If experimental values are not available for the safety analysis due to cost… or time constraints, property prediction models like group contribution (GC) models can estimate flammability data. The estimation needs to be accurate, reliable and as little time consuming as possible. However, GC property prediction methods frequently lack rigorous uncertainty analysis. Hence… In this study, the MG-GC-factors are estimated using a systematic data and model evaluation methodology in the following way: 1) Data. Experimental flammability data is used from the AIChE DIPPR 801 Database. 2) Initialization and sequential parameter estimation. An approximation using linear algebra provides…
Definition of accurate reference pattern for the DTU-ESA VAST12 antenna
DEFF Research Database (Denmark)
Pivnenko, Sergey; Breinbjerg, Olav; Burgos, Sara
2009-01-01
In this paper, the DTU-ESA 12 GHz validation standard (VAST12) antenna and a dedicated measurement campaign carried out in 2007-2008 for the definition of its accurate reference pattern are first described. Next, a comparison between the results from the three involved measurement facilities… is presented. Then, an accurate reference pattern of the VAST12 antenna is formed by averaging the three results, taking into account the estimated uncertainties of each result. Finally, the potential use of the reference pattern for benchmarking of antenna measurement facilities is outlined…
Credential Service Provider (CSP)
Department of Veterans Affairs — Provides a VA operated Level 1 and Level 2 credential for individuals who require access to VA applications, yet cannot obtain a credential from another VA accepted...
U.S. Department of Health & Human Services — The MAX Provider Characteristics (PC) File Implementation Report describes the design, implementation, and results of the MAXPC prototype, which was based on three...
Overconfidence in Interval Estimates
Soll, Jack B.; Klayman, Joshua
2004-01-01
Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…
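The calibration check behind this finding is straightforward: compare the stated confidence level with the empirical hit rate of the intervals. A minimal sketch (Python) with made-up judgments:

```python
def hit_rate(intervals, truths):
    """Fraction of (low, high) interval estimates that contain the truth.
    A calibrated judge who is '90% sure' should score close to 0.90."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
    return hits / len(truths)

# Five hypothetical "90% sure" intervals; only two contain the answer,
# giving a hit rate of 0.4 -- substantial overconfidence.
rate = hit_rate([(1780, 1800), (100, 200), (5, 10), (0, 3), (40, 60)],
                [1783, 150, 12, 7, 80])
```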
Ecosystem services provided by bats.
Kunz, Thomas H; Braun de Torrez, Elizabeth; Bauer, Dana; Lobova, Tatyana; Fleming, Theodore H
2011-03-01
Ecosystem services are the benefits obtained from the environment that increase human well-being. Economic valuation is conducted by measuring the human welfare gains or losses that result from changes in the provision of ecosystem services. Bats have long been postulated to play important roles in arthropod suppression, seed dispersal, and pollination; however, only recently have these ecosystem services begun to be thoroughly evaluated. Here, we review the available literature on the ecological and economic impact of ecosystem services provided by bats. We describe dietary preferences, foraging behaviors, adaptations, and phylogenetic histories of insectivorous, frugivorous, and nectarivorous bats worldwide in the context of their respective ecosystem services. For each trophic ensemble, we discuss the consequences of these ecological interactions on both natural and agricultural systems. Throughout this review, we highlight the research needed to fully determine the ecosystem services in question. Finally, we provide a comprehensive overview of economic valuation of ecosystem services. Unfortunately, few studies estimating the economic value of ecosystem services provided by bats have been conducted to date; however, we outline a framework that could be used in future studies to more fully address this question. Consumptive goods provided by bats, such as food and guano, are often exchanged in markets where the market price indicates an economic value. Nonmarket valuation methods can be used to estimate the economic value of nonconsumptive services, including inputs to agricultural production and recreational activities. Information on the ecological and economic value of ecosystem services provided by bats can be used to inform decisions regarding where and when to protect or restore bat populations and associated habitats, as well as to improve public perception of bats. © 2011 New York Academy of Sciences.
Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.
2014-01-01
Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443 and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This
More accurate thermal neutron coincidence counting technique
International Nuclear Information System (INIS)
Baron, N.
1978-01-01
Using passive thermal neutron coincidence counting techniques, the accuracy of nondestructive assays of fertile material can be improved significantly using a two-ring detector. It was shown how the use of a function of the coincidence count rate ring-ratio can provide a detector response rate that is independent of variations in neutron detection efficiency caused by varying sample moderation. Furthermore, the correction for multiplication caused by SF- and (α,n)-neutrons is shown to be separable into the product of a function of the effective mass of 240Pu (plutonium correction) and a function of the (α,n) reaction probability (matrix correction). The matrix correction is described by a function of the singles count rate ring-ratio. This correction factor is empirically observed to be identical for any combination of PuO2 powder and the matrix materials SiO2 and MgO because of the similar relation of the (α,n) Q-value and (α,n)-reaction cross section among these matrix nuclei. However, the matrix correction expression is expected to be different for matrix materials such as Na, Al, and/or Li. Nevertheless, it should be recognized that for comparison measurements among samples of similar matrix content, it is expected that some function of the singles count rate ring-ratio can be defined to account for variations in the matrix correction due to differences in the intimacy of mixture among the samples. Furthermore, the magnitude of this singles count rate ring-ratio serves to identify the contaminant generating the (α,n)-neutrons. Such information is useful in process control.
Temporal rainfall estimation using input data reduction and model inversion
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall for poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow superior to that simulated by a traditional calibration approach is a...
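The dimensionality-reduction step can be illustrated with a one-level Haar DWT, the simplest discrete wavelet. This is a sketch only: the study explores several wavelets and decomposition depths, and the toy series below is made up.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: pairwise scaled
    sums (approximation) and differences (detail)."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar transform."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

# Toy 8-step rainfall series (mm): estimating only the 4 approximation
# coefficients (details fixed at zero) halves the number of unknowns the
# MCMC sampler has to infer, at the cost of temporal smoothing.
rain = [0.0, 0.0, 5.2, 4.8, 1.0, 0.8, 0.0, 0.0]
approx, detail = haar_dwt(rain)
smoothed = haar_idwt(approx, [0.0] * len(approx))
```

Keeping both coefficient sets reconstructs the series exactly; dropping the details preserves pairwise means (and thus total rainfall depth) while reducing dimensionality.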
Capitaine, N.; Gontier, A.-M.
1993-08-01
Present observations using modern astrometric techniques are supposed to provide the Earth orientation parameters, and therefore UT1, with an accuracy better than ±1 mas. In practice, UT1 is determined through the intermediary of Greenwich Sidereal Time (GST), using both the conventional relationship between Greenwich Mean Sidereal Time (GMST) and UT1 (Aoki et al. 1982) and the so-called "equation of the equinoxes" limited to the first-order terms with respect to the nutation quantities. This highly complex relation between sidereal time and UT1 is not accurate at the milliarcsecond level, which gives rise to spurious terms of milliarcsecond amplitude in the derived UT1. A more complete relationship between GST and UT1 has been recommended by Aoki & Kinoshita (1983) and Aoki (1991), taking into account the second-order terms in the difference between GST and GMST, the largest one having an amplitude of 2.64 mas and an 18.6-yr period. This paper explains how this complete expansion of GST implicitly uses the concept of the "nonrotating origin" (NRO) proposed by Guinot in 1979 and would therefore provide a more accurate value of UT1, and consequently of the Earth's angular velocity. This paper shows, moreover, that such a procedure would be simplified and conceptually clarified by the explicit use of the NRO as previously proposed (Guinot 1979; Capitaine et al. 1986). The two corresponding options (implicit or explicit use of the NRO) are shown to be equivalent for defining the specific Earth's angle of rotation and then UT1. The practical consequences of using such an accurate procedure, as proposed in the new IERS standards (McCarthy 1992a) instead of the usual one, are estimated for the derivation of UT1.
Organ volume estimation using SPECT
Zaidi, H
1996-01-01
Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of the absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual-window method was used for scatter subtraction. We used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray-level histogram (GLH) method. Thyroid phantom and patient studies were performed, and the influence of 1) fixed thresholding, 2) automatic thresholding, 3) attenuation, 4) scatter, and 5) the reconstruction filter was investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are performed.
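The fixed-threshold voxel-counting step can be sketched as follows; the threshold fraction, voxel size, and toy image are illustrative values, not those of the study.

```python
def estimate_volume(image, threshold_fraction, voxel_volume_ml):
    """Fixed-threshold volume estimation: count voxels brighter than a
    fraction of the image maximum and scale by the voxel volume."""
    peak = max(v for row in image for v in row)
    cutoff = threshold_fraction * peak
    n_voxels = sum(1 for row in image for v in row if v > cutoff)
    return n_voxels * voxel_volume_ml

# Toy 2-D "slice" with a bright thyroid-like region; 0.1 ml voxels and a
# 40% threshold are made-up numbers, not the study's calibration.
slice_counts = [[0, 10, 12, 0],
                [0, 90, 100, 5],
                [0, 85, 95, 4]]
volume_ml = estimate_volume(slice_counts, 0.4, 0.1)
```

In practice the threshold itself is what attenuation and scatter corrections make reliable: without them the apparent peak and background shift, and the same fractional cutoff selects a different voxel set.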
Estimates of wildland fire emissions
Yongqiang Liu; John J. Qu; Wanting Wang; Xianjun Hao
2013-01-01
Wildland fire emissions can significantly affect regional and global air quality, radiation, climate, and the carbon cycle. A fundamental and yet challenging prerequisite to understanding these environmental effects is to accurately estimate fire emissions. This chapter describes and analyzes fire emission calculations. Various techniques (field measurements, empirical...
Directory of Open Access Journals (Sweden)
Patrick Habecker
Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population but lack the resources to conduct a more specialized hidden-population study. Through this method it is possible to generate population estimates for a wide variety of groups whose members may be unwilling to self-identify (for example, users of illegal drugs or other stigmatized populations) using traditional survey tools such as telephone or mail surveys, by asking a representative sample to estimate the number of people they know who are members of such a "hidden" subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult-to-predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back-estimation "trimming" to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights.
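The classic scale-up logic the authors build on can be sketched as follows. This is the basic NSUM estimator, not the new estimator or trimming procedure proposed in the paper, and all numbers are made up.

```python
def network_size(known_counts, known_sizes, population):
    """Estimate a respondent's personal network size c_i from how many
    people they report knowing in subpopulations of known size."""
    return population * sum(known_counts) / sum(known_sizes)

def nsum_estimate(hidden_counts, network_sizes, population):
    """Basic scale-up estimator: hidden-group size = population times the
    ratio of reported hidden-group ties to total network size."""
    return population * sum(hidden_counts) / sum(network_sizes)

population = 1_000_000
known_sizes = [10_000, 5_000]   # sizes of two reference groups (made up)
reports = [[3, 1], [2, 2]]      # per-respondent counts known in each group
net_sizes = [network_size(r, known_sizes, population) for r in reports]
hidden_reports = [1, 2]         # per-respondent ties to the hidden group
hidden_size = nsum_estimate(hidden_reports, net_sizes, population)
```

The paper's critique targets exactly this pipeline: a single poorly chosen reference group ("scaling variable") distorts every c_i, which motivates reweighting and recursively trimming poor predictors.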
DEFF Research Database (Denmark)
Legeais, Jean-Francois; Cazenave, Anny; Larnicol, Gille
Sea level is a very sensitive index of climate change and variability. Sea level integrates ocean warming and the melting of mountain glaciers and ice sheets. Understanding sea level variability and changes implies an accurate monitoring of the sea level variable at climate scales, in addition... to understanding the ocean variability and the exchanges between ocean, land, cryosphere, and atmosphere. That is why sea level is one of the Essential Climate Variables (ECV) selected in the frame of the ESA Climate Change Initiative (CCI) program. It aims at providing long-term monitoring of the sea level ECV... validation, performed by several groups of the ocean and climate modeling community. At last, the main improvements derived from the algorithm development dedicated to the 2016 full reprocessing of the dataset are described. Efforts have also focused on the improvement of the sea level estimation...
The KFM, A Homemade Yet Accurate and Dependable Fallout Meter
Energy Technology Data Exchange (ETDEWEB)
Kearny, C.H.
2001-11-20
The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these...
Provider software buyer's guide.
1994-03-01
To help long term care providers find new ways to improve quality of care and efficiency, Provider magazine presents the fourth annual listing of software firms marketing computer programs for all areas of nursing facility operations. On the following five pages, more than 80 software firms display their wares, with programs such as minimum data set and care planning, dietary, accounting and financials, case mix, and medication administration records. The guide also charts compatible hardware, integration ability, telephone numbers, company contacts, and easy-to-use reader service numbers.
Combining Neural Networks with Existing Methods to Estimate 1 in 100-Year Flood Event Magnitudes
Newson, A.; See, L.
2005-12-01
Over the last fifteen years artificial neural networks (ANNs) have been shown to be advantageous for the solution of many hydrological modelling problems. The use of ANNs for flood magnitude estimation in ungauged catchments, however, is a relatively new and under-researched area. In this paper ANNs are used to estimate the magnitude of the 100-year flood event (Q100) for a number of ungauged catchments. The data used in this study were provided by the Centre for Ecology and Hydrology's Flood Estimation Handbook (FEH), which contains information on catchments across the UK. Sixteen catchment descriptors for 719 catchments were used to train an ANN; the data were split into training, validation, and test sets. The goodness-of-fit statistics on the test data set indicated good model performance, with an r-squared value of 0.8 and a coefficient of efficiency of 79 percent. Data for twelve ungauged catchments were then put through the trained ANN to produce estimates of Q100. Two other accepted methodologies were also employed: the FEH statistical method and the FSR (Flood Studies Report) design storm technique, both of which are used to produce flood frequency estimates. The advantage of developing an ANN model is that it provides a third figure to aid a hydrologist in making an accurate estimate. For six of the twelve catchments, there was a relatively low spread between estimates. In these instances, an estimate of Q100 could be made with a fair degree of certainty. Of the remaining six catchments, three had areas greater than 1000 km², which means the FSR design storm estimate cannot be used. Armed with the ANN model and the FEH statistical method, the hydrologist still has two possible estimates to consider. For these three catchments, the estimates were also fairly similar, providing additional confidence to the estimation. In summary, the findings of this study have shown that an accurate estimation of Q100 can be made using the catchment descriptors of...
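The "spread between estimates" reasoning used above to judge confidence can be sketched as a simple check; the Q100 values and the tolerance below are hypothetical, not figures from the study.

```python
def relative_spread(estimates):
    """Range of the estimates as a fraction of their mean."""
    mean = sum(estimates) / len(estimates)
    return (max(estimates) - min(estimates)) / mean

# Hypothetical Q100 estimates (m^3/s) from the ANN, the FEH statistical
# method, and the FSR design storm technique for one catchment:
q100_estimates = [152.0, 148.0, 160.0]
spread = relative_spread(q100_estimates)
confident = spread < 0.25  # illustrative tolerance, not from the study
```

When the three methods agree within such a tolerance, a hydrologist can quote Q100 with a fair degree of certainty; a large spread flags the catchment for closer inspection.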
Accurate evaluation for the biofilm-activated sludge reactor using graphical techniques
Fouad, Moharram; Bhargava, Renu
2018-05-01
A complete graphical solution is obtained for the completely mixed biofilm-activated sludge reactor (hybrid reactor). The solution consists of a series of curves deduced from the principal equations of the hybrid system after converting them to dimensionless form. The curves estimate the basic parameters of the hybrid system, such as suspended biomass concentration, sludge residence time, wasted mass of sludge, and food-to-biomass ratio. All of these parameters can be expressed as functions of hydraulic retention time, influent substrate concentration, substrate concentration in the bulk, stagnant liquid layer thickness, and the minimum substrate concentration that can maintain biofilm growth, in addition to the basic kinetics of the activated sludge process; all of these variables are expressed in dimensionless form. Compared to other solutions of such a system, these curves are simple, easy to use, and provide an accurate tool for analyzing the system based on fundamental principles. Further, these curves may be used as a quick tool to assess the effect of a change in one variable on the other parameters and the whole system.
Accurate and cost-effective MTF measurement system for lens modules of digital cameras
Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu
2007-01-01
For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. It is therefore important to measure and enhance the imaging performance of digital cameras relative to conventional (photographic film) cameras. For example, the diffraction arising from the miniaturization of the optical modules tends to decrease the image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is thus to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is provided, that of the optical module can then be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of the light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors. The corresponding images formed by the camera are acquired by a computer and then processed by an algorithm that computes the MTF. Finally, an investigation of the measurement accuracy of various methods, such as the bar-target and spread-function methods, indicates that our approach gives quite satisfactory results.
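A common way to compute an MTF, consistent with the spread-function method mentioned above, is to take the magnitude of the Fourier transform of a measured line-spread function, normalized to unity at zero spatial frequency. The sketch below uses a direct DFT on a synthetic profile; it is not the paper's SLM-based procedure.

```python
import math

def mtf_from_lsf(lsf):
    """MTF as the magnitude of the DFT of a line-spread function,
    normalized to unity at zero spatial frequency."""
    n = len(lsf)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        mags.append(math.hypot(re, im))
    return [m / mags[0] for m in mags]

# Synthetic, symmetric line-spread profile (arbitrary units):
lsf = [0.0, 0.1, 0.6, 1.0, 0.6, 0.1, 0.0, 0.0]
mtf = mtf_from_lsf(lsf)
```

A broader spread function (more diffraction blur) yields an MTF that rolls off faster with spatial frequency, which is the degradation the measurement system is designed to quantify.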
Accurate Measurement of the Effects of All Amino-Acid Mutations on Influenza Hemagglutinin.
Doud, Michael B; Bloom, Jesse D
2016-06-03
Influenza genes evolve mostly via point mutations, and so knowing the effect of every amino-acid mutation provides information about evolutionary paths available to the virus. We and others have combined high-throughput mutagenesis with deep sequencing to estimate the effects of large numbers of mutations to influenza genes. However, these measurements have suffered from substantial experimental noise due to a variety of technical problems, the most prominent of which is bottlenecking during the generation of mutant viruses from plasmids. Here we describe advances that ameliorate these problems, enabling us to measure with greatly improved accuracy and reproducibility the effects of all amino-acid mutations to an H1 influenza hemagglutinin on viral replication in cell culture. The largest improvements come from using a helper virus to reduce bottlenecks when generating viruses from plasmids. Our measurements confirm at much higher resolution the results of previous studies suggesting that antigenic sites on the globular head of hemagglutinin are highly tolerant of mutations. We also show that other regions of hemagglutinin-including the stalk epitopes targeted by broadly neutralizing antibodies-have a much lower inherent capacity to tolerate point mutations. The ability to accurately measure the effects of all influenza mutations should enhance efforts to understand and predict viral evolution.
Accurate Measurement of the Effects of All Amino-Acid Mutations on Influenza Hemagglutinin
Directory of Open Access Journals (Sweden)
Michael B. Doud
2016-06-01
Influenza genes evolve mostly via point mutations, and so knowing the effect of every amino-acid mutation provides information about evolutionary paths available to the virus. We and others have combined high-throughput mutagenesis with deep sequencing to estimate the effects of large numbers of mutations to influenza genes. However, these measurements have suffered from substantial experimental noise due to a variety of technical problems, the most prominent of which is bottlenecking during the generation of mutant viruses from plasmids. Here we describe advances that ameliorate these problems, enabling us to measure with greatly improved accuracy and reproducibility the effects of all amino-acid mutations to an H1 influenza hemagglutinin on viral replication in cell culture. The largest improvements come from using a helper virus to reduce bottlenecks when generating viruses from plasmids. Our measurements confirm at much higher resolution the results of previous studies suggesting that antigenic sites on the globular head of hemagglutinin are highly tolerant of mutations. We also show that other regions of hemagglutinin—including the stalk epitopes targeted by broadly neutralizing antibodies—have a much lower inherent capacity to tolerate point mutations. The ability to accurately measure the effects of all influenza mutations should enhance efforts to understand and predict viral evolution.
Accurate evolutions of inspiralling and magnetized neutron stars: Equal-mass binaries
International Nuclear Information System (INIS)
Giacomazzo, Bruno; Rezzolla, Luciano; Baiotti, Luca
2011-01-01
By performing new, long and numerically accurate general-relativistic simulations of magnetized, equal-mass neutron-star binaries, we investigate the role that realistic magnetic fields may have in the evolution of these systems. In particular, we study the evolution of the magnetic fields and show that they can influence the survival of the hypermassive neutron star produced at the merger by accelerating its collapse to a black hole. We also provide evidence that, even if purely poloidal initially, the magnetic fields produced in the tori surrounding the black hole have toroidal and poloidal components of equivalent strength. When estimating the possibility that magnetic fields could have an impact on the gravitational-wave signals emitted by these systems either during the inspiral or after the merger, we conclude that for realistic magnetic-field strengths B ≲ 10^12 G such effects could be detected, but only marginally, by detectors such as advanced LIGO or advanced Virgo. However, magnetically induced modifications could become detectable in the case of small-mass binaries and with the development of gravitational-wave detectors, such as the Einstein Telescope, with much higher sensitivities at frequencies larger than ≅2 kHz.
Accurate, fully-automated NMR spectral profiling for metabolomics.
Directory of Open Access Journals (Sweden)
Siamak Ravanbakhsh
Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures, and realistic computer-generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully automatic, publicly accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of...
An accurate solver for forward and inverse transport
International Nuclear Information System (INIS)
Monard, Francois; Bal, Guillaume
2010-01-01
This paper presents a robust and accurate way to solve steady-state linear transport (radiative transfer) equations numerically. Our main objective is to address the inverse transport problem, in which the optical parameters of a domain of interest are reconstructed from measurements performed at the domain's boundary. This inverse problem has important applications in medical and geophysical imaging, and more generally in any field involving high frequency waves or particles propagating in scattering environments. Stable solutions of the inverse transport problem require that the singularities of the measurement operator, which maps the optical parameters to the available measurements, be captured with sufficient accuracy. This in turn requires that the free propagation of particles be calculated with care, which is a difficult problem on a Cartesian grid. A standard discrete ordinates method is used for the direction of propagation of the particles. Our methodology to address spatial discretization is based on rotating the computational domain so that each direction of propagation is always aligned with one of the grid axes. Rotations are performed in the Fourier domain to achieve spectral accuracy. The numerical dispersion of the propagating particles is therefore minimal. As a result, the ballistic and single scattering components of the transport solution are calculated robustly and accurately. Physical blurring effects, such as small angular diffusion, are also incorporated into the numerical tool. Forward and inverse calculations performed in a two-dimensional setting exemplify the capabilities of the method. Although the methodology might not be the fastest way to solve transport equations, its physical accuracy provides us with a numerical tool to assess what can and cannot be reconstructed in inverse transport theory.
Energy Technology Data Exchange (ETDEWEB)
Jung, Hannes [DESY, Hamburg (Germany); De Roeck, Albert [CERN, Genf (Switzerland); Bartles, Jochen [Univ. Hamburg (DE). Institut fuer Theoretische Physik II] (and others)
2008-09-15
More than 100 people participated in a discussion session at the DIS08 workshop on the topic What HERA may provide. A summary of the discussion with a structured outlook and list of desirable measurements and theory calculations is given. (orig.)
International Nuclear Information System (INIS)
Jung, Hannes; De Roeck, Albert; Bartles, Jochen
2008-09-01
More than 100 people participated in a discussion session at the DIS08 workshop on the topic What HERA may provide. A summary of the discussion with a structured outlook and list of desirable measurements and theory calculations is given. (orig.)
U.S. Department of Health & Human Services — The POS file consists of two data files, one for CLIA labs and one for 18 other provider types. The file names are CLIA and OTHER. If downloading the file, note it...
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
Directory of Open Access Journals (Sweden)
J. Farlin
2013-05-01
Baseflow recession analysis and groundwater dating have up to now developed as two distinct branches of hydrogeology and have been used to solve entirely different problems. We show that by combining two classical models, namely the Boussinesq equation describing spring baseflow recession and the exponential piston-flow model used in groundwater dating studies, the parameters describing the transit time distribution of an aquifer can in some cases be estimated to a far more accurate degree than with the latter alone. Under the assumption that the aquifer basis is sub-horizontal, the mean transit time of water in the saturated zone can be estimated from spring baseflow recession. This provides an independent estimate of groundwater transit time that can refine those obtained from tritium measurements. The approach is illustrated in a case study predicting the atrazine concentration trend in a series of springs draining the fractured-rock aquifer known as the Luxembourg Sandstone. A transport model calibrated on tritium measurements alone predicted different times to trend reversal following the nationwide ban on atrazine in 2005, with different rates of decrease. For some of the springs, the actual time of trend reversal and the rate of change agreed extremely well with the model calibrated using both tritium measurements and the recession of spring discharge during the dry season. The agreement between predicted and observed values was, however, poorer for the springs displaying the most gentle recessions, possibly indicating a stronger influence of continuous groundwater recharge during the summer months.
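The recession-analysis step can be illustrated with the quadratic Boussinesq recession law. This sketch assumes the late-time solution Q(t) = Q0/(1 + αt)² and uses the recession timescale 1/α only as a stand-in for the saturated-zone transit-time estimate; the discharge values are synthetic.

```python
def boussinesq_q(q0, alpha, t):
    """Quadratic Boussinesq recession law: Q(t) = Q0 / (1 + alpha*t)**2."""
    return q0 / (1.0 + alpha * t) ** 2

def fit_alpha(q0, q_later, t):
    """Invert the recession law for alpha from one later observation."""
    return ((q0 / q_later) ** 0.5 - 1.0) / t

# Synthetic dry-season spring hydrograph: Q0 = 100 L/s, alpha = 0.02 / day.
q0, alpha_true = 100.0, 0.02
q_day60 = boussinesq_q(q0, alpha_true, 60.0)
alpha_hat = fit_alpha(q0, q_day60, 60.0)
recession_timescale_days = 1.0 / alpha_hat  # proxy for the transit-time scale
```

In the paper's framework this recession-derived timescale supplies an independent constraint that, combined with tritium ages via the exponential piston-flow model, narrows the admissible transit time distributions.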
Time-driven Activity-based Costing More Accurately Reflects Costs in Arthroplasty Surgery.
Akhavan, Sina; Ward, Lorrayne; Bozic, Kevin J
2016-01-01
...The categories with the most variability between traditional accounting (TA) and TDABC estimates were operating room services and room and board. Traditional hospital cost accounting systems overestimate the costs associated with many surgical procedures, including primary TJA. TDABC provides a more accurate measure of true resource use associated with TJAs and can be used to identify high-cost/high-variability processes that can be targeted for process/quality improvement. Level III, therapeutic study.
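The TDABC calculation itself is simple arithmetic: each process step consumes time on a resource, and each resource has a per-minute capacity cost rate. A sketch with hypothetical resources, rates, and times (none of these figures come from the study):

```python
def tdabc_cost(steps, capacity_rates):
    """Time-driven ABC: episode cost = sum over process steps of
    (minutes consumed) * (per-minute capacity cost rate of the resource)."""
    return sum(minutes * capacity_rates[resource] for resource, minutes in steps)

# Hypothetical capacity cost rates ($/min) and a toy arthroplasty episode:
rates = {"operating_room": 30.0, "surgeon": 8.0, "recovery_room": 2.0}
episode = [("operating_room", 90), ("surgeon", 90), ("recovery_room", 180)]
episode_cost = tdabc_cost(episode, rates)
```

Because every cost is tied to measured minutes on a named resource, comparing TDABC totals against traditional cost-accounting allocations exposes exactly which steps (here, operating room time) drive the discrepancy.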
SPHYNX: an accurate density-based SPH method for astrophysical applications
Cabezón, R. M.; García-Senz, D.; Figueira, J.
2017-10-01
Aims: Hydrodynamical instabilities and shocks are ubiquitous in astrophysical scenarios. Therefore, an accurate numerical simulation of these phenomena is mandatory to correctly model and understand many astrophysical events, such as supernovas, stellar collisions, or planetary formation. In this work, we attempt to address many of the problems that a commonly used technique, smoothed particle hydrodynamics (SPH), has when dealing with subsonic hydrodynamical instabilities or shocks. To that aim we built a new SPH code named SPHYNX, which includes many of the recent advances in the SPH technique and some other new ones, which we present here. Methods: SPHYNX is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. Its distinctive features are: the use of an integral approach to estimating the gradients; the use of a flexible family of interpolators called sinc kernels, which suppress pairing instability; and the incorporation of a new type of volume element which provides a better partition of unity. Unlike other modern formulations, which consider volume elements linked to pressure, our volume element choice relies on density. SPHYNX is, therefore, a density-based SPH code. Results: A novel computational hydrodynamic code oriented to astrophysical applications is described, discussed, and validated in the following pages. The ensuing code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. In our proposal, the estimation of gradients is enhanced using an integral approach. Additionally, we introduce a new family of volume elements which reduce the so-called tensile instability. Both features help to suppress the damping which often prevents the growth of hydrodynamic instabilities in regular SPH codes. Conclusions: On the whole, SPHYNX has passed the verification tests described below. For identical particle setting and initial...
Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data
Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej
2016-04-01
GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. The TPS is a closed-form solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between the data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolutions: 0.2 x 0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier-phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The obtained post-fit residuals for the UWM maps are lower by one order of magnitude compared to the IGS maps. The accuracy of the UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
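An exact-interpolation thin plate spline reduces to a small linear system: a radial kernel U(r) = r² log r plus an affine part, with side conditions forcing the kernel weights to be orthogonal to polynomials. Below is a minimal 2-D sketch with made-up TEC samples; it is an illustration of the TPS machinery, not the UWM-rt1 implementation (which additionally smooths rather than interpolates exactly).

```python
import numpy as np

def tps_fit(points, values):
    """Fit f(x, y) = sum_i w_i * U(r_i) + a0 + a1*x + a2*y with
    U(r) = r^2 * log(r), solving [[K, P], [P.T, 0]] [w; a] = [values; 0]."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    safe = np.where(d > 0.0, d, 1.0)           # avoid log(0); U(0) = 0
    K = np.where(d > 0.0, d ** 2 * np.log(safe), 0.0)
    P = np.hstack([np.ones((n, 1)), pts])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.concatenate([np.asarray(values, dtype=float), np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return pts, sol[:n], sol[n:]

def tps_eval(model, x, y):
    pts, w, a = model
    d = np.linalg.norm(pts - np.array([x, y]), axis=1)
    safe = np.where(d > 0.0, d, 1.0)
    u = np.where(d > 0.0, d ** 2 * np.log(safe), 0.0)
    return float(u @ w + a[0] + a[1] * x + a[2] * y)

# Made-up TEC samples: (lon, lat) -> TECU, purely illustrative.
obs_xy = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
obs_tec = [10.0, 12.0, 11.0, 14.0, 12.5]
model = tps_fit(obs_xy, obs_tec)
```

Adding a regularization term to the diagonal of K turns the exact interpolant into the smoothing spline of the variational formulation quoted in the abstract.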
Towards a less costly but accurate test of gastric emptying and small bowel transit
Energy Technology Data Exchange (ETDEWEB)
Camilleri, M.; Zinsmeister, A.R.; Greydanus, M.P.; Brown, M.L.; Proano, M. (Mayo Clinic and Foundation, Rochester, MN (USA))
1991-05-01
Our aim is to develop a less costly but accurate test of stomach emptying and small bowel transit by utilizing selected scintigraphic observations 1-6 hr after ingestion of a radiolabeled solid meal. These selected data were compared with more detailed analyses that require multiple scans and labor-intensive technical support. A logistic discriminant analysis was used to estimate the sensitivity and specificity of selected summaries of scintigraphic transit measurements. We studied 14 patients with motility disorders (eight neuropathic and six myopathic, confirmed by standard gastrointestinal manometry) and 37 healthy subjects. The patient group had abnormal gastric emptying (GE) and small bowel transit time (SBTT). The proportion of radiolabel retained in the stomach from 2 to 4 hr (GE 2 hr, GE 3 hr, GE 4 hr), as well as the proportion filling the colon at 4 and 6 hr (CF 4 hr, CF 6 hr), were individually able to differentiate health from disease (P < 0.05 for each). From the logistic discriminant model, an estimated sensitivity of 93% resulted in similar specificities for detailed and selected transit parameters for gastric emptying (range: 62-70%). Similarly, combining selected observations, such as GE 4 hr with CF 6 hr, had a specificity of 76%, which was similar to the specificity of combinations of more detailed analyses. Based on the present studies and future confirmation in a larger number of patients, including those with less severe motility disorders, the 2-, 4-, and 6-hr scans with quantitation of proportions of counts in stomach and colon should provide a useful, relatively inexpensive strategy to identify and monitor motility disorders in clinical and epidemiologic studies.
Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint
Energy Technology Data Exchange (ETDEWEB)
Habte, A.; Sengupta, M.; Reda, I.; Andreas, A.; Konings, J.
2014-11-01
Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the International Bureau of Weights and Measures Guide to the Expression of Uncertainty (GUM). Standardized analysis based on these procedures ensures that the uncertainty quoted is well documented.
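The GUM's law of propagation of uncertainty for uncorrelated inputs reduces to a root-sum-of-squares of sensitivity-weighted components. A minimal sketch follows; the radiometer uncertainty budget shown is purely hypothetical, not taken from the paper.

```python
import math

def combined_standard_uncertainty(components):
    """GUM law of propagation for uncorrelated inputs:
    u_c = sqrt(sum_i (c_i * u_i)^2), where c_i is the sensitivity
    coefficient and u_i the standard uncertainty of input i."""
    return math.sqrt(sum((c * u) ** 2 for c, u in components))

# Hypothetical irradiance-measurement budget, in W/m^2 equivalents:
# (sensitivity coefficient, standard uncertainty)
budget = [
    (1.0, 2.0),   # calibration reference
    (1.0, 1.5),   # data-logger voltage reading
    (1.0, 1.0),   # thermal offset correction
]
u_c = combined_standard_uncertainty(budget)
U95 = 2.0 * u_c  # expanded uncertainty with coverage factor k = 2
```

A real GUM budget would also document distributions, degrees of freedom, and any correlated terms; this sketch shows only the combination step.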
Perea-Resa, Carlos; Hernández-Verdeja, Tamara; López-Cobollo, Rosa; Castellano, María del Mar; Salinas, Julio
2012-01-01
In yeast and animals, SM-like (LSM) proteins typically exist as heptameric complexes and are involved in different aspects of RNA metabolism. Eight LSM proteins, LSM1 to 8, are highly conserved and form two distinct heteroheptameric complexes, LSM1-7 and LSM2-8, that function in mRNA decay and splicing, respectively. A search of the Arabidopsis thaliana genome identifies 11 genes encoding proteins related to the eight conserved LSMs, the genes encoding the putative LSM1, LSM3, and LSM6 proteins being duplicated. Here, we report the molecular and functional characterization of the Arabidopsis LSM gene family. Our results show that the 11 LSM genes are active and encode proteins that are also organized in two different heptameric complexes. The LSM1-7 complex is cytoplasmic and is involved in P-body formation and mRNA decay by promoting decapping. The LSM2-8 complex is nuclear and is required for precursor mRNA splicing through U6 small nuclear RNA stabilization. More importantly, our results also reveal that these complexes are essential for the correct turnover and splicing of selected development-related mRNAs and for the normal development of Arabidopsis. We propose that LSMs play a critical role in Arabidopsis development by ensuring the appropriate development-related gene expression through the regulation of mRNA splicing and decay. PMID:23221597
Adaptive Spectral Doppler Estimation
DEFF Research Database (Denmark)
Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2009-01-01
In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram, to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set…
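The baseline that the adaptive methods are compared against, the averaged periodogram (Welch's method), can be sketched with standard tools. This is a generic illustration on synthetic data, not the paper's ultrasound processing chain; the sampling rate, tone frequency, and segment length are arbitrary choices.

```python
import numpy as np
from scipy.signal import welch

fs = 10_000.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
# A narrowband 100 Hz tone buried in white noise stands in for the
# quasi-periodic component of a measured signal.
x = np.sin(2 * np.pi * 100.0 * t) + 0.5 * rng.standard_normal(t.size)

# Welch's method: split the record into windowed, overlapping segments
# and average their periodograms, trading frequency resolution for
# variance reduction.
f, pxx = welch(x, fs=fs, nperseg=1024)
peak_freq = f[np.argmax(pxx)]
```

The resolution limit of this baseline (fs/nperseg, here about 10 Hz) is exactly the cost of the short observation window that the adaptive Capon- and APES-style estimators above aim to overcome.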
Simple and Accurate Analytical Solutions of the Electrostatically Actuated Curled Beam Problem
Younis, Mohammad I.
2014-08-17
We present analytical solutions of the electrostatically actuated, initially deformed cantilever beam problem. We use a continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions for two commonly observed deformed beam configurations: the curled and tilted configurations. The derived analytical formulas are validated by comparing their results to experimental data in the literature and to numerical results of a multi-mode reduced order model. The derived expressions do not involve any complicated integrals or complex terms and can be conveniently used by designers for quick, yet accurate, estimations. The formulas are found to yield accurate results for most commonly encountered microbeams with initial tip deflections of a few microns. For largely deformed beams, these formulas yield less accurate results due to the limitations of the single-mode approximation they are based on. In such cases, multi-mode reduced order models need to be utilized.
Measuring Accurate Body Parameters of Dressed Humans with Large-Scale Motion Using a Kinect Sensor
Directory of Open Access Journals (Sweden)
Sidan Du
2013-08-01
Non-contact human body measurement plays an important role in surveillance, physical healthcare, on-line business and virtual fitting. Current methods for measuring the human body without physical contact usually cannot handle humans wearing clothes, which limits their applicability in public environments. In this paper, we propose an effective solution that can accurately measure parameters of a clothed human body undergoing large-scale motion using a Kinect sensor. Because motion can cause clothes to attach to the human body loosely or tightly, we adopt a space-time analysis to mine information across the posture variations. Using this information, we recover the human body regardless of the effect of clothes and measure the body parameters accurately. Experimental results show that our system performs more accurate parameter estimation on the human body than state-of-the-art methods.
Accurate mass and velocity functions of dark matter haloes
Comparat, Johan; Prada, Francisco; Yepes, Gustavo; Klypin, Anatoly
2017-08-01
N-body cosmological simulations are an essential tool to understand the observed distribution of galaxies. We use the MultiDark simulation suite, run with the Planck cosmological parameters, to revisit the mass and velocity functions. At redshift z = 0, the simulations cover four orders of magnitude in halo mass from ˜10¹¹ M⊙ with 8,783,874 distinct haloes and 532,533 subhaloes. The total volume used is ˜515 Gpc³, more than eight times larger than in previous studies. We measure and model the halo mass function, its covariance matrix with respect to halo mass, and the large-scale halo bias. With the formalism of the excursion-set mass function, we make explicit the tight interconnection between the covariance matrix, the bias, and the halo mass function. We obtain a very accurate model of the halo mass function. We also model the subhalo mass function and its relation to the distinct halo mass function. The set of models obtained provides a complete and precise framework for the description of haloes in the concordance Planck cosmology. Finally, we provide precise analytical fits of the Vmax maximum velocity function up to redshift z, publicly available in the Skies and Universes database.
Accurate calculation of field and carrier distributions in doped semiconductors
Directory of Open Access Journals (Sweden)
Wenji Yang
2012-06-01
We use the numerical squeezing algorithm (NSA) combined with the shooting method to accurately calculate the built-in fields and carrier distributions in doped silicon films (SFs) in the micron and sub-micron thickness range, and results are presented in graphical form for a variety of doping profiles under different boundary conditions. As a complementary approach, we also present the methods and results of the inverse problem (IVP): finding the doping profile in the SFs for a given field distribution. The solution of the IVP provides a way to arbitrarily design the field distribution in SFs, which is very important for low-dimensional (LD) systems and device design. Furthermore, the solution of the IVP is both direct and straightforward for all one-, two-, and three-dimensional semiconductor systems. With current efforts focused on LD physics, knowledge of the field and carrier distributions in LD systems will facilitate further research on other aspects, and hence the current work provides a platform for such research.
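The shooting method mentioned above, reducing a boundary-value problem to repeated initial-value integrations plus a root search on the unknown initial slope, can be sketched on a toy constant-coefficient problem. This illustrates only the generic method, not the paper's NSA algorithm or its semiconductor equations; the ODE, boundary values, and bracketing interval are invented for the demonstration.

```python
import numpy as np

def integrate_ivp(slope, n_steps=1000):
    """RK4 integration of the toy problem y'' = -y on [0, 1]
    with y(0) = 0 and y'(0) = slope; returns y(1)."""
    def f(state):
        y, v = state
        return np.array([v, -y])
    h = 1.0 / n_steps
    state = np.array([0.0, slope])
    for _ in range(n_steps):
        k1 = f(state)
        k2 = f(state + 0.5 * h * k1)
        k3 = f(state + 0.5 * h * k2)
        k4 = f(state + h * k3)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state[0]

def shoot(target=1.0, lo=0.0, hi=5.0):
    """Bisection on the unknown initial slope so that y(1) hits the
    target boundary value (y(1) is increasing in the slope here)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if integrate_ivp(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

slope = shoot()   # exact answer for this toy problem is 1/sin(1)
```

For the real built-in-field problem the integrated system would be the coupled Poisson and carrier equations, but the shoot-and-correct structure is the same.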
Building Service Provider Capabilities
DEFF Research Database (Denmark)
Brandl, Kristin; Jaura, Manya; Ørberg Jensen, Peter D.
2015-01-01
In this paper we study whether and how the interaction between clients and service providers contributes to the development of capabilities in service provider firms. In situations where such a contribution occurs, we analyze how different types of activities in the production process of the services, such as sequential or reciprocal task activities, influence the development of different types of capabilities. We study five cases of offshore-outsourced knowledge-intensive business services that are distinguished according to their reciprocal or sequential task activities in their production process. We find that clients influence the development of human capital capabilities and management capabilities in reciprocally produced services, while in sequentially produced services clients influence the development of organizational capital capabilities and management capital capabilities.
International Nuclear Information System (INIS)
Mallozzi, P.J.; Epstein, H.M.
1985-01-01
This invention provides an apparatus for providing x-rays to an object that may be in an ordinary environment such as air at approximately atmospheric pressure. The apparatus comprises: means (typically a laser beam) for directing energy onto a target to produce x-rays of a selected spectrum and intensity at the target; a fluid-tight first enclosure around the target; means for maintaining the pressure in the first enclosure substantially below atmospheric pressure; a fluid-tight second enclosure adjoining the first enclosure, the common wall portion having an opening large enough to permit x-rays to pass through but small enough to allow the pressure-reducing means to evacuate gas from the first enclosure at least as fast as it enters through the opening; the second enclosure filled with a gas that is highly transparent to x-rays; and the wall of the second enclosure toward which the x-rays travel having a portion that is highly transparent to x-rays (usually a beryllium or plastic foil), so that the object to which the x-rays are to be provided may be located outside the second enclosure and adjacent thereto, and thus receive the x-rays substantially unimpeded by air or other intervening matter. The apparatus is particularly suited to obtaining EXAFS (extended x-ray absorption fine structure) data on a material.
Why healthcare providers merge.
Postma, Jeroen; Roos, Anne-Fleur
2016-04-01
In many OECD countries, healthcare sectors have become increasingly concentrated as a result of mergers. However, detailed empirical insight into why healthcare providers merge is lacking. Also, we know little about the influence of national healthcare policies on mergers. We fill this gap in the literature by conducting a survey study on mergers among 848 Dutch healthcare executives, of which 35% responded (resulting in a study sample of 239 executives). A total of 65% of the respondents were involved in at least one merger between 2005 and 2012. During this period, Dutch healthcare providers faced a number of policy changes, including increasing competition, more pressure from purchasers, growing financial risks, de-institutionalisation of long-term care and decentralisation of healthcare services to municipalities. Our empirical study shows that healthcare providers predominantly merge to improve the provision of healthcare services and to strengthen their market position. Efficiency and financial considerations are also important drivers of merger activity in healthcare. We find that motives for merger are related to changes in health policies, in particular to the increasing pressure from competitors, insurers and municipalities.
Mojola, Sanyu A
2014-01-01
This paper draws on ethnographic and interview based fieldwork to explore accounts of intimate relationships between widowed women and poor young men that emerged in the wake of economic crisis and a devastating HIV epidemic among the Luo ethnic group in Western Kenya. I show how the cooptation of widow inheritance practices in the wake of an overwhelming number of widows as well as economic crisis resulted in widows becoming providing women and poor young men becoming kept men. I illustrate how widows in this setting, by performing a set of practices central to what it meant to be a man in this society – pursuing and providing for their partners - were effectively doing masculinity. I will also show how young men, rather than being feminized by being kept, deployed other sets of practices to prove their masculinity and live in a manner congruent with cultural ideals. I argue that ultimately, women’s practice of masculinity in large part seemed to serve patriarchal ends. It not only facilitated the fulfillment of patriarchal expectations of femininity – to being inherited – but also served, in the end, to provide a material base for young men’s deployment of legitimizing and culturally valued sets of masculine practice. PMID:25489121
Ant-inspired density estimation via random walks.
Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A
2017-10-03
Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
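The encounter-rate estimator analyzed above can be simulated directly: a single walker on a torus counts how many agents share its cell at each step, and the average count per step estimates the density. This is an illustrative simulation under assumed parameters (grid size, agent count, walk length), not the paper's formal model or proof setup.

```python
import numpy as np

rng = np.random.default_rng(42)
g = 50                        # side length of the torus grid
n_agents = 500                # anonymous agents scattered on the grid
true_density = n_agents / g**2

# Scatter the agents uniformly at random and tabulate per-cell counts.
agents = rng.integers(0, g, size=(n_agents, 2))
counts = np.zeros((g, g), dtype=int)
np.add.at(counts, (agents[:, 0], agents[:, 1]), 1)

# One walker performs a random walk; its density estimate is the
# average number of agents it "bumps into" (shares a cell with) per step.
steps = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])
pos = rng.integers(0, g, size=2)
t = 50_000
encounters = 0
for _ in range(t):
    pos = (pos + steps[rng.integers(4)]) % g
    encounters += counts[pos[0], pos[1]]

estimate = encounters / t
```

Because nearby steps revisit the same cells, consecutive counts are correlated; the paper's contribution is bounding how little this correlation costs relative to independent sampling of grid locations.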
A global algorithm for estimating Absolute Salinity
Directory of Open Access Journals (Sweden)
T. J. McDougall
2012-12-01
The International Thermodynamic Equation of Seawater – 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity.
When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg^{−1} in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean.
To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly, however, are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).
Using Landsat Vegetation Indices to Estimate Impervious Surface Fractions for European Cities
DEFF Research Database (Denmark)
Kaspersen, Per Skougaard; Fensholt, Rasmus; Drews, Martin
2015-01-01
This study examines the accuracy and applicability of vegetation indices (VI) from Landsat imagery to estimate IS fractions for European cities. The accuracy of three different measures of vegetation cover is examined for eight urban areas at different locations in Europe. The Normalized Difference Vegetation Index (NDVI) and Soil Adjusted Vegetation Index (SAVI) are converted to IS fractions using a regression modelling approach. Also, NDVI is used to estimate fractional vegetation cover (FR), and consequently IS fractions. All three indices provide fairly accurate estimates (MAEs ≈ 10%, MBEs …
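The indices themselves are simple band arithmetic. A minimal sketch follows; the reflectance values are invented, and the fractional-cover endpoint NDVI values are scene-dependent assumptions rather than the study's calibrated parameters.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index with soil-brightness factor L."""
    return (nir - red) * (1 + L) / (nir + red + L)

def frac_cover(ndvi_val, ndvi_soil=0.05, ndvi_veg=0.8):
    """Fractional vegetation cover via the common scaled-NDVI form
    FR = ((NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil))^2.
    The endpoint values here are assumed, not calibrated."""
    x = np.clip((ndvi_val - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0)
    return x ** 2

# Toy reflectance fractions for two pixels: dense vegetation and a
# bright impervious surface.
nir = np.array([0.50, 0.30])
red = np.array([0.08, 0.25])
v_ndvi = ndvi(nir, red)
v_savi = savi(nir, red)
fr = frac_cover(v_ndvi)
```

An IS fraction would then be modeled from these index values by regression against reference impervious-surface maps, as described above.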
Savaux, Vincent
2014-01-01
This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square error criterion, it performs an accurate detection of a user in a frequency band by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. Organized into three chapters, the first chapter provides the background against which the system model is presented.
Mizell, Carolyn; Malone, Linda
2007-01-01
It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with any high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
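The value of conveying uncertainty rather than a single point estimate can be illustrated with a generic Monte Carlo sketch. This is not the paper's simulation model or Software Engineering Laboratory data: the phase breakdown, effort ranges, and triangular distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 100_000

# Hypothetical three-phase software project; effort per phase in
# person-months, modeled as triangular(min, most likely, max).
phases = [
    (10, 14, 25),   # requirements and design
    (20, 30, 55),   # implementation
    (12, 18, 40),   # test and integration
]
total = sum(rng.triangular(lo, mode, hi, n_trials)
            for lo, mode, hi in phases)

# Naive point estimate: sum of the "most likely" values.
point_estimate = sum(mode for _, mode, _ in phases)
p50, p80 = np.percentile(total, [50, 80])
```

Because the per-phase distributions are right-skewed, the simulated median exceeds the naive sum of most-likely values; reporting percentiles (p50, p80) conveys the level of uncertainty an initial estimate actually carries.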
New estimates for human lung dimensions
International Nuclear Information System (INIS)
Kennedy, Christine; Sidavasan, Sivalal; Kramer, Gary
2008-01-01
The currently used lung dimensions in dosimetry were originally estimated in the 1940s from Army recruits. This study provides new estimates of lung dimensions based on images acquired from a sample of the general population (varying in age and sex). Building accurate models, called phantoms, of the human lung requires that the spatial dimensions (length, width, and depth) be quantified, in addition to volume. Errors in dose estimates may result from improperly sized lungs, as the counting efficiency of externally mounted detectors (e.g., in a lung counter) depends on the position of internally deposited radioactive material (i.e., the size of the lung). This study investigates the spatial dimensions of human lungs. Lung phantoms have previously been made in one of two sizes. The Lawrence Livermore National Laboratory Torso Phantom (LLNL) has deep, short lungs whose dimensions do not comply well with the data published in Report 23 (Reference Man) issued by the International Commission on Radiological Protection (ICRP). The Japanese Atomic Energy Research Institute Torso Phantom (JAERI) has longer, shallower lungs that also deviate from the ICRP values. However, careful examination of the ICRP recommended values shows that they are soft; in fact, they have been dropped from the ICRP's Report 89, which updates Report 23. Literature surveys have revealed a wealth of information on lung volume, but very little data on the spatial dimensions of human lungs. Better lung phantoms need to be constructed to more accurately represent a person, so that dose estimates may be quantified more accurately in view of the new, lower dose limits for occupationally exposed workers and the general public. Retrospective chest images of 60 patients who underwent chest-lung imaging as part of occupational screening of healthy persons for lung disease were chosen (the chosen normal lung images represent the general population). Ages, gender and weight of the
Accurate characterization of organic thin film transistors in the presence of gate leakage current
Directory of Open Access Journals (Sweden)
Vinay K. Singh
2011-12-01
The presence of gate leakage through the polymer dielectric in organic thin film transistors (OTFTs) prevents accurate estimation of transistor characteristics, especially in the subthreshold regime. To mitigate the impact of gate leakage on transfer characteristics and allow accurate estimation of mobility, subthreshold slope and on/off current ratio, a measurement technique involving a simultaneous sweep of both gate and drain voltages is proposed. Two-dimensional numerical device simulation is used to illustrate the validity of the proposed technique. Experimental results obtained with a pentacene/PMMA OTFT with significant gate leakage show that a low on/off current ratio of ∼10² and a subthreshold slope of 10 V/decade are obtained using the conventional measurement technique. The proposed technique reveals that the channel on/off current ratio is more than two orders of magnitude higher, at ∼10⁴, and the subthreshold slope is 4.5 V/decade.
CTER—Rapid estimation of CTF parameters with error assessment
Energy Technology Data Exchange (ETDEWEB)
Penczek, Pawel A., E-mail: Pawel.A.Penczek@uth.tmc.edu [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Fang, Jia [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Li, Xueming; Cheng, Yifan [The Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, CA 94158 (United States); Loerke, Justus; Spahn, Christian M.T. [Institut für Medizinische Physik und Biophysik, Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin (Germany)
2014-05-01
In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. - Highlights: • We describe methodology for estimation of CTF parameters with error assessment. • Error estimates provide means for automated elimination of inferior micrographs. • High computational efficiency allows real-time monitoring of EM data quality. • Accurate CTF estimation yields structure of the 80S human ribosome at 3.85 Å.
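For readers unfamiliar with the quantity being fitted, a minimal 1-D CTF evaluation can be sketched as follows. This is a textbook-style model, not CTER's implementation: astigmatism and envelope functions are omitted, and the sign convention, wavelength approximation, and default optical parameters are assumptions.

```python
import numpy as np

def ctf(s, defocus_um=1.5, cs_mm=2.0, kv=300.0, amp_contrast=0.07):
    """Rotationally averaged CTF, no astigmatism or envelope.
    s: spatial frequency in 1/Angstrom; underfocus is positive."""
    # Relativistic electron wavelength in Angstrom (standard approximation).
    v = kv * 1e3
    lam = 12.2639 / np.sqrt(v * (1.0 + 0.97845e-6 * v))
    dz = defocus_um * 1e4          # defocus in Angstrom
    cs = cs_mm * 1e7               # spherical aberration in Angstrom
    # Phase shift: defocus term plus spherical-aberration term.
    gamma = -np.pi * lam * dz * s**2 + 0.5 * np.pi * cs * lam**3 * s**4
    return -(np.sqrt(1.0 - amp_contrast**2) * np.sin(gamma)
             + amp_contrast * np.cos(gamma))

curve = ctf(np.linspace(0.0, 0.3, 300))
```

Defocus estimation amounts to fitting the oscillation pattern of such a curve to the micrograph's power spectrum; the error assessment described above bootstraps that fit.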
Providing Compassion through Flow
Directory of Open Access Journals (Sweden)
Lydia Royeen
2015-07-01
Meg Kral, MS, OTR/L, CLT, is the cover artist for the Summer 2015 issue of The Open Journal of Occupational Therapy. Her untitled piece of art is an oil painting and a re-creation of a photograph taken while on vacation. Meg is currently supervisor of outpatient services at Rush University Medical Center. She is lymphedema certified and has a specific interest in breast cancer lymphedema. Art and occupational therapy serve similar purposes for Meg: both provide a sense of flow. She values the outcomes, whether it is a piece of art or improved functional status.
Achieving target voriconazole concentrations more accurately in children and adolescents.
Neely, Michael; Margol, Ashley; Fu, Xiaowei; van Guilder, Michael; Bayard, David; Schumitzky, Alan; Orbach, Regina; Liu, Siyu; Louie, Stan; Hope, William
2015-01-01
Despite the documented benefit of voriconazole therapeutic drug monitoring, nonlinear pharmacokinetics make the timing of steady-state trough sampling and appropriate dose adjustments unpredictable by conventional methods. We developed a nonparametric population model with data from 141 previously richly sampled children and adults. We then used it in our multiple-model Bayesian adaptive control algorithm to predict measured concentrations and doses in a separate cohort of 33 pediatric patients aged 8 months to 17 years who were receiving voriconazole and enrolled in a pharmacokinetic study. Using all available samples to estimate the individual Bayesian posterior parameter values, the median percent prediction bias relative to a measured target trough concentration in the patients was 1.1% (interquartile range, -17.1 to 10%). Compared to the actual dose that resulted in the target concentration, the percent bias of the predicted dose was -0.7% (interquartile range, -7 to 20%). Using only trough concentrations to generate the Bayesian posterior parameter values, the target bias was 6.4% (interquartile range, -1.4 to 14.7%; P = 0.16 versus the full posterior parameter value) and the dose bias was -6.7% (interquartile range, -18.7 to 2.4%; P = 0.15). Use of a sample collected at an optimal time of 4 h after a dose, in addition to the trough concentration, resulted in a nonsignificantly improved target bias of 3.8% (interquartile range, -13.1 to 18%; P = 0.32) and a dose bias of -3.5% (interquartile range, -18 to 14%; P = 0.33). With the nonparametric population model and trough concentrations, our control algorithm can accurately manage voriconazole therapy in children independently of steady-state conditions, and it is generalizable to any drug with a nonparametric pharmacokinetic model. (This study has been registered at ClinicalTrials.gov under registration no. NCT01976078.). Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Lutnaes, Ola B; Teale, Andrew M; Helgaker, Trygve; Tozer, David J; Ruud, Kenneth; Gauss, Jürgen
2009-10-14
An accurate set of benchmark rotational g tensors and magnetizabilities are calculated using coupled-cluster singles-doubles (CCSD) theory and coupled-cluster single-doubles-perturbative-triples [CCSD(T)] theory, in a variety of basis sets consisting of (rotational) London atomic orbitals. The accuracy of the results obtained is established for the rotational g tensors by careful comparison with experimental data, taking into account zero-point vibrational corrections. After an analysis of the basis sets employed, extrapolation techniques are used to provide estimates of the basis-set-limit quantities, thereby establishing an accurate benchmark data set. The utility of the data set is demonstrated by examining a wide variety of density functionals for the calculation of these properties. None of the density-functional methods are competitive with the CCSD or CCSD(T) methods. The need for a careful consideration of vibrational effects is clearly illustrated. Finally, the pure coupled-cluster results are compared with the results of density-functional calculations constrained to give the same electronic density. The importance of current dependence in exchange-correlation functionals is discussed in light of this comparison.
International Nuclear Information System (INIS)
Jiang, R.
2009-01-01
It is difficult to find the optimal solution of the sequential age replacement policy for a finite-time horizon. This paper presents an accurate approximation for finding a near-optimal solution of the sequential replacement policy. The proposed approximation is computationally simple and suitable for any failure distribution. Its accuracy is illustrated by two examples. Based on the approximate solution, an approximate estimate for the total cost is derived.
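The trade-off being optimized can be illustrated with the classical infinite-horizon age replacement cost rate, which is standard background rather than the paper's finite-horizon approximation; the Weibull parameters and cost values below are illustrative.

```python
import numpy as np

def cost_rate(T, cp=1.0, cf=10.0, shape=2.0, scale=1.0, m=2000):
    """Long-run cost per unit time of replacing at age T:
    C(T) = (cp*R(T) + cf*F(T)) / integral_0^T R(t) dt,
    where cp is the planned-replacement cost, cf the failure cost,
    and R the Weibull survival function."""
    t = np.linspace(0.0, T, m)
    R = np.exp(-((t / scale) ** shape))      # survival function
    mean_cycle = np.sum(R) * (t[1] - t[0])   # Riemann approx. of cycle length
    F_T = 1.0 - R[-1]
    return (cp * R[-1] + cf * F_T) / mean_cycle

# Grid search for the cost-minimizing replacement age.
Ts = np.linspace(0.05, 3.0, 300)
costs = np.array([cost_rate(T) for T in Ts])
T_star = Ts[np.argmin(costs)]
```

Replacing too early wastes planned-replacement cost; replacing too late incurs the higher failure cost, so the curve has an interior minimum (here near T ≈ 1/3 of the characteristic life).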
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R² = 0.80, 0.874, and 0.838, respectively) using the original LiDAR data (7.32 points/m²), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
Directory of Open Access Journals (Sweden)
Chris C. Gianfagna
2015-09-01
New hydrological insights for the region: Watershed area ratio was the most important basin parameter for estimating flow at upstream sites based on downstream flow. The area ratio alone explained 93% of the variance in the slopes of relationships between upstream and downstream flows. Regression analysis indicated that flow at any upstream point can be estimated by multiplying the flow at a downstream reference gage by the watershed area ratio. This method accurately predicted upstream flows at area ratios as low as 0.005. We also observed a very strong relationship (R2 = 0.79) between area ratio and flow–flow slopes in non-nested catchments. Our results indicate that a simple flow estimation method based on watershed area ratios is justifiable, and indeed preferred, for the estimation of daily streamflow in ungaged watersheds in the Catskills region.
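As a minimal illustration of the area-ratio method described in this abstract, the sketch below scales a downstream reference flow by the drainage-area ratio. Function and variable names are illustrative, not from the study.

```python
# Minimal sketch of the watershed area-ratio method: estimated upstream flow
# equals the downstream reference-gage flow multiplied by the ratio of the
# upstream to downstream drainage areas. Names are illustrative.
def estimate_upstream_flow(q_downstream, area_upstream, area_downstream):
    """Estimate upstream daily streamflow from a downstream reference gage."""
    if area_downstream <= 0 or area_upstream <= 0:
        raise ValueError("drainage areas must be positive")
    return q_downstream * (area_upstream / area_downstream)

# Example: the downstream gage reads 200 m^3/s and the upstream site drains
# 5% of the downstream watershed (area ratio 0.05, within the study's range).
q_up = estimate_upstream_flow(200.0, area_upstream=25.0, area_downstream=500.0)
print(q_up)  # 10.0
```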
Small Area Model-Based Estimators Using Big Data Sources
Directory of Open Access Journals (Sweden)
Marchetti Stefano
2015-06-01
Full Text Available The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, have the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.
Directory of Open Access Journals (Sweden)
López-Valcarce Roberto
2004-01-01
Full Text Available We address the problem of estimating the speed of a road vehicle from its acoustic signature, recorded by a pair of omnidirectional microphones located next to the road. This choice of sensors is motivated by their nonintrusive nature as well as low installation and maintenance costs. A novel estimation technique is proposed, which is based on the maximum likelihood principle. It directly estimates car speed without any assumptions on the acoustic signal emitted by the vehicle. This has the advantages of bypassing troublesome intermediate delay estimation steps as well as eliminating the need for an accurate yet general enough acoustic traffic model. An analysis of the estimate for narrowband and broadband sources is provided and verified with computer simulations. The estimation algorithm uses a bank of modified crosscorrelators and therefore it is well suited to DSP implementation, performing well with preliminary field data.
A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation
Directory of Open Access Journals (Sweden)
Shu Cai
2016-12-01
Full Text Available Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, they have a higher spatial resolution compared to existing methods based on the ML criterion.
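For one source, the ML criterion this abstract starts from reduces to maximizing the beamformer output a(θ)ᴴR̂a(θ) over θ. The hedged NumPy sketch below uses a plain grid search where the paper uses the SOS/SDP machinery; all names and parameters are illustrative.

```python
import numpy as np

# Illustrative single-source ML DOA estimator on a uniform linear array.
# A 0.1-degree grid search stands in for the paper's SOS/SDP solution.
def ml_doa_single(snapshots, n_grid=1801, spacing=0.5):
    """snapshots: (sensors, snapshots) complex array; returns DOA in degrees."""
    m, n = snapshots.shape
    r = snapshots @ snapshots.conj().T / n              # sample covariance
    k = np.arange(m)
    best_theta, best_val = 0.0, -np.inf
    for theta in np.linspace(-90.0, 90.0, n_grid):
        a = np.exp(-2j * np.pi * spacing * k * np.sin(np.radians(theta)))
        val = np.real(a.conj() @ r @ a)                 # beamformer output
        if val > best_val:
            best_theta, best_val = theta, val
    return best_theta

# Synthetic check: one source at 20 degrees, 8 sensors, light noise.
rng = np.random.default_rng(1)
m, n, theta = 8, 200, 20.0
a = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(theta)))
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = np.outer(a, s) + 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
theta_hat = ml_doa_single(x)
```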
Directory of Open Access Journals (Sweden)
Kadhim Raheem
2015-02-01
Full Text Available This research covers different aspects of the cost-estimating process for construction work in a desert area. The inherent difficulties that accompany cost estimation of construction works in a desert environment in a developing country stem from the limited information available, resource scarcity, a low level of skilled workers, the prevailing severe weather conditions and many other factors, which do not permit a fair, reliable and accurate estimate. This study presents unit prices for estimating cost in the preliminary phase of a project. The estimates are supported by mathematical equations developed from historical data on maintenance and new construction of managerial and school projects. The research has also determined the percentage of project items in such a remote environment. Estimation equations suitable for remote areas have been formulated, and a procedure for unit price calculation is presented.
[Estimation of PMI using late postmortem phenomena on the basis of 49 cases].
Wu, Yu-Feng; Zhu, Zhi-Wei; Pan, Lian-Lian; Zhou, Jia-Li
2012-12-01
To discuss the factors influencing the use of late postmortem phenomena to estimate PMI and to provide experience for accurate estimation. Forty-nine corpses in late postmortem stages were collected in Shaoxing City, Zhuji area, from 2004 to 2011. The related factors were analyzed, including season, scene, estimated PMI, exact PMI, cause of death and the main factors affecting PMI. Of all 49 cases, 20 corpses were found outdoors, 11 indoors and 18 in water. PMI was estimated successfully in 37 cases and unsuccessfully in 12. The main factors affecting PMI were infection, poisoning, human destruction and high-voltage electric shock. In general, PMI can be correctly estimated from late postmortem phenomena. When cases involve infection, poisoning or human destruction, PMI should be estimated with comprehensive analysis.
Accurate formulas for the penalty caused by interferometric crosstalk
DEFF Research Database (Denmark)
Rasmussen, Christian Jørgen; Liu, Fenghai; Jeppesen, Palle
2000-01-01
New simple formulas for the penalty caused by interferometric crosstalk in PIN receiver systems and optically preamplified receiver systems are presented. They are more accurate than existing formulas.
A new, accurate predictive model for incident hypertension
DEFF Research Database (Denmark)
Völzke, Henry; Fung, Glenn; Ittermann, Till
2013-01-01
Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.
Accurate and Simple Calibration of DLP Projector Systems
DEFF Research Database (Denmark)
Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus
2014-01-01
does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert features coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination...
Accurate Compton scattering measurements for N{sub 2} molecules
Energy Technology Data Exchange (ETDEWEB)
Kobayashi, Kohjiro [Advanced Technology Research Center, Gunma University, 1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515 (Japan); Itou, Masayoshi; Tsuji, Naruki; Sakurai, Yoshiharu [Japan Synchrotron Radiation Research Institute (JASRI), 1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5198 (Japan); Hosoya, Tetsuo; Sakurai, Hiroshi, E-mail: sakuraih@gunma-u.ac.jp [Department of Production Science and Technology, Gunma University, 29-1 Hon-cho, Ota, Gunma 373-0057 (Japan)
2011-06-14
The accurate Compton profiles of N{sub 2} gas were measured using 121.7 keV synchrotron x-rays. The present accurate measurement proves the better agreement of the CI (configuration interaction) calculation than the Hartree-Fock calculation and suggests the importance of multi-excitation in the CI calculations for the accuracy of wavefunctions in ground states.
Energy providers: customer expectations
International Nuclear Information System (INIS)
Pridham, N.F.
1997-01-01
The deregulation of the gas and electric power industries, and how it will impact on customer service and pricing rates was discussed. This paper described the present situation, reviewed core competencies, and outlined future expectations. The bottom line is that major energy consumers are very conscious of energy costs and go to great lengths to keep them under control. At the same time, solutions proposed to reduce energy costs must benefit all classes of consumers, be they industrial, commercial, institutional or residential. Deregulation and competition at an accelerated pace is the most likely answer. This may be forced by external forces such as foreign energy providers who are eager to enter the Canadian energy market. It is also likely that the competition and convergence between gas and electricity is just the beginning, and may well be overshadowed by other deregulated industries as they determine their core competencies.
Farmann, Alexander; Waag, Wladislaw; Marongiu, Andrea; Sauer, Dirk Uwe
2015-05-01
This work provides an overview of available methods and algorithms for on-board capacity estimation of lithium-ion batteries. An accurate state estimation for battery management systems in electric vehicles and hybrid electric vehicles is becoming more essential due to the increasing attention paid to safety and lifetime issues. Different approaches for the estimation of State-of-Charge, State-of-Health and State-of-Function are discussed and analyzed by many authors and researchers in the past. On-board estimation of capacity in large lithium-ion battery packs is definitely one of the most crucial challenges of battery monitoring in the aforementioned vehicles. This is mostly due to high dynamic operation and conditions far from those used in laboratory environments as well as the large variation in aging behavior of each cell in the battery pack. Accurate capacity estimation allows an accurate driving range prediction and accurate calculation of a battery's maximum energy storage capability in a vehicle. At the same time it acts as an indicator for battery State-of-Health and Remaining Useful Lifetime estimation.
Chen, Chen Hsiu; Kuo, Su Ching; Tang, Siew Tzuh
2017-05-01
No systematic meta-analysis is available on the prevalence of cancer patients' accurate prognostic awareness and differences in accurate prognostic awareness by publication year, region, assessment method, and service received. To examine the prevalence of advanced/terminal cancer patients' accurate prognostic awareness and differences in accurate prognostic awareness by publication year, region, assessment method, and service received. Systematic review and meta-analysis. MEDLINE, Embase, The Cochrane Library, CINAHL, and PsycINFO were systematically searched on accurate prognostic awareness in adult patients with advanced/terminal cancer (1990-2014). Pooled prevalences were calculated for accurate prognostic awareness by a random-effects model. Differences in weighted estimates of accurate prognostic awareness were compared by meta-regression. In total, 34 articles were retrieved for systematic review and meta-analysis. At best, only about half of advanced/terminal cancer patients accurately understood their prognosis (49.1%; 95% confidence interval: 42.7%-55.5%; range: 5.4%-85.7%). Accurate prognostic awareness was independent of service received and publication year, but highest in Australia, followed by East Asia, North America, and southern Europe and the United Kingdom (67.7%, 60.7%, 52.8%, and 36.0%, respectively; p = 0.019). Accurate prognostic awareness was higher by clinician assessment than by patient report (63.2% vs 44.5%, p cancer patients accurately understood their prognosis, with significant variations by region and assessment method. Healthcare professionals should thoroughly assess advanced/terminal cancer patients' preferences for prognostic information and engage them in prognostic discussion early in the cancer trajectory, thus facilitating their accurate prognostic awareness and the quality of end-of-life care decision-making.
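The pooled-prevalence computation behind such a meta-analysis can be sketched as below: a DerSimonian-Laird random-effects pool of per-study proportions. The study counts are hypothetical, the code is applied to raw proportions for simplicity, and a real analysis would typically transform proportions (e.g. logit or Freeman-Tukey) first.

```python
# Hedged sketch of a DerSimonian-Laird random-effects pooled prevalence,
# as used when combining per-study proportions in a meta-analysis.
# Study counts below are made up for illustration.
def pooled_prevalence(events, totals):
    p = [e / n for e, n in zip(events, totals)]
    var = [max(pi * (1.0 - pi) / n, 1e-8) for pi, n in zip(p, totals)]
    w = [1.0 / v for v in var]                                  # fixed-effect weights
    p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max((q - (len(p) - 1)) / c, 0.0) if c > 0 else 0.0   # heterogeneity
    w_star = [1.0 / (v + tau2) for v in var]                    # random-effects weights
    return sum(wi * pi for wi, pi in zip(w_star, p)) / sum(w_star)

# Three hypothetical studies: 30/100, 55/110 and 40/80 patients with
# accurate prognostic awareness.
pooled = pooled_prevalence([30, 55, 40], [100, 110, 80])
```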
Jones, Reese E.; Mandadapu, Kranthi K.
2012-04-01
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
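The core Green-Kubo construction described above (a transport coefficient as the time integral of a flux autocorrelation function, averaged across an ensemble of replicas) can be sketched as follows. The material-specific prefactor (e.g. V/(k_B T^2) for thermal conductivity) is omitted and the flux data are synthetic, so only the mechanics are shown.

```python
import numpy as np

# Minimal sketch of a replica-averaged Green-Kubo estimate: the transport
# coefficient is proportional to the time integral of the flux
# autocorrelation function (ACF); prefactor omitted, data synthetic.
def green_kubo_coefficient(flux_replicas, dt, max_lag=None):
    """flux_replicas: (n_replicas, n_steps) array of a flux time series."""
    n_rep, n = flux_replicas.shape
    max_lag = max_lag or n // 2
    acf = np.empty(max_lag)
    for lag in range(max_lag):
        # average the lagged products over time AND over replicas
        acf[lag] = (flux_replicas[:, :n - lag] * flux_replicas[:, lag:]).mean()
    return acf.sum() * dt                      # Riemann-sum time integral

rng = np.random.default_rng(0)
# Unit-variance white-noise flux: the ACF is ~1 at lag 0 and ~0 elsewhere,
# so the integral should come out near 1.0 (times dt), up to sampling noise.
coeff = green_kubo_coefficient(rng.standard_normal((8, 4000)), dt=1.0)
```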
A practical model for pressure probe system response estimation (with review of existing models)
Hall, B. F.; Povey, T.
2018-04-01
The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
Energy Technology Data Exchange (ETDEWEB)
Jung, Hannes; /DESY; De Roeck, Albert; /CERN; Bartels, Jochen; /Hamburg U., Inst. Theor. Phys. II; Behnke, Olaf; Blumlein, Johannes; /DESY; Brodsky, Stanley; /SLAC /Durham U., IPPP; Cooper-Sarkar, Amanda; /Oxford U.; Deak, Michal; /DESY; Devenish, Robin; /Oxford U.; Diehl, Markus; /DESY; Gehrmann, Thomas; /Zurich U.; Grindhammer, Guenter; /Munich, Max Planck Inst.; Gustafson, Gosta; /CERN /Lund U., Dept. Theor. Phys.; Khoze, Valery; /Durham U., IPPP; Knutsson, Albert; /DESY; Klein, Max; /Liverpool U.; Krauss, Frank; /Durham U., IPPP; Kutak, Krzysztof; /DESY; Laenen, Eric; /NIKHEF, Amsterdam; Lonnblad, Leif; /Lund U., Dept. Theor. Phys.; Motyka, Leszek; /Hamburg U., Inst. Theor. Phys. II /Birmingham U. /Southern Methodist U. /DESY /Piemonte Orientale U., Novara /CERN /Paris, LPTHE /Hamburg U. /Penn State U.
2011-11-10
More than 100 people participated in a discussion session at the DIS08 workshop on the topic What HERA may provide. A summary of the discussion with a structured outlook and list of desirable measurements and theory calculations is given. The HERA accelerator and the HERA experiments H1, HERMES and ZEUS stopped running at the end of June 2007. This was after 15 years of very successful operation since the first collisions in 1992. A total luminosity of {approx} 500 pb{sup -1} has been accumulated by each of the collider experiments H1 and ZEUS. During the years the increasingly better understood and upgraded detectors and HERA accelerator have contributed significantly to this success. The physics program remains in full swing and plenty of new results were presented at DIS08 which are approaching the anticipated final precision, fulfilling and exceeding the physics plans and the projections of the upgrade program. Most of the analyses presented at DIS08 were still based on the so-called HERA I data sample, i.e. data taken until 2000, before the shutdown for the luminosity upgrade. This sample has an integrated luminosity of {approx} 100 pb{sup -1}, and the four times larger statistics sample from HERA II is still in the process of being analyzed.
Accurate reconstruction of hyperspectral images from compressive sensing measurements
Greer, John B.; Flake, J. C.
2013-05-01
The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher,2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al.,2008] and Dual Disperser (CASSI-DD) [Gehm et al.,2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.
Time scale controversy: Accurate orbital calibration of the early Paleogene
Roehl, U.; Westerhold, T.; Laskar, J.
2012-12-01
Timing is crucial to understanding the causes and consequences of events in Earth history. The calibration of geological time relies heavily on the accuracy of radioisotopic and astronomical dating. Uncertainties in the computations of Earth's orbital parameters and in radioisotopic dating have hampered the construction of a reliable astronomically calibrated time scale beyond 40 Ma. Attempts to construct a robust astronomically tuned time scale for the early Paleogene by integrating radioisotopic and astronomical dating are only partially consistent. Here, using the new La2010 and La2011 orbital solutions, we present the first accurate astronomically calibrated time scale for the early Paleogene (47-65 Ma) uniquely based on astronomical tuning and thus independent of the radioisotopic determination of the Fish Canyon standard. Comparison with geological data confirms the stability of the new La2011 solution back to 54 Ma. Subsequent anchoring of floating chronologies to the La2011 solution using the very long eccentricity nodes provides an absolute age of 55.530 ± 0.05 Ma for the onset of the Paleocene/Eocene Thermal Maximum (PETM), 54.850 ± 0.05 Ma for the early Eocene ash -17, and 65.250 ± 0.06 Ma for the K/Pg boundary. The new astrochronology presented here indicates that the intercalibration and synchronization of U/Pb and 40Ar/39Ar radioisotopic geochronology is much more challenging than previously thought.
The place of highly accurate methods by RNAA in metrology
International Nuclear Information System (INIS)
Dybczynski, R.; Danko, B.; Polkowska-Motrenko, H.; Samczynski, Z.
2006-01-01
With the introduction of physical metrological concepts to chemical analysis, which require that the result be accompanied by an uncertainty statement written down in terms of SI units, several researchers started to consider ID-MS as the only method fulfilling this requirement. However, recent publications revealed that in certain cases even expert laboratories using ID-MS and analyzing the same material produced results whose uncertainty statements did not overlap, which theoretically should not have taken place. This shows that no monopoly is good in science and it would be desirable to widen the set of methods acknowledged as primary in inorganic trace analysis. Moreover, ID-MS cannot be used for monoisotopic elements. The need to search for other methods of similar metrological quality to ID-MS seems obvious. In this paper, our long-time experience in devising highly accurate ('definitive') methods by RNAA for the determination of selected trace elements in biological materials is reviewed. The general idea of definitive methods, based on the combination of neutron activation with highly selective and quantitative isolation of the indicator radionuclide by column chromatography followed by gamma-spectrometric measurement, is recalled and illustrated by examples of the performance of such methods when determining Cd, Co, Mo, etc. It is demonstrated that such methods are able to provide very reliable results with very low levels of uncertainty traceable to SI units.
Concurrent and Accurate Short Read Mapping on Multicore Processors.
Martínez, Héctor; Tárraga, Joaquín; Medina, Ignacio; Barrachina, Sergio; Castillo, Maribel; Dopazo, Joaquín; Quintana-Ortí, Enrique S
2015-01-01
We introduce a parallel aligner with a work-flow organization for fast and accurate mapping of RNA sequences on servers equipped with multicore processors. Our software, HPG Aligner SA (an open-source application available at http://www.opencb.org), exploits a suffix array to rapidly map a large fraction of the RNA fragments (reads), as well as leveraging the accuracy of the Smith-Waterman algorithm to deal with conflictive reads. The aligner is enhanced with a careful strategy to detect splice junctions based on an adaptive division of RNA reads into small segments (or seeds), which are then mapped onto a number of candidate alignment locations, providing crucial information for the successful alignment of the complete reads. The experimental results on a platform with Intel multicore technology report the parallel performance of HPG Aligner SA, on RNA reads of 100-400 nucleotides, which excels in execution time/sensitivity compared to state-of-the-art aligners such as TopHat 2+Bowtie 2, MapSplice, and STAR.
Accurate measurement of RF exposure from emerging wireless communication systems
International Nuclear Information System (INIS)
Letertre, Thierry; Toffano, Zeno; Monebhurrun, Vikass
2013-01-01
Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), or for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
HIPPI: highly accurate protein family classification with ensembles of HMMs
Directory of Open Access Journals (Sweden)
Nam-phuong Nguyen
2016-11-01
Full Text Available Abstract Background Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. Results We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. Conclusion HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile hidden Markov models can better represent multiple sequence alignments than a single profile hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp.
Toward accurate and fast iris segmentation for iris biometrics.
He, Zhaofeng; Tan, Tieniu; Sun, Zhenan; Qiu, Xianchao
2009-09-01
Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.
Can numerical simulations accurately predict hydrodynamic instabilities in liquid films?
Denner, Fabian; Charogiannis, Alexandros; Pradas, Marc; van Wachem, Berend G. M.; Markides, Christos N.; Kalliadasis, Serafim
2014-11-01
Understanding the dynamics of hydrodynamic instabilities in liquid film flows is an active field of research in fluid dynamics and non-linear science in general. Numerical simulations offer a powerful tool to study hydrodynamic instabilities in film flows and can provide deep insights into the underlying physical phenomena. However, the direct comparison of numerical results and experimental results is often hampered by several reasons. For instance, in numerical simulations the interface representation is problematic and the governing equations and boundary conditions may be oversimplified, whereas in experiments it is often difficult to extract accurate information on the fluid and its behavior, e.g. determine the fluid properties when the liquid contains particles for PIV measurements. In this contribution we present the latest results of our on-going, extensive study on hydrodynamic instabilities in liquid film flows, which includes direct numerical simulations, low-dimensional modelling as well as experiments. The major focus is on wave regimes, wave height and wave celerity as a function of Reynolds number and forcing frequency of a falling liquid film. Specific attention is paid to the differences in numerical and experimental results and the reasons for these differences. The authors are grateful to the EPSRC for their financial support (Grant EP/K008595/1).
Structural versus Matching Estimation : Transmission Mechanisms in Armenia
Poghosyan, K.; Boldea, O.
2011-01-01
Opting for structural or reduced form estimation is often hard to justify if one wants to both learn about the structure of the economy and obtain accurate predictions. In this paper, we show that using both structural and reduced form estimates simultaneously can lead to more accurate policy
Structural versus matching estimation : Transmission mechanisms in Armenia
Poghosyan, K.; Boldea, O.
2013-01-01
Opting for structural or reduced form estimation is often hard to justify if one wants to both learn about the structure of the economy and obtain accurate predictions. In this paper, we show that using both structural and reduced form estimates simultaneously can lead to more accurate policy
Impact of microbial count distributions on human health risk estimates
DEFF Research Database (Denmark)
Ribeiro Duarte, Ana Sofia; Nauta, Maarten
2015-01-01
Quantitative microbiological risk assessment (QMRA) is influenced by the choice of the probability distribution used to describe pathogen concentrations, as this may eventually have a large effect on the distribution of doses at exposure. When fitting a probability distribution to microbial enumeration data, several factors may have an impact on the accuracy of that fit. Analysis of the best statistical fits of different distributions alone does not provide a clear indication of the impact in terms of risk estimates. Thus, in this study we focus on the impact of fitting microbial distributions on risk estimates, at two different concentration scenarios and at a range of prevalence levels. By using five different parametric distributions, we investigate whether different characteristics of a good fit are crucial for an accurate risk estimate. Among the factors studied are the importance...
Recommendations for the tuning of rare event probability estimators
International Nuclear Information System (INIS)
Balesdent, Mathieu; Morio, Jérôme; Marzat, Julien
2015-01-01
Being able to accurately estimate rare event probabilities is a challenging issue for improving the reliability of complex systems. Several powerful methods such as importance sampling, importance splitting or extreme value theory have been proposed to reduce the computational cost and improve the accuracy of extreme probability estimation. However, the performance of these methods is highly correlated with the choice of tuning parameters, which are very difficult to determine. In order to highlight recommended tunings for such methods, an empirical campaign of automatic tuning on a set of representative test cases is conducted for splitting methods. This yields a reduced set of tuning parameters that may lead to reliable estimation of rare event probabilities for various problems. The relevance of the obtained results is assessed on a series of real-world aerospace problems.
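Of the methods the abstract names, importance sampling is the easiest to sketch and already illustrates the tuning-sensitivity problem: the estimator below has one tuning parameter (the tilt of the proposal), and a poor choice of it inflates the variance dramatically. This is a generic Gaussian-tail example, not the splitting campaign the paper actually studies.

```python
import math
import random

def tail_prob_is(a, n=100_000, seed=0):
    """Importance-sampling estimate of the rare event P(Z > a), Z ~ N(0,1),
    using an exponentially tilted proposal N(a, 1). The shift `a` is the
    tuning parameter; here it is set to the (known-optimal) threshold."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(a, 1.0)                 # draw from the shifted proposal
        if x > a:
            # likelihood ratio phi(x) / phi(x - a) = exp(-a*x + a^2/2)
            total += math.exp(-a * x + 0.5 * a * a)
    return total / n

est = tail_prob_is(4.0)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2))   # ~3.17e-5
print(f"IS estimate: {est:.3e}, exact: {exact:.3e}")
```

Crude Monte Carlo with the same budget would see the event only about three times on average; the tilted estimator achieves sub-percent relative error, but only because the tuning parameter was well chosen, which is precisely the difficulty the paper addresses.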
Reliability of Bluetooth Technology for Travel Time Estimation
DEFF Research Database (Denmark)
Araghi, Bahar Namaki; Olesen, Jonas Hammershøj; Krishnan, Rajesh
2015-01-01
… However, their corresponding impacts on accuracy and reliability of estimated travel time have not been evaluated. In this study, a controlled field experiment is conducted to collect both Bluetooth and GPS data for 1000 trips to be used as the basis for evaluation. Data obtained by GPS logger is used … to calculate actual travel time, referred to as ground truth, and to geo-code the Bluetooth detection events. In this setting, reliability is defined as the percentage of devices captured per trip during the experiment. It is found that, on average, Bluetooth-enabled devices will be detected 80% of the time … -range antennae detect Bluetooth-enabled devices in a location closer to the sensor, thus providing a more accurate travel time estimate. However, the smaller the size of the detection zone, the lower the penetration rate, which could itself influence the accuracy of estimates. Therefore, there has to be a trade-off …
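The basic Bluetooth travel-time pipeline the abstract evaluates, matching device detections between an upstream and a downstream sensor and differencing timestamps, can be sketched as follows. The device IDs and timestamps are made-up illustrative data, and real deployments must additionally handle re-detections, hashing of MAC addresses, and outlier filtering.

```python
# Hypothetical detection logs: (device_id, detection_timestamp_seconds).
upstream = [("aa", 0.0), ("bb", 5.0), ("cc", 9.0), ("dd", 12.0)]
downstream = [("aa", 60.0), ("cc", 75.0), ("dd", 80.0)]

def travel_times(up, down):
    """Match devices seen at both sensors; travel time = timestamp difference
    between first upstream and first downstream detection."""
    first_seen = {}
    for dev, t in up:
        first_seen.setdefault(dev, t)
    times = {}
    for dev, t in down:
        if dev in first_seen and dev not in times:
            times[dev] = t - first_seen[dev]
    return times

tt = travel_times(upstream, downstream)
# Share of upstream devices re-detected downstream (a simple reliability proxy).
detection_rate = len(tt) / len({d for d, _ in upstream})
print(tt, detection_rate)
```

Device "bb" is never re-detected downstream, so it contributes to the denominator of the detection rate but yields no travel-time sample, which is exactly the penetration-rate effect the abstract discusses.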
A spectroscopic transfer standard for accurate atmospheric CO measurements
Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker
2016-04-01
Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and indirectly enhances global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard would provide results that are directly traceable to the SI, can be very useful for the calibration of devices operating in the field, and could complement classical gas standards in the field, where calibration gas mixtures in bottles are often not accurate, available or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor, capable of performing absolute ("calibration-free") CO concentration measurements and of being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by WMO for CO concentration measurements and to ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss the uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) principles, and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values. Acknowledgement: Parts of this work have been …
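The "calibration-free" dTDLAS evaluation rests on converting an integrated line absorbance into an amount fraction via the line intensity, path length and the ideal gas law. The sketch below shows that conversion in its simplest form; the numerical values are hypothetical placeholders, not the paper's measured line data, and real evaluations carry full GUM uncertainty budgets on every input.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact, SI 2019)

def amount_fraction(area, S, L, p, T):
    """Ideal-gas dTDLAS amount fraction from an integrated absorbance
    `area` (cm^-1), line intensity S (cm^-1 / (molecule cm^-2)),
    absorption path L (cm), pressure p (Pa), temperature T (K)."""
    n_total = p / (K_B * T) * 1e-6   # total number density, molecules/cm^3
    column = area / S                # absorber column density, molecules/cm^2
    return column / (L * n_total)    # dimensionless amount fraction

# Hypothetical illustrative numbers for a CO line at ambient conditions.
x_co = amount_fraction(area=9.9e-4, S=2.0e-19, L=100.0, p=101325.0, T=296.0)
print(f"CO amount fraction ≈ {x_co * 1e6:.2f} ppm")  # ~2 ppm with these inputs
```

Because every quantity on the right-hand side is either an SI-traceable measurement or a traceable line intensity, the result needs no calibration gas, which is the essence of the transfer-standard idea.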
Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics
Directory of Open Access Journals (Sweden)
Cecilia Noecker
2015-03-01
Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the "standard" mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode. Second, this mathematical model was not able to accurately describe the change in the experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral …
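The two-population extension of the "standard" model described above corresponds to adding an eclipse compartment E (infected, not yet producing virus) between target cells T and productively infected cells I. A minimal deterministic sketch with forward-Euler integration follows; all parameter values are illustrative, not the fitted SIV values, and the paper's extinction analysis additionally requires a stochastic formulation.

```python
def simulate(beta, k, delta, p, c, T0=1e6, V0=1e-3, dt=0.001, days=10.0):
    """Forward-Euler integration of the standard target-cell model with an
    eclipse compartment: T -> E (infection), E -> I (rate k), I produces
    virus V at rate p; I and V are cleared at rates delta and c."""
    T, E, I, V = T0, 0.0, 0.0, V0
    traj = []
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dE = beta * T * V - k * E
        dI = k * E - delta * I
        dV = p * I - c * V
        T += dT * dt; E += dE * dt; I += dI * dt; V += dV * dt
        traj.append(V)
    return traj

# Illustrative parameters giving R0 = beta*T0*p/(delta*c) = 10.
viral_load = simulate(beta=1e-7, k=4.0, delta=1.0, p=1000.0, c=10.0)
print(f"peak V ≈ {max(viral_load):.3g}")
```

With these values the viral load grows exponentially from a small inoculum, peaks once target cells are depleted, and then declines, the qualitative trajectory against which such models are compared to animal data.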
Inter-electrode delay estimators for electrohysterographic propagation analysis
International Nuclear Information System (INIS)
Rabotti, Chiara; Mischi, Massimo; Bergmans, Jan W M; Van Laar, Judith O E H; Oei, Guid S
2009-01-01
Premature birth is a major cause of mortality and permanent dysfunctions. Several parameters derived from single-channel electrohysterographic (EHG) signals have been considered to determine contractions leading to preterm delivery. The results are promising, but improvements are needed. As effective uterine contractions result from proper action potential propagation, in this paper we focus on the propagation properties of EHG signals, which can be predictive of preterm delivery. Two standard delay estimators, namely maximization of the cross-correlation function and spectral matching, are adapted and implemented for the assessment of inter-electrode delays of propagating EHG signals. The accuracy of these standard estimators may be hampered by poor inter-channel correlation. An improved, dedicated approach is therefore proposed: by simultaneously and adaptively estimating the volume conductor transfer function and the delay, the method improves inter-channel signal similarity during delay calculation. Furthermore, it provides delay estimates without resolution limits and is suitable for the low sampling rates appropriate for EHG recording. The three estimators were evaluated on EHG signals recorded from seven women. The dedicated approach provided more accurate estimates due to a 22% improvement of the initial average inter-channel correlation.
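The first of the standard estimators mentioned, maximization of the cross-correlation function, can be sketched in a few lines. This is a generic illustration on a synthetic sine signal with an integer-sample delay, not the authors' EHG implementation (which additionally models the volume conductor and achieves sub-sample resolution).

```python
import math

def xcorr_delay(x, y, max_lag):
    """Estimate the delay of y relative to x (in samples) as the lag that
    maximizes the discrete cross-correlation over [-max_lag, max_lag]."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i, xi in enumerate(x):
            j = i + lag
            if 0 <= j < len(y):
                s += xi * y[j]
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

x = [math.sin(0.2 * i) for i in range(100)]
y = [0.0] * 7 + x[:-7]          # y is x delayed by 7 samples
print(xcorr_delay(x, y, 20))    # → 7
```

The resolution limit the abstract refers to is visible here: the estimate is quantized to whole samples, so at the low sampling rates typical of EHG recording the quantization error alone can dominate unless a refined estimator is used.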
Development of computer program for estimating decommissioning cost - 59037
International Nuclear Information System (INIS)
Kim, Hak-Soo; Park, Jong-Kil
2012-01-01
Programs for estimating decommissioning cost have been developed for many different purposes and applications. Estimating decommissioning cost requires a large amount of data, such as unit cost factors, plant areas and their inventories, waste treatment, etc. This makes it difficult to use manual calculation or typical spreadsheet software such as Microsoft Excel. Cost estimation for the eventual decommissioning of nuclear power plants is a prerequisite for safe, timely and cost-effective decommissioning. To estimate the decommissioning cost more accurately and systematically, KHNP (Korea Hydro and Nuclear Power Co. Ltd) developed a decommissioning cost estimating computer program called 'DeCAT-Pro' (Decommissioning Cost Assessment Tool - Professional; hereinafter 'DeCAT'). This program allows users to easily assess the decommissioning cost under various decommissioning options. It also provides detailed reporting of decommissioning funding requirements, as well as detailed project schedules, cash flow, staffing plans and levels, and waste volumes by waste classification and type. KHNP is planning to implement functions for estimating the plant inventory using 3-D technology and for classifying the conditions of radwaste disposal and transportation automatically. (authors)
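The unit-cost-factor approach such tools automate reduces, at its core, to multiplying activity quantities by unit costs and rolling them up with a contingency. The sketch below illustrates only that roll-up; the activity names, quantities, unit costs and contingency factor are all hypothetical placeholders, not DeCAT's data or cost structure.

```python
# Hypothetical unit cost factors (USD per unit of work) and plant inventory.
unit_cost = {
    "cut_steel": 1200.0,       # USD per tonne
    "decon_concrete": 300.0,   # USD per m^2
    "package_waste": 450.0,    # USD per drum
}
inventory = [                  # (activity, quantity)
    ("cut_steel", 850.0),
    ("decon_concrete", 4000.0),
    ("package_waste", 620.0),
]

def estimate_cost(unit_cost, inventory, contingency=0.25):
    """Bottom-up roll-up: sum(unit cost x quantity), plus a flat contingency."""
    base = sum(unit_cost[act] * qty for act, qty in inventory)
    return base * (1.0 + contingency)

total = estimate_cost(unit_cost, inventory)
print(f"estimated cost: ${total:,.0f}")
```

The value of a dedicated tool over a spreadsheet lies less in this arithmetic than in managing thousands of such line items consistently, together with schedules, cash flow and waste classification, as the abstract describes.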
Gomes, Zahra; Jarvis, Matt J.; Almosallam, Ibrahim A.; Roberts, Stephen J.
2018-03-01
The next generation of large-scale imaging surveys (such as those conducted with the Large Synoptic Survey Telescope and Euclid) will require accurate photometric redshifts in order to optimally extract cosmological information. Gaussian Processes for photometric redshift estimation (GPz) is a promising new method that has been proven to provide efficient, accurate photometric redshift estimates with reliable variance predictions. In this paper, we investigate a number of methods for improving the photometric redshift estimates obtained using GPz (but which are also applicable to other methods). We use spectroscopy from the Galaxy and Mass Assembly Data Release 2 with a limiting magnitude of r … Program Data Release 1, and find that it produces significant improvements in accuracy, similar to the effect of including additional features.
Accurate technique for complete geometric calibration of cone-beam computed tomography systems
International Nuclear Information System (INIS)
Cho Youngbin; Moseley, Douglas J.; Siewerdsen, Jeffrey H.; Jaffray, David A.
2005-01-01
Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems based on this approach have been constructed using a clinical linear accelerator (Elekta Synergy RP) and an iso-centric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems and is essential for accurate image reconstruction. We have developed a general analytic algorithm and corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application is discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry. Twelve BBs are spaced evenly at 30 deg intervals in each of two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat-panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source and the position and rotation of the det…