WorldWideScience

Sample records for test accurately estimates

  1. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. The six tests were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on the criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
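
    The evaluation step here, regressing in vivo relative bioavailability on in vitro bioaccessibility and ranking each assay by slope and coefficient of determination, can be sketched as follows. This is a generic illustration with placeholder numbers, not the study's data.

    ```python
    import numpy as np

    def evaluate_in_vitro_test(bioaccessibility, bioavailability):
        """Fit RBA = slope * bioaccessibility + intercept and report the
        slope and R^2, the two criteria used to rank the in vitro tests."""
        bioacc = np.asarray(bioaccessibility, dtype=float)
        rba = np.asarray(bioavailability, dtype=float)
        slope, intercept = np.polyfit(bioacc, rba, 1)
        predicted = slope * bioacc + intercept
        ss_res = np.sum((rba - predicted) ** 2)
        ss_tot = np.sum((rba - rba.mean()) ** 2)
        return slope, intercept, 1.0 - ss_res / ss_tot

    # Hypothetical five-soil example (percent units); placeholder values only.
    slope, intercept, r2 = evaluate_in_vitro_test(
        [40.0, 55.0, 35.0, 62.0, 50.0],   # in vitro bioaccessibility
        [33.0, 50.0, 38.0, 63.0, 48.0])   # in vivo relative bioavailability
    print(f"slope={slope:.2f}, R^2={r2:.2f}")
    ```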

  2. Accurate relative location estimates for the North Korean nuclear tests using empirical slowness corrections

    Science.gov (United States)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2017-01-01

    Declared North Korean nuclear tests in 2006, 2009, 2013 and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-D global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25 per cent shorter than the distances between events estimated using regional Pn phases. The 2009, 2013 and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of metres. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-D velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. The modified …
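
    A minimal sketch of the slowness-correction idea described above, under illustrative conventions (not the authors' code): the sensitivity of a differential arrival time to the relative epicentre offset is the model horizontal slowness at the source, multiplied by an empirical scaling factor, and the offset then follows by least squares.

    ```python
    import numpy as np

    def relative_offset(dt, az_deg, slowness, scale=None):
        """Relative epicentre offset (east, north), assuming
        dt_i ~ c_i * s_i * u_i . dx, where dt_i is the differential
        arrival time (s) at station i between two nearby events, s_i the
        1-D model horizontal slowness at the source (s/km), u_i the unit
        vector toward the station, and c_i the empirical slowness scaling
        factor (c_i = 1 keeps the 1-D model gradient unchanged)."""
        s = np.asarray(slowness, dtype=float)
        c = np.ones_like(s) if scale is None else np.asarray(scale, dtype=float)
        az = np.radians(np.asarray(az_deg, dtype=float))
        G = (c * s)[:, None] * np.column_stack([np.sin(az), np.cos(az)])
        dx, *_ = np.linalg.lstsq(G, np.asarray(dt, dtype=float), rcond=None)
        return dx  # (east, north) in km
    ```

    Scaling the regional Pn slownesses relative to the teleseismic P slownesses by a factor of roughly the 25 per cent noted above is what reconciles the two families of relative location estimates.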

  3. Portable, accurate toxicity testing

    International Nuclear Information System (INIS)

    Sabate, R.W.; Stiffey, A.V.; Dewailly, E.L.; Hinds, A.A.; Vieaux, G.J.

    1994-01-01

    Ever tightening environmental regulations, severe penalties for non-compliance, and expensive remediation costs have stimulated development of methods to detect and measure toxins. Most of these methods are bioassays that must be performed in the laboratory; none previously devised has been truly portable. The US Army, through the Small Business Innovative Research program, has developed a hand-held, field deployable unit for testing toxicity of battlefield water supplies. This patented system employs the measurable quenching, in the presence of toxins, of the natural bioluminescence produced by the marine dinoflagellate alga Pyrocystis lunula. The procedure's inventor used it for years to measure toxicity concentrations of chemical warfare agents (actually, their simulants), primarily in the form of pesticides and herbicides, plus assorted toxic reagents, waterbottom samples, drilling fluids, even blood. While the procedure is more precise, cheaper, and faster than most bioassays, until recently it was immobile. Now it is deployable in the field. The laboratory apparatus has been proven to be sensitive to toxins in concentrations as low as a few parts per billion, repeatable within a variation of 10% or less, and, unlike some other bioassays, effective in turbid or colored media. The laboratory apparatus and the hand-held tester have been calibrated with the EPA protocol that uses the shrimplike Mysidopsis bahia. The test organism tolerates transportation well, but must be rested a few hours at the test site for regeneration of its light-producing powers. Toxicity now can be measured confidently in soils, water columns, discharge points, and many other media in situ. Most significant to the oil industry is that drilling fluids can be monitored continuously on the rig.

  4. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood, both for routes traveled as well as for sub-routes thereof. InTraTime allows the user to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include …
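
    A toy sketch of the general idea, mining position traces for travel times keyed by route and time-of-day; the trace format and the fallback rule are assumptions for illustration, not the InTraTime implementation.

    ```python
    from collections import defaultdict
    from statistics import median

    class TravelTimeStore:
        """Learn travel times from completed trips and answer queries by
        route and hour-of-day, falling back to all hours when needed."""

        def __init__(self):
            self._samples = defaultdict(list)  # (origin, dest, hour) -> [s]

        def add_trip(self, origin, dest, start_ts, end_ts):
            hour = int(start_ts // 3600) % 24
            self._samples[(origin, dest, hour)].append(end_ts - start_ts)

        def estimate(self, origin, dest, hour):
            times = self._samples.get((origin, dest, hour))
            if not times:  # no data for this hour: use the whole route history
                times = [t for (o, d, _), ts in self._samples.items()
                         if (o, d) == (origin, dest) for t in ts]
            return median(times) if times else None
    ```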

  5. Software Estimation: Developing an Accurate, Reliable Method

    Science.gov (United States)

    2011-08-01

    …based and size-based estimates is able to accurately plan, launch, and execute on schedule. Authors: Bob Sinclair, Chris Rickets, and Brad Hodgins (NAWCWD). SMPSP and SMTSP are service marks of Carnegie Mellon University. Cited works: Rickets, Chris A., “A TSP Software Maintenance Life Cycle,” CrossTalk, March 2005; Koch, Alan S., “TSP Can Be the Building Blocks for CMMI,” CrossTalk, March 2005; Hodgins, Brad; Rickets …

  6. Accurate hydrocarbon estimates attained with radioactive isotope

    International Nuclear Information System (INIS)

    Hubbard, G.

    1983-01-01

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample.

  7. Accurate control testing for clay liner permeability

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R J

    1991-08-01

    Two series of centrifuge tests were carried out to evaluate the use of centrifuge modelling as a method of accurate control testing of clay liner permeability. The first series used a large 3 m radius geotechnical centrifuge and the second series a small 0.5 m radius machine built specifically for research on clay liners. Two permeability cells were fabricated in order to provide direct data comparisons between the two methods of permeability testing. In both cases, the centrifuge method proved to be effective and efficient, and was found to be free of both the technical difficulties and leakage risks normally associated with laboratory permeability testing of fine grained soils. Two materials were tested, a consolidated kaolin clay having an average permeability coefficient of 1.2×10^-9 m/s and a compacted illite clay having a permeability coefficient of 2.0×10^-11 m/s. Four additional tests were carried out to demonstrate that the 0.5 m radius centrifuge could be used for linear performance modelling to evaluate factors such as volumetric water content, compaction method and density, leachate compatibility and other construction effects on liner leakage. The main advantages of centrifuge testing of clay liners are rapid and accurate evaluation of hydraulic properties and realistic stress modelling for performance evaluations. 8 refs., 12 figs., 7 tabs.

  8. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate method for estimating its characteristic exponents is presented. Finally, we give some examples to verify the feasibility of our result.

  9. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  10. Zadoff-Chu coded ultrasonic signal for accurate range estimation

    KAUST Repository

    AlSharif, Mohammed H.

    2017-11-02

    This paper presents a new adaptation of Zadoff-Chu sequences for the purpose of range estimation and movement tracking. The proposed method uses Zadoff-Chu sequences utilizing a wideband ultrasonic signal to estimate the range between two devices with very high accuracy and high update rate. This range estimation method is based on time of flight (TOF) estimation using cyclic cross correlation. The system was experimentally evaluated under different noise levels and multi-user interference scenarios. For a single user, the results show less than 7 mm error for 90% of range estimates in a typical indoor environment. Under the interference from three other users, the 90% error was less than 25 mm. The system provides high estimation update rate allowing accurate tracking of objects moving with high speed.
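
    The ranging principle (a Zadoff-Chu template, cyclic cross-correlation computed via the FFT, and a correlation peak giving the time of flight) can be sketched as below. This baseband, complex-valued toy omits the wideband ultrasonic modulation, and the sampling rate, sequence length and root are made-up parameters.

    ```python
    import numpy as np

    def zadoff_chu(u, N):
        """Root-u Zadoff-Chu sequence of odd length N (constant amplitude,
        ideal periodic autocorrelation)."""
        n = np.arange(N)
        return np.exp(-1j * np.pi * u * n * (n + 1) / N)

    def estimate_range(rx, template, fs, c=343.0):
        """Range from the peak of the cyclic cross-correlation (TOF * c)."""
        corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(template)))
        tof = np.argmax(np.abs(corr)) / fs
        return c * tof

    # Toy check: a 5-sample delay at fs = 96 kHz and 343 m/s sound speed.
    fs, N = 96_000, 353
    zc = zadoff_chu(1, N)
    rx = np.roll(zc, 5) + 0.1 * np.random.randn(N)   # delayed + noisy copy
    print(f"{estimate_range(rx, zc, fs):.3f} m")     # ~0.018 m
    ```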

  11. Zadoff-Chu coded ultrasonic signal for accurate range estimation

    KAUST Repository

    AlSharif, Mohammed H.; Saad, Mohamed; Siala, Mohamed; Ballal, Tarig; Boujemaa, Hatem; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper presents a new adaptation of Zadoff-Chu sequences for the purpose of range estimation and movement tracking. The proposed method uses Zadoff-Chu sequences utilizing a wideband ultrasonic signal to estimate the range between two devices with very high accuracy and high update rate. This range estimation method is based on time of flight (TOF) estimation using cyclic cross correlation. The system was experimentally evaluated under different noise levels and multi-user interference scenarios. For a single user, the results show less than 7 mm error for 90% of range estimates in a typical indoor environment. Under the interference from three other users, the 90% error was less than 25 mm. The system provides high estimation update rate allowing accurate tracking of objects moving with high speed.

  12. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  13. Toward accurate and precise estimates of lion density.

    Science.gov (United States)

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km², and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and …

  14. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    …than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.

  15. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    Full Text Available One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of storing data and requires an accurate estimate of the target's location under energy constraints. There is no built-in mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information can be fed back, for example, to central air conditioning and ventilation systems. In this research paper, we propose a protocol based on a predictive, adaptive algorithm that uses fewer sensor nodes thanks to an accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, extending the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.

  16. The description of a method for accurately estimating creatinine clearance in acute kidney injury.

    Science.gov (United States)

    Mellas, John

    2016-05-01

    Acute kidney injury (AKI) is a common and serious condition encountered in hospitalized patients. The severity of kidney injury is defined by the RIFLE, AKIN, and KDIGO criteria, which attempt to establish the degree of renal impairment. The KDIGO guidelines state that the creatinine clearance should be measured whenever possible in AKI and that the serum creatinine concentration and creatinine clearance remain the best clinical indicators of renal function. Neither the RIFLE, AKIN, nor KDIGO criteria estimate actual creatinine clearance. Furthermore, there are no accepted methods for accurately estimating creatinine clearance (K) in AKI. The present study describes a unique method for estimating K in AKI using urine creatinine excretion over an established time interval (E), an estimate of creatinine production over the same time interval (P), and the estimated static glomerular filtration rate (sGFR) at time zero, utilizing the CKD-EPI formula. Using these variables, estimated creatinine clearance Ke = (E/P) × sGFR. The method was tested for validity using simulated patients, where actual creatinine clearance (Ka) was compared to Ke in several patients, both male and female, and of various ages, body weights, and degrees of renal impairment. These measurements were made at several serum creatinine concentrations in an attempt to determine the accuracy of this method in the non-steady state. In addition, E/P and Ke were calculated in hospitalized patients with AKI seen in nephrology consultation by the author. In these patients the accuracy of the method was determined by examining metrics such as E/P > 1 and E/P < 1, … and 0.907 (0.841, 0.973) for … 0.95 ml/min accurately predicted the ability to terminate renal replacement therapy in AKI. Limitations include the need to measure urine volume accurately. Furthermore, the precision of the method requires accurate estimates of sGFR, while a reasonable measure of P is crucial to estimating Ke. The present study provides the …
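
    The stated relation Ke = (E/P) × sGFR is straightforward to compute once its three inputs are available; a minimal sketch, with the E/P reading inferred from the abstract:

    ```python
    def estimated_creatinine_clearance(e_mg, p_mg, sgfr_ml_min):
        """Ke = (E/P) * sGFR.

        e_mg        -- urine creatinine excreted over the interval (E)
        p_mg        -- estimated creatinine production over the same interval (P)
        sgfr_ml_min -- static GFR at time zero from the CKD-EPI formula (sGFR)

        Reading inferred from the text: E/P > 1 suggests clearance above the
        static estimate, E/P < 1 clearance below it (non-steady state).
        """
        return (e_mg / p_mg) * sgfr_ml_min

    # Hypothetical example: E = 800 mg and P = 1000 mg over 24 h, sGFR = 60.
    print(estimated_creatinine_clearance(800, 1000, 60))  # 48.0 mL/min
    ```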

  17. Allele-sharing models: LOD scores and accurate linkage tests.

    Science.gov (United States)

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.

  18. Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  19. Accurate interpretation of point estimates, confidence intervals, and hypothesis tests in public health

    Directory of Open Access Journals (Sweden)

    Manuel G Scotto

    2003-12-01

    Full Text Available This essay reviews some statistical concepts frequently used in public health research that are commonly misinterpreted, among them point estimates, confidence intervals, and hypothesis tests. By drawing a parallel among these three concepts, we can see their most important differences in interpretation, from both the classical and the Bayesian perspectives.

  20. Can administrative health utilisation data provide an accurate diabetes prevalence estimate for a geographical region?

    Science.gov (United States)

    Chan, Wing Cheuk; Papaconstantinou, Dean; Lee, Mildred; Telfer, Kendra; Jo, Emmanuel; Drury, Paul L; Tobias, Martin

    2018-05-01

    To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory results and to develop an improved algorithm for estimating diabetes prevalence at a population level. The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory results stored in the Auckland regional laboratory result repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive predictive value of each of the VDR algorithm rules individually and in combination. The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive predictive value of 76% and negative predictive value of 98%. The modified VDR algorithm has improved the positive predictive value by 6.1% and the specificity by 1.4% with modest reductions in sensitivity of 2.2% and negative predictive value of 0.3%. At an aggregated level the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe. The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more accurate diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long-term condition register constructed from both laboratory results and administrative data.
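
    The rule-by-rule validation reduces to standard confusion-matrix metrics computed against the laboratory (TestSafe) reference; a minimal sketch with hypothetical counts:

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV of a register rule,
        with laboratory-confirmed diabetes status as the reference."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Hypothetical counts, for illustration only:
    print(diagnostic_metrics(tp=82_000, fp=26_000, fn=10_000, tn=880_000))
    ```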

  1. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    Science.gov (United States)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes v_ij = (x_j - x_i)/(t_j - t_i) computed between all data pairs with t_j > t_i. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
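
    A simplified MIDAS-style estimator follows directly from this description; the sketch keeps the one-year pairing and a single outlier-trimming pass but omits the skewness adjustment and the uncertainty estimate, and the pairing tolerance is an assumption.

    ```python
    import numpy as np

    def midas_like_trend(t_years, x, pair_span=1.0, tol=0.01):
        """Median of slopes over data pairs separated by ~pair_span years,
        then trim outliers (>2 robust sigma from the median) and re-take
        the median. tol ~ 3.7 days suits roughly daily GPS sampling."""
        t = np.asarray(t_years, dtype=float)
        x = np.asarray(x, dtype=float)
        slopes = []
        for i, ti in enumerate(t):
            j = int(np.argmin(np.abs(t - (ti + pair_span))))
            if j != i and abs(t[j] - ti - pair_span) < tol:
                slopes.append((x[j] - x[i]) / (t[j] - t[i]))
        if not slopes:
            return np.nan
        slopes = np.array(slopes)
        med = np.median(slopes)
        sigma = 1.4826 * np.median(np.abs(slopes - med))  # MAD-based sigma
        trimmed = slopes[np.abs(slopes - med) < 2 * sigma]
        return np.median(trimmed) if trimmed.size else med
    ```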

  2. Estimating uncertainty in resolution tests

    CSIR Research Space (South Africa)

    Goncalves, DP

    2006-05-01

    Full Text Available …frequencies yields a biased estimate, and we provide an improved estimator. An application illustrates how the results derived can be incorporated into a larger uncertainty analysis. © 2006 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2202914] Subject terms: resolution testing; USAF 1951 test target; resolution uncertainty.

  3. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Science.gov (United States)

    Wicke, Jason; Dumas, Geneviève A

    2014-06-03

    Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g., upper arm and trunk). Lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location.
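
    The slice construction translates directly into sums over elliptical cross-sections. The sketch below uses a thin-slice (point-mass) approximation for the frontal-plane moment of inertia and ignores the sectioned-ellipse case; parameter names are illustrative.

    ```python
    import numpy as np

    def segment_inertials(a, b, dz, rho):
        """Mass, centre-of-mass height and frontal-plane moment of inertia
        of a limb segment modelled as a stack of thin elliptical slices.

        a, b -- per-slice semi-axes from the frontal/sagittal photos (m)
        dz   -- slice thickness (m)
        rho  -- per-slice densities from a sex-specific density function
        """
        a, b, rho = (np.asarray(v, dtype=float) for v in (a, b, rho))
        z = (np.arange(a.size) + 0.5) * dz        # slice centre heights
        m = rho * np.pi * a * b * dz              # elliptical slice masses
        mass = m.sum()
        com = (m * z).sum() / mass
        inertia = (m * (z - com) ** 2).sum()      # about COM, frontal plane
        return mass, com, inertia
    ```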

  4. Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2013-01-01

    In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators …

  5. Does bioelectrical impedance analysis accurately estimate the condition of threatened and endangered desert fish species?

    Science.gov (United States)

    Dibble, Kimberly L.; Yard, Micheal D.; Ward, David L.; Yackulic, Charles B.

    2017-01-01

    Bioelectrical impedance analysis (BIA) is a nonlethal tool with which to estimate the physiological condition of animals that has potential value in research on endangered species. However, the effectiveness of BIA varies by species, the methodology continues to be refined, and incidental mortality rates are unknown. Under laboratory conditions we tested the value of using BIA in addition to morphological measurements such as total length and wet mass to estimate proximate composition (lipid, protein, ash, water, dry mass, energy density) in the endangered Humpback Chub Gila cypha and Bonytail G. elegans and the species of concern Roundtail Chub G. robusta and conducted separate trials to estimate the mortality rates of these sensitive species. Although Humpback and Roundtail Chub exhibited no or low mortality in response to taking BIA measurements versus handling for length and wet-mass measurements, Bonytails exhibited 14% and 47% mortality in the BIA and handling experiments, respectively, indicating that survival following stress is species specific. Derived BIA measurements were included in the best models for most proximate components; however, the added value of BIA as a predictor was marginal except in the absence of accurate wet-mass data. Bioelectrical impedance analysis improved the R2 of the best percentage-based models by no more than 4% relative to models based on morphology. Simulated field conditions indicated that BIA models became increasingly better than morphometric models at estimating proximate composition as the observation error around wet-mass measurements increased. However, since the overall proportion of variance explained by percentage-based models was low and BIA was mostly a redundant predictor, we caution against the use of BIA in field applications for these sensitive fish species.

  6. Fast and Accurate Video PQoS Estimation over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Emanuele Viterbo

    2008-06-01

    Full Text Available This paper proposes a curve fitting technique for fast and accurate estimation of the perceived quality of streaming media contents, delivered within a wireless network. The model accounts for the effects of various network parameters such as congestion, radio link power, and video transmission bit rate. The evaluation of the perceived quality of service (PQoS) is based on the well-known VQM objective metric, a powerful technique which is highly correlated to the more expensive and time-consuming subjective metrics. Currently, PQoS is used only for offline analysis after delivery of the entire video content. Thanks to the proposed simple model, we can estimate the video PQoS in real time and we can rapidly adapt the content transmission through scalable video coding and bit rates in order to offer the best perceived quality to the end users. The designed model has been validated through many different measurements in realistic wireless environments using an ad hoc WiFi test bed.

  7. Accelerated spike resampling for accurate multiple testing controls.

    Science.gov (United States)

    Harrison, Matthew T

    2013-02-01

    Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.

  8. An Accurate FFPA-PSR Estimator Algorithm and Tool for Software Effort Estimation

    Directory of Open Access Journals (Sweden)

    Senthil Kumar Murugesan

    2015-01-01

    Full Text Available Software companies are now keen to provide software that is secure and whose effort estimates are accurate and reliable. Therefore, there is a need to develop a hybrid tool which provides all the necessary features. This paper proposes a hybrid estimator algorithm and model which incorporates quality metrics, a reliability factor, and a security factor with fuzzy-based function point analysis. Initially, this method uses a fuzzy-based estimate to control the uncertainty in the software size with the help of a triangular fuzzy set at the early development stage. Secondly, the function point analysis is extended by the security and reliability factors in the calculation. Finally, the performance metrics are added to the effort estimation for accuracy. The experimentation is done with different project data sets on the hybrid tool, and the results are compared with the existing models. It shows that the proposed method not only improves the accuracy but also increases the reliability, as well as the security, of the product.

  9. Accurate halo-galaxy mocks from automatic bias estimation and particle mesh gravity solvers

    Science.gov (United States)

    Vakili, Mohammadjavad; Kitaura, Francisco-Shu; Feng, Yu; Yepes, Gustavo; Zhao, Cheng; Chuang, Chia-Hsun; Hahn, ChangHoon

    2017-12-01

    Reliable extraction of cosmological information from clustering measurements of galaxy surveys requires estimation of the error covariance matrices of observables. The accuracy of covariance matrices is limited by our ability to generate a sufficiently large number of independent mock catalogues that can describe the physics of galaxy clustering across a wide range of scales. Furthermore, galaxy mock catalogues are required to study systematics in galaxy surveys and to test analysis tools. In this investigation, we present a fast and accurate approach for the generation of mock catalogues for the upcoming galaxy surveys. Our method relies on low-resolution approximate gravity solvers to simulate the large-scale dark matter field, which we then populate with haloes according to a flexible non-linear and stochastic bias model. In particular, we extend the PATCHY code with an efficient particle mesh algorithm to simulate the dark matter field (the FASTPM code), and with a robust MCMC method relying on the EMCEE code for constraining the parameters of the bias model. Using the haloes in the BigMultiDark high-resolution N-body simulation as a reference catalogue, we demonstrate that our technique can model the bivariate probability distribution function (counts-in-cells), power spectrum and bispectrum of haloes in the reference catalogue. Specifically, we show that the new ingredients permit us to reach per cent-level accuracy in the power spectrum up to k ∼ 0.4 h Mpc^-1 (within 5 per cent up to k ∼ 0.6 h Mpc^-1), with accurate bispectra improving previous results based on Lagrangian perturbation theory.

  10. A Trace Data-Based Approach for an Accurate Estimation of Precise Utilization Maps in LTE

    Directory of Open Access Journals (Sweden)

    Almudena Sánchez

    2017-01-01

    Full Text Available For network planning and optimization purposes, mobile operators make use of Key Performance Indicators (KPIs), computed from Performance Measurements (PMs), to determine whether network performance needs to be improved. In current networks, PMs, and therefore KPIs, suffer from lack of precision due to an insufficient temporal and/or spatial granularity. In this work, an automatic method, based on data traces, is proposed to improve the accuracy of radio network utilization measurements collected in a Long-Term Evolution (LTE) network. The method’s output is an accurate estimate of the spatial and temporal distribution of the cell utilization ratio that can be extended to other indicators. The method can be used to improve automatic network planning and optimization algorithms in a centralized Self-Organizing Network (SON) entity, since potential issues can be more precisely detected and located inside a cell thanks to temporal and spatial precision. The proposed method is tested with real connection traces gathered in a large geographical area of a live LTE network and considers overload problems due to trace file size limitations, which is a key consideration when analysing a large network. Results show how these distributions provide very detailed information on network utilization, compared to cell-based statistics.

  11. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    Science.gov (United States)

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller, more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of …

  12. Accurate Fuel Estimates using CAN Bus Data and 3D Maps

    DEFF Research Database (Denmark)

    Andersen, Ove; Torp, Kristian

    2018-01-01

    The focus on reducing CO2 emissions from the transport sector is larger than ever. Increasingly stricter reductions on fuel consumption and emissions are being introduced by the EU, e.g., to reduce the air pollution in many larger cities. Large sets of high-frequent GPS data from vehicles already exist. However, fuel consumption data is still rarely collected even though it is possible to measure the fuel consumption with high accuracy, e.g., using an OBD-II device and a smartphone. This paper presents a method for comparing fuel-consumption estimates using the SIDRA TRIP model with real fuel … the accuracy of fuel consumption estimates with up to 40% on hilly roads. There is only very little improvement of the high-precision (H3D) map over the simple 3D map. The fuel consumption estimates are most accurate on flat terrain with average fuel estimates of up to 99% accuracy. The fuel estimates are most …
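
    Why a 3D map matters is visible from the grade term of any power-based fuel model; the sketch below is a generic illustration with placeholder coefficients, not the SIDRA TRIP model.

    ```python
    import numpy as np

    def grade_from_3d_map(elev_m, dist_m):
        """Road grade (rise/run) between consecutive 3D-map points."""
        return np.diff(elev_m) / np.diff(dist_m)

    def fuel_rate_ml_per_s(v_ms, grade, mass_kg=1500.0):
        """Illustrative fuel rate: idle + speed term + uphill power term.
        All coefficients are placeholders; only the structure matters."""
        g, idle, k_v, k_p = 9.81, 0.3, 0.015, 8e-5
        p_grade = mass_kg * g * np.sin(np.arctan(grade)) * v_ms  # W, uphill>0
        return idle + k_v * v_ms + k_p * np.maximum(p_grade, 0.0)
    ```

    On flat roads the grade term vanishes, which is consistent with the observations above that the estimates are most accurate on flat terrain and that 3D information helps most on hilly roads.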

  13. Accurate calibration of test mass displacement in the LIGO interferometers

    Energy Technology Data Exchange (ETDEWEB)

    Goetz, E [University of Michigan, Ann Arbor, MI 48109 (United States); Savage, R L Jr; Garofoli, J; Kawabe, K; Landry, M [LIGO Hanford Observatory, Richland, WA 99352 (United States); Gonzalez, G; Kissel, J; Sung, M [Louisiana State University, Baton Rouge, LA 70803 (United States); Hirose, E [Syracuse University, Syracuse, NY 13244 (United States); Kalmus, P [Columbia University, New York, NY 10027 (United States); O'Reilly, B; Stuver, A [LIGO Livingston Observatory, Livingston, LA 70754 (United States); Siemens, X, E-mail: egoetz@umich.edu, E-mail: savage_r@ligo-wa.caltech.edu [University of Wisconsin-Milwaukee, Milwaukee, WI 53201 (United States)

    2010-04-21

    We describe three fundamentally different methods we have applied to calibrate the test mass displacement actuators to search for systematic errors in the calibration of the LIGO gravitational-wave detectors. The actuation frequencies tested range from 90 Hz to 1 kHz and the actuation amplitudes range from 10^-6 m to 10^-18 m. For each of the four test mass actuators measured, the weighted mean coefficient over all frequencies for each technique deviates from the average actuation coefficient for all three techniques by less than 4%. This result indicates that systematic errors in the calibration of the responses of the LIGO detectors to differential length variations are within the stated uncertainties.
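
    The cross-technique comparison reduces to an inverse-variance weighted mean per technique and its fractional deviation from the all-technique average; a minimal sketch, assuming per-frequency uncertainties are available:

    ```python
    import numpy as np

    def technique_deviations(coeffs, sigmas):
        """coeffs, sigmas: arrays of shape (n_techniques, n_frequencies).
        Returns each technique's fractional deviation from the average of
        the per-technique weighted means (reported as <4% in this work)."""
        coeffs = np.asarray(coeffs, dtype=float)
        w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        means = (w * coeffs).sum(axis=1) / w.sum(axis=1)
        return (means - means.mean()) / means.mean()
    ```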

  14. Accurate calibration of test mass displacement in the LIGO interferometers

    International Nuclear Information System (INIS)

    Goetz, E; Savage, R L Jr; Garofoli, J; Kawabe, K; Landry, M; Gonzalez, G; Kissel, J; Sung, M; Hirose, E; Kalmus, P; O'Reilly, B; Stuver, A; Siemens, X

    2010-01-01

    We describe three fundamentally different methods we have applied to calibrate the test mass displacement actuators to search for systematic errors in the calibration of the LIGO gravitational-wave detectors. The actuation frequencies tested range from 90 Hz to 1 kHz and the actuation amplitudes range from 10^-6 m to 10^-18 m. For each of the four test mass actuators measured, the weighted mean coefficient over all frequencies for each technique deviates from the average actuation coefficient for all three techniques by less than 4%. This result indicates that systematic errors in the calibration of the responses of the LIGO detectors to differential length variations are within the stated uncertainties.

  15. Eddy covariance observations of methane and nitrous oxide emissions. Towards more accurate estimates from ecosystems

    International Nuclear Information System (INIS)

    Kroon, P.S.

    2010-09-01

    About 30% of the increased greenhouse gas (GHG) emissions of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) are related to land use changes and agricultural activities. In order to select effective measures, knowledge is required about GHG emissions from these ecosystems and how these emissions are influenced by management and meteorological conditions. Accurate emission values are therefore needed for all three GHGs to compile the full GHG balance. However, the current annual estimates of CH4 and N2O emissions from ecosystems have significant uncertainties, even larger than 50%. The present study showed that an advanced micrometeorological technique, the eddy covariance flux technique, could obtain more accurate estimates, with uncertainties even smaller than 10%. The current regional and global trace gas flux estimates of CH4 and N2O are possibly seriously underestimated due to incorrect measurement procedures. Accurate measurements of both gases are really important, since they could contribute more than two-thirds of the total GHG emission. For example, the total GHG emission of a dairy farm site was estimated at 16×10^3 kg ha^-1 yr^-1 in CO2-equivalents, of which 25% and 45% were contributed by CH4 and N2O, respectively. About 60% of the CH4 emission was emitted by ditches and their bordering edges. These emissions are not yet included in the national inventory reports. We recommend including these emissions in coming reports.
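
    The flux itself is the covariance of the fluctuating parts of vertical wind speed and gas concentration after Reynolds decomposition; a textbook sketch (real processing adds despiking, coordinate rotation, detrending and density corrections):

    ```python
    import numpy as np

    def ec_flux(w, c):
        """Eddy-covariance flux F = mean(w'c'), with x' = x - mean(x).
        w: vertical wind speed (m/s), c: gas concentration; both sampled
        at high frequency over one averaging interval (e.g. 30 min)."""
        w = np.asarray(w, dtype=float)
        c = np.asarray(c, dtype=float)
        return np.mean((w - w.mean()) * (c - c.mean()))
    ```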

  16. Accurate dating with radiocarbon from the atom bomb tests

    CSIR Research Space (South Africa)

    Vogel

    2002-09-01

    Full Text Available The artificial radiocarbon produced by the thermonuclear bomb tests in the 1950s and 1960s significantly increased the level of C-14 in the environment. A detailed record of the subsequent changes in the C-14 concentration of the atmosphere can...

  17. A method to accurately estimate the muscular torques of human wearing exoskeletons by torque sensors.

    Science.gov (United States)

    Hwang, Beomsoo; Jeon, Doyoung

    2015-04-09

    In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately in both relaxed and activated muscle conditions.
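
    Extracting the active muscular torque amounts to subtracting the modelled limb dynamics from the joint-sensor reading. The single-joint pendulum sketch below, with assumed user-specific parameters, illustrates the idea rather than the paper's full formulation.

    ```python
    import numpy as np

    def muscular_torque(tau_sensor, q, dq, ddq, m, l_com, inertia_com,
                        b=0.0, g=9.81):
        """tau_muscle = tau_sensor - (I*ddq + b*dq + m*g*l_com*sin(q)).

        Single-pendulum limb model: m (limb mass, kg), l_com (joint-to-COM
        distance, m), inertia_com (inertia about the COM, kg m^2), b
        (viscous friction); q is the joint angle from vertical (rad)."""
        i_joint = inertia_com + m * l_com ** 2       # parallel-axis theorem
        dynamics = i_joint * ddq + b * dq + m * g * l_com * np.sin(q)
        return tau_sensor - dynamics
    ```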

  18. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    Directory of Open Access Journals (Sweden)

    Beomsoo Hwang

    2015-04-01

    Full Text Available In exoskeletal robots, the quantification of the user’s muscular effort is important to recognize the user’s motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users’ muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user’s limb accurately from the measured torque. The user’s limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user’s muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately in both relaxed and activated muscle conditions.

  19. Serial fusion of Eulerian and Lagrangian approaches for accurate heart-rate estimation using face videos.

    Science.gov (United States)

    Gupta, Puneet; Bhowmick, Brojeshwar; Pal, Arpan

    2017-07-01

    Camera-equipped devices are ubiquitous and proliferating in day-to-day life. Accurate heart rate (HR) estimation from face videos acquired from low-cost cameras in a non-contact manner can be used in many real-world scenarios and hence requires rigorous exploration. This paper presents an accurate and near real-time HR estimation system using these face videos. It is based on the phenomenon that the color and motion variations in the face video are closely related to the heart beat. The variations also contain noise due to facial expressions, respiration, eye blinking and environmental factors, which are handled by the proposed system. Neither Eulerian nor Lagrangian temporal signals can provide accurate HR in all cases. The cases where Eulerian temporal signals perform spuriously are determined using a novel poorness measure, and then both the Eulerian and Lagrangian temporal signals are employed for better HR estimation. Such a fusion is referred to as serial fusion. Experimental results reveal that the error introduced in the proposed algorithm is 1.8±3.6, which is significantly lower than that of existing well-known systems.
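
    The Eulerian branch reduces to spectral analysis of a skin-colour signal. A minimal sketch, omitting the Lagrangian (feature-tracking) branch and the poorness-measure fusion described above; the window and band limits are conventional choices, not the paper's exact settings.

    ```python
    import numpy as np

    def heart_rate_bpm(green_means, fs, lo=0.7, hi=4.0):
        """HR from the per-frame mean green intensity of a face ROI:
        remove the mean, window, and take the dominant spectral peak in
        the 0.7-4.0 Hz band (42-240 bpm)."""
        x = np.asarray(green_means, dtype=float)
        x = x - x.mean()
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
        spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
        band = (freqs >= lo) & (freqs <= hi)
        return 60.0 * freqs[band][np.argmax(spec[band])]
    ```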

  20. The Remote Food Photography Method accurately estimates dry powdered foods—the source of calories for many infants

    Science.gov (United States)

    Duhé, Abby F.; Gilmore, L. Anne; Burton, Jeffrey H.; Martin, Corby K.; Redman, Leanne M.

    2016-01-01

    Background: Infant formula is a major source of nutrition for infants, with over half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used prior to reconstitution. Objective: To assess the use of the Remote Food Photography Method (RFPM) to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. Methods: For each serving size (1-scoop, 2-scoop, 3-scoop, and 4-scoop), a set of seven test bottles and photographs were prepared, including the recommended gram weight of powdered formula for the respective serving size by the manufacturer, three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended, and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard RFPM analysis procedures. The ratio estimates and the United States Department of Agriculture (USDA) data tables were used to generate food and nutrient information to provide the RFPM estimates. Statistical Analyses Performed: Equivalence testing using the two one-sided t-test (TOST) approach was used to determine equivalence between the actual gram weights and the RFPM estimated weights for all samples, within each serving size, and within under-prepared and over-prepared bottles. Results: For all bottles, the gram weights estimated by the RFPM were within 5% equivalence bounds, with a slight under-estimation of 0.05 g (90% CI [−0.49, 0.40]; p<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. Conclusion: The maximum observed mean error was an overestimation of 1.58% of powdered formula by the RFPM under …
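
    The equivalence test is the standard two one-sided t-tests construction; a sketch against the ±5% bounds, assuming scipy is available and that per-bottle percent errors have been computed.

    ```python
    import numpy as np
    from scipy import stats

    def tost_equivalent(errors, low=-5.0, high=5.0, alpha=0.05):
        """TOST: the mean error is equivalent to zero if it is
        significantly above `low` AND significantly below `high`.
        `errors` and the bounds share units (here, percent error)."""
        n = len(errors)
        mean, se = np.mean(errors), stats.sem(errors)
        p_low = 1.0 - stats.t.cdf((mean - low) / se, df=n - 1)
        p_high = stats.t.cdf((mean - high) / se, df=n - 1)
        return max(p_low, p_high) < alpha
    ```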

  1. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    Science.gov (United States)

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.

  2. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single-tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectral leakage and the picket-fence effect, which lower estimation accuracy. The proposed method improves estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and low resolution. It first uses three FFT samples to determine the frequency search interval; then, besides the frequency, the estimates of amplitude, phase, and DC component are obtained by minimizing the least-squares (LS) fitting error of a three-parameter sine fit. By setting reasonable stopping conditions or a fixed number of iterations, accurate frequency estimation is achieved. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is compared with other methods against the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimate follows the trend of the CRLB as SNR increases, even for a small number of samples. The average RMSE of the frequency estimate is less than 1.5 times the CRLB at SNR = 20 dB and N = 512.
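
    As a concrete illustration of the DGSSA idea, the sketch below brackets the frequency with the three FFT bins around the spectral peak and then refines it by golden-section search, minimizing the LS residual of a three-parameter sine fit (for a fixed trial frequency, the in-phase, quadrature, and DC terms are linear and solvable in closed form). This is a minimal reading of the algorithm, not the authors' code; the stopping tolerance and bracketing choices are our assumptions.

        import numpy as np

        def sine_fit_residual(x, t, f):
            """LS residual of a three-parameter sine fit at fixed frequency f:
            x(t) ~ A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C."""
            M = np.column_stack([np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t),
                                 np.ones_like(t)])
            coef, *_ = np.linalg.lstsq(M, x, rcond=None)
            r = x - M @ coef
            return np.dot(r, r)

        def dgssa_frequency(x, fs, tol=1e-9):
            """Golden-section search between the FFT bins around the peak."""
            n = x.size
            t = np.arange(n) / fs
            spec = np.abs(np.fft.rfft(x))
            k = np.argmax(spec[1:]) + 1                    # skip the DC bin
            a, b = (k - 1) * fs / n, (k + 1) * fs / n      # bracket from 3 bins
            g = (np.sqrt(5) - 1) / 2
            c, d = b - g * (b - a), a + g * (b - a)
            while b - a > tol * fs:
                if sine_fit_residual(x, t, c) < sine_fit_residual(x, t, d):
                    b, d = d, c
                    c = b - g * (b - a)
                else:
                    a, c = c, d
                    d = a + g * (b - a)
            return 0.5 * (a + b)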

  3. The Remote Food Photography Method Accurately Estimates Dry Powdered Foods-The Source of Calories for Many Infants.

    Science.gov (United States)

    Duhé, Abby F; Gilmore, L Anne; Burton, Jeffrey H; Martin, Corby K; Redman, Leanne M

    2016-07-01

    Infant formula is a major source of nutrition for infants, with more than half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used before reconstitution. Our aim was to assess the use of the Remote Food Photography Method to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. For each serving size (1 scoop, 2 scoops, 3 scoops, and 4 scoops), a set of seven test bottles and photographs was prepared as follows: one with the manufacturer's recommended gram weight of powdered formula for the respective serving size; three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended; and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard Remote Food Photography Method analysis procedures. The ratio estimates and the US Department of Agriculture data tables were used to generate the food and nutrient information for the Remote Food Photography Method estimates. Equivalence testing using the two one-sided t tests approach was used to determine equivalence between the actual gram weights and the Remote Food Photography Method estimated weights for all samples, within each serving size, and within underprepared and overprepared bottles. For all bottles, the gram weights estimated by the Remote Food Photography Method were within 5% equivalence bounds with a slight underestimation of 0.05 g (90% CI -0.49 to 0.40; P<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. The maximum observed mean error was an overestimation of 1.58% of powdered formula by the Remote

  4. Fast and accurate spectral estimation for online detection of partial broken bar in induction motors

    Science.gov (United States)

    Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti

    2018-01-01

    In this paper, an online, real-time system is presented for detecting a partially broken rotor bar (BRB) in inverter-fed squirrel-cage induction motors under light-load conditions. With minor modifications, the system can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of a BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared with available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components is improved by removing the high-amplitude fundamental frequency with an extended-Kalman-filter-based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and sensor cost are minimal, as only a single-phase stator current is required. The hardware implementation was carried out on an Intel i7-based embedded target ported through Simulink Real-Time. Evaluation of the detection threshold and of fault detectability under different load conditions and fault severities is carried out with the empirical cumulative distribution function.
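
    The paper's estimator is not reproduced here, but the Rayleigh-quotient idea can be illustrated on a sample autocorrelation matrix: evaluating a(f)^H R a(f) / a(f)^H a(f) at a complex steering vector a(f) yields a low-complexity spectral measure at exactly the candidate sideband frequencies of interest. The snapshot length m and the normalization below are our assumptions, not the authors' exact detector.

        import numpy as np

        def rayleigh_quotient_spectrum(x, fs, freqs, m=64):
            """Evaluate the Rayleigh quotient of the sample autocorrelation
            matrix R at steering vectors a(f); a simple reading of
            Rayleigh-quotient spectral estimation for a 1-D current signal."""
            # m x m sample autocorrelation matrix from overlapping snapshots
            snaps = np.lib.stride_tricks.sliding_window_view(x, m)
            R = (snaps.conj().T @ snaps) / snaps.shape[0]
            out = []
            for f in freqs:
                a = np.exp(2j * np.pi * f / fs * np.arange(m))
                out.append(np.real(a.conj() @ R @ a) / m)   # a^H a = m
            return np.asarray(out)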

  5. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    International Nuclear Information System (INIS)

    Song, N; Frey, E C; He, B; Wahl, R L

    2011-01-01

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole-body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum-likelihood expectation-maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image-degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes a uniform activity distribution in each VOI, which makes application to patients challenging. We therefore propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison with the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric-mean-based planar quantification, using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method with those from the original method at four other time points where gold-standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all organs at all time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  6. Rapid and accurate species tree estimation for phylogeographic investigations using replicated subsampling.

    Science.gov (United States)

    Hird, Sarah; Kubatko, Laura; Carstens, Bryan

    2010-11-01

    We describe a method for estimating species trees that relies on replicated subsampling of large data matrices. One application of this method is phylogeographic research, which has long depended on large datasets that sample intensively from the geographic range of the focal species; these datasets allow systematicists to identify cryptic diversity and understand how contemporary and historical landscape forces influence genetic diversity. However, analyzing any large dataset can be computationally difficult, particularly when newly developed methods for species tree estimation are used. Here we explore the use of replicated subsampling, a potential solution to the problem posed by large datasets, with both a simulation study and an empirical analysis. In the simulations, we sample different numbers of alleles and loci, estimate species trees using STEM, and compare the estimated to the actual species tree. Our results indicate that subsampling three alleles per species for eight loci nearly always results in an accurate species tree topology, even in cases where the species tree was characterized by extremely rapid divergence. Even more modest subsampling effort, for example one allele per species and two loci, was more likely than not (>50%) to identify the correct species tree topology, indicating that in nearly all cases, computing the majority-rule consensus tree from replicated subsampling provides a good estimate of topology. These results were supported by estimating the correct species tree topology and reasonable branch lengths for an empirical 10-locus great ape dataset. Copyright © 2010 Elsevier Inc. All rights reserved.
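
    The replicated-subsampling loop itself is simple to express. In the sketch below, estimate_tree is a hypothetical callable standing in for an external species-tree tool such as STEM, and the most frequent topology is reported as a simple stand-in for the majority-rule consensus computed in the paper; all names and defaults are illustrative.

        import random
        from collections import Counter

        def consensus_from_subsamples(alleles_by_species, loci, estimate_tree,
                                      n_reps=100, n_alleles=3, n_loci=8):
            """Replicated subsampling: draw a few alleles per species and a
            few loci, estimate a species tree for each draw via the
            hypothetical `estimate_tree` callable, and report the most
            frequent topology with its support fraction."""
            topologies = Counter()
            for _ in range(n_reps):
                sub_alleles = {sp: random.sample(a, n_alleles)
                               for sp, a in alleles_by_species.items()}
                sub_loci = random.sample(loci, n_loci)
                topologies[estimate_tree(sub_alleles, sub_loci)] += 1
            topo, count = topologies.most_common(1)[0]
            return topo, count / n_reps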

  7. Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods

    International Nuclear Information System (INIS)

    Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris

    2016-01-01

    Highlights: • Continuous-time system identification is applied to Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually relies on discrete-time system identification methods to estimate the parameters of discrete models. However, in real applications, discrete-time methods have a fundamental sensitivity limitation when the system is stiff and the storage resolution is limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time methods provide more accurate estimates of both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case study of a 2nd-order equivalent circuit model shows that the continuous-time estimates are more robust to high sampling rates, measurement noise, and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurements by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.

  8. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization-sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator that takes the stochastic property of SNR into account. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of anterior and

  9. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece

    2012-11-28

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  10. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece; Rashid, Mamoon; Pain, Arnab

    2012-01-01

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  11. Shear-wave elastography contributes to accurate tumour size estimation when assessing small breast cancers

    International Nuclear Information System (INIS)

    Mullen, R.; Thompson, J.M.; Moussa, O.; Vinnicombe, S.; Evans, A.

    2014-01-01

    Aim: To assess whether the extent of peritumoural stiffness (PTS) on shear-wave elastography (SWE) for small primary breast cancers (≤15 mm) was associated with size discrepancies between grey-scale ultrasound (GSUS) and final histological size, and whether adding the PTS extent to the GSUS size might yield more accurate tumour size estimates when compared with final histological size. Materials and methods: A retrospective analysis of 86 consecutive patients between August 2011 and February 2013 who underwent breast-conserving surgery for tumours of size ≤15 mm at ultrasound was carried out. The extent of PTS was compared with mean GSUS size, mean histological size, and the extent of the size discrepancy between GSUS and histology. PTS and GSUS sizes were combined and compared with the final histological size. Results: PTS of >3 mm was associated with a larger mean final histological size (16 versus 11.3 mm, p < 0.001). PTS of >3 mm was also associated with a higher frequency of underestimation of final histological size by GSUS of >5 mm (63% versus 18%, p < 0.001). The combination of PTS and GSUS size led to accurate estimation of the final histological size (p = 0.03). The extent of PTS was not associated with margin involvement (p = 0.27). Conclusion: PTS extending beyond 3 mm from the grey-scale abnormality is significantly associated with underestimation of tumour size of >5 mm for small invasive breast cancers. Taking the extent of PTS into account also led to accurate estimation of the final histological size. Further studies are required to assess the relationship between the extent of SWE stiffness and margin status. - Highlights: • Peritumoural stiffness of greater than 3 mm was associated with larger tumour size. • Underestimation of tumour size by ultrasound was associated with peritumoural stiffness extent. • Combining peritumoural stiffness extent with ultrasound produced accurate tumour size estimation.

  12. The tap test - an accurate first-line test for fetal lung maturity testing ...

    African Journals Online (AJOL)

    Objective. To determine the accuracy of near-patient and laboratory- based fetal lung maturity tests in predicting the need for neonatal ventilation. Design. A prospective descriptive study. Subjects. One hundred high-risk obstetric patients where confirmation of fetal lung maturity would initiate delivery. Methods. Fetal weight ...

  13. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics is covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  14. Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model

    Science.gov (United States)

    Ahlgren, K.; Li, X.

    2017-12-01

    Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated that utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Second, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. The accuracy of each geoid model

  15. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    International Nuclear Information System (INIS)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee; Lee, Minuk; Choi, Jong-su; Hong, Sup

    2015-01-01

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF
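
    The abstract does not spell out the exact selection criterion, so the sketch below shows one plausible reading: fit a GPD to the exceedances over each candidate threshold and keep the threshold whose fit minimizes the AIC. scipy's genpareto is used purely for illustration, and the minimum tail size is an assumption.

        import numpy as np
        from scipy.stats import genpareto

        def select_threshold_by_aic(samples, candidates):
            """Fit a GPD to exceedances over each candidate threshold and keep
            the threshold minimizing AIC = 2k - 2*logL (k = 2 parameters here:
            shape and scale, with location fixed at zero)."""
            samples = np.asarray(samples, float)
            best_u, best_aic = None, np.inf
            for u in candidates:
                exc = samples[samples > u] - u
                if exc.size < 30:          # too few tail points to fit reliably
                    continue
                c, loc, scale = genpareto.fit(exc, floc=0)
                loglik = genpareto.logpdf(exc, c, loc=loc, scale=scale).sum()
                aic = 2 * 2 - 2 * loglik
                if aic < best_aic:
                    best_u, best_aic = u, aic
            return best_u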

  16. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Minuk; Choi, Jong-su; Hong, Sup [Korea Research Insitute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-02-15

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.

  17. GPS Water Vapor Tomography Based on Accurate Estimations of the GPS Tropospheric Parameters

    Science.gov (United States)

    Champollion, C.; Masson, F.; Bock, O.; Bouin, M.; Walpersdorf, A.; Doerflinger, E.; van Baelen, J.; Brenot, H.

    2003-12-01

    The Global Positioning System (GPS) is now a common technique for the retrieval of zenithal integrated water vapor (IWV). Further applications in meteorology also need slant integrated water vapor (SIWV), which allows the high variability of tropospheric water vapor to be characterized precisely at different temporal and spatial scales. Only precise estimates of IWV and horizontal gradients allow the estimation of accurate SIWV. We present studies developed to improve the estimation of tropospheric water vapor from GPS data. Results are obtained from several field experiments (MAP, ESCOMPTE, OHM-CV, IHOP, etc.). First, IWV is estimated using different GPS processing strategies and the results are compared to radiosondes. The role of the reference frame and of the a priori constraints on the coordinates of the fiducial and local stations is generally underestimated; it seems to be of first order in the estimation of the IWV. Second, we validate the estimated horizontal gradients by comparing zenith delay gradients and single-site gradients. IWV, gradients, and post-fit residuals are used to construct slant integrated water delays. Validation of the SIWV is in progress, comparing GPS SIWV, lidar measurements, and high-resolution meteorological models (Meso-NH). A careful analysis of the post-fit residuals is needed to separate the tropospheric signal from multipath. The slant tropospheric delays are used to study the 3D heterogeneity of the troposphere. We have developed tomographic software to model the three-dimensional distribution of tropospheric water vapor from GPS data. The software is applied to the ESCOMPTE field experiment, a dense network of 17 dual-frequency GPS receivers operated in southern France. Three inversions have been successfully compared to three successive radiosonde launches. Good resolution is obtained up to heights of 3000 m.

  18. Are rapid population estimates accurate? A field trial of two different assessment methods.

    Science.gov (United States)

    Grais, Rebecca F; Coulombier, Denis; Ampuero, Julia; Lucas, Marcelino E S; Barretto, Avertino T; Jacquier, Guy; Diaz, Francisco; Balandine, Serge; Mahoudeau, Claude; Brown, Vincent

    2006-09-01

    Emergencies resulting in large-scale displacement often lead to populations resettling in areas where basic health services and sanitation are unavailable. To plan relief-related activities quickly, rapid population size estimates are needed. The currently recommended Quadrat method estimates total population by extrapolating the average population size living in square blocks of known area to the total site surface. An alternative approach, the T-Square, provides a population estimate based on analysis of the spatial distribution of housing units taken throughout a site. We field tested both methods and validated the results against a census in Esturro Bairro, Beira, Mozambique. Compared to the census (population: 9,479), the T-Square yielded a better population estimate (9,523) than the Quadrat method (7,681; 95% confidence interval: 6,160-9,201), but was more difficult for field survey teams to implement. Although applicable only to similar sites, several general conclusions can be drawn for emergency planning.
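
    The Quadrat arithmetic is straightforward; the sketch below extrapolates the mean count per sampled block to the site area and attaches a normal-approximation confidence interval of the kind quoted above. All names are ours, and the T-Square method, which requires point-to-housing-unit distance measurements, is not sketched here.

        import numpy as np

        def quadrat_population(block_counts, block_area_m2, site_area_m2):
            """Quadrat method: extrapolate the mean population per sampled
            block of known area to the whole site surface."""
            counts = np.asarray(block_counts, dtype=float)
            density = counts.mean() / block_area_m2          # persons per m^2
            estimate = density * site_area_m2
            # normal-approximation 95% CI on the extrapolated total
            se = counts.std(ddof=1) / np.sqrt(counts.size) / block_area_m2
            half_width = 1.96 * se * site_area_m2
            return estimate, (estimate - half_width, estimate + half_width)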

  19. Magnetic dipole moment estimation and compensation for an accurate attitude control in nano-satellite missions

    Science.gov (United States)

    Inamori, Takaya; Sako, Nobutada; Nakasuka, Shinichi

    2011-06-01

    Nano-satellites provide space access to a broader range of satellite developers and attract interest as an application of space development. Several new nano-satellite missions have recently been proposed with sophisticated objectives such as remote sensing and observation of astronomical objects. In these advanced missions, some nano-satellites must meet strict attitude requirements for obtaining scientific data or images. For LEO nano-satellites, the magnetic attitude disturbance dominates other environmental disturbances as a result of the small moment of inertia, and this effect should be cancelled for precise attitude control. This research focuses on how to cancel the magnetic disturbance in orbit. This paper presents a unique method to estimate and compensate for the residual magnetic moment, which interacts with the geomagnetic field and causes the magnetic disturbance. An extended Kalman filter is used to estimate the magnetic disturbance. For more practical consideration of magnetic disturbance compensation, the method has been examined on PRISM (Pico-satellite for Remote-sensing and Innovative Space Missions). The method will also be used for a nano-astrometry satellite mission. This paper concludes that magnetic disturbance estimation and compensation are useful for nano-satellite missions that require highly accurate attitude control.

  20. Accurate estimation of the RMS emittance from single current amplifier data

    International Nuclear Information System (INIS)

    Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.

    2002-01-01

    This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H− ion source.

  1. Accurate estimation of motion blur parameters in noisy remote sensing image

    Science.gov (United States)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and objects on the ground is one of the most common causes of remote sensing image degradation. It seriously weakens image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, and identifying the motion blur direction and length accurately is crucial for estimating the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of the algorithm.
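
    A minimal version of the direction-estimation step might look as follows, assuming a square grayscale image: compute the log-magnitude spectrum, apply the Radon transform (here via scikit-image), and pick the projection angle with maximal variance. The GrabCut segmentation and the whole-column length statistics of the paper are deliberately omitted from this sketch.

        import numpy as np
        from skimage.transform import radon   # requires scikit-image

        def blur_direction_degrees(img):
            """Estimate the motion-blur direction from the dark/light stripes
            of the log-magnitude spectrum via the Radon transform (the
            GrabCut segmentation step of the paper is omitted here)."""
            spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
            spec = (spec - spec.mean()) / spec.std()     # normalize contrast
            angles = np.arange(0.0, 180.0, 0.5)
            sinogram = radon(spec, theta=angles, circle=False)
            # stripes aligned with the projection rays maximize the variance
            # of the projected profile at that angle
            return angles[np.argmax(sinogram.var(axis=0))]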

  2. Characterization of a signal recording system for accurate velocity estimation using a VISAR

    Science.gov (United States)

    Rav, Amit; Joshi, K. D.; Singh, Kulbhushan; Kaushik, T. C.

    2018-02-01

    The linearity of a signal recording system (SRS) in time as well as in amplitude is important for the accurate estimation of the free-surface velocity history of a moving target during shock loading and unloading when measured using optical interferometers such as a velocity interferometer system for any reflector (VISAR). Signal recording being the first step in a long sequence of signal processes, the incorporation of errors due to nonlinearity and a low signal-to-noise ratio (SNR) affects the overall accuracy and precision of the velocity history estimate. In shock experiments, the short duration (a few µs) of loading/unloading, the reflectivity of the moving target surface, and the properties of the optical components control the amount of light input to the SRS of a VISAR, and this in turn affects the linearity and SNR of the overall measurement. These factors make it essential to develop in situ procedures to (i) minimize the effect of signal-induced noise and (ii) determine the linear region of operation of the SRS. Here we report a procedure for optimizing SRS parameters such as photodetector gain, optical power, and aperture, so as to achieve a linear region of operation with a high SNR. The linear region of operation so determined has been utilized successfully to estimate the temporal history of the free-surface velocity of the moving target in shock experiments.

  3. On Estimation and Testing for Pareto Tails

    Czech Academy of Sciences Publication Activity Database

    Jordanova, P.; Stehlík, M.; Fabián, Zdeněk; Střelec, L.

    2013-01-01

    Roč. 22, č. 1 (2013), s. 89-108 ISSN 0204-9805 Institutional support: RVO:67985807 Keywords : testing against heavy tails * asymptotic properties of estimators * point estimation Subject RIV: BB - Applied Statistics, Operational Research

  4. An Accurate Estimate of the Free Energy and Phase Diagram of All-DNA Bulk Fluids

    Directory of Open Access Journals (Sweden)

    Emanuele Locatelli

    2018-04-01

    Full Text Available We present a numerical study in which large-scale bulk simulations of self-assembled DNA constructs have been carried out with a realistic coarse-grained model. The investigation aims at obtaining a precise, albeit numerically demanding, estimate of the free energy for such systems. We then, in turn, use these accurate results to validate a recently proposed theoretical approach that builds on a liquid-state theory, the Wertheim theory, to compute the phase diagram of all-DNA fluids. This hybrid theoretical/numerical approach, based on the lowest-order virial expansion and on a nearest-neighbor DNA model, can provide, in an undemanding way, a parameter-free thermodynamic description of DNA associating fluids that is in semi-quantitative agreement with experiments. We show that the predictions of the scheme are as accurate as those obtained with more sophisticated methods. We also demonstrate the flexibility of the approach by incorporating non-trivial additional contributions that go beyond the nearest-neighbor model to compute the DNA hybridization free energy.

  5. SpotCaliper: fast wavelet-based spot detection with accurate size estimation.

    Science.gov (United States)

    Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael

    2016-04-15

    SpotCaliper is a novel wavelet-based image-analysis software tool providing a fast automatic detection scheme for circular patterns (spots), combined with precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user is allowed to edit the results by modifying the measurements (in a semi-automated way) and to extract data for further analysis. The fine-tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. Availability: http://bigwww.epfl.ch/algorithms/spotcaliper/ Contact: zsuzsanna.puspoki@epfl.ch Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background: The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results: This strategy is implemented under an infinite-sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent full maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design along with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion: In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
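
    Following the logic of the abstract (and of Achaz [1]), removing singletons changes only the normalizing constant of the Watterson estimator: under the infinite-sites model E[number of singletons] = θ and E[S] = θ·a_n, so the non-singleton segregating sites have expectation θ·(a_n − 1). A minimal sketch, assuming an unfolded frequency spectrum and illustrative argument names:

        def watterson_theta_no_singletons(derived_counts, n):
            """Watterson-type estimator of theta that ignores singletons.

            derived_counts: for each segregating site, the number of sampled
            sequences carrying the derived base (unfolded spectrum assumed).
            n: number of sequences sampled.
            Since E[singletons] = theta and E[S] = theta * a_n, the
            non-singleton sites satisfy E[S_shared] = theta * (a_n - 1).
            """
            a_n = sum(1.0 / i for i in range(1, n))   # harmonic number H_{n-1}
            s_shared = sum(1 for c in derived_counts if c >= 2)
            return s_shared / (a_n - 1.0)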

  7. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Science.gov (United States)

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
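
    The paper's exact formula is not quoted in the abstract, so the sketch below encodes only the Archimedes relation it invokes: for a spheroid, volume = (2/3) × projected area × minor-axis width (the same 2:3 sphere-in-cylinder ratio), with the 'unellipticity' coefficient treated here as a user-supplied correction factor, which is our assumption.

        def spheroid_biovolume(area_um2, width_um, unellipticity=1.0):
            """Biovolume from a 2D projection, assuming a convex solid of
            revolution: Archimedes' relation gives V = (2/3) * A * w for a
            spheroid, where A is the projected area and w the full
            minor-axis width (check: a sphere with A = pi*r^2, w = 2r gives
            (2/3)*pi*r^2*2r = (4/3)*pi*r^3). The 'unellipticity' factor is a
            placeholder for the paper's correction coefficient."""
            return (2.0 / 3.0) * area_um2 * width_um * unellipticity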

  8. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Directory of Open Access Journals (Sweden)

    Alessandro Saccà

    Full Text Available Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.

  9. Spot urine sodium measurements do not accurately estimate dietary sodium intake in chronic kidney disease

    Science.gov (United States)

    Dougher, Carly E; Rifkin, Dena E; Anderson, Cheryl AM; Smits, Gerard; Persky, Martha S; Block, Geoffrey A; Ix, Joachim H

    2016-01-01

    Background: Sodium intake influences blood pressure and proteinuria, yet the impact on long-term outcomes is uncertain in chronic kidney disease (CKD). Accurate assessment is essential for clinical and public policy recommendations, but few large-scale studies use 24-h urine collections. Recent studies that used spot urine sodium and associated estimating equations suggest that they may provide a suitable alternative, but their accuracy in patients with CKD is unknown. Objective: We compared the accuracy of 4 equations [the Nerbass, INTERSALT (International Cooperative Study on Salt, Other Factors, and Blood Pressure), Tanaka, and Kawasaki equations] that use spot urine sodium to estimate 24-h sodium excretion in patients with moderate to advanced CKD. Design: We evaluated the accuracy of spot urine sodium to predict mean 24-h urine sodium excretion over 9 mo in 129 participants with stage 3–4 CKD. Spot morning urine sodium was used in 4 estimating equations. Bias, precision, and accuracy were assessed and compared across each equation. Results: The mean age of the participants was 67 y, 52% were female, and the mean estimated glomerular filtration rate was 31 ± 9 mL · min⁻¹ · 1.73 m⁻². The mean ± SD number of 24-h urine collections was 3.5 ± 0.8/participant, and the mean 24-h sodium excretion was 168.2 ± 67.5 mmol/d. Although the Tanaka equation demonstrated the least bias (mean: −8.2 mmol/d), all 4 equations had poor precision and accuracy. The INTERSALT equation demonstrated the highest accuracy but derived an estimate within 30% of mean measured sodium excretion in only 57% of observations. Bland-Altman plots revealed systematic bias with the Nerbass, INTERSALT, and Tanaka equations, underestimating sodium excretion when intake was high. Conclusion: These findings do not support the use of spot urine specimens to estimate dietary sodium intake in patients with CKD and research studies enriched with patients with CKD. The parent data for this
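
    The agreement metrics named in the abstract can be sketched as follows; the exact definitions used in the paper may differ, so this is illustrative only (precision is taken here as the IQR of the differences, accuracy as the share of estimates within ±30% of the measured value).

        import numpy as np

        def bias_precision_accuracy(measured, estimated, tol=0.30):
            """Agreement metrics in the spirit of the abstract above: bias is
            the mean difference, precision the IQR of the differences, and
            accuracy the fraction of estimates within +/-30% of measurement."""
            m = np.asarray(measured, float)
            e = np.asarray(estimated, float)
            diff = e - m
            bias = diff.mean()
            precision = np.subtract(*np.percentile(diff, [75, 25]))   # IQR
            p30 = np.mean(np.abs(diff) <= tol * m)
            return bias, precision, p30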

  10. Single-cell entropy for accurate estimation of differentiation potency from a cell's transcriptome

    Science.gov (United States)

    Teschendorff, Andrew E.; Enver, Tariq

    2017-01-01

    The ability to quantify differentiation potential of single cells is a task of critical importance. Here we demonstrate, using over 7,000 single-cell RNA-Seq profiles, that differentiation potency of a single cell can be approximated by computing the signalling promiscuity, or entropy, of a cell's transcriptome in the context of an interaction network, without the need for feature selection. We show that signalling entropy provides a more accurate and robust potency estimate than other entropy-based measures, driven in part by a subtle positive correlation between the transcriptome and connectome. Signalling entropy identifies known cell subpopulations of varying potency and drug resistant cancer stem-cell phenotypes, including those derived from circulating tumour cells. It further reveals that expression heterogeneity within single-cell populations is regulated. In summary, signalling entropy allows in silico estimation of the differentiation potency and plasticity of single cells and bulk samples, providing a means to identify normal and cancer stem-cell phenotypes. PMID:28569836
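
    The abstract's construction can be read as the entropy rate of an expression-weighted random walk on the interaction network. The sketch below is an assumption-laden illustration of that reading, not the authors' published algorithm: the transition weighting, stationary-distribution computation, and lack of normalization are all our choices.

        import numpy as np

        def signalling_entropy(expr, adj):
            """One common formalization (assumption): a random walk on the
            interaction network whose transitions are weighted by the
            expression of the target gene; the entropy rate is the
            stationary-distribution-weighted mean of local entropies."""
            w = adj * expr[None, :]                # weight edges by target expression
            P = w / w.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
            vals, vecs = np.linalg.eig(P.T)
            pi = np.real(vecs[:, np.argmax(np.real(vals))])
            pi = np.abs(pi) / np.abs(pi).sum()     # stationary distribution
            with np.errstate(divide="ignore", invalid="ignore"):
                plogp = np.where(P > 0, P * np.log(P), 0.0)
            local = -plogp.sum(axis=1)             # local entropy per gene
            return float(pi @ local)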

  11. A practical way to estimate retail tobacco sales violation rates more accurately.

    Science.gov (United States)

    Levinson, Arnold H; Patnaik, Jennifer L

    2013-11-01

    U.S. states annually estimate retailer propensity to sell adolescents cigarettes, which is a violation of law, by staging a single purchase attempt among a random sample of tobacco businesses. The accuracy of single-visit estimates is unknown. We examined this question using a novel test-retest protocol. Supervised minors attempted to purchase cigarettes at all retail tobacco businesses located in 3 Colorado counties. The attempts observed federal standards: minors were aged 15-16 years, were nonsmokers, and were free of visible tattoos and piercings; they were allowed to enter stores alone or in pairs, to purchase a small item while asking for cigarettes, and to show or not show genuine identification (ID, e.g., a driver's license). Unlike the federal standard, stores received a second purchase attempt within a few days unless minors were firmly told not to return. Separate violation rates were calculated for first visits, second visits, and either visit. Eleven minors attempted to purchase cigarettes 1,079 times from 671 retail businesses. One sixth of first visits (16.8%) resulted in a violation; the rate was similar for second visits (15.7%). Considering either visit, 25.3% of businesses failed the test. Factors predictive of violation were whether clerks asked for ID, whether clerks closely examined IDs, and whether minors included snacks or soft drinks in their cigarette purchase attempts. A test-retest protocol for estimating underage cigarette sales detected half again as many businesses in violation as the federally approved one-test protocol. Federal policy makers should consider using the test-retest protocol to increase accuracy and awareness of widespread adolescent access to cigarettes through retail businesses.

  12. Modeling Site Heterogeneity with Posterior Mean Site Frequency Profiles Accelerates Accurate Phylogenomic Estimation.

    Science.gov (United States)

    Wang, Huai-Chun; Minh, Bui Quang; Susko, Edward; Roger, Andrew J

    2018-03-01

    Proteins have distinct structural and functional constraints at different sites that lead to site-specific preferences for particular amino acid residues as the sequences evolve. Heterogeneity in the amino acid substitution process between sites is not modeled by commonly used empirical amino acid exchange matrices. Such model misspecification can lead to artefacts in phylogenetic estimation such as long-branch attraction. Although sophisticated site-heterogeneous mixture models have been developed to address this problem in both Bayesian and maximum likelihood (ML) frameworks, their formidable computational time and memory usage severely limit their use in large phylogenomic analyses. Here we propose a posterior mean site frequency (PMSF) method as a rapid and efficient approximation to full empirical profile mixture models for ML analysis. The PMSF approach assigns to each site a conditional mean amino acid frequency profile calculated from a mixture model fitted to the data using a preliminary guide tree. These PMSF profiles can then be used for in-depth tree searching in place of the full mixture model. Compared with widely used empirical mixture models with k classes, our implementation of PMSF in IQ-TREE (http://www.iqtree.org) speeds up the computation by approximately k/1.5-fold and requires a small fraction of the RAM. Furthermore, this speedup allows, for the first time, full nonparametric bootstrap analyses to be conducted under complex site-heterogeneous models on large concatenated data matrices. Our simulations and empirical data analyses demonstrate that PMSF can effectively ameliorate long-branch attraction artefacts. In some empirical and simulation settings PMSF provided more accurate estimates of phylogenies than the mixture models from which they derive.
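
    In symbols (notation ours, inferred from the description above), the PMSF profile of site s is the posterior-weighted mean of the mixture-class frequency vectors π_k, with class posteriors computed from the class weights w_k and site likelihoods on the guide tree:

        \hat{\pi}_s = \sum_{k=1}^{K} \Pr(k \mid D_s, T_{\mathrm{guide}})\, \pi_k,
        \qquad
        \Pr(k \mid D_s, T_{\mathrm{guide}})
          = \frac{w_k \, L(D_s \mid \pi_k, T_{\mathrm{guide}})}
                 {\sum_{j=1}^{K} w_j \, L(D_s \mid \pi_j, T_{\mathrm{guide}})}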

  13. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    Science.gov (United States)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive, in situ tool for meat sample testing that could provide an accurate indication of the storage time of meat would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified into categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12, or 14 days old) using linear discriminant analysis and cross-validation. Contrary to other studies, where samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any report in the literature.
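
    The classification step maps directly onto standard tooling; a minimal sketch follows (spectral preprocessing such as baseline correction and smoothing is omitted, and the array names are ours).

        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        # X: preprocessed Raman spectra, one row per measurement (placeholder)
        # y: storage age labels in 2-day steps, e.g. 0, 2, ..., 14 (placeholder)
        def classify_age(X, y, folds=5):
            """Linear discriminant analysis with cross-validation, mirroring
            the classification step described in the abstract above."""
            lda = LinearDiscriminantAnalysis()
            return cross_val_score(lda, X, y, cv=folds).mean()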

  14. How accurately can we estimate energetic costs in a marine top predator, the king penguin?

    Science.gov (United States)

    Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J

    2007-01-01

    King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate-rate of oxygen consumption (fH - VO2) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present fH - VO2 equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the fH - VO2 technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of VO2 from published, field fH data. The major conclusions from the present study are: (1) in contrast to that for walking, the fH - VO2 relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(VO2) = -0.279 + 1.24 log(fH) + 0.0237 t - 0.0157 log(fH) t, derived in a previous study, is the most suitable equation presently available for estimating VO2 in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an fH - VO2 relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of fH - VO2 prediction equations, is explained.
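
    Since prediction equation (1) is quoted in full above, it can be evaluated directly. Base-10 logarithms are assumed, and the abstract does not define the units of fH, t, or VO2, so the output should be treated as illustrative only.

        import math

        def predict_vo2(f_h, t):
            """Equation (1) quoted in the abstract above:
            log(VO2) = -0.279 + 1.24 log(fH) + 0.0237 t - 0.0157 log(fH) t.
            Log base 10 is an assumption; units are not given in the abstract."""
            log_fh = math.log10(f_h)
            return 10 ** (-0.279 + 1.24 * log_fh + 0.0237 * t
                          - 0.0157 * log_fh * t)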

  15. Fast and accurate phylogenetic reconstruction from high-resolution whole-genome data and a novel robustness estimator.

    Science.gov (United States)

    Lin, Y; Rajan, V; Moret, B M E

    2011-09-01

    The rapid accumulation of whole-genome data has renewed interest in the study of genomic rearrangements. Comparative genomics, evolutionary biology, and cancer research all require models and algorithms to elucidate the mechanisms, history, and consequences of these rearrangements. However, even simple models lead to NP-hard problems, particularly in the area of phylogenetic analysis. Current approaches are limited to small collections of genomes and low-resolution data (typically a few hundred syntenic blocks). Moreover, whereas phylogenetic analyses from sequence data are deemed incomplete unless bootstrapping scores (a measure of confidence) are given for each tree edge, no equivalent to bootstrapping exists for rearrangement-based phylogenetic analysis. We describe a fast and accurate algorithm for rearrangement analysis that scales up, in both time and accuracy, to modern high-resolution genomic data. We also describe a novel approach to estimate the robustness of results, an equivalent to the bootstrapping analysis used in sequence-based phylogenetic reconstruction. We present the results of extensive testing on both simulated and real data showing that our algorithm returns very accurate results, while scaling linearly with the size of the genomes and cubically with their number. We also present extensive experimental results showing that our approach to robustness testing provides excellent estimates of confidence, which, moreover, can be tuned to trade off thresholds between false positives and false negatives. Together, these two novel approaches enable us to attack heretofore intractable problems, such as phylogenetic inference for high-resolution vertebrate genomes, as we demonstrate on a set of six vertebrate genomes with 8,380 syntenic blocks. A copy of the software is available on demand.

  16. On Preliminary Test Estimator for Median

    OpenAIRE

    Okazaki, Takeo; 岡崎, 威生

    1990-01-01

    The purpose of the present paper is to discuss the estimation of the median with a preliminary test. Two procedures are presented: one uses the median test and the other the Wilcoxon two-sample test for the preliminary test. Sections 3 and 4 give mathematical formulations of their properties, including mean square errors for one specified case. Section 5 discusses the optimal significance levels of the preliminary test and proposes numerical values obtained by the Monte Carlo method. In addition to mea...

  17. Towards a less costly but accurate test of gastric emptying and small bowel transit

    Energy Technology Data Exchange (ETDEWEB)

    Camilleri, M.; Zinsmeister, A.R.; Greydanus, M.P.; Brown, M.L.; Proano, M. (Mayo Clinic and Foundation, Rochester, MN (USA))

    1991-05-01

    Our aim is to develop a less costly but accurate test of stomach emptying and small bowel transit by utilizing selected scintigraphic observations 1-6 hr after ingestion of a radiolabeled solid meal. These selected data were compared with more detailed analyses that require multiple scans and labor-intensive technical support. A logistic discriminant analysis was used to estimate the sensitivity and specificity of selected summaries of scintigraphic transit measurements. We studied 14 patients with motility disorders (eight neuropathic and six myopathic, confirmed by standard gastrointestinal manometry) and 37 healthy subjects. The patient group had abnormal gastric emptying (GE) and small bowel transit time (SBTT). The proportion of radiolabel retained in the stomach from 2 to 4 hr (GE 2 hr, GE 3 hr, GE 4 hr), as well as the proportion filling the colon at 4 and 6 hr (CF 4 hr, CF 6 hr) were individually able to differentiate health from disease (P less than 0.05 for each). From the logistic discriminant model, an estimated sensitivity of 93% resulted in similar specificities for detailed and selected transit parameters for gastric emptying (range: 62-70%). Similarly, combining selected observations, such as GE 4 hr with CF 6 hr, had a specificity of 76%, which was similar to the specificity of combinations of more detailed analyses. Based on the present studies and future confirmation in a larger number of patients, including those with less severe motility disorders, the 2-, 4-, and 6-hr scans with quantitation of proportions of counts in stomach and colon should provide a useful, relatively inexpensive strategy to identify and monitor motility disorders in clinical and epidemiologic studies.
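    A minimal analogue of the reported analysis, assuming scikit-learn and wholly invented feature values: a logistic model separating health from disease using only the selected summaries GE 4 hr (gastric retention) and CF 6 hr (colonic filling).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training rows: (GE 4 hr gastric retention, CF 6 hr colonic
# filling), both as proportions of the ingested radiolabel.
X = np.array([
    [0.05, 0.80], [0.10, 0.70], [0.08, 0.75],   # healthy-like transit
    [0.55, 0.20], [0.60, 0.10], [0.45, 0.25],   # delayed transit
])
y = np.array([0, 0, 0, 1, 1, 1])                # 0 = healthy, 1 = disorder

model = LogisticRegression().fit(X, y)

# Probability of a motility disorder for a new patient with 40% gastric
# retention at 4 hr and 30% colonic filling at 6 hr:
print(model.predict_proba([[0.40, 0.30]])[0, 1])
```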

  18. Plant DNA barcodes can accurately estimate species richness in poorly known floras.

    Directory of Open Access Journals (Sweden)

    Craig Costion

    Full Text Available BACKGROUND: Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. METHODOLOGY/PRINCIPAL FINDINGS: Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation, observed in some angiosperm families as an inversion, that obscures the monophyly of species. CONCLUSIONS/SIGNIFICANCE: We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways.

  19. Plant DNA barcodes can accurately estimate species richness in poorly known floras.

    Science.gov (United States)

    Costion, Craig; Ford, Andrew; Cross, Hugh; Crayn, Darren; Harrington, Mark; Lowe, Andrew

    2011-01-01

    Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation, observed in some angiosperm families as an inversion, that obscures the monophyly of species. We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways.
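    One way to see how barcodes can yield a richness estimate without discriminating every species: collapse aligned sequences into clusters at a fixed divergence threshold and count the clusters. The sketch below is a naive greedy version of this idea with toy sequences; it is not the clustering method used in the study.

```python
def p_distance(a, b):
    """Proportion of mismatched sites between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def estimate_richness(sequences, threshold):
    """Greedy clustering: a sequence joins the first cluster whose
    representative lies within the divergence threshold; the number
    of clusters is taken as the species richness estimate."""
    representatives = []
    for seq in sequences:
        if not any(p_distance(seq, rep) <= threshold for rep in representatives):
            representatives.append(seq)
    return len(representatives)

# Toy aligned haplotypes (hypothetical):
seqs = ["ACGTACGTAC", "ACGTACGTAC", "ACGAACGTAC", "TTGTACGTAC"]
print(estimate_richness(seqs, threshold=0.1))  # -> 2 clusters
```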

  20. Reliability Estimation Based Upon Test Plan Results

    National Research Council Canada - National Science Library

    Read, Robert

    1997-01-01

    The report contains a brief summary of aspects of the Maximus reliability point and interval estimation technique as it has been applied to the reliability of a device whose surveillance tests contain...

  1. Challenges associated with drunk driving measurement: combining police and self-reported data to estimate an accurate prevalence in Brazil.

    Science.gov (United States)

    Sousa, Tanara; Lunnen, Jeffrey C; Gonçalves, Veralice; Schmitz, Aurinez; Pasa, Graciela; Bastos, Tamires; Sripad, Pooja; Chandran, Aruna; Pechansky, Flavio

    2013-12-01

    Drunk driving is an important risk factor for road traffic crashes, injuries and deaths. After June 2008, all drivers in Brazil were subject to a "Zero Tolerance Law" with a breath alcohol concentration limit of 0.1 mg/L of air. However, a loophole in this law enabled drivers to refuse breath or blood alcohol testing, as it might be self-incriminating. The reported prevalence of drunk driving is therefore likely a gross underestimate in many cities. Our aim was to compare the prevalence of drunk driving gathered from police reports to the prevalence gathered from self-reported questionnaires administered at police sobriety roadblocks in two Brazilian capital cities, and to estimate a more accurate prevalence of drunk driving utilizing three correction techniques based upon information from those questionnaires. In August 2011 and January-February 2012, researchers from the Centre for Drug and Alcohol Research at the Universidade Federal do Rio Grande do Sul administered a roadside interview on drunk driving practices to 805 voluntary participants in the Brazilian capital cities of Palmas and Teresina. Three techniques, which include measures such as the number of persons reporting alcohol consumption in the last six hours but who had refused breath testing, were used to estimate the prevalence of drunk driving. The prevalence of persons testing positive for alcohol on their breath was 8.8% and 5.0% in Palmas and Teresina, respectively. Utilizing a correction technique, we calculated that a more accurate prevalence in these sites may be as high as 28.2% and 28.7%. In both cities, about 60% of drivers who self-reported having drunk within six hours of being stopped by the police either refused to perform breathalyser testing, fled the sobriety roadblock, or were not offered the test, compared to about 30% of drivers that said they had not been drinking. Despite the reduction of the legal limit for drunk driving stipulated by the "Zero Tolerance Law," loopholes in the legislation permit many
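    The simplest of the correction ideas described above reduces to one line of arithmetic: treat self-reported drinkers who were never breath-tested as presumptive positives. The sketch below uses illustrative counts, not the study's raw data.

```python
def corrected_prevalence(n_stopped, n_tested_positive, n_untested_drinkers):
    """Correction in the spirit described above: drivers who self-reported
    drinking in the previous six hours but were never breath-tested
    (refused, fled, or not offered the test) are counted as presumptive
    positives."""
    return (n_tested_positive + n_untested_drinkers) / n_stopped

# Illustrative figures only, not the study's raw counts:
print(corrected_prevalence(n_stopped=400, n_tested_positive=35,
                           n_untested_drinkers=78))  # -> 0.2825
```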

  2. The importance of accurate meteorological input fields and accurate planetary boundary layer parameterizations, tested against ETEX-1

    International Nuclear Information System (INIS)

    Brandt, J.; Ebel, A.; Elbern, H.; Jakobs, H.; Memmesheimer, M.; Mikkelsen, T.; Thykier-Nielsen, S.; Zlatev, Z.

    1997-01-01

    Atmospheric transport of air pollutants is, in principle, a well understood process. If information about the state of the atmosphere is given in all details (infinitely accurate information about wind speed, etc.) and infinitely fast computers are available then the advection equation could in principle be solved exactly. This is, however, not the case: discretization of the equations and input data introduces some uncertainties and errors in the results. Therefore many different issues have to be carefully studied in order to diminish these uncertainties and to develop an accurate transport model. Some of these are e.g. the numerical treatment of the transport equation, accuracy of the mean meteorological input fields and parameterizations of sub-grid scale phenomena (e.g. parameterizations of the 2nd and higher order turbulence terms in order to reach closure in the perturbation equation). A tracer model for studying transport and dispersion of air pollution caused by a single but strong source is under development. The model simulations from the first ETEX release illustrate the differences caused by using various analyzed fields directly in the tracer model or using a meteorological driver. Also different parameterizations of the mixing height and the vertical exchange are compared. (author)

  3. Accurate estimation of short read mapping quality for next-generation genome sequencing

    Science.gov (United States)

    Ruffalo, Matthew; Koyutürk, Mehmet; Ray, Soumya; LaFramboise, Thomas

    2012-01-01

    Motivation: Several software tools specialize in the alignment of short next-generation sequencing reads to a reference sequence. Some of these tools report a mapping quality score for each alignment—in principle, this quality score tells researchers the likelihood that the alignment is correct. However, the reported mapping quality often correlates weakly with actual accuracy and the qualities of many mappings are underestimated, encouraging the researchers to discard correct mappings. Further, these low-quality mappings tend to correlate with variations in the genome (both single nucleotide and structural), and such mappings are important in accurately identifying genomic variants. Approach: We develop a machine learning tool, LoQuM (LOgistic regression tool for calibrating the Quality of short read mappings), to assign reliable mapping quality scores to mappings of Illumina reads returned by any alignment tool. LoQuM uses statistics on the read (base quality scores reported by the sequencer) and the alignment (number of matches, mismatches and deletions, mapping quality score returned by the alignment tool, if available, and number of mappings) as features for classification and uses simulated reads to learn a logistic regression model that relates these features to actual mapping quality. Results: We test the predictions of LoQuM on an independent dataset generated by the ART short read simulation software and observe that LoQuM can ‘resurrect’ many mappings that are assigned zero quality scores by the alignment tools and are therefore likely to be discarded by researchers. We also observe that the recalibration of mapping quality scores greatly enhances the precision of called single nucleotide polymorphisms. Availability: LoQuM is available as open source at http://compbio.case.edu/loqum/. Contact: matthew.ruffalo@case.edu. PMID:22962451
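    A minimal sketch of the recalibration idea, assuming scikit-learn and invented training rows: fit a logistic model on alignment features with ground truth from simulated reads, then convert the predicted probability of correctness into a Phred-scaled quality. The feature set here is a small illustrative subset of the features listed above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training rows, one per mapping: [mean base quality, matches,
# mismatches, aligner-reported mapping quality].
X_train = np.array([
    [38.0, 100, 0, 60], [35.0, 97, 3, 40], [20.0, 80, 20, 0],
    [37.0, 99, 1, 0],   [22.0, 85, 15, 5], [36.0, 98, 2, 55],
])
y_train = np.array([1, 1, 0, 1, 0, 1])  # ground truth from simulated reads

clf = LogisticRegression().fit(X_train, y_train)

# Recalibrate a mapping the aligner scored as quality 0: convert the
# predicted probability of correctness to a Phred-scaled value.
p_correct = clf.predict_proba([[37.0, 99, 1, 0]])[0, 1]
print(-10 * np.log10(max(1.0 - p_correct, 1e-10)))
```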

  4. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  5. Accurate anisotropic material modelling using only tensile tests for hot and cold forming

    Science.gov (United States)

    Abspoel, M.; Scholting, M. E.; Lansbergen, M.; Neelis, B. M.

    2017-09-01

    Accurate material data for simulations require a lot of effort. Advanced yield loci require many different kinds of tests and a Forming Limit Curve (FLC) needs a large number of samples. Many people use simple material models to reduce the effort of testing, however some models are either not accurate enough (e.g. Hill’48), or do not describe new types of materials (e.g. Keeler). Advanced yield loci describe the anisotropic materials behaviour accurately, but are not widely adopted because of the specialized tests, and data post-processing is a hurdle for many. To overcome these issues, correlations between the advanced yield locus points (biaxial, plane strain and shear) and mechanical properties have been investigated. This resulted in accurate prediction of the advanced stress points using only Rm, Ag and r-values in three directions from which a Vegter yield locus can be constructed with low effort. FLCs can be predicted with the equations of Abspoel & Scholting depending on total elongation A80, r-value and thickness. Both predictive methods are initially developed for steel, aluminium and stainless steel (BCC and FCC materials). The validity of the predicted Vegter yield locus is investigated with simulation and measurements on both hot and cold formed parts and compared with Hill’48. An adapted specimen geometry, to ensure a homogeneous temperature distribution in the Gleeble hot tensile test, was used to measure the mechanical properties needed to predict a hot Vegter yield locus. Since testing stress states other than uniaxial is very challenging for hot material, the predicted yield locus adds considerable value. For the hot FLC, an A80 sample with a homogeneous temperature distribution is needed, which, due to size limitations, is not possible in the Gleeble tensile tester. Heating the sample in an industrial type furnace and tensile testing it in a dedicated device is a good alternative to determine the necessary parameters for the FLC.

  6. Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.

    Science.gov (United States)

    Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke

    2013-04-01

    In this abstract, a study on the influence of wind on modelling the PV module temperature is presented. This study is carried out in the framework of the PV-Alps INTERREG project, in which the potential of different photovoltaic technologies is analysed for alpine regions. The PV module temperature depends on different parameters, such as ambient temperature, irradiance, wind speed and PV technology [1]. In most models, a very simple approach is used, where the PV module temperature is calculated from NOCT (nominal operating cell temperature), ambient temperature and irradiance alone [2]. In this study, the influence of wind speed on the PV module temperature was investigated. First, different approaches suggested by various authors were tested [1], [2], [3], [4], [5]. For our analysis, temperature, irradiance and wind data from a PV test facility at the airport Bolzano (South Tyrol, Italy) from the EURAC Institute of Renewable Energies were used. The PV module temperature was calculated with different models and compared to the measured PV module temperature at the single panels. The best results were achieved with the approach suggested by Skoplaki et al. [1]. Preliminary results indicate that for all PV technologies which were tested (monocrystalline, amorphous, microcrystalline and polycrystalline silicon and cadmium telluride), modelled and measured PV module temperatures show a higher agreement (RMSE about 3-4 K) compared to standard approaches in which wind is not considered. For further investigation, the in-situ measured wind velocities were replaced with wind data from numerical weather forecast models (ECMWF, reanalysis fields). Our results show that the PV module temperature calculated with wind data from ECMWF is still in very good agreement with the measured one (R² > 0.9 for all technologies). Compared to the previous analysis, we find comparable mean values and an increasing standard deviation. These results open a promising approach for PV module
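    For orientation, the sketch below contrasts the standard wind-free NOCT model with a wind-aware estimate in the spirit of Skoplaki et al.; the wind-coupling coefficients are illustrative placeholders rather than the published values.

```python
def module_temp_noct(t_amb, irradiance, noct=45.0):
    """Standard wind-free estimate: NOCT is defined at 800 W/m^2
    irradiance and 20 degC ambient temperature."""
    return t_amb + (noct - 20.0) / 800.0 * irradiance

def module_temp_wind(t_amb, irradiance, wind_speed):
    """Wind-aware estimate in the spirit of Skoplaki et al.: the
    thermal coupling to irradiance weakens as wind speed rises.
    The coefficients are illustrative placeholders."""
    coupling = 0.32 / (8.91 + 2.0 * wind_speed)
    return t_amb + coupling * irradiance

print(module_temp_noct(25.0, 1000.0))        # ~56 degC, wind ignored
print(module_temp_wind(25.0, 1000.0, 5.0))   # cooler, wind considered
```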

  7. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer

    2016-01-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using...

  8. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    Directory of Open Access Journals (Sweden)

    Manuel Gil

    2014-09-01

    Full Text Available Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  9. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    Science.gov (United States)

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  10. Multifrequency Excitation Method for Rapid and Accurate Dynamic Test of Micromachined Gyroscope Chips

    Directory of Open Access Journals (Sweden)

    Yan Deng

    2014-10-01

    Full Text Available A novel multifrequency excitation (MFE) method is proposed to realize rapid and accurate dynamic testing of micromachined gyroscope chips. Compared with the traditional sweep-frequency excitation (SFE) method, the time for testing one chip under four modes at a 1-Hz frequency resolution and 600-Hz bandwidth was dramatically reduced from 10 min to 6 s. A multifrequency signal with an equal amplitude and initial linear-phase-difference distribution was generated to ensure test repeatability and accuracy. The current test system based on LabVIEW using the SFE method was modified to use the MFE method without any hardware changes. The experimental results verified that the MFE method can be an ideal solution for large-scale dynamic testing of gyroscope chips and gyroscopes.
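    A sketch of generating such an equal-amplitude multifrequency excitation with NumPy. The quadratic (Schroeder-style) phase schedule below is an assumed stand-in for the paper's initial linear-phase-difference distribution; it serves the same purpose of keeping the summed signal's crest factor low.

```python
import numpy as np

def multifrequency_signal(f_start, f_stop, df, fs, duration):
    """Sum of equal-amplitude sines on a df grid. Schroeder phases keep
    the crest factor of the summed signal low (assumed stand-in for the
    paper's linear-phase-difference distribution)."""
    t = np.arange(0.0, duration, 1.0 / fs)
    freqs = np.arange(f_start, f_stop + df, df)
    n = len(freqs)
    phases = np.pi * np.arange(n) * (np.arange(n) + 1) / n
    signal = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
    return t, signal / n

# One excitation covering a 600-Hz band at 1-Hz resolution, as above:
t, s = multifrequency_signal(1.0, 600.0, 1.0, fs=10_000.0, duration=1.0)
```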

  11. Development of Star Tracker System for Accurate Estimation of Spacecraft Attitude

    Science.gov (United States)

    2009-12-01

    For a high-cost spacecraft with accurate pointing requirements, the use of a star tracker is the preferred method for attitude determination. The ... solutions, however there are certain costs with using this algorithm. There are significantly more features a triangle can provide when compared to an ... to the other. The non-rotating geocentric equatorial frame provides an inertial frame for the two-body problem of a satellite in orbit.

  12. Improved Patient Size Estimates for Accurate Dose Calculations in Abdomen Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang-Lae [Yonsei University, Wonju (Korea, Republic of)

    2017-07-15

    The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient doses for different human body sizes because it relies on a cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantom. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with the results of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to the effective diameter of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, while the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
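    As an illustration of a size metric computed from a reconstructed axial image, the sketch below implements the water-equivalent diameter in the style of AAPM Report 220, which combines the patient area with its mean attenuation; it is an assumed analogue of the paper's effective-diameter calculations, not its exact procedure.

```python
import numpy as np

def water_equivalent_diameter(hu_region, pixel_area_mm2):
    """Water-equivalent diameter of a patient region in an axial CT
    slice: scale the region area by its mean attenuation relative to
    water (AAPM-220-style formula)."""
    area_water = (hu_region.mean() / 1000.0 + 1.0) * hu_region.size * pixel_area_mm2
    return 2.0 * np.sqrt(area_water / np.pi)  # mm

# Toy check: a uniform water-like region (0 HU) of 300 cm^2 gives the
# diameter of the equivalent water cylinder, ~195 mm.
roi = np.zeros((300, 100))
print(water_equivalent_diameter(roi, pixel_area_mm2=1.0))
```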

  13. An accurate estimation and optimization of bottom hole back pressure in managed pressure drilling

    Directory of Open Access Journals (Sweden)

    Boniface Aleruchi ORIJI

    2017-06-01

    Full Text Available Managed Pressure Drilling (MPD) utilizes a method of applying back pressure to compensate for wellbore pressure losses during drilling. Using a single rheological (Annular Frictional Pressure Losses, AFPL) model to estimate the backpressure in MPD operations for all sections of the well may not yield the best result. Each section of the hole was therefore treated independently in this study as data from a case study well were used. As the backpressure is a function of hydrostatic pressure, pore pressure and AFPL, three AFPL models (Bingham plastic, Power law and Herschel Bulkley models) were utilized in estimating the backpressure. The estimated backpressure values were compared to the actual field backpressure values in order to obtain the optimum backpressure at the various well depths. The backpressure values estimated by utilizing the power law AFPL model gave the best result for the 12 1/4" hole section (average error of 1.855%) while the back pressures estimated by utilizing the Herschel Bulkley AFPL model gave the best result for the 8 1/2" hole section (average error of 12.3%). The study showed that for hole sections of turbulent annular flow, the power law AFPL model fits best for estimating the required backpressure while for hole sections of laminar annular flow, the Herschel Bulkley AFPL model fits best for estimating the required backpressure.
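    The balance the abstract describes can be sketched directly: the surface backpressure is whatever remains after hydrostatic head and annular friction are subtracted from the target bottomhole pressure. The friction term below uses a standard field-units laminar approximation for a Bingham-plastic mud; all numbers are illustrative.

```python
def hydrostatic_psi(mud_weight_ppg, tvd_ft):
    """Hydrostatic pressure in oilfield units: 0.052 * MW * TVD."""
    return 0.052 * mud_weight_ppg * tvd_ft

def afpl_bingham_laminar_psi(pv_cp, yp_lbf100ft2, v_fts, hole_in, pipe_in, length_ft):
    """Laminar annular pressure loss for a Bingham-plastic mud in field
    units (velocity in ft/s, diameters in inches, result in psi)."""
    gap = hole_in - pipe_in
    per_ft = pv_cp * v_fts / (1000.0 * gap ** 2) + yp_lbf100ft2 / (200.0 * gap)
    return per_ft * length_ft

def required_backpressure_psi(target_bhp_psi, mud_weight_ppg, tvd_ft, afpl_psi):
    """While circulating: BP = BHP_target - hydrostatic - AFPL."""
    return target_bhp_psi - hydrostatic_psi(mud_weight_ppg, tvd_ft) - afpl_psi

# Illustrative numbers only:
afpl = afpl_bingham_laminar_psi(20.0, 15.0, 2.5, 8.5, 5.0, 10_000.0)
print(required_backpressure_psi(6_500.0, 11.0, 10_000.0, afpl))  # ~525 psi
```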

  14. Existing equations to estimate lean body mass are not accurate in the critically ill: Results of a multicenter observational study.

    Science.gov (United States)

    Moisey, Lesley L; Mourtzakis, Marina; Kozar, Rosemary A; Compher, Charlene; Heyland, Daren K

    2017-12-01

    Lean body mass (LBM), quantified using computed tomography (CT), is a significant predictor of clinical outcomes in the critically ill. While CT analysis is precise and accurate in measuring body composition, it may not be practical or readily accessible to all patients in the intensive care unit (ICU). Here, we assessed the agreement between LBM measured by CT and four previously developed equations that predict LBM using variables (i.e. age, sex, weight, height) commonly recorded in the ICU. LBM was calculated in 327 critically ill adults using CT scans, taken at ICU admission, and 4 predictive equations (E1-4) that were derived from non-critically ill adults, since there are no ICU-specific equations. Agreement was assessed using paired t-tests, Pearson's correlation coefficients and Bland-Altman plots. Median LBM calculated by CT was 45 kg (IQR 37-53 kg) and was significantly different (p < 0.05) from the LBM estimated by each of the predictive equations (error ranged from 7.5 to 9.9 kg), suggesting insufficient agreement. Our data indicate that a large bias is present between the calculation of LBM by CT imaging and the predictive equations that have been compared here. This underscores the need for future research toward the development of ICU-specific equations that reliably estimate LBM in a practical and cost-effective manner. Copyright © 2016 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  15. Fast, accurate, and robust frequency offset estimation based on modified adaptive Kalman filter in coherent optical communication system

    Science.gov (United States)

    Yang, Yanfu; Xiang, Qian; Zhang, Qun; Zhou, Zhongqing; Jiang, Wen; He, Qianwen; Yao, Yong

    2017-09-01

    We propose a joint estimation scheme for fast, accurate, and robust frequency offset (FO) estimation along with phase estimation based on a modified adaptive Kalman filter (MAKF). The scheme consists of three key modules: an extended Kalman filter (EKF), a lock detector, and FO cycle slip recovery. The EKF module estimates the time-varying phase induced by both FO and laser phase noise. The lock detector module decides between acquisition mode and tracking mode and consequently sets the EKF tuning parameter in an adaptive manner. The third module can detect possible cycle slips in the case of large FO and make the proper correction. Based on the simulation and experimental results, the proposed MAKF has shown excellent estimation performance featuring high accuracy, fast convergence, and the capability of cycle slip recovery.

  16. Photogrammetry and Laser Imagery Tests for Tank Waste Volume Estimates: Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Field, Jim G. [Washington River Protection Solutions, LLC, Richland, WA (United States)

    2013-03-27

    Feasibility tests were conducted using photogrammetry and laser technologies to estimate the volume of waste in a tank. These technologies were compared with video Camera/CAD Modeling System (CCMS) estimates; the current method used for post-retrieval waste volume estimates. This report summarizes test results and presents recommendations for further development and deployment of technologies to provide more accurate and faster waste volume estimates in support of tank retrieval and closure.

  17. Photogrammetry and Laser Imagery Tests for Tank Waste Volume Estimates: Summary Report

    International Nuclear Information System (INIS)

    Field, Jim G.

    2013-01-01

    Feasibility tests were conducted using photogrammetry and laser technologies to estimate the volume of waste in a tank. These technologies were compared with video Camera/CAD Modeling System (CCMS) estimates; the current method used for post-retrieval waste volume estimates. This report summarizes test results and presents recommendations for further development and deployment of technologies to provide more accurate and faster waste volume estimates in support of tank retrieval and closure

  18. Estimating Sample Size for Usability Testing

    Directory of Open Access Journals (Sweden)

    Alex Cazañas

    2017-02-01

    Full Text Available One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the "magic number 5". The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
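    The rule being tested comes from the cumulative problem-discovery model 1 - (1 - p)^n, where p is the probability that a single user uncovers a given problem. A short sketch shows how quickly the required sample size grows once p drops below the commonly cited ~0.31:

```python
import math

def sample_size(p, coverage):
    """Smallest n with 1 - (1 - p)**n >= coverage."""
    return math.ceil(math.log(1.0 - coverage) / math.log(1.0 - p))

print(sample_size(0.31, 0.80))  # -> 5, the 'magic number'
print(sample_size(0.10, 0.80))  # -> 16 for harder-to-find problems
```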

  19. Accurate and robust phylogeny estimation based on profile distances: a study of the Chlorophyceae (Chlorophyta

    Directory of Open Access Journals (Sweden)

    Rahmann Sven

    2004-06-01

    Full Text Available Background: In phylogenetic analysis we face the problem that several subclade topologies are known or easily inferred and well supported by bootstrap analysis, but basal branching patterns cannot be unambiguously estimated by the usual methods (maximum parsimony (MP), neighbor-joining (NJ), or maximum likelihood (ML)), nor are they well supported. We represent each subclade by a sequence profile and estimate evolutionary distances between profiles to obtain a matrix of distances between subclades. Results: Our estimator of profile distances generalizes the maximum likelihood estimator of sequence distances. The basal branching pattern can be estimated by any distance-based method, such as neighbor-joining. Our method (profile neighbor-joining, PNJ) then inherits the accuracy and robustness of profiles and the time efficiency of neighbor-joining. Conclusions: Phylogenetic analysis of Chlorophyceae with traditional methods (MP, NJ, ML and MrBayes) reveals seven well supported subclades, but the methods disagree on the basal branching pattern. The tree reconstructed by our method is better supported and can be confirmed by known morphological characters. Moreover the accuracy is significantly improved as shown by parametric bootstrap.

  20. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    Science.gov (United States)

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  1. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre...

  2. Parameter estimation and testing of hypotheses

    International Nuclear Information System (INIS)

    Fruhwirth, R.

    1996-01-01

    This lecture presents the basic mathematical ideas underlying the concept of a random variable and the construction and analysis of estimators and test statistics. The material presented is based mainly on four books given in the references: the general exposition of estimators and test statistics follows Kendall and Stuart, which is a comprehensive review of the field; the book by Eadie et al. contains selected topics of particular interest to experimental physicists and a host of illuminating examples from experimental high-energy physics; for the presentation of numerical procedures, the Press et al. and Thisted books have been used. The last section deals with estimation in dynamic systems. In most books the Kalman filter is presented in a Bayesian framework, often obscured by cumbrous notation. In this lecture, the link to classical least-squares estimators and regression models is stressed with the aim of facilitating access to this less familiar topic. References are given for specific applications to track and vertex fitting and for extended exposition of these topics. In the appendix, the link between Bayesian decision rules and feed-forward neural networks is presented. (J.S.). 10 refs., 5 figs., 1 appendix

  3. How to efficiently obtain accurate estimates of flower visitation rates by pollinators

    NARCIS (Netherlands)

    Fijen, Thijs P.M.; Kleijn, David

    2017-01-01

    Regional declines in insect pollinators have raised concerns about crop pollination. Many pollinator studies use visitation rate (pollinators/time) as a proxy for the quality of crop pollination. Visitation rate estimates are based on observation durations that vary significantly between studies.

  4. Summary on Bayes estimation and hypothesis testing

    Directory of Open Access Journals (Sweden)

    D. J. de Waal

    1988-03-01

    Full Text Available Although Bayes' theorem was published in 1764, it is only recently that Bayesian procedures have been used in practice in statistical analyses. Many developments have taken place, and are still taking place, in the areas of decision theory and group decision making. Two aspects, namely estimation and tests of hypotheses, will be considered. These are the aspects of statistical inference with which mathematical statistics is mainly concerned.

  5. Accurate estimation of influenza epidemics using Google search data via ARGO.

    Science.gov (United States)

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
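    A minimal sketch of the ARGO idea, assuming scikit-learn and toy data: regress the current influenza level on its own recent lags plus search-term volumes, with L1 regularization selecting informative terms. The published ARGO uses 52 autoregressive lags, on the order of a hundred search terms, and dynamic retraining; none of that detail is reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_argo_style(ili, searches, ar_lags=3):
    """Regress this week's influenza-like-illness (ILI) level on its own
    recent lags plus concurrent search-term volumes, with an L1 penalty
    selecting informative terms. Toy dimensions throughout."""
    rows = [np.concatenate([ili[t - ar_lags:t], searches[t]])
            for t in range(ar_lags, len(ili))]
    return Lasso(alpha=0.1).fit(np.array(rows), ili[ar_lags:])

rng = np.random.default_rng(0)
ili = np.abs(rng.normal(2.0, 0.5, 30))            # 30 weeks of ILI levels
searches = np.abs(rng.normal(1.0, 0.3, (30, 5)))  # 5 search terms per week
model = fit_argo_style(ili, searches)
```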

  6. A Time--Independent Born--Oppenheimer Approximation with Exponentially Accurate Error Estimates

    CERN Document Server

    Hagedorn, G A

    2004-01-01

    We consider a simple molecular-type quantum system in which the nuclei have one degree of freedom and the electrons have two levels. The Hamiltonian has the form \[ H(\epsilon) = -\frac{\epsilon^4}{2}\,\frac{\partial^2}{\partial y^2} + h(y), \] where $h(y)$ is a $2\times 2$ real symmetric matrix. Near a local minimum of an electron level ${\cal E}(y)$ that is not at a level crossing, we construct quasimodes that are exponentially accurate in the square of the Born-Oppenheimer parameter $\epsilon$ by optimal truncation of the Rayleigh-Schrödinger series. That is, we construct $E_\epsilon$ and $\Psi_\epsilon$, such that $\|\Psi_\epsilon\| = O(1)$ and $\|(H(\epsilon) - E_\epsilon)\,\Psi_\epsilon\| \le \Lambda\, e^{-\Gamma/\epsilon^2}$ for some $\Gamma > 0$.

  7. Estimating Aquifer Properties Using Sinusoidal Pumping Tests

    Science.gov (United States)

    Rasmussen, T. C.; Haborak, K. G.; Young, M. H.

    2001-12-01

    We develop the theoretical and applied framework for using sinusoidal pumping tests to estimate aquifer properties for confined, leaky, and partially penetrating conditions. The framework 1) derives analytical solutions for three boundary conditions suitable for many practical applications, 2) validates the analytical solutions against a finite element model, 3) establishes a protocol for conducting sinusoidal pumping tests, and 4) estimates aquifer hydraulic parameters based on the analytical solutions. The analytical solutions to sinusoidal stimuli in radial coordinates are derived for boundary value problems that are analogous to the Theis (1935) confined aquifer solution, the Hantush and Jacob (1955) leaky aquifer solution, and the Hantush (1964) partially penetrated confined aquifer solution. The analytical solutions compare favorably to a finite-element solution of a simulated flow domain, except in the region immediately adjacent to the pumping well where the implicit assumption of zero borehole radius is violated. The procedure is demonstrated in one unconfined and two confined aquifer units near the General Separations Area at the Savannah River Site, a federal nuclear facility located in South Carolina. Aquifer hydraulic parameters estimated using this framework provide independent confirmation of parameters obtained from conventional aquifer tests. The sinusoidal approach also resulted in the elimination of investigation-derived wastes.
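    In practice, fitting a sinusoidal pumping test reduces to extracting the amplitude attenuation and phase lag of the observed head relative to the stimulus, which are then matched against the analytical solutions. A least-squares sketch with synthetic data:

```python
import numpy as np

def amplitude_phase(t, head, omega):
    """Least-squares fit of head(t) ~ a*cos(wt) + b*sin(wt) + c; returns
    the amplitude A and phase lag phi in head = A*cos(wt - phi)."""
    A = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
    (a, b, _), *_ = np.linalg.lstsq(A, head, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)

# Synthetic observation-well record (hypothetical numbers): one pumping
# cycle per hour, a 0.12-m response lagging the stimulus by 0.8 rad.
omega = 2.0 * np.pi / 3600.0
t = np.linspace(0.0, 4 * 3600.0, 2000)
head = 0.12 * np.cos(omega * t - 0.8) + 0.002 * np.random.randn(t.size)
print(amplitude_phase(t, head, omega))  # ~(0.12, 0.8)
```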

  8. Accurate estimation of dose distributions inside an eye irradiated with {sup 106}Ru plaques

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, L.; Sauerwein, W. [Universitaetsklinikum Essen (Germany). NCTeam, Strahlenklinik; Sempau, J.; Zaragoza, F.J. [Universitat Politecnica de Catalunya, Barcelona (Spain). Inst. de Tecniques Energetiques; Wittig, A. [Marburg Univ. (Germany). Klinik fuer Strahlentherapie und Radioonkologie

    2013-01-15

    Background: Irradiation of intraocular tumors requires dedicated techniques, such as brachytherapy with {sup 106}Ru plaques. The currently available treatment planning system relies on the assumption that the eye is a homogeneous water sphere and on simplified radiation transport physics. However, accurate dose distributions and their assessment demand better models for both the eye and the physics. Methods: The Monte Carlo code PENELOPE, conveniently adapted to simulate the beta decay of {sup 106}Ru over {sup 106}Rh into {sup 106}Pd, was used to simulate radiation transport based on a computerized tomography scan of a patient's eye. A detailed geometrical description of two plaques (models CCA and CCB) from the manufacturer BEBIG was embedded in the computerized tomography scan. Results: The simulations were firstly validated by comparison with experimental results in a water phantom. Dose maps were computed for three plaque locations on the eyeball. From these maps, isodose curves and cumulative dose-volume histograms in the eye and for the structures at risk were assessed. For example, it was observed that a 4-mm anterior displacement with respect to a posterior placement of a CCA plaque for treating a posterior tumor would reduce from 40 to 0% the volume of the optic disc receiving more than 80 Gy. Such a small difference in anatomical position leads to a change in the dose that is crucial for side effects, especially with respect to visual acuity. The radiation oncologist has to bring these large changes in absorbed dose in the structures at risk to the attention of the surgeon, especially when the plaque has to be positioned close to relevant tissues. Conclusion: The detailed geometry of an eye plaque in computerized and segmented tomography of a realistic patient phantom was simulated accurately. Dose-volume histograms for relevant anatomical structures of the eye and the orbit were obtained with unprecedented accuracy. This represents an important step

  9. CUFID-query: accurate network querying through random walk based network flow estimation.

    Science.gov (United States)

    Jeong, Hyundoo; Qian, Xiaoning; Yoon, Byung-Jun

    2017-12-28

    Functional modules in biological networks consist of numerous biomolecules and their complicated interactions. Recent studies have shown that biomolecules in a functional module tend to have similar interaction patterns and that such modules are often conserved across biological networks of different species. As a result, such conserved functional modules can be identified through comparative analysis of biological networks. In this work, we propose a novel network querying algorithm based on the CUFID (Comparative network analysis Using the steady-state network Flow to IDentify orthologous proteins) framework combined with an efficient seed-and-extension approach. The proposed algorithm, CUFID-query, can accurately detect conserved functional modules as small subnetworks in the target network that are expected to perform similar functions to the given query functional module. The CUFID framework was recently developed for probabilistic pairwise global comparison of biological networks, and it has been applied to pairwise global network alignment, where the framework was shown to yield accurate network alignment results. In the proposed CUFID-query algorithm, we adopt the CUFID framework and extend it for local network alignment, specifically to solve network querying problems. First, in the seed selection phase, the proposed method utilizes the CUFID framework to compare the query and the target networks and to predict the probabilistic node-to-node correspondence between the networks. Next, the algorithm selects and greedily extends the seed in the target network by iteratively adding nodes that have frequent interactions with other nodes in the seed network, in a way that the conductance of the extended network is maximally reduced. Finally, CUFID-query removes irrelevant nodes from the querying results based on the personalized PageRank vector for the induced network that includes the fully extended network and its neighboring nodes. Through extensive

  10. Accurate estimation of dose distributions inside an eye irradiated with 106Ru plaques

    International Nuclear Information System (INIS)

    Brualla, L.; Sauerwein, W.; Sempau, J.; Zaragoza, F.J.; Wittig, A.

    2013-01-01

    Background: Irradiation of intraocular tumors requires dedicated techniques, such as brachytherapy with 106 Ru plaques. The currently available treatment planning system relies on the assumption that the eye is a homogeneous water sphere and on simplified radiation transport physics. However, accurate dose distributions and their assessment demand better models for both the eye and the physics. Methods: The Monte Carlo code PENELOPE, conveniently adapted to simulate the beta decay of 106 Ru over 106 Rh into 106 Pd, was used to simulate radiation transport based on a computerized tomography scan of a patient's eye. A detailed geometrical description of two plaques (models CCA and CCB) from the manufacturer BEBIG was embedded in the computerized tomography scan. Results: The simulations were firstly validated by comparison with experimental results in a water phantom. Dose maps were computed for three plaque locations on the eyeball. From these maps, isodose curves and cumulative dose-volume histograms in the eye and for the structures at risk were assessed. For example, it was observed that a 4-mm anterior displacement with respect to a posterior placement of a CCA plaque for treating a posterior tumor would reduce from 40 to 0% the volume of the optic disc receiving more than 80 Gy. Such a small difference in anatomical position leads to a change in the dose that is crucial for side effects, especially with respect to visual acuity. The radiation oncologist has to bring these large changes in absorbed dose in the structures at risk to the attention of the surgeon, especially when the plaque has to be positioned close to relevant tissues. Conclusion: The detailed geometry of an eye plaque in computerized and segmented tomography of a realistic patient phantom was simulated accurately. Dose-volume histograms for relevant anatomical structures of the eye and the orbit were obtained with unprecedented accuracy. This represents an important step toward an optimized

  11. Optimization of Photospheric Electric Field Estimates for Accurate Retrieval of Total Magnetic Energy Injection

    Science.gov (United States)

    Lumme, E.; Pomoell, J.; Kilpua, E. K. J.

    2017-12-01

    Estimates of the photospheric magnetic, electric, and plasma velocity fields are essential for studying the dynamics of the solar atmosphere, for example through the derivative quantities of Poynting and relative helicity flux and using the fields to obtain the lower boundary condition for data-driven coronal simulations. In this paper we study the performance of a data processing and electric field inversion approach that requires only high-resolution and high-cadence line-of-sight or vector magnetograms, which we obtain from the Helioseismic and Magnetic Imager (HMI) onboard Solar Dynamics Observatory (SDO). The approach does not require any photospheric velocity estimates, and the lacking velocity information is compensated for using ad hoc assumptions. We show that the free parameters of these assumptions can be optimized to reproduce the time evolution of the total magnetic energy injection through the photosphere in NOAA AR 11158, when compared to recent state-of-the-art estimates for this active region. However, we find that the relative magnetic helicity injection is reproduced poorly, reaching at best a modest underestimation. We also discuss the effect of some of the data processing details on the results, including the masking of the noise-dominated pixels and the tracking method of the active region, neither of which has received much attention in the literature so far. In most cases the effect of these details is small, but when the optimization of the free parameters of the ad hoc assumptions is considered, a consistent use of the noise mask is required. The results found in this paper imply that the data processing and electric field inversion approach that uses only the photospheric magnetic field information offers a flexible and straightforward way to obtain photospheric magnetic and electric field estimates suitable for practical applications such as coronal modeling studies.

  12. Accurate estimation of camera shot noise in the real-time

    Science.gov (United States)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and videocameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components. Temporal noise includes the random noise component, while spatial noise includes the pattern noise component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods are standards (for example, EMVA Standard 1288). These allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of temporal noise of photo- and videocameras. It is based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated shot and dark temporal noises of cameras consistently in real time. The modified ASNT method is used. Estimation was performed for the cameras: consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10-bit ADC) and video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured. Using a standard computer, frames were registered and processed within a fraction of a second to several seconds. Also the
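    The two-frame idea can be sketched compactly: differencing two frames of a static scene cancels the fixed-pattern (spatial) noise, leaving √2 times the temporal noise. The coarse signal binning below is a simplified stand-in for the ASNT segmentation described above.

```python
import numpy as np

def temporal_noise_two_frames(frame1, frame2, bin_width=16, min_pixels=100):
    """Temporal noise versus signal level from two frames of a static
    scene: the frame difference cancels fixed-pattern (spatial) noise,
    and dividing by sqrt(2) undoes the doubling of variance caused by
    differencing two noisy frames."""
    f1, f2 = frame1.astype(float), frame2.astype(float)
    diff, mean_signal = f1 - f2, 0.5 * (f1 + f2)
    noise = {}
    for level in np.unique(np.round(mean_signal / bin_width) * bin_width):
        mask = np.abs(mean_signal - level) < bin_width / 2
        if mask.sum() >= min_pixels:
            noise[float(level)] = diff[mask].std() / np.sqrt(2.0)
    return noise  # signal level -> temporal noise, in digital numbers

# Synthetic shot-noise-limited frames of the same nonuniform scene:
rng = np.random.default_rng(1)
scene = rng.uniform(10, 250, (480, 640))
print(temporal_noise_two_frames(rng.poisson(scene), rng.poisson(scene)))
```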

  13. Accurate estimate of the relic density and the kinetic decoupling in nonthermal dark matter models

    International Nuclear Information System (INIS)

    Arcadi, Giorgio; Ullio, Piero

    2011-01-01

    Nonthermal dark matter generation is an appealing alternative to the standard paradigm of thermal WIMP dark matter. We reconsider nonthermal production mechanisms in a systematic way, and develop a numerical code for accurate computations of the dark matter relic density. We discuss, in particular, scenarios with long-lived massive states decaying into dark matter particles, appearing naturally in several beyond the standard model theories, such as supergravity and superstring frameworks. Since nonthermal production favors dark matter candidates with large pair annihilation rates, we analyze the possible connection with the anomalies detected in the lepton cosmic-ray flux by Pamela and Fermi. Concentrating on supersymmetric models, we consider the effect of these nonstandard cosmologies in selecting a preferred mass scale for the lightest supersymmetric particle as a dark matter candidate, and the consequent impact on the interpretation of new physics discovered or excluded at the LHC. Finally, we examine a rather predictive model, the G2-MSSM, investigating some of the standard assumptions usually implemented in the solution of the Boltzmann equation for the dark matter component, including coannihilations. We question the hypothesis that kinetic equilibrium holds along the whole phase of dark matter generation, and the validity of the factorization usually implemented to rewrite the system of a coupled Boltzmann equation for each coannihilating species as a single equation for the sum of all the number densities. As a byproduct we develop here a formalism to compute the kinetic decoupling temperature in case of coannihilating particles, which can also be applied to other particle physics frameworks, and also to standard thermal relics within a standard cosmology.

  14. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation

    DEFF Research Database (Denmark)

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty

    2018-01-01

    OBJECTIVE: Motor maps have been widely used as indicators of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations still remain. Test-re...

  15. Is 10-second electrocardiogram recording enough for accurately estimating heart rate in atrial fibrillation.

    Science.gov (United States)

    Shuai, Wei; Wang, Xi-Xing; Hong, Kui; Peng, Qiang; Li, Ju-Xiang; Li, Ping; Chen, Jing; Cheng, Xiao-Shu; Su, Hai

    2016-07-15

    At present, the estimation of rest heart rate (HR) in atrial fibrillation (AF) is obtained by apical auscultation for 1 min or on the surface electrocardiogram (ECG) by multiplying the number of RR intervals on the 10-second recording by six. But the reasonableness of a 10-second ECG recording is controversial. ECG was continuously recorded at rest for 60 s to calculate the real rest HR (HR60s). Meanwhile, the first 10-s and 30-s ECG recordings were used for calculating HR10s (sixfold) and HR30s (twofold). The differences of HR10s or HR30s with the HR60s were compared. The patients were divided into three sub-groups on the basis of HR60s (< 80, 80-100 and > 100 bpm). No significant difference among the mean HR10s, HR30s and HR60s was found. A positive correlation existed between HR10s and HR60s or HR30s and HR60s. Bland-Altman plots showed that the 95% reference limits were as high as -11.0 to 16.0 bpm for HR10s, but for HR30s these values were only -4.5 to 5.2 bpm. Among the three subgroups, the 95% reference limits with HR60s were -8.9 to 10.6, -10.5 to 14.0 and -11.3 to 21.7 bpm for HR10s, but these values were -3.9 to 4.3, -4.1 to 4.6 and -5.3 to 6.7 bpm for HR30s. As a 10-s ECG recording could not provide a clinically acceptable estimation of HR, ECG should be recorded for at least 30 s in patients with AF. It is better to record ECG for 60 s when the HR is rapid. Copyright © 2016. Published by Elsevier Ireland Ltd.
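    The scaling being questioned is elementary, which is exactly why short windows are fragile in AF: a handful of irregular RR intervals is multiplied up to a per-minute rate. A toy illustration:

```python
def hr_from_window(n_rr_intervals, window_s):
    """Scale the RR-interval count in a short ECG window to one minute
    (x6 for a 10-s strip, x2 for a 30-s strip)."""
    return n_rr_intervals * 60.0 / window_s

# In AF the RR intervals are irregular, so a 10-s strip samples few
# beats and the x6 scaling amplifies the sampling error:
print(hr_from_window(10, 10.0))  # -> 60 bpm from only 10 intervals
print(hr_from_window(31, 30.0))  # -> 62 bpm from a steadier 30-s count
```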

  16. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    Science.gov (United States)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of dental implant-supported prostheses is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67+/-34 μm and 108 μm, and angular misfits of 0.15+/-0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implants' pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
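    Step (3) of the framework, the voxel-based rigid registration, can be sketched with SimpleITK as below; the file names, metric, and optimizer settings are assumptions for illustration, not the paper's exact configuration.

```python
import SimpleITK as sitk

# Rigidly register the simulated implant CBCT to the patient CBCT; the
# resulting Euler transform encodes the implant's pose (file names are
# hypothetical).
fixed = sitk.ReadImage("patient_cbct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("simulated_implant_cbct.nii.gz", sitk.sitkFloat32)

initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsCorrelation()            # voxel-based similarity measure
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)

pose_transform = reg.Execute(fixed, moving)
print(pose_transform.GetParameters())   # rotation + translation = pose
```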

  17. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    Science.gov (United States)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and that thus enables its application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
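
    For readers unfamiliar with the GK formalism, κ follows from the time integral of the heat-flux autocorrelation, κ = V/(k_B T²) ∫⟨J(0)·J(t)⟩ dt. A minimal numpy sketch of that running integral (SI units and an isotropic average over the three flux components are assumptions of this illustration):

```python
import numpy as np

def green_kubo_kappa(J, dt, volume, temperature, kB=1.380649e-23):
    """Running Green-Kubo integral: kappa = V/(kB*T^2) * int <J(0).J(t)> dt,
    with the autocorrelation averaged over the three flux components."""
    nsteps = len(J)
    nlag = nsteps // 2
    acf = np.empty(nlag)
    for lag in range(nlag):
        acf[lag] = np.mean(np.sum(J[:nsteps - lag] * J[lag:], axis=1)) / 3.0
    return volume / (kB * temperature**2) * np.cumsum(acf) * dt

rng = np.random.default_rng(0)
J = rng.standard_normal((4000, 3)) * 1e9   # stand-in heat-flux trajectory (W/m^2)
kappa_running = green_kubo_kappa(J, dt=1e-15, volume=1e-26, temperature=300.0)
# Read kappa off the plateau of the running integral, e.g. kappa_running[-1].
```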

  18. Systematic Review of Health Economic Evaluations of Diagnostic Tests in Brazil: How accurate are the results?

    Science.gov (United States)

    Oliveira, Maria Regina Fernandes; Leandro, Roseli; Decimoni, Tassia Cristina; Rozman, Luciana Martins; Novaes, Hillegonda Maria Dutilh; De Soárez, Patrícia Coelho

    2017-08-01

    The aim of this study is to identify and characterize the health economic evaluations (HEEs) of diagnostic tests conducted in Brazil, in terms of their adherence to international guidelines for reporting economic studies and specific questions in test accuracy reports. We systematically searched multiple databases, selecting partial and full HEEs of diagnostic tests, published between 1980 and 2013. Two independent reviewers screened articles for relevance and extracted the data. We performed a qualitative narrative synthesis. Forty-three articles were reviewed. The most frequently studied diagnostic tests were laboratory tests (37.2%) and imaging tests (32.6%). Most were non-invasive tests (51.2%) and were performed in the adult population (48.8%). The intended purposes of the technologies evaluated were mostly diagnostic (69.8%), but diagnosis and treatment and screening, diagnosis, and treatment accounted for 25.6% and 4.7%, respectively. Of the reviewed studies, 12.5% described the methods used to estimate the quantities of resources, 33.3% reported the discount rate applied, and 29.2% listed the type of sensitivity analysis performed. Among the 12 cost-effectiveness analyses, only two studies (17%) referred to the application of formal methods to check the quality of the accuracy studies that provided support for the economic model. The existing Brazilian literature on the HEEs of diagnostic tests exhibited reasonably good performance. However, the following points still require improvement: 1) the methods used to estimate resource quantities and unit costs, 2) the discount rate, 3) descriptions of sensitivity analysis methods, 4) reporting of conflicts of interest, 5) evaluations of the quality of the accuracy studies considered in the cost-effectiveness models, and 6) the incorporation of accuracy measures into sensitivity analyses.

  19. An automated A-value measurement tool for accurate cochlear duct length estimation.

    Science.gov (United States)

    Iyaniwura, John E; Elfarnawany, Mai; Ladak, Hanif M; Agrawal, Sumit K

    2018-01-22

    There has been renewed interest in the cochlear duct length (CDL) for preoperative cochlear implant electrode selection and postoperative generation of patient-specific frequency maps. The CDL can be estimated by measuring the A-value, which is defined as the length between the round window and the furthest point on the basal turn. Unfortunately, there is significant intra- and inter-observer variability when these measurements are made clinically. The objective of this study was to develop an automated A-value measurement algorithm to improve accuracy and eliminate observer variability. Clinical and micro-CT images of 20 cadaveric cochleae specimens were acquired. The micro-CT of one sample was chosen as the atlas, and A-value fiducials were placed onto that image. Image registration (rigid affine and non-rigid B-spline) was applied between the atlas and the 19 remaining clinical CT images. The registration transform was applied to the A-value fiducials, and the A-value was then automatically calculated for each specimen. High resolution micro-CT images of the same 19 specimens were used to measure the gold standard A-values for comparison against the manual and automated methods. The registration algorithm had excellent qualitative overlap between the atlas and target images. The automated method eliminated the observer variability and the systematic underestimation by experts. Manual measurement of the A-value on clinical CT had a mean error of 9.5 ± 4.3% compared to micro-CT, and this improved to an error of 2.7 ± 2.1% using the automated algorithm. Both the automated and manual methods correlated significantly with the gold standard micro-CT A-values (r = 0.70, p < 0.05). An automated A-value measurement tool using atlas-based registration methods was successfully developed and validated. The automated method eliminated the observer variability and improved accuracy as compared to manual measurements by experts. This open-source tool has the potential to benefit
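
    Once the registration transform is known, propagating the atlas fiducials and reading off the A-value is simple geometry. A sketch under assumed values (the fiducial coordinates and toy transform below are invented, not from the study):

```python
import numpy as np

def transform_fiducials(points, affine4x4):
    """Map atlas A-value fiducials into the target image space with a
    homogeneous 4x4 transform (the rigid/affine part of the registration)."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    return (pts @ affine4x4.T)[:, :3]

# Hypothetical fiducials (mm): round window and furthest point on the basal turn.
fiducials = np.array([[12.1, 34.7, 20.3],
                      [18.9, 30.2, 21.1]])
T = np.eye(4)
T[:3, 3] = [1.5, -0.8, 0.4]                   # toy translation for illustration
rw, basal = transform_fiducials(fiducials, T)
a_value = np.linalg.norm(basal - rw)          # A-value = distance between fiducials
print(f"A-value: {a_value:.2f} mm")
```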

  20. Testing of a novel pin array guide for accurate three-dimensional glenoid component positioning.

    Science.gov (United States)

    Lewis, Gregory S; Stevens, Nicole M; Armstrong, April D

    2015-12-01

    A substantial challenge in total shoulder replacement is accurate positioning and alignment of the glenoid component. This challenge arises from limited intraoperative exposure and complex arthritic-driven deformity. We describe a novel pin array guide and method for patient-specific guiding of the glenoid central drill hole. We also experimentally tested the hypothesis that this method would reduce errors in version and inclination compared with 2 traditional methods. Polymer models of glenoids were created from computed tomography scans from 9 arthritic patients. Each 3-dimensional (3D) printed scapula was shrouded to simulate the operative situation. Three different methods for central drill alignment were tested, all with the target orientation of 5° retroversion and 0° inclination: no assistance, assistance by preoperative 3D imaging, and assistance by the pin array guide. Version and inclination errors of the drill line were compared. Version errors using the pin array guide (3° ± 2°) were significantly lower than version errors associated with no assistance (9° ± 7°) and preoperative 3D imaging (8° ± 6°). Inclination errors were also significantly lower using the pin array guide compared with no assistance. The new pin array guide substantially reduced errors in orientation of the central drill line. The guide method is patient specific but does not require rapid prototyping and instead uses adjustments to an array of pins based on automated software calculations. This method may ultimately provide a cost-effective solution enabling surgeons to obtain accurate orientation of the glenoid. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  1. The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.

    Science.gov (United States)

    Kaskowitz, Gary S.; De Ayala, R. J.

    2001-01-01

    Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…
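
    TRF linking places the common items of two forms on one scale by choosing coefficients (A, B) that make the two test response functions agree, in the spirit of characteristic-curve methods. A hedged sketch for the 3PL model (the quadrature grid, starting values, and optimizer are arbitrary choices, not the cited study's setup):

```python
import numpy as np
from scipy.optimize import minimize

def p3pl(theta, a, b, c):
    """3PL item response probabilities on a grid of abilities."""
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta[:, None] - b)))

def trf(theta, a, b, c):
    """Test response function: expected number-correct score at each theta."""
    return p3pl(theta, a, b, c).sum(axis=1)

def trf_linking(a_new, b_new, c_new, a_old, b_old, c_old):
    """Find (A, B) so the new form's common items, rescaled by
    a -> a/A, b -> A*b + B, reproduce the old form's TRF."""
    theta = np.linspace(-4, 4, 81)
    def loss(x):
        A, B = x
        return np.sum((trf(theta, a_old, b_old, c_old)
                       - trf(theta, a_new / A, A * b_new + B, c_new)) ** 2)
    return minimize(loss, x0=[1.0, 0.0], method="Nelder-Mead").x

# Synthetic check: 15 common items whose true linking is A=1.2, B=0.3.
rng = np.random.default_rng(7)
a_o, b_o, c_o = rng.uniform(0.8, 2, 15), rng.normal(0, 1, 15), np.full(15, 0.2)
print(trf_linking(a_o * 1.2, (b_o - 0.3) / 1.2, c_o, a_o, b_o, c_o))
```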

  2. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    Science.gov (United States)

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. In addition, to assess the accuracy of each method in estimating
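
    While any vendor's actual AEC logic is proprietary, the shape of the problem can be illustrated with a toy model: tube current rises roughly exponentially with patient attenuation (summarized here as water-equivalent diameter, in the spirit of AAPM Report 220) and is clipped at the x-ray system limits. All constants below are made-up illustrations, not the manufacturer's values.

```python
import numpy as np

def estimate_tube_current(water_eq_diam_cm, ref_mA=200.0, ref_diam_cm=30.0,
                          k=0.07, mA_min=20.0, mA_max=500.0):
    """Toy AEC model: tube current grows exponentially with patient
    water-equivalent diameter and is clipped to the tube's limits."""
    mA = ref_mA * np.exp(k * (np.asarray(water_eq_diam_cm) - ref_diam_cm))
    return np.clip(mA, mA_min, mA_max)

# Per-slice water-equivalent diameters derived from a (simulated) topogram:
dw = np.array([26.0, 28.5, 31.0, 33.5])
print(estimate_tube_current(dw))
```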

  3. Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.

  4. An accurately controllable imitative stress corrosion cracking for electromagnetic nondestructive testing and evaluations

    International Nuclear Information System (INIS)

    Yusa, Noritaka; Uchimoto, Tetsuya; Takagi, Toshiyuki; Hashizume, Hidetoshi

    2012-01-01

    Highlights: ► We propose a method to simulate stress corrosion cracking. ► The method offers nondestructive signals similar to those of actual cracking. ► Visual and eddy current examinations validate the method. - Abstract: This study proposes a simple and cost-effective approach to fabricate an artificial flaw that is identical to stress corrosion cracking especially from the viewpoint of electromagnetic nondestructive evaluations. The key idea of the approach is to embed a partially-bonded region inside a material by bonding together surfaces that have grooves. The region is regarded as an area of uniform non-zero conductivity from an electromagnetic nondestructive point of view, and thus simulates the characteristics of stress corrosion cracking. Since the grooves are introduced using electro-discharge machining, one can control the profile of the imitative stress corrosion cracking accurately. After numerical simulation to evaluate the spatial resolution of conventional eddy current testing, six specimens made of type 316L austenitic stainless steel were fabricated on the basis of the results of the simulations. Visual and eddy current examinations were carried out to demonstrate that the artificial flaws well simulated the characteristics of actual stress corrosion cracking. Subsequent destructive test confirmed that the bonding did not change the depth profiles of the artificial flaw.

  5. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate

    Science.gov (United States)

    Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul

    2015-01-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821

  6. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    Directory of Open Access Journals (Sweden)

    Abel B Minyoo

    2015-12-01

    Full Text Available In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere.

  7. An Accurate Computational Tool for Performance Estimation of FSO Communication Links over Weak to Strong Atmospheric Turbulent Channels

    Directory of Open Access Journals (Sweden)

    Theodore D. Katsilieris

    2017-03-01

    Full Text Available The terrestrial optical wireless communication links have attracted significant research and commercial worldwide interest over the last few years due to the fact that they offer very high and secure data rate transmission with relatively low installation and operational costs, and without need of licensing. However, since the propagation path of the information signal, i.e., the laser beam, is the atmosphere, their effectiveness is strongly affected by the atmospheric conditions in the specific area. Thus, system performance depends significantly on the rain, the fog, the hail, the atmospheric turbulence, etc. Due to the influence of these effects, such a communication system must be studied very carefully, theoretically and numerically, before its installation. In this work, we present exact and accurate approximate mathematical expressions for the estimation of the average capacity and the outage probability performance metrics, as functions of the link's parameters, the transmitted power, the attenuation due to the fog, the ambient noise and the atmospheric turbulence phenomenon. The latter causes the scintillation effect, which results in random and fast fluctuations of the irradiance at the receiver's end. These fluctuations can be studied accurately with statistical methods. Thus, in this work, we use either the lognormal or the gamma–gamma distribution for weak or moderate to strong turbulence conditions, respectively. Moreover, using the derived mathematical expressions, we design, accomplish and present a computational tool for the estimation of these systems' performances, while also taking into account the parameters of the link and the atmospheric conditions. Furthermore, in order to increase the accuracy of the presented tool, for the cases where the obtained analytical mathematical expressions are complex, the performance results are verified with the numerical estimation of the appropriate integrals. Finally, using
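
    As a concrete instance of the weak-turbulence case, the outage probability under lognormal irradiance fading has a closed form via the normal cdf. A small sketch (normalizing to unit mean irradiance is an assumption of this illustration, not necessarily the paper's convention):

```python
import numpy as np
from scipy.stats import norm

def outage_prob_lognormal(I_th, sigma_ln):
    """Outage probability P(I < I_th) for lognormal irradiance fading,
    normalized so that E[I] = 1; sigma_ln is the std of ln(I)."""
    mu = -0.5 * sigma_ln**2          # enforces unit mean irradiance
    return norm.cdf((np.log(I_th) - mu) / sigma_ln)

# Probability the received irradiance drops below 10% of its mean:
print(outage_prob_lognormal(I_th=0.1, sigma_ln=0.3))
```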

  8. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
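
    Where the exact distributional method is not at hand, a Monte Carlo check of power for a candidate sample size is straightforward. The sketch below simulates the overall F test of H0: ρ² = 0 (the equal predictor weights are an arbitrary modelling choice of this illustration, not part of the cited exact method):

```python
import numpy as np
from scipy.stats import f as f_dist

def power_r2_test(n, k, rho2, alpha=0.05, nsim=2000, seed=1):
    """Monte Carlo power of the overall F test that the population squared
    multiple correlation is zero, for n observations and k predictors."""
    rng = np.random.default_rng(seed)
    beta = np.full(k, np.sqrt(rho2 / k))      # equal weights, unit-variance X
    crit = f_dist.ppf(1 - alpha, k, n - k - 1)
    hits = 0
    for _ in range(nsim):
        X = rng.standard_normal((n, k))
        y = X @ beta + rng.standard_normal(n) * np.sqrt(1 - rho2)
        Xc = np.column_stack([np.ones(n), X])
        resid = y - Xc @ np.linalg.lstsq(Xc, y, rcond=None)[0]
        r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        hits += (r2 / k) / ((1 - r2) / (n - k - 1)) > crit
    return hits / nsim

print(power_r2_test(n=60, k=3, rho2=0.15))  # raise n until power is adequate
```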

  9. An accurate and efficient identification of children with psychosocial problems by means of computerized adaptive testing

    Directory of Open Access Journals (Sweden)

    Reijneveld Symen A

    2011-08-01

    Full Text Available Abstract Background Questionnaires used by health services to identify children with psychosocial problems are often rather short. The psychometric properties of such short questionnaires are mostly less than needed for an accurate distinction between children with and without problems. We aimed to assess whether a short Computerized Adaptive Test (CAT) can overcome the weaknesses of short written questionnaires when identifying children with psychosocial problems. Method We used a Dutch national data set obtained from parents of children invited for a routine health examination by Preventive Child Healthcare with 205 items on behavioral and emotional problems (n = 2,041, response 84%). In a random subsample we determined which items met the requirements of an Item Response Theory (IRT) model to a sufficient degree. Using those items, item parameters necessary for a CAT were calculated and a cut-off point was defined. In the remaining subsample we determined the validity and efficiency of a Computerized Adaptive Test using simulation techniques, with current treatment status and a clinical score on the Total Problem Scale (TPS) of the Child Behavior Checklist as criteria. Results Out of 205 items available 190 sufficiently met the criteria of the underlying IRT model. For 90% of the children a score above or below cut-off point could be determined with 95% accuracy. The mean number of items needed to achieve this was 12. Sensitivity and specificity with the TPS as a criterion were 0.89 and 0.91, respectively. Conclusion An IRT-based CAT is a very promising option for the identification of psychosocial problems in children, as it can lead to an efficient, yet high-quality identification. The results of our simulation study need to be replicated in a real-life administration of this CAT.
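
    The engine of such a CAT is the alternation between ability estimation and maximum-information item selection. A minimal 2PL sketch (the actual instrument uses its own IRT model, prior, and cut-off logic; the simulated item bank below is invented):

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """Item information at ability theta under the 2PL model."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

def next_item(theta_hat, a, b, administered):
    """Pick the unused item with maximum Fisher information at the
    current ability estimate: the core step of an IRT-based CAT."""
    info = fisher_info_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf
    return int(np.argmax(info))

def eap_update(responses, items, a, b, grid=np.linspace(-4, 4, 161)):
    """EAP ability estimate with a standard-normal prior on a grid."""
    like = np.ones_like(grid)
    for r, j in zip(responses, items):
        p = 1 / (1 + np.exp(-a[j] * (grid - b[j])))
        like *= p if r else (1 - p)
    post = like * np.exp(-0.5 * grid**2)
    return float(np.sum(grid * post) / np.sum(post))

rng = np.random.default_rng(3)
a, b = rng.uniform(0.8, 2.0, 190), rng.normal(0, 1, 190)  # 190-item bank, as in the study
items, resp, theta = [], [], 0.0
for _ in range(12):                    # the study needed about 12 items on average
    j = next_item(theta, a, b, items)
    items.append(j)
    resp.append(rng.random() < 1 / (1 + np.exp(-a[j] * (0.5 - b[j]))))  # simulee at 0.5
    theta = eap_update(resp, items, a, b)
print(theta)
```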

  10. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    Science.gov (United States)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites. Where these form a significant part of the succession, they can go unnoticed by conventional analysis and so negatively impact on reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail, a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
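
    The HPT computation itself is a simple petrophysical sum, which makes clear why excluding thin beds biases OOIP downward: each omitted interval removes its φ·(1−Sw)·h contribution. A sketch with purely illustrative layer values:

```python
def hydrocarbon_pore_thickness(intervals):
    """HPT = sum of porosity * hydrocarbon saturation * thickness,
    accumulated over every logged interval flagged as net pay."""
    return sum(phi * (1.0 - sw) * h
               for phi, sw, h, is_net in intervals if is_net)

# (porosity, water saturation, thickness in ft, net-pay flag) -- invented values
layers = [(0.22, 0.30, 12.0, True),   # thick-bedded turbidite
          (0.18, 0.45, 0.8,  True),   # thin bed easily missed by coarse cutoffs
          (0.05, 0.90, 5.0,  False)]  # non-reservoir shale
print(f"HPT = {hydrocarbon_pore_thickness(layers):.2f} ft")
```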

  11. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Directory of Open Access Journals (Sweden)

    Tweya Hannock

    2012-07-01

    Full Text Available Abstract Background Routine monitoring of patients on antiretroviral therapy (ART) is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records as compared to estimates of retention based on standardized paper- or electronic-based cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter with intent to estimate the number of patients retained on ART. Information for time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among

  12. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    Directory of Open Access Journals (Sweden)

    Corina J. Logan

    2015-06-01

    Full Text Available There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex.
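
    The prediction-interval check described here can be reproduced with ordinary least squares. The sketch below uses synthetic numbers (the skull lengths, volumes, and weak slope are invented for illustration): when the per-individual 95% prediction intervals overlap heavily, the proxy has no discriminating power.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
skull_length = rng.normal(55.0, 3.0, 30)                   # external measure (mm)
ct_volume = 0.05 * skull_length + rng.normal(0, 0.4, 30)   # weak relationship (mL)

X = sm.add_constant(skull_length)
fit = sm.OLS(ct_volume, X).fit()
pred = fit.get_prediction(X).summary_frame(alpha=0.05)
# Wide, overlapping prediction intervals indicate the proxy cannot
# resolve individual differences, as reported for the grackles.
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]].head())
```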

  13. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    Science.gov (United States)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
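
    Standard nudging, which the time-delay method of Rey et al. generalizes, is easy to demonstrate on a small chaotic system. The sketch below relaxes a Lorenz-63 model toward observations of a single component; the gain and initial conditions are arbitrary, and the paper's delayed-measurement coupling is deliberately not shown.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def nudged(t, s, y_obs, g=5.0):
    """Standard nudging: relax the model toward the observed x-component."""
    dx, dy, dz = lorenz(t, s)
    return [dx + g * (y_obs(t) - s[0]), dy, dz]

truth = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0], dense_output=True)
y_obs = lambda t: truth.sol(t)[0]            # observe only x (sparse observations)
est = solve_ivp(nudged, (0, 20), [8.0, -3.0, 25.0], args=(y_obs,),
                dense_output=True)
print(np.abs(truth.sol(20) - est.sol(20)))   # estimate converges toward the truth
```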

  14. Mixture models reveal multiple positional bias types in RNA-Seq data and lead to accurate transcript concentration estimates.

    Directory of Open Access Journals (Sweden)

    Andreas Tuerk

    2017-05-01

    Full Text Available Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (rd. "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq; state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We, further, observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.
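
    Mix2's training loop is, at heart, Expectation Maximization over a mixture of positional distributions. The sketch below shows the same E/M alternation for a plain one-dimensional Gaussian mixture; the actual method mixes bias shapes over transcript coordinates and jointly estimates abundances, which is not reproduced here.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100, seed=0):
    """Plain EM for a 1-D Gaussian mixture: the same E/M alternation
    Mix2 applies to fragment start positions to learn bias and weights."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-0.5 * (x[:, None] - mu)**2 / var) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
    return w, mu, var

x = np.concatenate([np.random.default_rng(1).normal(0, 1, 500),
                    np.random.default_rng(2).normal(4, 0.5, 300)])
print(em_gmm_1d(x))
```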

  15. Item selection and ability estimation in adaptive testing

    NARCIS (Netherlands)

    Pashley, Peter J.; van der Linden, Wim J.; van der Linden, Willem J.; Glas, Cornelis A.W.; Glas, Cees A.W.

    2010-01-01

    The last century saw a tremendous progression in the refinement and use of standardized linear tests. The first administered College Board exam occurred in 1901 and the first Scholastic Assessment Test (SAT) was given in 1926. Since then, progressively more sophisticated standardized linear tests

  16. ModFOLD6: an accurate web server for the global and local quality estimation of 3D protein models.

    Science.gov (United States)

    Maghrabi, Ali H A; McGuffin, Liam J

    2017-07-03

    Methods that reliably estimate the likely similarity between the predicted and native structures of proteins have become essential for driving the acceptance and adoption of three-dimensional protein models by life scientists. ModFOLD6 is the latest version of our leading resource for Estimates of Model Accuracy (EMA), which uses a pioneering hybrid quasi-single model approach. The ModFOLD6 server integrates scores from three pure-single model methods and three quasi-single model methods using a neural network to estimate local quality scores. Additionally, the server provides three options for producing global score estimates, depending on the requirements of the user: (i) ModFOLD6_rank, which is optimized for ranking/selection, (ii) ModFOLD6_cor, which is optimized for correlations of predicted and observed scores and (iii) ModFOLD6 global for balanced performance. The ModFOLD6 methods rank among the top few for EMA, according to independent blind testing by the CASP12 assessors. The ModFOLD6 server is also continuously automatically evaluated as part of the CAMEO project, where significant performance gains have been observed compared to our previous server and other publicly available servers. The ModFOLD6 server is freely available at: http://www.reading.ac.uk/bioinf/ModFOLD/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. An accurate and efficient identification of children with psychosocial problems by means of computerized adaptive testing

    NARCIS (Netherlands)

    Vogels, Antonius G. C.; Jacobusse, Gert W.; Reijneveld, Symen A.

    2011-01-01

    Background: Questionnaires used by health services to identify children with psychosocial problems are often rather short. The psychometric properties of such short questionnaires are mostly less than needed for an accurate distinction between children with and without problems. We aimed to assess

  18. TEST (Toxicity Estimation Software Tool) Ver 4.1

    Science.gov (United States)

    The Toxicity Estimation Software Tool (T.E.S.T.) has been developed to allow users to easily estimate toxicity and physical properties using a variety of QSAR methodologies. T.E.S.T. allows a user to estimate toxicity without requiring any external programs. Users can input a chem...

  19. A deep learning approach to estimate stress distribution: a fast and accurate surrogate of finite-element analysis.

    Science.gov (United States)

    Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei

    2018-01-01

    Structural finite-element analysis (FEA) has been widely used to study the biomechanics of human tissues and organs, as well as tissue-medical device interactions, and treatment strategies. However, patient-specific FEA models usually require complex procedures to set up and long computing times to obtain final simulation results, preventing prompt feedback to clinicians in time-sensitive clinical applications. In this study, by using machine learning techniques, we developed a deep learning (DL) model to directly estimate the stress distributions of the aorta. The DL model was designed and trained to take the input of FEA and directly output the aortic wall stress distributions, bypassing the FEA calculation process. The trained DL model is capable of predicting the stress distributions with average errors of 0.492% and 0.891% in the Von Mises stress distribution and peak Von Mises stress, respectively. This study marks, to our knowledge, the first study that demonstrates the feasibility and great potential of using the DL technique as a fast and accurate surrogate of FEA for stress analysis. © 2018 The Author(s).
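
    The surrogate idea is: train once on FEA input–output pairs, then predict stress fields in milliseconds. A minimal PyTorch sketch with invented dimensions and random stand-in data (the paper's actual network, shape encoding, and training set differ):

```python
import torch
import torch.nn as nn

# Toy surrogate: map a low-dimensional shape encoding to nodal von Mises
# stresses (64 inputs -> 500 outputs; both dimensions are assumptions).
model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 500))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(1000, 64)     # stand-in for encoded aorta geometries
Y = torch.randn(1000, 500)    # stand-in for FEA-computed stress fields
for epoch in range(200):      # once trained, inference replaces the FEA solve
    opt.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    opt.step()
print(float(loss))
```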

  20. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    Science.gov (United States)

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis which belongs to the family of temporally weighted linear prediction (WLP) methods uses the conventional forward type of sample prediction. This may not be the best choice especially in computing WLP models with a hard-limiting weighting function. A sample selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
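
    Temporally weighted linear prediction underlies the method: the squared prediction error at each sample is scaled by a weight before the normal equations are solved. A forward-only numpy sketch (the proposed QCP-FB additionally predicts from future samples, which is not shown here):

```python
import numpy as np

def weighted_lp(x, order, w):
    """Temporally weighted LP: minimize sum_n w[n] * e[n]^2 over the
    coefficients, via the weighted normal equations (forward prediction)."""
    N = len(x)
    rows = np.array([x[n - order:n][::-1] for n in range(order, N)])
    e_w = w[order:N]
    R = rows.T @ (rows * e_w[:, None])
    r = rows.T @ (x[order:N] * e_w)
    return np.linalg.solve(R, r)          # LP coefficients a_1..a_p

x = np.random.default_rng(0).standard_normal(400)  # stand-in for a speech frame
w = np.ones(len(x))                                # uniform weights = ordinary LP
print(weighted_lp(x, order=10, w=w))
```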

  1. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation

    International Nuclear Information System (INIS)

    Subramanian, Swetha; Mast, T Douglas

    2015-01-01

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. (note)

  2. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation.

    Science.gov (United States)

    Subramanian, Swetha; Mast, T Douglas

    2015-10-07

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.
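
    A compact way to see the UKF-as-inverse-solver idea is a single measurement update in which the "state" is the parameter vector and the "measurement function" is the forward simulation. The sketch below is deliberately simplified: a scalar measurement, mean weights reused for the covariances, and a stand-in algebraic forward model in place of the FEA.

```python
import numpy as np

def ukf_param_update(theta, P, z, R, forward, alpha=1.0, kappa=0.0):
    """One UKF measurement update treating model parameters as the state:
    nudges theta so that forward(theta) matches the observed z,
    e.g. a measured ablation area."""
    n = len(theta)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    sigmas = np.vstack([theta, theta + S.T, theta - S.T])  # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wm[0] = lam / (n + lam)
    Z = np.array([forward(s) for s in sigmas])
    z_hat = wm @ Z
    Pzz = wm @ (Z - z_hat)**2 + R
    Pxz = (wm[:, None] * (sigmas - theta) * (Z - z_hat)[:, None]).sum(axis=0)
    K = Pxz / Pzz
    return theta + K * (z - z_hat), P - np.outer(K, K) * Pzz

forward = lambda th: th[0] * 2.0 + th[1]   # stand-in for the FEA forward model
theta, P = np.array([1.0, 1.0]), np.eye(2)
for _ in range(20):
    theta, P = ukf_param_update(theta, P, z=5.0, R=0.01, forward=forward)
print(theta)  # parameters consistent with the observed measurement
```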

  3. Quick test for durability factor estimation.

    Science.gov (United States)

    2010-03-01

    The Missouri Department of Transportation (MoDOT) is considering the use of the AASHTO T 161 Durability Factor (DF) as an endresult : performance specification criterion for evaluation of paving concrete. However, the test method duration can exceed ...

  4. Semi-Nonparametric Estimation and Misspecification Testing of Diffusion Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis

    of the estimators and tests under the null are derived, and the power properties are analyzed by considering contiguous alternatives. Tests directly comparing the drift and diffusion estimators under the relevant null and alternative are also analyzed. Markov Bootstrap versions of the test statistics are proposed...... to improve on the finite-sample approximations. The finite sample properties of the estimators are examined in a simulation study....

  5. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses methods of testing nonlinear hypotheses using an iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. However, in the present research paper, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses, using an iterative NLLS estimator based on nonlinear studentized residuals, is also proposed. In this research article an innovative method of testing nonlinear hypotheses using an iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses asymptotic properties of the nonlinear least squares estimator proposed by Jennrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustration. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
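
    For concreteness, the Wald statistic for a nonlinear restriction r(θ) = 0 takes the familiar sandwich form W = r(θ̂)' [R V R']⁻¹ r(θ̂), with R the Jacobian of r evaluated at the NLLS estimate. A numerical sketch (the toy estimate and covariance below are invented, not the paper's data):

```python
import numpy as np
from scipy.stats import chi2
from scipy.optimize import approx_fprime

def wald_test_nonlinear(theta_hat, V, r, eps=1e-6):
    """Wald test of H0: r(theta) = 0 given the estimate theta_hat and its
    covariance V; the Jacobian of r is obtained by finite differences."""
    r0 = np.atleast_1d(r(theta_hat))
    R = np.vstack([approx_fprime(theta_hat,
                                 lambda t, i=i: np.atleast_1d(r(t))[i], eps)
                   for i in range(len(r0))])
    W = float(r0 @ np.linalg.solve(R @ V @ R.T, r0))
    return W, chi2.sf(W, df=len(r0))        # statistic and p-value

# Example: test the nonlinear restriction theta1 * theta2 = 1.
theta_hat = np.array([0.8, 1.3])
V = 0.01 * np.eye(2)
print(wald_test_nonlinear(theta_hat, V, lambda t: np.array([t[0] * t[1] - 1.0])))
```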

  6. How accurate are adolescents in portion-size estimation using the computer tool young adolescents' nutrition assessment on computer (YANA-C)?

    OpenAIRE

    Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea

    2010-01-01

    Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amou...

  7. Cystatin C-Based Equation Does Not Accurately Estimate the Glomerular Filtration in Japanese Living Kidney Donors.

    Science.gov (United States)

    Tsujimura, Kazuma; Ota, Morihito; Chinen, Kiyoshi; Adachi, Takayuki; Nagayama, Kiyomitsu; Oroku, Masato; Nishihira, Morikuni; Shiohira, Yoshiki; Iseki, Kunitoshi; Ishida, Hideki; Tanabe, Kazunari

    2017-06-23

    BACKGROUND Precise evaluation of a living donor's renal function is necessary to ensure adequate residual kidney function after donor nephrectomy. Our aim was to evaluate the feasibility of estimating glomerular filtration rate (GFR) using serum cystatin-C prior to kidney transplantation. MATERIAL AND METHODS Using the equations of the Japanese Society of Nephrology, we calculated the GFR using serum creatinine (eGFRcre) and cystatin C levels (eGFRcys) for 83 living kidney donors evaluated between March 2010 and March 2016. We compared eGFRcys and eGFRcre values against the creatinine clearance rate (CCr). RESULTS The study population included 27 males and 56 females. The mean eGFRcys, eGFRcre, and CCr were 91.4±16.3 mL/min/1.73 m² (range, 59.9-128.9 mL/min/1.73 m²), 81.5±14.2 mL/min/1.73 m² (range, 55.4-117.5 mL/min/1.73 m²) and 108.4±21.6 mL/min/1.73 m² (range, 63.7-168.7 mL/min/1.73 m²), respectively. eGFRcys was significantly lower than CCr (p<0.001). The correlation coefficient between eGFRcys and CCr values was 0.466, and the mean difference between the two values was -17.0 (15.7%), with a root mean square error of 19.2. Similarly, eGFRcre was significantly lower than CCr (p<0.001). The correlation coefficient between eGFRcre and CCr values was 0.445, and the mean difference between the two values was -26.9 (24.8%), with a root mean square error of 19.5. CONCLUSIONS Although eGFRcys provided a better estimation of GFR than eGFRcre, eGFRcys still did not provide an accurate measure of kidney function in Japanese living kidney donors.
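
    For reference, the JSN equations referred to here have the following general shape. The coefficients below are quoted from memory of the published equations (Matsuo et al. 2009 for creatinine; the society's companion cystatin C equation) and should be verified against the originals before any use.

```python
def jsn_egfr_cre(scr_mg_dl, age, female):
    """JSN creatinine equation (coefficients from memory; verify vs. source):
    eGFR = 194 * Scr^-1.094 * age^-0.287, x0.739 if female."""
    egfr = 194.0 * scr_mg_dl**-1.094 * age**-0.287
    return egfr * 0.739 if female else egfr

def jsn_egfr_cys(cys_mg_l, age, female):
    """JSN cystatin C equation (same caveat on coefficients):
    eGFR = (104 * CysC^-1.019 * 0.996^age [x0.929 if female]) - 8."""
    base = 104.0 * cys_mg_l**-1.019 * 0.996**age
    if female:
        base *= 0.929
    return base - 8.0

print(jsn_egfr_cre(0.7, 45, female=True), jsn_egfr_cys(0.9, 45, female=True))
```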

  8. CRC Test Ever - Small Area Estimates

    Science.gov (United States)

    For the ever had colorectal cancer test, a person 50 years of age or older must have reported having at least one colorectal endoscopy (sigmoidoscopy or colonoscopy) in his/her life or at least one home-based FOBT within the past two years by the time of interview.

  9. A portable generic elementary function package in Ada and an accurate test suite

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Ping Tak Peter.

    1990-11-01

    A comprehensive set of elementary functions has been implemented portably in Ada. The high accuracy of the implementation has been confirmed by rigorous analysis. Moreover, we present new test methods that are efficient and offer a high resolution of 0.005 unit in the last place. These test methods have been implemented portably here and confirm the accuracy of our implemented functions. Reports on the accuracy of other function libraries obtained by our test programs are also presented. 26 refs., 9 tabs.

  10. Estimation of sample size and testing power (part 5).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference tests for quantitative and qualitative data with the above three designs, the realization based on the formulas and the POWER procedure of SAS software, and elaborated it with examples, which will benefit researchers in implementing the repetition principle.
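
    As one concrete instance, the normal-approximation formula for a paired-design difference test is n = σ_d²(z₁₋α/₂ + z_power)²/δ². A short sketch (the article itself works through SAS's POWER procedure; this is an independent illustration):

```python
from math import ceil
from scipy.stats import norm

def n_paired(delta, sd_diff, alpha=0.05, power=0.80):
    """Pairs needed for a two-sided paired difference test:
    n = sd_diff^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil((sd_diff * z / delta) ** 2)

print(n_paired(delta=5.0, sd_diff=12.0))  # pairs to detect a 5-unit mean shift
```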

  11. Supervised oral HIV self-testing is accurate in rural KwaZulu-Natal, South Africa.

    Science.gov (United States)

    Martínez Pérez, Guillermo; Steele, Sarah J; Govender, Indira; Arellano, Gemma; Mkwamba, Alec; Hadebe, Menzi; van Cutsem, Gilles

    2016-06-01

    To achieve UNAIDS 90-90-90 targets, alternatives to conventional HIV testing models are necessary in South Africa to increase population awareness of their HIV status. One of the alternatives is oral mucosal transudates-based HIV self-testing (OralST). This study describes implementation of counsellor-introduced supervised OralST in a rural area with high HIV prevalence. Cross-sectional study conducted in two government-run primary healthcare clinics and three Médecins Sans Frontières-run fixed-testing sites in uMlalazi municipality, KwaZulu-Natal. Lay counsellors sampled and recruited eligible participants, sought informed consent and demonstrated the use of the OraQuick™ OralST. The participants used the OraQuick™ in front of the counsellor and underwent a blood-based Determine™ and a Unigold™ rapid diagnostic test as gold standard for comparison. Primary outcomes were user error rates, inter-rater agreement, sensitivity, specificity and predictive values. A total of 2198 participants used the OraQuick™, of which 1005 were recruited at the primary healthcare clinics. Of the total, 1457 (66.3%) were women. Only two participants had to repeat their OraQuick™. Inter-rater agreement was 99.8% (Kappa 0.9925). Sensitivity for the OralST was 98.7% (95% CI 96.8-99.6), and specificity was 100% (95% CI 99.8-100). This study demonstrates high inter-rater agreement, and high accuracy of supervised OralST. OralST has the potential to increase uptake of HIV testing and could be offered at clinics and community testing sites in rural South Africa. Further research is necessary on the potential of unsupervised OralST to increase HIV status awareness and linkage to care. © 2016 John Wiley & Sons Ltd.

  12. Paradoxical Effects of Testing: Retrieval Enhances Both Accurate Recall and Suggestibility in Eyewitnesses

    Science.gov (United States)

    Chan, Jason C. K.; Langley, Moses M.

    2011-01-01

    Although retrieval practice typically enhances memory retention, it can also impair subsequent eyewitness memory accuracy (Chan, Thomas, & Bulevich, 2009). Specifically, participants who had taken an initial test about a witnessed event were more likely than nontested participants to recall subsequently encountered misinformation--an effect we…

  13. METRIC TESTS CHARACTERISTIC FOR ESTIMATING JUMPING FOR VOLLEYBALL PLAYERS

    Directory of Open Access Journals (Sweden)

    Toplica Stojanović

    2008-08-01

    Full Text Available With the goal of establishing the metric characteristics of tests for estimating jumping ability in volleyball players, a pilot study was organized on a sample of 23 volleyball players from a cadet team and 23 high-school students. Four tests were used for the estimation: the block jump off the left and the right leg, and the spike jump off the left and the right leg. Each test was administered three times, so that reliability could be determined by the test-retest method and validity by factor analysis. Data were processed by multivariate analysis (item analysis, factor analysis) from the statistical package "Statistica 6.0 for Windows". Based on the results of the research and the discussion, we can say that the tests had high coefficients of reliability as well as factor validity, and that these tests can be used to estimate jumping ability in volleyball players.

  14. A novel device for accurate and efficient testing for vision-threatening diabetic retinopathy.

    Science.gov (United States)

    Maa, April Y; Feuer, William J; Davis, C Quentin; Pillow, Ensa K; Brown, Tara D; Caywood, Rachel M; Chasan, Joel E; Fransen, Stephen R

    2016-04-01

    To evaluate the performance of the RETeval device, a handheld instrument using flicker electroretinography (ERG) and pupillography on undilated subjects with diabetes, to detect vision-threatening diabetic retinopathy (VTDR). Performance was measured using a cross-sectional, single armed, non-interventional, multi-site study with Early Treatment Diabetic Retinopathy Study 7-standard field, stereo, color fundus photography as the gold standard. The 468 subjects were randomized to a calibration phase (80%), whose ERG and pupillary waveforms were used to formulate an equation correlating with the presence of VTDR, and a validation phase (20%), used to independently validate that equation. The primary outcome was the prevalence-corrected area under the receiver operating characteristic (ROC) curve for the detection of VTDR. The area under the ROC curve was 0.86 for VTDR. With a sensitivity of 83%, the specificity was 78% and the negative predictive value was 99%. The average testing time was 2.3 min. With a VTDR prevalence similar to that in the U.S., the RETeval device will identify about 75% of the population as not having VTDR with 99% accuracy. The device is simple to use, does not require pupil dilation, and has a short testing time. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Bacterial Cytological Profiling (BCP) as a Rapid and Accurate Antimicrobial Susceptibility Testing Method for Staphylococcus aureus

    Directory of Open Access Journals (Sweden)

    D.T. Quach

    2016-02-01

    Full Text Available Successful treatment of bacterial infections requires the timely administration of appropriate antimicrobial therapy. The failure to initiate the correct therapy in a timely fashion results in poor clinical outcomes, longer hospital stays, and higher medical costs. Current approaches to antibiotic susceptibility testing of cultured pathogens have key limitations ranging from long run times to dependence on prior knowledge of genetic mechanisms of resistance. We have developed a rapid antimicrobial susceptibility assay for Staphylococcus aureus based on bacterial cytological profiling (BCP), which uses quantitative fluorescence microscopy to measure antibiotic induced changes in cellular architecture. BCP discriminated between methicillin-susceptible (MSSA) and -resistant (MRSA) clinical isolates of S. aureus (n = 71) within 1–2 h with 100% accuracy. Similarly, BCP correctly distinguished daptomycin susceptible (DS) from daptomycin non-susceptible (DNS) S. aureus strains (n = 20) within 30 min. Among MRSA isolates, BCP further identified two classes of strains that differ in their susceptibility to specific combinations of beta-lactam antibiotics. BCP provides a rapid and flexible alternative to gene-based susceptibility testing methods for S. aureus, and should be readily adaptable to different antibiotics and bacterial species as new mechanisms of resistance or multidrug-resistant pathogens evolve and appear in mainstream clinical practice.

  16. Rotating Arc Jet Test Model: Time-Accurate Trajectory Heat Flux Replication in a Ground Test Environment

    Science.gov (United States)

    Laub, Bernard; Grinstead, Jay; Dyakonov, Artem; Venkatapathy, Ethiraj

    2011-01-01

    Though arc jet testing has been the proven method employed for development testing and certification of TPS and TPS instrumentation, the operational aspects of arc jets limit testing to selected, but constant, conditions. Flight, on the other hand, produces time-varying entry conditions in which the heat flux increases, peaks, and recedes as a vehicle descends through an atmosphere. As a result, we are unable to "test as we fly." Attempts to replicate the time-dependent aerothermal environment of atmospheric entry by varying the arc jet facility operating conditions during a test have proven to be difficult, expensive, and only partially successful. A promising alternative is to rotate the test model exposed to a constant-condition arc jet flow to yield a time-varying test condition at a point on a test article (Fig. 1). The model shape and rotation rate can be engineered so that the heat flux at a point on the model replicates the predicted profile for a particular point on a flight vehicle. This simple concept will enable, for example, calibration of the TPS sensors on the Mars Science Laboratory (MSL) aeroshell for anticipated flight environments.

  17. Influences on and Limitations of Classical Test Theory Reliability Estimates.

    Science.gov (United States)

    Arnold, Margery E.

    It is incorrect to say "the test is reliable" because reliability is a function not only of the test itself, but of many factors. The present paper explains how different factors affect classical reliability estimates such as test-retest, interrater, internal consistency, and equivalent forms coefficients. Furthermore, the limits of classical test…

  18. Testing the hierarchical assembly of massive galaxies using accurate merger rates out to z ˜ 1.5

    Science.gov (United States)

    Rodrigues, Myriam; Puech, M.; Flores, H.; Hammer, F.; Pirzkal, N.

    2018-04-01

    We established an accurate comparison between observationally and theoretically estimated major merger rates over a large range of mass (log Mbar/M⊙ = 9.9-11.4) and redshift (z = 0.7-1.6). For this, we combined a new estimate of the merger rate from an exhaustive count of pairs within the virial radius of massive galaxies at z ~ 1.265, cross-validated with their morphology, with estimates from the morpho-kinematic analysis of two other samples. Theoretical predictions were estimated using semi-empirical models with inputs matching the properties of the observed samples, while specific visibility time-scales scaled to the observed samples were used. Both theory and observations are found to agree within 30 per cent of the observed value, which provides strong support to the hierarchical assembly of galaxies over the probed ranges of mass and redshift. Here, we find that ~60 per cent of the population of local massive (Mstellar = 10^10.3-11.6 M⊙) galaxies would have undergone a wet major merger since z = 1.5, consistently with previous studies. Such recent mergers are expected to result in the (re-)formation of a significant fraction of local disc galaxies.

  19. The Stool DNA Test is More Accurate than the Plasma Septin 9 Test in Detecting Colorectal Neoplasia

    Science.gov (United States)

    Ahlquist, David A.; Taylor, William R.; Mahoney, Douglas W.; Zou, Hongzhi; Domanico, Michael; Thibodeau, Stephen N.; Boardman, Lisa A.; Berger, Barry M.; Lidgard, Graham P.

    2014-01-01

    Background & Aims Several noninvasive tests have been developed for colorectal cancer (CRC) screening. We compared the sensitivities of a multi-marker test for stool DNA (sDNA) and a plasma test for methylated Septin 9 (SEPT9) in identifying patients with large adenomas or CRC. Methods We analyzed paired stool and plasma samples from 30 patients with CRC and 22 with large adenomas from Mayo Clinic archives. Stool (n=46) and plasma (n=49) samples from age- and sex-matched patients with normal colonoscopy results were used as controls. The sDNA test is an assay for methylated BMP3, NDRG4, vimentin, and TFPI2; mutant KRAS; the β-actin gene, and quantity of hemoglobin (by the porphyrin method). It was performed blindly at Exact Sciences (Madison WI); the test for SEPT9 was performed at ARUP Laboratories (Salt Lake City UT). Results were considered positive based on the manufacturer's specificity cutoff values of 90% and 89%, respectively. Results The sDNA test detected adenomas (median 2 cm, range 1–5 cm) with 82% sensitivity (95% confidence interval [CI], 60%–95%); SEPT9 had 14% sensitivity (95% CI, 3%–35%; P=.0001). The sDNA test identified patients with CRC with 87% sensitivity (95% CI, 69%–96%); SEPT9 had 60% sensitivity (95% CI, 41%–77%; P=.046). The sDNA test identified patients with stage I–III CRC with 91% sensitivity (95% CI, 71%–99%); SEPT9 had 50% sensitivity (95% CI, 28%–72%; P=.013); for stage IV CRC, sensitivity values were 75% (95% CI, 35%–97%) and 88% (95% CI, 47%–100%), respectively (P=.56). False-positive rates were 7% for the sDNA test and 27% for SEPT9. Conclusions Based on analyses of paired samples, the sDNA test detects non-metastatic CRC and large adenomas with significantly greater levels of sensitivity than the SEPT9 test. These findings might be used to modify approaches for CRC prevention and early detection. PMID:22019796
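
    The sensitivities and confidence intervals quoted above are binomial proportions. Below is a minimal sketch of how such figures can be computed, assuming an exact Clopper-Pearson interval (the paper's own CI method is not stated here) and using a 26-of-30 CRC detection count as the worked example:

```python
# Sketch: sensitivity point estimate with an exact (Clopper-Pearson)
# 95% CI; the choice of the exact method is an assumption.
from scipy.stats import binomtest

def sensitivity_ci(true_positives, diseased, confidence=0.95):
    """Point estimate and exact CI for a diagnostic sensitivity."""
    result = binomtest(true_positives, diseased)
    ci = result.proportion_ci(confidence_level=confidence, method="exact")
    return true_positives / diseased, (ci.low, ci.high)

sens, (lo, hi) = sensitivity_ci(26, 30)  # e.g., 26 of 30 CRCs detected
print(f"sensitivity = {sens:.0%}, 95% CI {lo:.0%}-{hi:.0%}")
```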

  20. Estimating and Testing Mediation Effects with Censored Data

    Science.gov (United States)

    Wang, Lijuan; Zhang, Zhiyong

    2011-01-01

    This study investigated influences of censored data on mediation analysis. Mediation effect estimates can be biased and inefficient with censoring on any one of the input, mediation, and output variables. A Bayesian Tobit approach was introduced to estimate and test mediation effects with censored data. Simulation results showed that the Bayesian…

  1. Using subjective percentiles and test data for estimating fragility functions

    International Nuclear Information System (INIS)

    George, L.L.; Mensing, R.W.

    1981-01-01

    Fragility functions are cumulative distribution functions (cdfs) of strengths at failure. They are needed for reliability analyses of systems such as power generation and transmission systems. Subjective opinions supplement sparse test data for estimating fragility functions. Often the opinions are opinions on the percentiles of the fragility function. Subjective percentiles are likely to be less biased than opinions on parameters of cdfs. Solutions to several problems in the estimation of fragility functions are found for subjective percentiles and test data. How subjective percentiles should be used to estimate subjective fragility functions, how subjective percentiles should be combined with test data, how fragility functions for several failure modes should be combined into a composite fragility function, and how inherent randomness and uncertainty due to lack of knowledge should be represented are considered. Subjective percentiles are treated as independent estimates of percentiles. The following are derived: least-squares parameter estimators for normal and lognormal cdfs, based on subjective percentiles (the method is applicable to any invertible cdf); a composite fragility function for combining several failure modes; estimators of variation within and between groups of experts for nonidentically distributed subjective percentiles; weighted least-squares estimators when subjective percentiles have higher variation at higher percents; and weighted least-squares and Bayes parameter estimators based on combining subjective percentiles and test data. 4 figures, 2 tables
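
    As a minimal sketch of the least-squares idea described above: for a lognormal fragility function, ln s_p = μ + σΦ⁻¹(p), so elicited percentile pairs can be fit by ordinary least squares after inverting the cdf. The percentile values below are hypothetical, and the paper's weighted and Bayes variants are not shown.

```python
# Sketch: least-squares estimation of lognormal fragility parameters
# from subjective percentile judgments via the inverted cdf.
import numpy as np
from scipy.stats import norm

# Expert-elicited percentiles: (percent, strength at failure)
percents = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
strengths = np.array([1.2, 1.8, 2.3, 2.9, 4.1])  # hypothetical units

# For a lognormal cdf, ln(s_p) = mu + sigma * Phi^{-1}(p), which is
# linear in Phi^{-1}(p); ordinary least squares gives (mu, sigma).
z = norm.ppf(percents)
sigma, mu = np.polyfit(z, np.log(strengths), 1)

print(f"median strength = {np.exp(mu):.2f}, log-std = {sigma:.3f}")
```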

  2. Estimation of sample size and testing power (Part 3).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2011-12-01

    This article introduces the definition and sample size estimation of three special tests (namely, non-inferiority test, equivalence test and superiority test) for qualitative data with the design of one factor with two levels having a binary response variable. Non-inferiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. Equivalence test refers to the research design of which the objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. Superiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. By specific examples, this article introduces formulas of sample size estimation for the three special tests, and their SAS realization in detail.
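
    A hedged sketch of the kind of calculation the article describes, using the standard normal-approximation sample-size formula for a one-sided non-inferiority comparison of two proportions (a textbook formula, not necessarily the article's exact SAS implementation); the rates and margin below are hypothetical.

```python
# Sketch: per-group sample size for a non-inferiority test on two
# proportions, normal approximation with one-sided alpha.
import math
from scipy.stats import norm

def n_per_group_noninferiority(p_t, p_c, margin, alpha=0.025, power=0.80):
    """Per-group n for H1: p_t - p_c > -margin (lower response = worse)."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    var = p_t * (1 - p_t) + p_c * (1 - p_c)
    return math.ceil(var * (z_a + z_b) ** 2 / (p_t - p_c + margin) ** 2)

# Equal true response rates of 85%, non-inferiority margin of 10%.
print(n_per_group_noninferiority(0.85, 0.85, margin=0.10))  # ~201
```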

  3. Estimation of sample size and testing power (Part 4).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation of difference test for data with the design of one factor with two levels, including sample size estimation formulas and realization based on the formulas and the POWER procedure of SAS software for quantitative data and qualitative data with the design of one factor with two levels. In addition, this article presents examples for analysis, which will play a leading role for researchers to implement the repetition principle during the research design phase.

  4. Hydrogen sulfide detection based on reflection: from a poison test approach of ancient China to single-cell accurate localization.

    Science.gov (United States)

    Kong, Hao; Ma, Zhuoran; Wang, Song; Gong, Xiaoyun; Zhang, Sichun; Zhang, Xinrong

    2014-08-05

    Inspired by an ancient Chinese poison test approach, we report a rapid hydrogen sulfide detection strategy for specific areas of live cells using silver needles, with a good spatial resolution of 2 × 2 μm². Besides its accurate-localization ability, this reflection-based strategy also has the attractive merits of convenience and robust response, requiring no pretreatment and only a short detection time. The successful evaluation of endogenous H2S levels in the cytoplasm and nucleus of human A549 cells demonstrates the application potential of our strategy in scientific research and medical diagnosis.

  5. Nondestructive test for estimating strength of concrete in structure

    International Nuclear Information System (INIS)

    Nozaki, Yoshitsugu; Soshiroda, Tomozo

    1997-01-01

    Evaluation of the quality of concrete in structures, especially strength estimation, is one of the most important problems, and test methods, especially non-destructive methods applicable in situ, need to be established. The paper describes nondestructive tests to estimate the strength of concrete. From an experimental study using a full-scale model wall, strength-estimating equations are proposed based on ultrasonic pulse velocity, Schmidt hammer rebound hardness, and the two methods combined. From a statistical study of the experimental results, the errors of the strengths estimated by the proposed equations are suggested. The validity of the equations is verified by investigation of existing reinforced concrete buildings aged 20-50 years. It was found from the statistical study that the strength-estimating equations need to be corrected when applied to long-aged concrete, and correction factors to those equations are suggested. Furthermore, the corrected equations were verified by applying them to the other investigated buildings.

  6. Estimation of common cause failure parameters with periodic tests

    Energy Technology Data Exchange (ETDEWEB)

    Barros, Anne [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France)], E-mail: anne.barros@utt.fr; Grall, Antoine [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France); Vasseur, Dominique [Electricite de France, EDF R and D - Industrial Risk Management Department 1, av. du General de Gaulle- 92141 Clamart (France)

    2009-04-15

    In the specific case of safety systems, CCF parameter estimators for standby components depend on the periodic test schemes. Classically, the testing schemes are either staggered (alternation of tests on redundant components) or non-staggered (all components are tested at the same time). In reality, periodic test schemes performed on safety components are more complex and combine staggered tests, when the plant is in operation, with non-staggered tests during maintenance and refueling outage periods of the installation. Moreover, the CCF parameter estimators described in the US literature are derived in a way consistent with US Technical Specifications constraints that do not apply to the French nuclear power plants for staggered tests on standby components. Given these issues, the evaluation of CCF parameters from the operating feedback data available within EDF implies the development of methodologies that integrate the specificities of the testing schemes. This paper formally proposes a solution for the estimation of CCF parameters given two distinct difficulties, respectively related to a mixed testing scheme and to consistency with EDF's specific practices, which induce systematic non-simultaneity of the observed failures in a staggered testing scheme.

  7. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    Science.gov (United States)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  8. Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests

    Science.gov (United States)

    Douglas, Freddie; Bourgeois, Edit Kaminsky

    2005-01-01

    The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).

  9. Junction temperature estimation for an advanced active power cycling test

    DEFF Research Database (Denmark)

    Choi, Uimin; Blaabjerg, Frede; Jørgensen, S.

    2015-01-01

    On-state collector-emitter voltage (VCE) is a good indicator of the wear-out condition of power device modules. Further, it is one of the Temperature Sensitive Electrical Parameters (TSEPs) and thus can be used for junction temperature estimation. In this paper, a junction temperature estimation method using on-state VCE for an advanced active power cycling test is proposed. The concept of the advanced power cycling test is explained first. Afterwards the junction temperature estimation method using on-state VCE and current is presented. Further, a method to improve the accuracy of the maximum junction temperature estimation is also proposed. Finally, the validity and effectiveness of the proposed method are confirmed by experimental results.

  10. Augmented Cross-Sectional Prevalence Testing for Estimating HIV Incidence

    OpenAIRE

    Wang, R.; Lagakos, S. W.

    2010-01-01

    Estimation of an HIV incidence rate based on a cross-sectional sample of individuals evaluated with both a sensitive and less-sensitive diagnostic test offers important advantages over incidence estimation based on a longitudinal cohort study. However, the reliability of the cross-sectional approach has been called into question because of two major concerns. One is the difficulty in obtaining a reliable external approximation for the mean “window period” between detectability of HIV infection ...

  11. The great environmental restoration cost estimating shootout: A blind test of three DOE cost estimating groups

    International Nuclear Information System (INIS)

    Klemen, Paul

    1992-01-01

    The cost of the Department of Energy's (DOE) Environmental Restoration (ER) Program has increased steadily over the last three years and, in the process, has drawn increasing scrutiny from Congress, the public, and government agencies such as the Office of Management and Budget and the General Accounting Office. Programmatic costs have been reviewed by many groups from within the DOE as well as from outside agencies. While cost may appear to be a universally applicable barometer of project conditions, it is actually a single-dimensional manifestation of a complex set of conditions. As such, variations in cost estimates can be caused by a variety of underlying factors such as changes in scope, schedule, performing organization, economic conditions, or regulatory environment. This paper will examine the subject of cost estimates by evaluating three different cost estimates prepared for a single project, including two estimates prepared by project proponents and another prepared by a review team. The paper identifies the reasons for cost growth as measured by the different estimates and evaluates the ability of review estimates to measure the validity of costs. The comparative technique used to test the three cost estimates will identify the reasons for changes in the estimated cost, over time, and evaluate the ability of an independent review to correctly identify the reasons for cost growth and evaluate the reasonableness of the cost proposed by the project proponents. Recommendations are made for improved cost estimates and improved cost estimate reviews. Conclusions are reached regarding the differences in estimate results that can be attributed to differences in estimating techniques, the implications of these differences for decision makers, and circumstances that are unique to environmental cost estimating. (author)

  12. An Accurate Method for Inferring Relatedness in Large Datasets of Unphased Genotypes via an Embedded Likelihood-Ratio Test

    KAUST Repository

    Rodriguez, Jesse M.

    2013-01-01

    Studies that map disease genes rely on accurate annotations that indicate whether individuals in the studied cohorts are related to each other or not. For example, in genome-wide association studies, the cohort members are assumed to be unrelated to one another. Investigators can correct for individuals in a cohort with previously-unknown shared familial descent by detecting genomic segments that are shared between them, which are considered to be identical by descent (IBD). Alternatively, elevated frequencies of IBD segments near a particular locus among affected individuals can be indicative of a disease-associated gene. As genotyping studies grow to use increasingly large sample sizes and meta-analyses begin to include many data sets, accurate and efficient detection of hidden relatedness becomes a challenge. To enable disease-mapping studies of increasingly large cohorts, a fast and accurate method to detect IBD segments is required. We present PARENTE, a novel method for detecting related pairs of individuals and shared haplotypic segments within these pairs. PARENTE is a computationally-efficient method based on an embedded likelihood ratio test. As demonstrated by the results of our simulations, our method exhibits better accuracy than the current state of the art, and can be used for the analysis of large genotyped cohorts. PARENTE's higher accuracy becomes even more significant in more challenging scenarios, such as detecting shorter IBD segments or when an extremely low false-positive rate is required. PARENTE is publicly and freely available at http://parente.stanford.edu/. © 2013 Springer-Verlag.

  13. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    Science.gov (United States)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  14. The test-negative design for estimating influenza vaccine effectiveness.

    Science.gov (United States)

    Jackson, Michael L; Nelson, Jennifer C

    2013-04-19

    The test-negative design has emerged in recent years as the preferred method for estimating influenza vaccine effectiveness (VE) in observational studies. However, the methodologic basis of this design has not been formally developed. In this paper we develop the rationale and underlying assumptions of the test-negative study. Under the test-negative design for influenza VE, study subjects are all persons who seek care for an acute respiratory illness (ARI). All subjects are tested for influenza infection. Influenza VE is estimated from the ratio of the odds of vaccination among subjects testing positive for influenza to the odds of vaccination among subjects testing negative. With the assumptions that (a) the distribution of non-influenza causes of ARI does not vary by influenza vaccination status, and (b) VE does not vary by health care-seeking behavior, the VE estimate from the sample can be generalized to the full source population that gave rise to the study sample. Based on our derivation of this design, we show that test-negative studies of influenza VE can produce biased VE estimates if they include persons seeking care for ARI when influenza is not circulating or do not adjust for calendar time. The test-negative design is less susceptible to bias due to misclassification of infection and to confounding by health care-seeking behavior, relative to traditional case-control or cohort studies. The cost of the test-negative design is the additional, difficult-to-test assumptions that incidence of non-influenza respiratory infections is similar between vaccinated and unvaccinated groups within any stratum of care-seeking behavior, and that influenza VE does not vary across care-seeking strata. Copyright © 2013 Elsevier Ltd. All rights reserved.
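
    The point estimate itself is simple; a minimal sketch with hypothetical counts (the calendar-time adjustment the authors recommend is omitted):

```python
# Sketch: test-negative VE point estimate, VE = 1 - odds ratio of
# vaccination in test-positives vs. test-negatives.
def test_negative_ve(vacc_pos, unvacc_pos, vacc_neg, unvacc_neg):
    """VE (%) from a 2x2 table of vaccination status by flu test result."""
    odds_ratio = (vacc_pos / unvacc_pos) / (vacc_neg / unvacc_neg)
    return 100 * (1 - odds_ratio)

# 40 of 160 influenza-positive ARI patients vaccinated;
# 90 of 210 influenza-negative ARI patients vaccinated.
print(f"VE = {test_negative_ve(40, 120, 90, 120):.1f}%")  # ~55.6%
```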

  15. Nonlinear Parameter Estimation in Microbiological Degradation Systems and Statistic Test for Common Estimation

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik

    1995-01-01

    Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate and the growth of the biomass are described by the Monod model, consisting of two nonlinear coupled first-order differential equations. The objective of this study was to estimate the kinetic parameters in the Monod model and to test whether the parameters from the three identical experiments have the same values. Estimation of the parameters was obtained using an iterative maximum likelihood method, and the test used was an approximative likelihood ratio test. The test showed that the three sets of parameters were identical only at a 4% alpha level.
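
    For concreteness, a minimal sketch of the two coupled Monod equations (substrate depletion and biomass growth); the parameter values below are hypothetical, not the estimates from the three experiments:

```python
# Sketch: forward simulation of the Monod model,
#   dS/dt = -(mu/Y) X,  dX/dt = mu X,  mu = mu_max S / (K_s + S)
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K_s, Y = 0.25, 2.0, 0.5   # 1/h, mg/L, biomass yield (hypothetical)

def monod(t, y):
    S, X = y
    mu = mu_max * S / (K_s + S)       # specific growth rate
    return [-mu * X / Y,              # dS/dt: substrate consumption
            mu * X]                   # dX/dt: biomass growth

sol = solve_ivp(monod, (0, 48), [10.0, 0.1], dense_output=True)
print(sol.y[:, -1])  # final substrate and biomass concentrations
```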

  16. Estimating Accurate Target Coordinates with Magnetic Resonance Images by Using Multiple Phase-Encoding Directions during Acquisition.

    Science.gov (United States)

    Kim, Minsoo; Jung, Na Young; Park, Chang Kyu; Chang, Won Seok; Jung, Hyun Ho; Chang, Jin Woo

    2018-06-01

    Stereotactic procedures are image guided, often using magnetic resonance (MR) images limited by image distortion, which may influence targets for stereotactic procedures. The aim of this work was to assess methods of identifying target coordinates for stereotactic procedures with MR in multiple phase-encoding directions. In 30 patients undergoing deep brain stimulation, we acquired 5 image sets: stereotactic brain computed tomography (CT), T2-weighted images (T2WI), and T1WI in both right-to-left (RL) and anterior-to-posterior (AP) phase-encoding directions. Using CT coordinates as a reference, we analyzed anterior commissure and posterior commissure coordinates to identify any distortion relating to phase-encoding direction. Compared with CT coordinates, RL-directed images had more positive x-axis values (0.51 mm in T1WI, 0.58 mm in T2WI). AP-directed images had more negative y-axis values (0.44 mm in T1WI, 0.59 mm in T2WI). We adopted 2 methods to predict CT coordinates with MR image sets: parallel translation and selective choice of axes according to phase-encoding direction. Both were equally effective at predicting CT coordinates using only MR; however, the latter may be easier to use in clinical settings. Acquiring MR in multiple phase-encoding directions and selecting axes according to the phase-encoding direction allows identification of more accurate coordinates for stereotactic procedures. © 2018 S. Karger AG, Basel.
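
    A minimal sketch of the "selective choice of axes" method as described: take each coordinate from the acquisition whose phase-encoding direction leaves that axis undistorted. The function below is a hypothetical illustration; the handling of z and the sign conventions are assumptions, not the paper's calibration.

```python
# Sketch: combine target coordinates from RL- and AP-encoded MR images,
# avoiding the distorted axis of each acquisition.
def corrected_target(rl_coords, ap_coords):
    """rl_coords/ap_coords: (x, y, z) from RL- and AP-encoded images."""
    x = ap_coords[0]                       # x is distorted in RL-encoded images
    y = rl_coords[1]                       # y is distorted in AP-encoded images
    z = (rl_coords[2] + ap_coords[2]) / 2  # assumption: average the z readings
    return (x, y, z)

print(corrected_target((12.51, -4.10, 2.0), (12.02, -4.62, 2.1)))
```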

  17. Development of Deep Learning Based Data Fusion Approach for Accurate Rainfall Estimation Using Ground Radar and Satellite Precipitation Products

    Science.gov (United States)

    Chen, H.; Chandra, C. V.; Tan, H.; Cifelli, R.; Xie, P.

    2016-12-01

    Rainfall estimation based on onboard satellite measurements has been an important topic in satellite meteorology for decades. A number of precipitation products at multiple time and space scales have been developed based upon satellite observations. For example, NOAA Climate Prediction Center has developed a morphing technique (i.e., CMORPH) to produce global precipitation products by combining existing space-based rainfall estimates. The CMORPH products are essentially derived based on geostationary satellite IR brightness temperature information and retrievals from passive microwave measurements (Joyce et al. 2004). Although the space-based precipitation products provide an excellent tool for regional and global hydrologic and climate studies as well as improved situational awareness for operational forecasts, their accuracy is limited due to sampling limitations, particularly for extreme events such as very light and/or heavy rain. On the other hand, ground-based radar is a more mature science for quantitative precipitation estimation (QPE), especially after the implementation of the dual-polarization technique and further enhanced by urban-scale radar networks. Therefore, ground radars are often critical for providing local-scale rainfall estimation and a "heads-up" for operational forecasters to issue watches and warnings, as well as validation of various space measurements and products. The CASA DFW QPE system, which is based on dual-polarization X-band CASA radars and a local S-band WSR-88DP radar, has demonstrated excellent performance during several years of operation in a variety of precipitation regimes. The real-time CASA DFW QPE products are used extensively for localized hydrometeorological applications such as urban flash flood forecasting. In this paper, a neural-network-based data fusion mechanism is introduced to improve the satellite-based CMORPH precipitation product by taking into account the ground radar measurements. A deep learning system is…

  18. Cost estimate for a proposed GDF Suez LNG testing program

    Energy Technology Data Exchange (ETDEWEB)

    Blanchat, Thomas K.; Brady, Patrick Dennis; Jernigan, Dann A.; Luketa, Anay Josephine; Nissen, Mark R.; Lopez, Carlos; Vermillion, Nancy; Hightower, Marion Michael

    2014-02-01

    At the request of GDF Suez, a Rough Order of Magnitude (ROM) cost estimate was prepared for the design, construction, testing, and data analysis for an experimental series of large-scale liquefied natural gas (LNG) spills on land and water that would result in the largest pool fires and vapor dispersion events ever conducted. Due to the expected cost of this large, multi-year program, the authors utilized Sandia's structured cost estimating methodology. This methodology ensures that the efforts identified can be performed for the cost proposed at a plus or minus 30 percent confidence. The scale of the LNG spill, fire, and vapor dispersion tests proposed by GDF could produce hazard distances and testing safety issues that need to be fully explored. Based on our evaluations, Sandia can utilize much of our existing fire testing infrastructure for the large fire tests and some small dispersion tests (with some modifications) in Albuquerque, but we propose to develop a new dispersion testing site at our remote test area in Nevada because of the large hazard distances. While this might impact some testing logistics, the safety aspects warrant this approach. In addition, we have included a proposal to study cryogenic liquid spills on water and subsequent vaporization in the presence of waves. Sandia is working with DOE on applications that provide infrastructure pertinent to wave production. We present an approach to conduct repeatable wave/spill interaction testing that could utilize such infrastructure.

  19. An Accurate Method for Inferring Relatedness in Large Datasets of Unphased Genotypes via an Embedded Likelihood-Ratio Test

    KAUST Repository

    Rodriguez, Jesse M.; Batzoglou, Serafim; Bercovici, Sivan

    2013-01-01

    As genotyping studies grow to use increasingly large sample sizes and meta-analyses begin to include many data sets, accurate and efficient detection of hidden relatedness becomes a challenge. To enable disease-mapping studies of increasingly large cohorts, a fast and accurate method to detect IBD segments is required. We present PARENTE, a novel method for detecting related pairs of individuals and shared haplotypic segments within these pairs.

  20. Short-Cut Estimators of Criterion-Referenced Test Consistency.

    Science.gov (United States)

    Brown, James Dean

    1990-01-01

    Presents simplified methods for deriving estimates of the consistency of criterion-referenced, English-as-a-Second-Language tests, including (1) the threshold loss agreement approach using agreement or kappa coefficients, (2) the squared-error loss agreement approach using the phi(lambda) dependability approach, and (3) the domain score…
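
    As an illustration of the threshold-loss agreement approach named in (1), a minimal sketch computing the observed agreement coefficient p0 and the kappa coefficient from master/non-master classifications on two test administrations; the counts are hypothetical.

```python
# Sketch: agreement (p0) and kappa for two pass/fail classifications.
def agreement_and_kappa(n11, n10, n01, n00):
    """n11: master on both forms; n00: non-master on both; etc."""
    n = n11 + n10 + n01 + n00
    p0 = (n11 + n00) / n                           # observed agreement
    p_master1, p_master2 = (n11 + n10) / n, (n11 + n01) / n
    pc = (p_master1 * p_master2                    # chance agreement
          + (1 - p_master1) * (1 - p_master2))
    kappa = (p0 - pc) / (1 - pc)
    return p0, kappa

print(agreement_and_kappa(42, 6, 8, 44))  # -> (0.86, 0.72)
```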

  1. IRT-Estimated Reliability for Tests Containing Mixed Item Formats

    Science.gov (United States)

    Shu, Lianghua; Schwarz, Richard D.

    2014-01-01

    As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's α, Feldt-Raju, stratified α, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…
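
    For reference, a minimal sketch of the classical sample version of the first coefficient, Cronbach's α (the IRT-derived counterparts discussed in the article are not reproduced here); the item scores are simulated.

```python
# Sketch: classical Cronbach's alpha from an examinees-by-items matrix.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = examinees, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
items = ability + rng.normal(scale=1.0, size=(200, 5))  # 5 parallel items
print(f"alpha = {cronbach_alpha(items):.2f}")
```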

  2. A Latent Class Approach to Estimating Test-Score Reliability

    Science.gov (United States)

    van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas

    2011-01-01

    This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…

  3. Unit Root Testing in Heteroscedastic Panels Using the Cauchy Estimator

    NARCIS (Netherlands)

    Demetrescu, Matei; Hanck, Christoph

    The Cauchy estimator of an autoregressive root uses the sign of the first lag as instrumental variable. The resulting IV t-type statistic follows a standard normal limiting distribution under a unit root even under unconditional heteroscedasticity, if the series to be tested has no…

  4. On Modal Parameter Estimates from Ambient Vibration Tests

    DEFF Research Database (Denmark)

    Agneni, A.; Brincker, Rune; Coppotelli, B.

    2004-01-01

    Modal parameter estimates from ambient vibration testing are turning into the preferred technique when one is interested in systems under actual loadings and operational conditions. Moreover, with this approach, expensive devices to excite the structure are not needed, since it can be adequately…

  5. Parametric change point estimation, testing and confidence interval ...

    African Journals Online (AJOL)

    In many applications like finance, industry and medicine, it is important to consider that the model parameters may undergo changes at an unknown moment in time. This paper deals with estimation, testing and confidence interval of a change point for a univariate variable which is assumed to be normally distributed. To detect ...

  6. Estimation of Parameters of CCF with Staggered Testing

    International Nuclear Information System (INIS)

    Kim, Myung-Ki; Hong, Sung-Yull

    2006-01-01

    Common cause failures are extremely important in reliability analysis and can be dominant risk contributors in highly reliable systems such as nuclear power plants. Of particular concern are common cause failures (CCF) that degrade the redundancy or diversity implemented to improve the reliability of systems. Most analyses of the parameters of CCF models such as the beta factor, alpha factor, and MGL (Multiple Greek Letters) models treat a system with a non-staggered testing strategy. Non-staggered testing means that all components are tested at the same time (or at least in the same shift); in staggered testing, if there is a failure in the first component, all the other components are tested immediately, and if it succeeds, no more is done until the next scheduled testing time. Both strategies are applied in nuclear power plants. The strategy, however, is not explicitly described in the technical specifications, but implicitly in the periodic test procedure. For example, some redundant components particularly important to safety are tested with a staggered testing strategy, while others are tested with a non-staggered strategy. This paper presents parameter estimators for CCF models such as the beta factor, MGL, and alpha factor models under a staggered testing strategy. In addition, a new CCF model, the rho factor model, is proposed and its parameter estimator is presented for the staggered testing strategy.
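
    As a minimal illustration of the simplest of these models, the classical beta-factor point estimate (the fraction of component failures that occurred as common cause events) is sketched below with hypothetical counts; the staggered-testing corrections that are the subject of the paper are not applied.

```python
# Sketch: classical beta-factor point estimate from failure counts.
def beta_factor(n_ccf_component_failures, n_independent_failures):
    """beta = common-cause component failures / all component failures."""
    total = n_ccf_component_failures + n_independent_failures
    return n_ccf_component_failures / total

print(f"beta = {beta_factor(4, 46):.2f}")  # 4 CCF failures out of 50 total
```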

  7. A Single Test Combining Blood Markers and Elastography is More Accurate Than Other Fibrosis Tests in the Main Causes of Chronic Liver Diseases.

    Science.gov (United States)

    Ducancelle, Alexandra; Leroy, Vincent; Vergniol, Julien; Sturm, Nathalie; Le Bail, Brigitte; Zarski, Jean Pierre; Nguyen Khac, Eric; Salmon, Dominique; de Ledinghen, Victor; Calès, Paul

    2017-08-01

    International guidelines suggest combining a blood test and liver stiffness measurement (LSM) to stage liver fibrosis in chronic hepatitis C (CHC) and non-alcoholic fatty liver disease (NAFLD). Therefore, we compared the accuracies of these tests between the main etiologies of chronic liver diseases. Overall, 1968 patients were included in 5 etiologies: CHC: 698, chronic hepatitis B: 152, human immunodeficiency virus/CHC: 628, NAFLD: 225, and alcoholic liver disease (ALD): 265. Sixteen tests [13 blood tests, LSM (Fibroscan), 2 combined: FibroMeters] were evaluated. References were Metavir staging and CHC etiology. Accuracy was evaluated mainly with the Obuchowski index (OI) and accessorily with area under the receiver operating characteristics (F≥2, F≥3, cirrhosis). OIs in CHC were: FibroMeters: 0.812, FibroMeters: 0.785 to 0.797, Fibrotest: 0.762, CirrhoMeters: 0.756 to 0.771, LSM: 0.754, Hepascore: 0.752, FibroMeter: 0.750, aspartate aminotransferase platelet ratio index: 0.742, Fib-4: 0.741. In other etiologies, most tests had nonsignificant changes in OIs. In NAFLD, CHC-specific tests were more accurate than NAFLD-specific tests. The combined FibroMeters had significantly higher accuracy than their 2 constitutive tests (FibroMeters and LSM) in at least 1 diagnostic target in all etiologies, except in ALD where LSM had the highest OI, and in 3 diagnostic targets (OIs and 2 area under the receiver operating characteristics) in CHC and NAFLD. Some tests developed in CHC outperformed other tests in their specific etiologies. Tests combining blood markers and LSM outperformed single tests, validating recent guidelines and extending them to main etiologies. Noninvasive fibrosis evaluation can thus be simplified in the main etiologies by using a unique test: either LSM alone, especially in ALD, or preferably combined to blood markers.

  8. Estimation for a Weibull accelerated life testing model

    International Nuclear Information System (INIS)

    Glaser, R.E.

    1984-01-01

    It is sometimes reasonable to assume that the lifetime distribution of an item belongs to a certain parametric family, and that actual parameter values depend upon the testing environment of the item. In the two-parameter Weibull family setting, suppose both the shape and scale parameters are expressible as functions of the testing environment. For various models of functional dependency on environment, maximum likelihood methods are used to estimate characteristics of interest at specified environmental levels. The methodology presented handles exact, censored, and grouped data. A detailed accelerated life testing analysis of stress-rupture data for Kevlar/epoxy composites is given. 10 references, 1 figure, 2 tables
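
    A hedged sketch of the approach: maximum likelihood fitting of a Weibull model whose scale parameter is log-linear in the stress level, one plausible functional dependency among those the paper considers. The failure times are synthetic, and the parameterization is an assumption.

```python
# Sketch: MLE for a Weibull accelerated life model with
# scale lambda(stress) = exp(a + b * stress) and constant shape k.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
stress = np.repeat([1.0, 1.5, 2.0], 30)          # three stress levels
true_scale = np.exp(6.0 - 1.2 * stress)         # lambda(stress)
times = true_scale * rng.weibull(2.5, size=stress.size)

def neg_log_lik(theta):
    a, b, log_k = theta
    k, lam = np.exp(log_k), np.exp(a + b * stress)
    z = times / lam
    # Weibull log-density: log(k/lam) + (k-1) log(t/lam) - (t/lam)^k
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z**k)

fit = minimize(neg_log_lik, x0=[5.0, -1.0, 0.5], method="Nelder-Mead")
a_hat, b_hat, k_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"a={a_hat:.2f}, b={b_hat:.2f}, shape={k_hat:.2f}")
```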

  9. Albuminuria and neck circumference are determinate factors of successful accurate estimation of glomerular filtration rate in high cardiovascular risk patients.

    Directory of Open Access Journals (Sweden)

    Po-Jen Hsiao

    Estimated glomerular filtration rate (eGFR) is used for the diagnosis of chronic kidney disease (CKD). eGFR models based on serum creatinine or cystatin C are the most widely used in clinical practice. Albuminuria and neck circumference are associated with CKD and may correlate with eGFR. We explored the correlations and modelling formulas among various indicators such as serum creatinine, cystatin C, albuminuria, and neck circumference for eGFR. This was a cross-sectional study. We reviewed the records of patients with high cardiovascular risk from 2010 to 2011 in Taiwan. 24-hour urine creatinine clearance was used as the standard. We utilized a decision tree to select variables and adopted a stepwise regression method to generate five models. Model 1 was based on serum creatinine only and was adjusted for age and gender. Model 2 added serum cystatin C; models 3 and 4 added albuminuria and neck circumference, respectively. Model 5 simultaneously added both albuminuria and neck circumference. A total of 177 patients were recruited in this study. In model 1, the bias was 2.01 and the precision was 14.04. In model 2, the bias was reduced to 1.86 with a precision of 13.48. The bias of model 3 was 1.49 with a precision of 12.89, and the bias of model 4 was 1.74 with a precision of 12.97. In model 5, the bias was further reduced to 1.40 with a precision of 12.53. In this study, the predictive ability of eGFR was improved by the addition of serum cystatin C compared to serum creatinine alone. The bias was more significantly reduced by incorporating albuminuria. Furthermore, the model combining albuminuria and neck circumference provided the best eGFR predictions among the five models. Neck circumference can potentially be investigated in further studies.

  10. Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations

    Directory of Open Access Journals (Sweden)

    H. Nakajima

    2006-01-01

    Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore water pressure were measured during the tests at multiple gravity levels. Based on the scaling laws of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study, up to 40 g, the optimized unsaturated parameters compared well when accurate pore water pressure measurements were included along with cumulative outflow as input data. With its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions and to shorten testing time, the centrifuge modeling technique is attractive as an alternative experimental method that provides more freedom to set inverse problem conditions for parameter estimation.

  11. Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations

    Science.gov (United States)

    Nakajima, H.; Stadler, A. T.

    2006-10-01

    Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore water pressure were measured during the tests at multiple gravity levels. Based on the scaling laws of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study, up to 40 g, the optimized unsaturated parameters compared well when accurate pore water pressure measurements were included along with cumulative outflow as input data. With its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions and to shorten testing time, the centrifuge modeling technique is attractive as an alternative experimental method that provides more freedom to set inverse problem conditions for parameter estimation.

  12. Is an alcoholic fixative fluid used for manual liquid-based cytology accurate to perform HPV tests?

    Directory of Open Access Journals (Sweden)

    Garbar C

    2011-12-01

    Christian Garbar,1 Corinne Mascaux,1 Philippe De Graeve,2 Philippe Delvenne3. 1Department of Biopathology, Institute Jean Godinot, Reims Cedex, France; 2Centre de Pathologie des Coteaux, Toulouse, France; 3Department of Pathology, University of Liege, Tour de Pathologie, Domaine Universitaire du Sart Tilman, Liège, Belgium. Abstract: In Europe, the alternative centrifuge method of liquid-based cytology is widely used in cervical screening. Turbitec® (Labonord SAS, Templemars, France) is a centrifuge method of liquid-based cytology using an alcoholic fixative fluid, Easyfix® (Labonord). It is now well accepted that liquid-based cytology and human papillomavirus testing are inseparable in cervical screening. The aim of this work was to demonstrate that the Easyfix alcoholic fluid is reliable for performing Hybrid Capture® 2 (QIAGEN SAS, Courtaboeuf, France). In this study, 75 patients undergoing colposcopy for cervical lesions served as the gold standard. A sample was collected, at random, for Easyfix fixative cytological fluid and for the Digene Cervical Sampler (QIAGEN). The results of Hybrid Capture 2 (with relative light unit >1) showed no statistical difference, a positive Spearman's correlation (r = 0.82, P < 0.0001), and a kappa value of 0.87 (excellent agreement) between the two fluids. It was concluded that Easyfix is accurate for use in human papillomavirus tests with Hybrid Capture 2. Keywords: human papillomavirus, hybrid capture 2, Turbitec®, cervix cytology, liquid-based cytology

  13. Accurate diagnosis of myalgic encephalomyelitis and chronic fatigue syndrome based upon objective test methods for characteristic symptoms

    Science.gov (United States)

    Twisk, Frank NM

    2015-01-01

    Although myalgic encephalomyelitis (ME) and chronic fatigue syndrome (CFS) are considered to be synonymous, the definitional criteria for ME and CFS define two distinct, partially overlapping, clinical entities. ME, whether defined by the original criteria or by the recently proposed criteria, is not equivalent to CFS, let alone a severe variant of incapacitating chronic fatigue. Distinctive features of ME are: muscle weakness and easy muscle fatigability, cognitive impairment, circulatory deficits, a marked variability of the symptoms in presence and severity, but above all, post-exertional “malaise”: a (delayed) prolonged aggravation of symptoms after a minor exertion. In contrast, CFS is primarily defined by (unexplained) chronic fatigue, which should be accompanied by four out of a list of 8 symptoms, e.g., headaches. Due to the subjective nature of several symptoms of ME and CFS, researchers and clinicians have questioned the physiological origin of these symptoms and qualified ME and CFS as functional somatic syndromes. However, various characteristic symptoms, e.g., post-exertional “malaise” and muscle weakness, can be assessed objectively using well-accepted methods, e.g., cardiopulmonary exercise tests and cognitive tests. The objective measures acquired by these methods should be used to accurately diagnose patients, to evaluate the severity and impact of the illness objectively and to assess the positive and negative effects of proposed therapies impartially. PMID:26140274

  14. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    Directory of Open Access Journals (Sweden)

    Pricila da Silva Cunha

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we chose two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we were able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.

  15. Ingestion of Nevada Test Site Fallout: Internal dose estimates

    International Nuclear Information System (INIS)

    Whicker, F.W.; Kirchner, T.B.; Anspaugh, L.R.

    1996-01-01

    This paper summarizes individual and collective dose estimates for the internal organs of hypothetical yet representative residents of selected communities that received measurable fallout from nuclear detonations at the Nevada Test Site. The doses, which resulted from ingestion of local and regional food products contaminated with over 20 radionuclides, were estimated with use of the PATHWAY food-chain-transport model to provide estimates of central tendency and uncertainty. The thyroid gland received much higher doses than other internal organs and tissues. In a very few cases, infants might have received thyroid doses in excess of 1 Gy, depending on location, diet, and timing of fallout. 131I was the primary thyroid dose contributor, and fresh milk was the main exposure pathway. With the exception of the thyroid, organ doses from the ingestion pathway were much smaller (<3%) than those from external gamma exposure to deposited fallout. Doses to residents living closest to the Nevada Test Site were contributed mainly by a few fallout events; doses to more distantly located people were generally smaller, but a greater number of events provided measurable contributions. The effectiveness of different fallout events in producing internal organ doses through ingestion varied dramatically with the seasonal timing of the test, with maximum dose per unit fallout occurring for early summer depositions when milk cows were on pasture and fresh, local vegetables were used. Within specific communities, internal doses differed by age, sex, and lifestyle. Collective internal dose estimates for specific geographic areas are provided

  16. Adaptive estimation of the electromotive force of the lithium-ion battery after current interruption for an accurate state-of-charge and capacity determination

    International Nuclear Information System (INIS)

    Waag, Wladislaw; Sauer, Dirk Uwe

    2013-01-01

    Highlights: • New adaptive approach for the EMF estimation. • The EMF is estimated by observing the voltage change after the current interruption. • The approach enables an accurate SoC and capacity determination. • Real-time capable algorithm. - Abstract: The online estimation of battery states and parameters is one of the challenging tasks when a battery is used as part of a pure electric or hybrid energy system. For the determination of the available energy stored in the battery, knowledge of the present state-of-charge (SOC) and capacity of the battery is required. For SOC and capacity determination, an estimation of the battery electromotive force (EMF) is often employed. The electromotive force can be measured as the open circuit voltage (OCV) of the battery when a significant time has elapsed since the current interruption. This time may take up to several hours for lithium-ion batteries and is needed to eliminate the influence of the diffusion overvoltages. This paper proposes a new approach to estimate the EMF by considering the OCV relaxation process within only the first few minutes after the current interruption. The approach is based on an online fitting of an OCV relaxation model to the measured OCV relaxation curve. This model is based on an equivalent circuit consisting of a voltage source (representing the EMF) in series with the parallel connection of a resistance and a constant phase element (CPE). Based on this fitting, the model parameters are determined and the EMF is estimated. The application of this method is exemplarily demonstrated for the state-of-charge and capacity estimation of a lithium-ion battery in an electric vehicle. In the presented example the battery capacity is determined with a maximal inaccuracy of 2% using the EMF estimated at two different levels of state-of-charge. The real-time capability of the proposed algorithm is proven by its implementation on a low-cost 16-bit microcontroller (Infineon XC2287)
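
    A hedged sketch of the fitting step: estimate the EMF as the asymptote of a parametric relaxation model fitted to the first minutes of OCV data after current interruption. A stretched-exponential relaxation is used below as a simple stand-in for the paper's R-CPE model, and the voltage data are synthetic.

```python
# Sketch: fit a relaxation model to post-interruption OCV data and
# read the EMF off as the fitted asymptote. The stretched-exponential
# form is an assumption, not the paper's exact R-CPE solution.
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, emf, dv, tau, gamma):
    return emf + dv * np.exp(-(t / tau) ** gamma)

t = np.linspace(1, 300, 300)                       # seconds after cutoff
v = relaxation(t, 3.70, -0.05, 60.0, 0.7)          # synthetic "measurement"
v += np.random.default_rng(2).normal(0, 2e-4, t.size)

p0 = [v[-1], v[0] - v[-1], 50.0, 0.8]              # rough initial guess
(emf, dv, tau, gamma), _ = curve_fit(relaxation, t, v, p0=p0)
print(f"estimated EMF = {emf:.4f} V")
```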

  17. Influence of different open circuit voltage tests on state of charge online estimation for lithium-ion batteries

    International Nuclear Information System (INIS)

    Zheng, Fangdan; Xing, Yinjiao; Jiang, Jiuchun; Sun, Bingxiang; Kim, Jonghoon; Pecht, Michael

    2016-01-01

    Highlights: • Two common tests for observing battery open circuit voltage performance are compared. • The temperature dependency of the OCV-SOC relationship is investigated. • Two estimators are evaluated in terms of accuracy and robustness for estimating battery SOC. • The incremental OCV test is better suited to predetermine the OCV-SOCs for SOC online estimation. - Abstract: Battery state of charge (SOC) estimation is a crucial function of battery management systems (BMSs), since an accurate SOC estimate is critical to ensure the safety and reliability of electric vehicles. A widely used technique for SOC estimation is based on online inference of battery open circuit voltage (OCV). Low-current OCV and incremental OCV tests are two common methods to observe the OCV-SOC relationship, which is an important element of the SOC estimation technique. In this paper, the two OCV tests are run at three different temperatures and, based on them, two SOC estimators are compared and evaluated in terms of tracking accuracy, convergence time, and robustness for online estimation of battery SOC. The temperature dependency of the OCV-SOC relationship is investigated and its influence on SOC estimation results is discussed. In addition, four dynamic tests are presented: one for estimator parameter identification and the other three for estimator performance evaluation. The comparison results show that estimator 2 (based on the incremental OCV test) has higher tracking accuracy and is more robust against varied loading conditions and different initial values of SOC than estimator 1 (based on the low-current OCV test) with regard to ambient temperature. Therefore, the incremental OCV test is recommended for predetermining the OCV-SOCs for battery SOC online estimation in BMSs.
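
    Both estimators ultimately invert a predetermined OCV-SOC table; a minimal sketch of that lookup by linear interpolation (the table values below are hypothetical, and temperature-specific tables would be used in practice):

```python
# Sketch: invert a monotonic OCV-SOC curve by linear interpolation.
import numpy as np

soc_grid = np.linspace(0.0, 1.0, 11)
ocv_grid = np.array([3.20, 3.45, 3.55, 3.60, 3.63, 3.67,
                     3.72, 3.80, 3.90, 4.02, 4.15])  # volts, e.g. at 25 C

def soc_from_ocv(ocv):
    """SOC estimate from an inferred OCV value."""
    return np.interp(ocv, ocv_grid, soc_grid)

print(f"SOC at 3.75 V: {soc_from_ocv(3.75):.2f}")
```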

  18. Accurate source location from waves scattered by surface topography: Applications to the Nevada and North Korean test sites

    Science.gov (United States)

    Shen, Y.; Wang, N.; Bao, X.; Flinders, A. F.

    2016-12-01

    Scattered waves generated near the source contain energy converted from the near-field waves to the far-field propagating waves, which can be used to achieve location accuracy beyond the diffraction limit. In this work, we apply a novel full-wave location method that combines a grid-search algorithm with a 3D Green's tensor database to locate the Non-Proliferation Experiment (NPE) at the Nevada test site and the North Korean nuclear tests. We use the first arrivals (Pn/Pg) and their immediate codas, which are likely dominated by waves scattered at the surface topography near the source, to determine the source location. We investigate seismograms in the 1.0-2.0 Hz frequency band to reduce noise in the data and highlight topography-scattered waves. High-resolution topographic models constructed from 10 and 90 m grids are used for Nevada and North Korea, respectively. The reference velocity model is based on CRUST 1.0. We use the collocated-grid finite difference method on curvilinear grids to calculate the strain Green's tensor and obtain synthetic waveforms using source-receiver reciprocity. The 'best' solution is found based on the least-squares misfit between the observed and synthetic waveforms. To suppress random noise, an optimal weighting method for three-component seismograms is applied in the misfit calculation. Our results show that the scattered waves are crucial in improving resolution and allow us to obtain accurate solutions with a small number of stations. Since the scattered waves depend on topography, which is known at the wavelengths of regional seismic waves, our approach yields absolute, instead of relative, source locations. We compare our solutions with those of USGS and other studies. Moreover, we use differential waveforms to locate pairs of the North Korean tests from years 2006, 2009, 2013 and 2016 to further reduce the effects of unmodeled heterogeneities and errors in the reference velocity model.

  19. Parameters estimation for reactive transport: A way to test the validity of a reactive model

    Science.gov (United States)

    Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme

    The chemical parameters used in reactive transport models are not known accurately due to the complexity and the heterogeneous conditions of a real domain. We present an efficient algorithm to estimate the chemical parameters using a Monte-Carlo method. Monte-Carlo methods are very robust for the optimisation of the highly non-linear mathematical models describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm is used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three models of surface complexation, we show that the proposed adsorption model cannot explain the experimental data.
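
    A minimal sketch of the Monte-Carlo search described above: draw candidate parameters from prior bounds and keep the set that minimizes the misfit to observations. The placeholder model below stands in for the TBT surface-complexation model, and all values are hypothetical.

```python
# Sketch: random-search parameter estimation by misfit minimization.
import numpy as np

rng = np.random.default_rng(3)
observed_x = np.linspace(0, 1, 20)
observed_y = 2.0 * np.exp(-3.0 * observed_x)      # stand-in "field data"

def model(x, a, b):
    return a * np.exp(-b * x)                     # placeholder model

best, best_misfit = None, np.inf
for _ in range(10_000):
    a = rng.uniform(0.1, 5.0)                     # prior bounds on a
    b = rng.uniform(0.1, 10.0)                    # prior bounds on b
    misfit = np.sum((model(observed_x, a, b) - observed_y) ** 2)
    if misfit < best_misfit:
        best, best_misfit = (a, b), misfit

print(f"best (a, b) = {best}, misfit = {best_misfit:.2e}")
```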

  20. Estimation of sample size and testing power (part 6).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-03-01

    The design of one factor with k levels (k ≥ 3) refers to the research that only involves one experimental factor with k levels (k ≥ 3), and there is no arrangement for other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and qualitative data having a binary response variable with the design of one factor with k levels (k ≥ 3).

  1. Field Test of Gopher Tortoise (Gopherus Polyphemus) Population Estimation Techniques

    Science.gov (United States)

    2008-04-01

    World Wide Web (WWW) at URL: http://www.cecer.army.mil. The gopher tortoise is a species of conservation concern in the... L = (b / [cv_t(D̂)]²)(L₀/n₀)   (4) where: L = estimate of line length to be sampled, b = dispersion parameter, cv_t(D̂) = desired coefficient of variation of the density estimate D̂, and L₀, n₀ = line length and number of detections from a pilot survey.

  2. Estimation of soil properties and free product volume from baildown tests

    International Nuclear Information System (INIS)

    Zhu, J.L.; Parker, J.C.; Lundy, D.A.; Zimmerman, L.M.

    1993-01-01

    Baildown tests, involving measurement of water and free product levels in a monitoring well after bailing, are often performed at spill sites to estimate the oil volume per unit area, which the authors refer to as "oil specific volume." Spill volume is estimated by integrating oil specific volume over the areal domain of the spill. Existing methods for interpreting baildown tests are based on grossly simplistic approximations of soil capillary properties that cannot accurately describe the transient well response. A model for vertical-equilibrium oil distributions based on the van Genuchten capillary model has been documented and verified in the laboratory and in the field by various authors. The model enables oil specific volume and oil transmissivity to be determined as functions of well product thickness. This paper describes a method for estimating van Genuchten capillary parameters, as well as aquifer hydraulic conductivity, from baildown tests. The results yield the relationships of oil specific volume and oil transmissivity to apparent product thickness, which may be used, in turn, to compute spill volume and to model free product plume movement and free product recovery. The method couples a finite element model for radial flow of oil and water to a well with a nonlinear parameter estimation algorithm. Effects of the filter pack around the well on the fluid level response are considered explicitly by the model. The method, which is implemented in the program BAILTEST, is applied to field data from baildown tests. The results indicate that hydrographs of water and oil levels are accurately described by the model.

  3. The TiltMeter app is a novel and accurate measurement tool for the weight bearing lunge test.

    Science.gov (United States)

    Williams, Cylie M; Caserta, Antoni J; Haines, Terry P

    2013-09-01

    The weight bearing lunge test is increasingly being used by health care clinicians who treat lower limb and foot pathology. This measure is commonly established accurately and reliably with the use of expensive equipment. This study aims to compare the digital inclinometer with a free app, TiltMeter, on an Apple iPhone. This was an intra-rater and inter-rater reliability study. Two raters (novice and experienced) conducted the measurements in both a bent knee and a straight leg position to determine the intra-rater and inter-rater reliability. Concurrent validity was also established. Allied health practitioners were recruited as participants from the workplace. A preconditioning stretch was conducted and the ankle range of motion was established in the weight bearing lunge test position, first with the leg straight and second with the knee bent. The measurement device and each participant were randomised during measurement. The intra-rater and inter-rater reliabilities for the devices in both positions were all above ICC 0.8, except for one intra-rater measure (digital inclinometer, novice, ICC 0.65). The inter-rater reliability between the digital inclinometer and the TiltMeter app was near perfect, ICC 0.96 (CI: 0.898-0.983). Concurrent validity ICC between the two devices was 0.83 (CI: -0.740 to 0.445). The use of the TiltMeter app on the iPhone is a reliable and inexpensive tool to measure the available ankle range of motion. Health practitioners should use caution in applying these findings to other smart phone equipment if surface areas are not comparable. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  4. Are rapid diagnostic tests more accurate in diagnosis of plasmodium falciparum malaria compared to microscopy at rural health centres?

    Directory of Open Access Journals (Sweden)

    Magnussen Pascal

    2010-12-01

    Full Text Available Abstract Background Prompt, accurate diagnosis and treatment with artemisinin combination therapy remain vital to current malaria control. Blood film microscopy, the current standard test for the diagnosis of malaria, has several limitations that necessitate field evaluation of alternative diagnostic methods, especially in low-income countries of sub-Saharan Africa where malaria is endemic. Methods The accuracy of axillary temperature, health centre (HC) microscopy, expert microscopy and a HRP2-based rapid diagnostic test (Paracheck) was compared in predicting malaria infection, using polymerase chain reaction (PCR) as the gold standard. Three hundred patients with a clinical suspicion of malaria, based on fever and/or history of fever, from a low and a high transmission setting in Uganda were consecutively enrolled and provided blood samples for all tests. The accuracy of each test was calculated overall, with 95% confidence intervals, and then adjusted for age group and level of transmission intensity using a stratified analysis. The endpoints were: sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). This study is registered with Clinicaltrials.gov, NCT00565071. Results Of the 300 patients, 88 (29.3%) had fever, 56 (18.7%) were positive by HC microscopy, 47 (15.7%) by expert microscopy, 110 (36.7%) by Paracheck and 89 (29.7%) by PCR. An overall sensitivity >90% was shown only by Paracheck: 91.0% [95% CI: 83.1-96.0]. The sensitivity of expert microscopy was 46%, similar to HC microscopy. The superior sensitivity of Paracheck compared to microscopy was maintained when the data were stratified for transmission intensity and age. The overall specificity rates were: Paracheck 86.3% [95% CI: 80.9-90.6], HC microscopy 93.4% [95% CI: 89.1-96.3] and expert microscopy 97.2% [95% CI: 93.9-98.9]. An NPV >90% was shown by Paracheck: 95.8% [95% CI: 91.9-98.2]. The overall PPV was... Conclusion The HRP2-based RDT has shown superior sensitivity compared to
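
    For reference, the endpoints reduce to simple ratios over a 2x2 table against the PCR gold standard. The counts below are not taken from the paper; they are back-calculated to be consistent with the reported Paracheck rates (sensitivity 91.0%, specificity 86.3%, NPV 95.8%), and the resulting PPV is a property of these assumed counts, not a value reported in the truncated abstract.

        def diagnostic_accuracy(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV and NPV from a 2x2 table
            against a gold standard (here PCR)."""
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            ppv = tp / (tp + fp)
            npv = tn / (tn + fn)
            return sens, spec, ppv, npv

        # Hypothetical RDT-vs-PCR counts for 300 patients, chosen to match
        # the reported rates above
        print(diagnostic_accuracy(tp=81, fp=29, fn=8, tn=182))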

  5. How accurate are adolescents in portion-size estimation using the computer tool Young Adolescents' Nutrition Assessment on Computer (YANA-C)?

    Science.gov (United States)

    Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea

    2010-06-01

    Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amounts of ten commonly consumed foods (breakfast cereals, French fries, pasta, rice, apple sauce, carrots and peas, crisps, creamy velouté, red cabbage, and peas). Two procedures were followed: (1) short-term recall: adolescents (n = 73) self-served their usual portions of the ten foods and estimated the amounts later the same day; (2) real-time perception: adolescents (n = 128) estimated two sets (different portions) of pre-weighed portions displayed near the computer. Self-served portions were, on average, 8% underestimated; significant underestimates were found for breakfast cereals, French fries, peas, and carrots and peas. Spearman's correlations between the self-served and estimated weights varied between 0.51 and 0.84, with an average of 0.72. The kappa statistics were moderate (>0.4) for all but one item. Pre-weighed portions were, on average, 15% underestimated, with significant underestimates for fourteen of the twenty portions. Photographs of food items can serve as a good aid in ranking subjects; however, to assess the actual intake at a group level, underestimation must be considered.

  6. Person fit for test speededness: normal curvatures, likelihood ratio tests and empirical Bayes estimates

    NARCIS (Netherlands)

    Goegebeur, Y.; de Boeck, P.; Molenberghs, G.

    2010-01-01

    The local influence diagnostics, proposed by Cook (1986), provide a flexible way to assess the impact of minor model perturbations on key model parameters’ estimates. In this paper, we apply the local influence idea to the detection of test speededness in a model describing nonresponse in test data,

  7. Statistical Estimation of Heterogeneities: A New Frontier in Well Testing

    Science.gov (United States)

    Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.

    2001-12-01

    Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.

  8. Applicability of Avery's coupled reactor theory to estimate subcriticality of test region in two region system

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    1992-01-01

    The author examined the validity of estimating the subcriticality of a test region in a coupled reactor system using only measurable quantities, on the basis of Avery's coupled reactor theory. For this purpose, coupled reactor experiments performed at the Tank-type Critical Assembly of the Japan Atomic Energy Research Institute were analyzed for two-region systems, and the subcriticality of the test region was evaluated through a numerical study. Coupling coefficients were redefined at the quasi-static state because their definitions by Avery were not clear. With the coupling coefficients obtained by the numerical calculation, the multiplication factor of the test region was evaluated by two formulas: one using only the measurable quantities, and the other an accurate evaluation that retains the terms dropped from the former formula under the assumption that the perturbation induced in the driver region is unchanged. From the comparison between the results of the evaluations, it was found that the estimation using only the measurable quantities is valid only for a coupled reactor system in which the subcriticality of the test region is very small, within a few dollars in reactivity. Consequently, it is concluded that the estimation using only the measurable quantities is not applicable to a general coupled reactor system. (author)

  9. Supporting Accurate Interpretation of Self-Administered Medical Test Results for Mobile Health: Assessment of Design, Demographics, and Health Condition.

    Science.gov (United States)

    Hohenstein, Jess C; Baumer, Eric Ps; Reynolds, Lindsay; Murnane, Elizabeth L; O'Dell, Dakota; Lee, Seoho; Guha, Shion; Qi, Yu; Rieger, Erin; Gay, Geri

    2018-02-28

    Technological advances in personal informatics allow people to track their own health in a variety of ways, representing a dramatic change in individuals' control of their own wellness. However, research regarding patient interpretation of traditional medical tests highlights the risks in making complex medical data available to a general audience. This study aimed to explore how people interpret medical test results, examined in the context of a mobile blood testing system developed to enable self-care and health management. In a preliminary investigation and a main study, we presented 27 and 303 adults, respectively, with hypothetical results from several blood tests via one of several mobile interface designs: a number representing the raw measurement of the tested biomarker, natural language text indicating whether the biomarker's level was low or high, or a one-dimensional chart illustrating this level along a low-healthy axis. We measured respondents' correctness in evaluating these results and their confidence in their interpretations. Participants also told us about any follow-up actions they would take based on the result and how they envisioned, generally, using our proposed personal health system. We find that a majority of participants (242/328, 73.8%) were accurate in their interpretations of their diagnostic results. However, 135 of 328 participants (41.1%) expressed uncertainty and confusion about their ability to correctly interpret these results. We also find that demographics and interface design can impact interpretation accuracy, including false confidence, which we define as a respondent having above-average confidence despite interpreting a result inaccurately. Specifically, participants who saw a natural language design were the least likely (421.47 times, P=.02) to exhibit false confidence, and women who saw a graph design were less likely (8.67 times, P=.04) to have false confidence. On the other hand, false confidence was more likely

  10. A test for Improvement of high resolution Quantitative Precipitation Estimation for localized heavy precipitation events

    Science.gov (United States)

    Lee, Jung-Hoon; Roh, Joon-Woo; Park, Jeong-Gyun

    2017-04-01

    Accurate estimation of precipitation is one of the most difficult and significant tasks in weather diagnosis and forecasting. On the Korean Peninsula, heavy precipitation is caused by various physical mechanisms, including shortwave troughs, quasi-stationary moisture convergence zones among varying air masses, and direct or indirect effects of tropical cyclones. In addition, various geographical and topographical elements make the temporal and spatial distribution of precipitation very complicated. Localized heavy rainfall events in South Korea generally arise from mesoscale convective systems embedded in these synoptic-scale disturbances. Even in weather radar data with high temporal and spatial resolution, accurate estimation of rain rate from radar reflectivity is difficult. The Z-R relationship (Marshall and Palmer, 1948) has been the representative approach. In addition, several methods such as support vector machines (SVM), neural networks, fuzzy logic, and kriging have been utilized to improve the accuracy of rain-rate estimates. These methods yield different quantitative precipitation estimations (QPE), and their accuracies differ across heavy precipitation cases. In this study, to improve the accuracy of QPE for localized heavy precipitation, an ensemble method combining Z-R relationships and these various techniques was tested. The QPE ensemble method was developed on the concept of utilizing the respective advantages of the precipitation calibration methods. Ensemble members were produced from combinations of different Z-R coefficients and calibration methods.
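
    For context, the baseline Z-R conversion that such ensemble members perturb can be sketched as below. The classic Marshall-Palmer coefficients a = 200 and b = 1.6 are assumed defaults for illustration; the study varies the coefficients across members.

        def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
            """Invert the Z-R relationship Z = a * R**b to get the rain rate
            R (mm/h) from radar reflectivity in dBZ."""
            z_linear = 10.0 ** (dbz / 10.0)   # dBZ -> linear Z (mm^6 m^-3)
            return (z_linear / a) ** (1.0 / b)

        print(rain_rate_from_dbz(40.0))       # ~11.5 mm/h at 40 dBZ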

  11. Fatigue tests and life estimation of Incoloy alloy 908

    International Nuclear Information System (INIS)

    Feng, J.; Toma, L.S.; Jang, C.H.; Steeves, M.M.

    1997-01-01

    Incoloy® alloy 908 is a candidate conduit material for Nb3Sn cable-in-conduit superconductors. The conduit is expected to experience cyclic loads at 4 K. Fatigue fracture of the conduit is one possible failure mode. So far, fatigue life has been estimated from fatigue crack growth data, which provide conservative results. The more traditional practice of life estimation using S-N curves has not been applied to alloy 908 due to a lack of data at room and cryogenic temperatures. This paper presents a series of fatigue test results in response to this need. Tests were performed in reversed bending, rotating bending, and uniaxial fatigue machines. The test matrix included different heat treatments, two load ratios (R = -1 and 0.1), two temperatures (298 and 77 K), and two orientations (longitudinal and transverse). As expected, there is a semi-log linear relation between the applied stress and fatigue life above a threshold applied stress (e.g., 310 MPa for tests at 298 K and R = -1). Below this stress the curves show an endurance limit. The aged and cold-worked materials have longer fatigue lives and higher endurance limits than the others. Different orientations have no apparent effect on life. Cryogenic temperature results in a much higher fatigue life than room temperature. A higher tensile mean stress gives a shorter fatigue life. It was also found that the fatigue lives of the reversed bending specimens were of the same order as those of the uniaxial test specimens, but were only half the lives of the rotating bending specimens for given stresses. A sample application of the S-N data is discussed.

  12. HLA-DQ-Gluten Tetramer Blood Test Accurately Identifies Patients With and Without Celiac Disease in Absence of Gluten Consumption.

    Science.gov (United States)

    Sarna, Vikas K; Lundin, Knut E A; Mørkrid, Lars; Qiao, Shuo-Wang; Sollid, Ludvig M; Christophersen, Asbjørn

    2018-03-01

    Celiac disease is characterized by HLA-DQ2/8-restricted responses of CD4+ T cells to cereal gluten proteins. A diagnosis of celiac disease based on serologic and histologic evidence requires patients to be on gluten-containing diets. The growing number of individuals adhering to a gluten-free diet (GFD) without exclusion of celiac disease complicates its detection. HLA-DQ-gluten tetramers can be used to detect gluten-specific T cells in blood of patients with celiac disease, even if they are on a GFD. We investigated whether an HLA-DQ-gluten tetramer-based assay accurately identifies patients with celiac disease. We produced HLA-DQ-gluten tetramers and added them to peripheral blood mononuclear cells isolated from 143 HLA-DQ2.5+ subjects (62 subjects with celiac disease on a GFD, 19 subjects without celiac disease on a GFD [due to self-reported gluten sensitivity], 10 subjects with celiac disease on a gluten-containing diet, and 52 presumed healthy individuals [controls]). T cells that bound HLA-DQ-gluten tetramers were quantified by flow cytometry. Laboratory tests and flow cytometry gating analyses were performed by researchers blinded to sample type, except for samples from subjects with celiac disease on a gluten-containing diet. Test precision analyses were performed using samples from 10 subjects. For the HLA-DQ-gluten tetramer-based assay, we combined flow-cytometry variables in a multiple regression model that identified individuals with celiac disease on a GFD with an area under the receiver operating characteristic curve value of 0.96 (95% confidence interval [CI] 0.89-1.00) vs subjects without celiac disease on a GFD. The assay detected individuals with celiac disease on a gluten-containing diet vs controls with an area under the receiver operating characteristic curve value of 0.95 (95% CI 0.90-1.00). Optimized cutoff values identified subjects with celiac disease on a GFD with 97% sensitivity (95% CI 0.92-1.00) and 95% specificity (95% CI 0

  13. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Energy Technology Data Exchange (ETDEWEB)

    Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)

    2007-10-15

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper challenges one of the most difficult non-invasive monitoring techniques, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of extracting accurately the concentration of glucose from complex biological media.

  14. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Science.gov (United States)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper challenges one of the most difficult non-invasive monitoring techniques, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of extracting accurately the concentration of glucose from complex biological media.

  15. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    International Nuclear Information System (INIS)

    Rybynok, V O; Kyriacou, P A

    2007-01-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper challenges one of the most difficult non-invasive monitoring techniques, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of extracting accurately the concentration of glucose from complex biological media.

  16. A Generic Simulation Approach for the Fast and Accurate Estimation of the Outage Probability of Single Hop and Multihop FSO Links Subject to Generalized Pointing Errors

    KAUST Repository

    Ben Issaid, Chaouki; Park, Kihong; Alouini, Mohamed-Slim

    2017-01-01

    When assessing the performance of free space optical (FSO) communication systems, the outage probability encountered is generally very small, and the use of naive Monte Carlo simulations thereby becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach based on the exponential twisting technique, which offers fast and accurate results. We consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naive Monte Carlo.

  17. A Generic Simulation Approach for the Fast and Accurate Estimation of the Outage Probability of Single Hop and Multihop FSO Links Subject to Generalized Pointing Errors

    KAUST Repository

    Ben Issaid, Chaouki

    2017-07-28

    When assessing the performance of free space optical (FSO) communication systems, the outage probability encountered is generally very small, and the use of naive Monte Carlo simulations thereby becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach based on the exponential twisting technique, which offers fast and accurate results. We consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naive Monte Carlo.
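
    A toy sketch of the importance-sampling principle, not the paper's Beckmann pointing-error model: exponentially twist the sampling distribution toward the rare region, then undo the bias with the likelihood ratio. The Gaussian setting and all numbers below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def tail_prob_is(gamma, n=10, theta=-3.0, n_samples=100000):
            """Importance-sampling estimate of the rare probability
            P(sum of n standard normals <= gamma), sampling from N(theta, 1)
            and reweighting by the likelihood ratio of N(0,1)^n to N(theta,1)^n."""
            x = rng.normal(theta, 1.0, size=(n_samples, n))
            s = x.sum(axis=1)
            log_lr = -theta * s + n * theta**2 / 2.0
            return np.mean((s <= gamma) * np.exp(log_lr))

        # theta is chosen so the twisted mean n*theta sits at the threshold
        print(tail_prob_is(gamma=-30.0))   # true value Phi(-30/sqrt(10)) ~ 1.2e-21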

  18. Explaining behavior change after genetic testing: the problem of collinearity between test results and risk estimates.

    Science.gov (United States)

    Fanshawe, Thomas R; Prevost, A Toby; Roberts, J Scott; Green, Robert C; Armstrong, David; Marteau, Theresa M

    2008-09-01

    This paper explores whether and how the behavioral impact of genotype disclosure can be disentangled from the impact of numerical risk estimates generated by genetic tests. Secondary data analyses are presented from a randomized controlled trial of 162 first-degree relatives of Alzheimer's disease (AD) patients. Each participant received a lifetime risk estimate of AD. Control group estimates were based on age, gender, family history, and assumed ε4-negative apolipoprotein E (APOE) genotype; intervention group estimates were based upon the first three variables plus true APOE genotype, which was also disclosed. AD-specific self-reported behavior change (diet, exercise, and medication use) was assessed at 12 months. Behavior change was significantly more likely with increasing risk estimates, and also more likely, but not significantly so, in ε4-positive intervention group participants (53% changed behavior) than in control group participants (31%). Intervention group participants receiving ε4-negative genotype feedback (24% changed behavior) and control group participants had similar rates of behavior change and risk estimates, the latter allowing assessment of the independent effects of genotype disclosure. However, collinearity between risk estimates and ε4-positive genotypes, which engender high-risk estimates, prevented assessment of the independent effect of the disclosure of an ε4 genotype. Novel study designs are proposed to determine whether genotype disclosure has an impact upon behavior beyond that of numerical risk estimates.

  19. Comparison of methods for estimating the cost of human immunodeficiency virus-testing interventions.

    Science.gov (United States)

    Shrestha, Ram K; Sansom, Stephanie L; Farnham, Paul G

    2012-01-01

    The Centers for Disease Control and Prevention (CDC), Division of HIV/AIDS Prevention, spends approximately 50% of its $325 million annual human immunodeficiency virus (HIV) prevention funds on HIV-testing services. An accurate estimate of the costs of HIV testing in various settings is essential for efficient allocation of HIV prevention resources. To assess these costs, we used the microcosting-direct measurement method for HIV-testing interventions in nonclinical settings, and we compared the results with those from 3 other costing methods: microcosting-staff allocation, where the labor cost was derived from the proportion of each staff person's time allocated to HIV-testing interventions; gross costing, where the New York State Medicaid payment for HIV testing was used to estimate program costs; and program budget, where the program cost was assumed to be the total funding provided by the CDC. The outcome measures were total program cost, cost per person tested, and cost per person notified of a new HIV diagnosis. The median costs per person notified of a new HIV diagnosis were $12,475, $15,018, $2,697, and $20,144 based on the microcosting-direct measurement, microcosting-staff allocation, gross costing, and program budget methods, respectively. Compared with the microcosting-direct measurement method, the cost was 78% lower with gross costing, and 20% and 61% higher using the microcosting-staff allocation and program budget methods, respectively. Our analysis showed that HIV-testing program cost estimates vary widely by costing method. However, the choice of a particular costing method may depend on the research question being addressed. Although the program budget and gross-costing methods may be attractive because of their simplicity, only the microcosting-direct measurement method can identify important determinants of the program costs and provide guidance to improve

  20. Accuracy of Non-Destructive Testing of PBRs to Estimate Fragilities

    Science.gov (United States)

    Brune, J. N.; Brune, R.; Biasi, G. P.; Anooshehpoor, R.; Purvance, M.

    2011-12-01

    Prior studies of Precariously Balanced Rocks (PBRs) have involved various methods of documenting rock shapes and fragilities. These have included non-destructive testing (NDT) methods such as photomodeling, and potentially destructive testing (PDT) such as forced tilt tests. PDT methods usually have the potential of damaging or disturbing the rock or its pedestal, so that the PBR's usefulness for future generations is compromised. To date we have force-tilt tested approximately 28 PBRs, and of these we believe 7 have been compromised. We suggest here that, given other inherent uncertainties in the current methodologies, NDT methods are now sufficiently advanced to be adequate for current state-of-the-art comparisons with Ground Motion Prediction Equations (GMPEs) and seismic hazard maps (SHMs). Here we compare tilt-test static toppling estimates to three non-destructive methods: (1) 3-D photographic modeling, (2) profile analysis assuming the rock is 2-D, and (3) expert judgments from photographs. 3-D modeling uses the commercial Photomodeler program and photographs taken in the field from numerous directions around the rock. The output polyhedral shape is analyzed in Matlab to determine the center of mass and in Autocad to estimate the static overturning angle alpha. For the 2-D method we chose the photograph in profile looking perpendicular to the estimated direction of toppling. The rock is outlined as a 2-D object in Matlab. Rock dimensions, rocking points, and a vertical reference are supplied by the photo analyst to estimate the center of gravity and the static overturning angle. For the expert opinion method we used additional photographs taken from different directions to improve the estimates of the center of mass and the rocking points. We used 7 rocks for the comparisons. The error in estimating tan alpha from 3-D modeling is about 0.05. For 2-D estimates the average error is about 0.1 (?). For expert opinion estimates the error is about 0.06. For
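
    All three NDT estimates ultimately reduce to the same quasi-static geometry: the rock topples when tilted far enough that its center of gravity passes over the rocking point. A simplified 2-D sketch of that calculation follows; the coordinates are hypothetical and the real analyses work with full 3-D shapes.

        import numpy as np

        def static_overturning_angle(cg, rocking_point):
            """Quasi-static toppling angle alpha (degrees) in the toppling
            plane: tan(alpha) = horizontal lever arm / CG height above the
            rocking point. Inputs are (horizontal, vertical) coordinates."""
            dx = cg[0] - rocking_point[0]   # horizontal lever arm
            dy = cg[1] - rocking_point[1]   # height of CG above rocking point
            return np.degrees(np.arctan2(abs(dx), dy))

        print(static_overturning_angle(cg=(0.30, 1.20), rocking_point=(0.0, 0.0)))
        # ~14 degrees: tan(alpha) = 0.25 for this hypothetical geometry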

  1. Estimating Hemodynamic Responses to the Wingate Test Using Thoracic Impedance

    Directory of Open Access Journals (Sweden)

    Todd A. Astorino, Curtis Bovee, Ashley DeBoe

    2015-12-01

    Full Text Available Techniques including direct Fick and Doppler echocardiography are frequently used to assess hemodynamic responses to exercise. Thoracic impedance has been shown to be a noninvasive alternative to these methods for assessing these responses during graded exercise to exhaustion, yet its feasibility during supramaximal bouts of exercise is relatively unknown. We used thoracic impedance to estimate stroke volume (SV) and cardiac output (CO) during the Wingate test (WAnT) and compared these values to those from graded exercise testing (GXT). Active men (n = 9) and women (n = 7) (mean age = 24.8 ± 5.9 yr) completed two Wingate tests and two graded exercise tests on a cycle ergometer. During exercise, heart rate (HR), SV, and CO were continuously estimated using thoracic impedance. Repeated measures analysis of variance was used to identify potential differences in hemodynamic responses across protocols. Results: Maximal SV (138.6 ± 37.4 mL vs. 135.6 ± 26.9 mL) and CO (24.5 ± 6.1 L·min-1 vs. 23.7 ± 5.1 L·min-1) were similar (p > 0.05) between repeated Wingate tests. Mean maximal HR was higher (p < 0.01) for GXT (185 ± 7 b·min-1) versus WAnT (177 ± 11 b·min-1), and mean SV was higher in response to WAnT (137.1 ± 32.1 mL) versus GXT (123.0 ± 32.0 mL), leading to similar maximal cardiac output between WAnT and GXT (23.9 ± 5.6 L·min-1 vs. 22.5 ± 6.0 L·min-1). Our data show no difference in hemodynamic responses to repeated administrations of the Wingate test. In addition, the Wingate test elicits similar cardiac output compared to progressive cycling to VO2max.

  2. Accurate measuring of cross-sections for e+e- → hadrons: Testing the Standard Model and applications to QCD

    International Nuclear Information System (INIS)

    Malaescu, B.

    2010-01-01

    The scope of this thesis is to obtain and use accurate data on e+e- annihilation into hadrons at energies of the order of 1 GeV. These data represent a very valuable input for Standard Model tests involving vacuum polarization, such as the comparison of the muon magnetic moment with theory, and for QCD tests and applications. The different parts of this thesis describe four aspects of my work in this context. First, the measurement of cross sections as a function of energy necessitates the unfolding of data spectra from detector effects. I have proposed a new iterative unfolding method for experimental data, with improved capabilities compared to existing tools. Second, the experimental core of this thesis is a study of the process e+e- → K+K- from threshold to 5 GeV using the initial state radiation (ISR) method (through the measurement of e+e- → K+K-γ) with the BABAR detector. All relevant efficiencies are measured with experimental data, and the absolute normalization comes from the simultaneously measured μμγ process. I have performed the full analysis, which achieves a systematic uncertainty of 0.7% on the dominant φ resonance. Results on e+e- → π+π- from threshold to 3 GeV are also presented. Third, a comparison of two different ways of predicting the muon magnetic moment, from the Standard Model and from hadronic tau decays, shows an interesting hint of new physics effects (a 3.2σ effect). Fourth, QCD sum rules are powerful tools for obtaining precise information on QCD parameters, such as the strong coupling αS. I have worked on experimental data concerning the spectral functions from τ decays measured by ALEPH. I have discussed in some detail the perturbative QCD prediction obtained with two different methods: fixed-order perturbation theory (FOPT) and contour-improved perturbation theory (CIPT). The corresponding theoretical uncertainties have been studied at the τ and Z mass scales. The CIPT method

  3. EPEPT: A web service for enhanced P-value estimation in permutation tests

    Directory of Open Access Journals (Sweden)

    Knijnenburg Theo A

    2011-10-01

    Full Text Available Abstract Background In computational biology, permutation tests have become a widely used tool to assess the statistical significance of an event under investigation. However, the common way of computing the P-value, which expresses the statistical significance, requires a very large number of permutations when small (and thus interesting) P-values are to be accurately estimated. This is computationally expensive and often infeasible. Recently, we proposed an alternative estimator, which requires far fewer permutations compared to the standard empirical approach while still reliably estimating small P-values [1]. Results The proposed P-value estimator has been enriched with additional functionalities and is made available to the general community through a public website and web service, called EPEPT. This means that the EPEPT routines can be accessed not only via a website, but also programmatically using any programming language that can interact with the web. Examples of web service clients in multiple programming languages can be downloaded. Additionally, EPEPT accepts data of various common experiment types used in computational biology. For these experiment types EPEPT first computes the permutation values and then performs the P-value estimation. Finally, the source code of EPEPT can be downloaded. Conclusions Different types of users, such as biologists, bioinformaticians and software engineers, can use the method in an appropriate and simple way. Availability: http://informatics.systemsbiology.net/EPEPT/
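
    For contrast with EPEPT's estimator, the standard empirical approach that becomes infeasible for small P-values looks like the sketch below (a two-group mean-difference example; the test statistic and the add-one correction are conventional choices, not EPEPT-specific).

        import numpy as np

        rng = np.random.default_rng(2)

        def permutation_p_value(x, y, n_perm=10000):
            """Empirical two-sided permutation P-value for a difference in
            group means. Estimating P ~ 1e-8 this way would need on the
            order of 1e9 permutations, which is the bottleneck EPEPT's
            tail approximation addresses."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            observed = x.mean() - y.mean()
            pooled = np.concatenate([x, y])
            count = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)
                stat = pooled[: len(x)].mean() - pooled[len(x):].mean()
                if abs(stat) >= abs(observed):
                    count += 1
            return (count + 1) / (n_perm + 1)   # add-one correction avoids P = 0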

  4. Parameter estimation and hypothesis testing in linear models

    CERN Document Server

    Koch, Karl-Rudolf

    1999-01-01

    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...

  5. An accurate definition of the status of inactive hepatitis B virus carrier by a combination of biomarkers (FibroTest-ActiTest and viral load.

    Directory of Open Access Journals (Sweden)

    Yen Ngo

    CONCLUSION: In patients with chronic hepatitis B, a combination of FibroTest-ActiTest and viral load testing accurately defined the prognosis and the inactive carrier status.

  6. Improving global estimates of syphilis in pregnancy by diagnostic test type: A systematic review and meta-analysis.

    Science.gov (United States)

    Ham, D Cal; Lin, Carol; Newman, Lori; Wijesooriya, N Saman; Kamb, Mary

    2015-06-01

    "Probable active syphilis," is defined as seroreactivity in both non-treponemal and treponemal tests. A correction factor of 65%, namely the proportion of pregnant women reactive in one syphilis test type that were likely reactive in the second, was applied to reported syphilis seropositivity data reported to WHO for global estimates of syphilis during pregnancy. To identify more accurate correction factors based on test type reported. Medline search using: "Syphilis [Mesh] and Pregnancy [Mesh]," "Syphilis [Mesh] and Prenatal Diagnosis [Mesh]," and "Syphilis [Mesh] and Antenatal [Keyword]. Eligible studies must have reported results for pregnant or puerperal women for both non-treponemal and treponemal serology. We manually calculated the crude percent estimates of subjects with both reactive treponemal and reactive non-treponemal tests among subjects with reactive treponemal and among subjects with reactive non-treponemal tests. We summarized the percent estimates using random effects models. Countries reporting both reactive non-treponemal and reactive treponemal testing required no correction factor. Countries reporting non-treponemal testing or treponemal testing alone required a correction factor of 52.2% and 53.6%, respectively. Countries not reporting test type required a correction factor of 68.6%. Future estimates should adjust reported maternal syphilis seropositivity by test type to ensure accuracy. Published by Elsevier Ireland Ltd.

  7. Determining Optimal New Generation Satellite Derived Metrics for Accurate C3 and C4 Grass Species Aboveground Biomass Estimation in South Africa

    Directory of Open Access Journals (Sweden)

    Cletah Shoko

    2018-04-01

    Full Text Available While satellite data have proved to be a powerful tool in estimating C3 and C4 grass species Aboveground Biomass (AGB), finding an appropriate sensor that can accurately characterize the inherent variations remains a challenge. This limitation has hampered the remote sensing community from continuously and precisely monitoring their productivity. This study assessed the potential of the Sentinel 2 MultiSpectral Instrument, Landsat 8 Operational Land Imager, and WorldView-2 sensors, with improved earth imaging characteristics, in estimating C3 and C4 grass AGB in the Cathedral Peak area, South Africa. Overall, all sensors showed considerable potential in estimating species AGB, with different combinations of the derived spectral bands and vegetation indices producing better accuracies. However, WorldView-2 derived variables yielded the best predictive accuracies (R2 ranging between 0.71 and 0.83; RMSEs between 6.92% and 9.84%), followed by Sentinel 2 (R2 between 0.60 and 0.79; RMSE between 7.66% and 14.66%). Comparatively, Landsat 8 yielded weaker estimates, with R2 ranging between 0.52 and 0.71 and high RMSEs ranging between 9.07% and 19.88%. In addition, spectral bands located within the red edge (e.g., centered at 0.705 and 0.745 µm for Sentinel 2), SWIR, and NIR, as well as the derived indices, were found to be very important in predicting C3 and C4 AGB from the three sensors. The competence of these bands, especially of the freely available Landsat 8 and Sentinel 2 datasets, was also confirmed from the fusion of the datasets. Most importantly, the three sensors managed to capture and show the spatial variations in AGB for the target C3 and C4 grassland area. This work therefore provides a new horizon and a fundamental step towards C3 and C4 grass productivity monitoring for carbon accounting, forage mapping, and modelling the influence of environmental changes on their productivity.

  8. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

    Science.gov (United States)

    Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

    2010-08-06

    Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution-of-the-product method, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended; in contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
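
    As a concrete example of one of the better-performing approaches, a bootstrapped percentile CI for the indirect effect a*b can be sketched as follows. Ordinary least squares is assumed for both regressions, and the variable names are generic, not those of the Dunn et al. data.

        import numpy as np

        rng = np.random.default_rng(3)

        def indirect_effect(x, m, y):
            """a*b: a is the slope of m ~ x; b is the coefficient on m
            in y ~ x + m."""
            a = np.polyfit(x, m, 1)[0]
            X = np.column_stack([np.ones_like(x), x, m])
            b = np.linalg.lstsq(X, y, rcond=None)[0][2]
            return a * b

        def percentile_ci(x, m, y, n_boot=5000, alpha=0.05):
            """Bootstrapped percentile confidence interval for a*b."""
            x, m, y = (np.asarray(v, float) for v in (x, m, y))
            n = len(x)
            boots = np.empty(n_boot)
            for i in range(n_boot):
                idx = rng.integers(0, n, n)     # resample cases with replacement
                boots[i] = indirect_effect(x[idx], m[idx], y[idx])
            return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])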

  9. Tracer Testing for Estimating Heat Transfer Area in Fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Pruess, Karsten; van Heel, Ton; Shan, Chao

    2004-05-12

    A key parameter governing the performance and lifetime of a Hot Fractured Rock (HFR) reservoir is the effective heat transfer area between the fracture network and the matrix rock. We report on numerical modeling studies into the feasibility of using tracer tests for estimating heat transfer area. More specifically, we discuss simulation results for a new HFR characterization method which uses surface-sorbing tracers, for which the adsorbed tracer mass is proportional to the fracture surface area per unit volume. Sorption in the rock matrix is treated with the conventional formulation, in which tracer adsorption is volume-based. A slug of solute tracer migrating along a fracture is subject to diffusion across the fracture walls into the adjacent rock matrix. Such diffusion removes some of the tracer from the fluid in the fractures, reducing and retarding the peak in the breakthrough curve (BTC) of the tracer. After the slug has passed, the concentration gradient reverses, causing back-diffusion from the rock matrix into the fracture and giving rise to a long tail in the BTC of the solute. These effects become stronger for larger fracture-matrix interface area, potentially providing a means for estimating this area. Previous field tests and modeling studies have demonstrated characteristic tailing in BTCs for volatile tracers in vapor-dominated reservoirs. Simulated BTCs for solute tracers in single-phase liquid systems show much weaker tails, as would be expected because diffusivities are much smaller in the aqueous phase than in the gas phase, by a factor of order 1000. A much stronger signal of fracture-matrix interaction can be obtained when sorbing tracers are used. We have performed simulation studies of surface-sorbing tracers by implementing a model in which the adsorbed tracer mass is assumed proportional to the fracture-matrix surface area per unit volume. The results show that sorbing tracers generate stronger tails in BTCs, corresponding to an effective

  10. Accurate estimation of global and regional cardiac function by retrospectively gated multidetector row computed tomography. Comparison with cine magnetic resonance imaging

    International Nuclear Information System (INIS)

    Belge, Benedicte; Pasquet, Agnes; Vanoverschelde, Jean-Louis J.; Coche, Emmanuel; Gerber, Bernhard L.

    2006-01-01

    Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134±51 and 67±56 ml) were similar to those by MR (137±57 and 70±60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55±21 vs. 56±21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3±1.8 vs. 8.8±1.9 mm and 12.7±3.4 vs. 13.3±3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54±30 vs. 51±31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR. (orig.)

  11. Damping Estimation Using Free Decays and Ambient Vibration Tests

    DEFF Research Database (Denmark)

    Magalhães, Filipe; Brincker, Rune; Cunha, Álvaro

    2007-01-01

    The accurate identification of modal damping ratios of Civil Engineering structures is a subject of major importance, as the amplitude of structural vibrations in resonance is inversely proportional to these coefficients. Their experimental identification can be performed either from ambient vibr...

  12. A Small Area In-Situ MEMS Test Structure to Accurately Measure Fracture Strength by Electrostatic Probing

    Energy Technology Data Exchange (ETDEWEB)

    Bitsie, Fernando; Jensen, Brian D.; de Boer, Maarten

    1999-07-15

    We have designed, fabricated, tested and modeled a first generation small area test structure for MEMS fracture studies by electrostatic rather than mechanical probing. Because of its small area, this device has potential applications as a lot monitor of strength or fatigue of the MEMS structural material. By matching deflection versus applied voltage data to a 3-D model of the test structure, we develop high confidence that the local stresses achieved in the gage section are greater than 1 GPa. Brittle failure of the polycrystalline silicon was observed.

  13. Environmental risk assessment of selected organic chemicals based on TOC test and QSAR estimation models.

    Science.gov (United States)

    Chi, Yulang; Zhang, Huanteng; Huang, Qiansheng; Lin, Yi; Ye, Guozhu; Zhu, Huimin; Dong, Sijun

    2018-02-01

    Environmental risks of organic chemicals are largely determined by their persistence, bioaccumulation, and toxicity (PBT) and by their physicochemical properties. Major regulations in different countries and regions identify chemicals according to their bioconcentration factor (BCF) and octanol-water partition coefficient (Kow), which frequently displays a substantial correlation with the sediment sorption coefficient (Koc). Half-life or degradability is crucial for the persistence evaluation of chemicals. Quantitative structure-activity relationship (QSAR) estimation models are indispensable for predicting environmental fate and health effects in the absence of field- or laboratory-based data. In this study, 39 chemicals of high concern were chosen for half-life testing based on total organic carbon (TOC) degradation, and two widely accepted and highly used QSAR estimation models (i.e., EPI Suite and PBT Profiler) were adopted for environmental risk evaluation. The experimental results and the estimates from the two models were compared on the basis of water solubility, Kow, Koc, BCF and half-life. Environmental risk assessment of the selected compounds was achieved by combining the experimental data with the estimation models. It was concluded that both EPI Suite and PBT Profiler were fairly accurate in estimating the physicochemical properties and the degradation half-lives in water, soil, and sediment. However, the experimental and estimated half-lives were still not entirely consistent. This points to deficiencies in the prediction models and to the necessity of combining experimental data with predicted results when evaluating the environmental fate and risks of pollutants. Copyright © 2016. Published by Elsevier B.V.

  14. Dredging Research, Volume 3, No. 3. DNA Technology to Impact Dredged Material Projects through Faster, More Accurate Testing Methods

    National Research Council Canada - National Science Library

    McDonald, Allison

    2000-01-01

    .... Most people associate DNA with criminal cases and paternity testing, but thanks to research projects such as the Human Genome Project, which has isolated and identified thousands of genes, many...

  15. Counting DNA: estimating the complexity of a test tube of DNA.

    Science.gov (United States)

    Faulhammer, D; Lipton, R J; Landweber, L F

    1999-10-01

    We consider the problem of estimating the 'complexity' of a test tube of DNA. The complexity of a test tube is the number of different kinds of strands of DNA in the test tube. It is quite easy to estimate the total number of strands in a test tube, especially if the strands are all the same length. Estimating the complexity is much less straightforward. We propose a simple kind of DNA computation that can estimate the complexity.

  16. Estimation of maximal oxygen uptake via submaximal exercise testing in sports, clinical, and home settings.

    Science.gov (United States)

    Sartor, Francesco; Vernillo, Gianluca; de Morree, Helma M; Bonomi, Alberto G; La Torre, Antonio; Kubis, Hans-Peter; Veicsteinas, Arsenio

    2013-09-01

    Assessment of the functional capacity of the cardiovascular system is essential in sports medicine. For athletes, the maximal oxygen uptake (VO2max) provides valuable information about their aerobic power. In the clinical setting, VO2max provides important diagnostic and prognostic information in several clinical populations, such as patients with coronary artery disease or heart failure. Likewise, VO2max assessment can be very important to evaluate fitness in asymptomatic adults. Although direct determination of VO2max is the most accurate method, it requires a maximal level of exertion, which brings a higher risk of adverse events in individuals with an intermediate to high risk of cardiovascular problems. Estimation of VO2max during submaximal exercise testing can offer a precious alternative. Over the past decades, many protocols have been developed for this purpose. The present review gives an overview of these submaximal protocols and aims to facilitate appropriate test selection in sports, clinical, and home settings. Several factors must be considered when selecting a protocol: (i) the population being tested and its specific needs in terms of safety, supervision, and accuracy and repeatability of the VO2max estimation; (ii) the parameters upon which the prediction is based (e.g. heart rate, power output, rating of perceived exertion [RPE]), as well as the need for additional clinically relevant parameters (e.g. blood pressure, ECG); (iii) the appropriate test modality, which should meet the above-mentioned requirements, be in line with the functional mobility of the target population, and suit the available equipment. In the sports setting, high repeatability is crucial to track training-induced seasonal changes. In the clinical setting, special attention must be paid to the test modality, because multiple physiological parameters often need to be measured during test execution. When estimating VO2max, one has
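
    Many of the protocols reviewed share one principle: fit the near-linear HR-VO2 (or HR-workload) relationship at submaximal intensities and extrapolate to an age-predicted maximal heart rate. The sketch below illustrates that generic principle only; it is not any specific validated protocol, and the stage data and the 220 - age rule are illustrative assumptions.

        import numpy as np

        def vo2max_extrapolation(heart_rates, vo2_values, age):
            """Fit a line to submaximal (HR, VO2) pairs and extrapolate it
            to the age-predicted maximal heart rate (220 - age)."""
            slope, intercept = np.polyfit(heart_rates, vo2_values, 1)
            hr_max = 220 - age
            return slope * hr_max + intercept

        # Three submaximal cycle stages: HR (b/min) and measured VO2 (L/min)
        print(vo2max_extrapolation([110, 130, 150], [1.5, 2.0, 2.5], age=30))
        # ~3.5 L/min under these assumptions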

  17. Bayesian receiver operating characteristic estimation of multiple tests for diagnosis of bovine tuberculosis in Chadian cattle.

    Directory of Open Access Journals (Sweden)

    Borna Müller

    BACKGROUND: Bovine tuberculosis (BTB) today primarily affects developing countries. In Africa, the disease is present on essentially the whole continent; however, little accurate information on its distribution and prevalence is available. Also, attempts to evaluate diagnostic tests for BTB in naturally infected cattle are scarce and mostly complicated by the absence of knowledge of the true disease status of the tested animals. However, diagnostic test evaluation in a given setting is a prerequisite for the implementation of local surveillance schemes and control measures. METHODOLOGY/PRINCIPAL FINDINGS: We subjected a slaughterhouse population of 954 Chadian cattle to single intra-dermal comparative cervical tuberculin (SICCT) testing and two recently developed fluorescence polarization assays (FPA). Using a Bayesian modeling approach we computed the receiver operating characteristic (ROC) curve of each diagnostic test, the true disease prevalence in the sampled population and the disease status of all sampled animals, in the absence of knowledge of the true disease status of the sampled animals. In our Chadian setting, SICCT performed better if the cut-off for positive test interpretation was lowered from >4 mm (OIE standard cut-off) to >2 mm. Using this cut-off, SICCT showed a sensitivity and specificity of 66% and 89%, respectively. Both FPA tests showed sensitivities below 50% but specificities above 90%. The true disease prevalence was estimated at 8%. Altogether, 11% of the sampled animals showed gross visible tuberculous lesions. However, modeling of the BTB disease status of the sampled animals indicated that 72% of the suspected tuberculosis lesions detected during standard meat inspections were caused by pathogens other than Mycobacterium bovis. CONCLUSIONS/SIGNIFICANCE: Our results have important implications for BTB diagnosis in a high-incidence sub-Saharan African setting and demonstrate the practicability of our Bayesian approach for
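
    The full Bayesian ROC model is beyond the scope of this record, but the basic correction it enables (turning apparent prevalence into true prevalence given a test's sensitivity and specificity) can be illustrated with the classical Rogan-Gladen estimator; the apparent prevalence below is a hypothetical figure chosen to match the quoted 8% result:

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Classical correction of apparent prevalence for test error:
    true = (apparent + Sp - 1) / (Se + Sp - 1)."""
    return (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)

# SICCT at the >2 mm cut-off, as quoted above: Se = 0.66, Sp = 0.89.
# The apparent prevalence (15.4%) is hypothetical, chosen to yield ~8%.
print(f"True prevalence: {rogan_gladen(0.154, 0.66, 0.89):.3f}")
```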

  18. Estimation of the Blood Pressure Response With Exercise Stress Testing.

    Science.gov (United States)

    Fitzgerald, Benjamin T; Ballard, Emma L; Scalia, Gregory M

    2018-04-20

    The blood pressure response to exercise has been described as a significant increase in systolic BP (sBP) with a smaller change in diastolic BP (dBP). This has been documented in small samples, in healthy young men or in specific ethnic populations. This study examines these changes in men and women at low to intermediate risk of myocardial ischaemia over a wide age range. Consecutive patients having stress echocardiography were analysed. Ischaemic tests were excluded. Manual BP was estimated before and during standard Bruce protocol treadmill testing. Patient age, sex, body mass index (BMI), and resting and peak exercise BP were recorded. 3200 patients (mean age 58±12 years) were included, with 1123 (35%) females and 2077 males, age range 18 to 93 years. Systolic BP increased from 125±17 mmHg to 176±23 mmHg. The change in sBP (ΔsBP) was 51 mmHg (95% CI 51, 52). The ΔdBP was 1 mmHg (95% CI 1, 1; from 77 to 78 mmHg, p<0.001). The upper limit of normal peak exercise sBP (determined by the 90th percentile) was 210 mmHg in males and 200 mmHg in females. The upper limit of normal ΔsBP was 80 mmHg in males and 70 mmHg in females. The lower limit of normal ΔsBP was 30 mmHg in males and 20 mmHg in females. In this large cohort, sBP increased significantly with exercise. Males had on average higher values than females. Similar changes were seen with the ΔsBP. The upper limits of normal for peak exercise sBP and ΔsBP are reported by age and gender. Copyright © 2018 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). All rights reserved.

  19. Estimation and testing in large binary contingency tables

    NARCIS (Netherlands)

    Kallenberg, W.C.M.

    1989-01-01

    Very sparse contingency tables with a multiplicative structure are studied. The number of unspecified parameters and the number of cells are growing with the number of observations. Consistency and asymptotic normality of natural estimators are established. Also uniform convergence of the estimators

  20. On the estimation and testing of predictive panel regressions

    NARCIS (Netherlands)

    Karabiyik, H.; Westerlund, Joakim; Narayan, Paresh

    2016-01-01

    Hjalmarsson (2010) considers an OLS-based estimator of predictive panel regressions that is argued to be mixed normal under very general conditions. In a recent paper, Westerlund et al. (2016) show that while consistent, the estimator is generally not mixed normal, which invalidates standard normal

  1. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    Science.gov (United States)

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

    Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of PDMS, values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium for a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) model and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, consistent with the anticipated better performance of the pp-LFER model relative to COSMO-RS. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) and in ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. Copyright © 2016. Published by Elsevier Ltd.
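
    A sketch of the kind of first-order uptake calculation that underlies the quoted times to 25% of equilibrium; the one-film model, the air-side mass-transfer coefficient, and the partition ratios used below are assumptions for illustration, not values from the paper:

```python
import math

def time_to_fraction(K_pdms_air, thickness_m, frac=0.25, k_air=0.005):
    """Time for a PDMS film to reach `frac` of equilibrium, assuming
    first-order, air-side-controlled uptake: f(t) = 1 - exp(-k t), with
    k = k_air / (K * thickness). k_air (m/s) is an ASSUMED air-side
    mass-transfer coefficient, not a value from the paper. Returns seconds."""
    k = k_air / (K_pdms_air * thickness_m)
    return -math.log(1.0 - frac) / k

# 0.1 cm film as in the paper; the two partition ratios are illustrative.
for name, K in [("semi-volatile (K ~ 1e6)", 1e6), ("involatile (K ~ 1e10)", 1e10)]:
    t_days = time_to_fraction(K, 0.001) / 86400.0
    print(f"{name}: t25 ≈ {t_days:.3g} days")
```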

  2. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
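
    A minimal numerical counterpart of the computation described, estimating directivity from sampled power density by quadrature over the far-field sphere; the uniform theta-phi grid below stands in for the paper's spherical-wave sampling criterion, and a single Hertzian dipole (exact directivity 1.5) serves as the check:

```python
import numpy as np

def directivity(power_density, n_theta=181, n_phi=360):
    """Directivity from a power pattern U(theta, phi) sampled on a grid:
    D = 4*pi*max(U) / integral(U dOmega), integral by simple quadrature."""
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    U = power_density(T, P)
    dtheta, dphi = theta[1] - theta[0], phi[1] - phi[0]
    integral = np.sum(U * np.sin(T)) * dtheta * dphi  # dOmega = sin(t) dt dp
    return 4.0 * np.pi * U.max() / integral

# Hertzian dipole along z: U ~ sin^2(theta); exact D = 1.5.
print(f"D = {directivity(lambda t, p: np.sin(t) ** 2):.4f}")
```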

  3. 34 CFR 462.41 - How must tests be administered in order to accurately measure educational gain?

    Science.gov (United States)

    2010-07-01

    ... measure educational gain? 462.41 Section 462.41 Education Regulations of the Offices of the Department of... EDUCATIONAL GAIN IN THE NATIONAL REPORTING SYSTEM FOR ADULT EDUCATION What Requirements Must States and Local Eligible Providers Follow When Measuring Educational Gain? § 462.41 How must tests be administered in order...

  4. Automotive advertising copy test. Final report. [Mileage estimates

    Energy Technology Data Exchange (ETDEWEB)

    1984-10-01

    The purpose of this research project was to explore the following issues: (1) mileage recall/recognition of miles-per-gallon and highway mileage estimates in print ads by advertisement readers; (2) consumer expectations and believability of advertised mileage guidelines; (3) recall/comprehension of mileage disclaimers; and (4) how consumers utilize published mileage estimates. The evidence from this study points to a public which is quite familiar with the EPA mileage estimates, in terms of using them as guidelines and in finding them to be helpful. Most adults also appear to be knowledgeable about factors which can affect car performance and, therefore, anticipate that, within certain tolerances, their actual mileage will differ from the EPA estimates. Although the consumer has been educated regarding fuel estimates, there is a very strong suggestion from this research that typical automobile print advertising does a less than effective job in generating awareness of specific EPA estimates as well as their attendant disclaimer. Copy strategy and execution have a critical impact on recall of the EPA mileage estimates. 18 tables.

  5. Basics of Bayesian reliability estimation from attribute test data

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Waller, R.A.

    1975-10-01

    The basic notions of Bayesian reliability estimation from attribute lifetest data are presented in an introductory and expository manner. Both Bayesian point and interval estimates of the probability of surviving the lifetest, the reliability, are discussed. The necessary formulas are simply stated, and examples are given to illustrate their use. In particular, a binomial model in conjunction with a beta prior model is considered. Particular attention is given to the procedure for selecting an appropriate prior model in practice. Empirical Bayes point and interval estimates of reliability are discussed and examples are given. 7 figures, 2 tables
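
    A sketch of the binomial-beta machinery the report describes: with a Beta(a, b) prior and s survivors out of n units on test, the posterior is Beta(a + s, b + n - s), from which point and interval estimates of reliability follow directly; the prior parameters and test counts below are illustrative:

```python
from scipy import stats

def reliability_posterior(a, b, n, s, cred=0.90):
    """Beta(a, b) prior + binomial likelihood => Beta(a+s, b+n-s) posterior.
    Returns the posterior mean and an equal-tailed credible interval."""
    post = stats.beta(a + s, b + n - s)
    lo, hi = post.ppf((1 - cred) / 2), post.ppf(1 - (1 - cred) / 2)
    return post.mean(), (lo, hi)

# Illustrative: mildly optimistic Beta(4, 1) prior, 19 of 20 units survive.
mean, (lo, hi) = reliability_posterior(4, 1, n=20, s=19)
print(f"Posterior mean R = {mean:.3f}, 90% interval = ({lo:.3f}, {hi:.3f})")
```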

  6. Accurate clinical genetic testing for autoinflammatory diseases using the next-generation sequencing platform MiSeq

    Directory of Open Access Journals (Sweden)

    Manabu Nakayama

    2017-03-01

    Autoinflammatory diseases constitute one group of primary immunodeficiency diseases and are generally thought to be caused by mutation of genes responsible for innate immunity, rather than acquired immunity. Mutations related to autoinflammatory diseases occur in 12 genes. For example, low-level somatic mosaic NLRP3 mutations underlie chronic infantile neurologic, cutaneous, articular syndrome (CINCA), also known as neonatal-onset multisystem inflammatory disease (NOMID). In current clinical practice, clinical genetic testing plays an important role in providing patients with quick, definite diagnoses. To increase the availability of such testing, low-cost high-throughput gene-analysis systems are required, ones that not only have the sensitivity to detect even low-level somatic mosaic mutations, but also can operate simply in a clinical setting. To this end, we developed a simple method that employs two-step tailed PCR and an NGS system, the MiSeq platform, to detect mutations in all coding exons of the 12 genes responsible for autoinflammatory diseases. Using this amplicon sequencing system, we amplified a total of 234 amplicons derived from the 12 genes with multiplex PCR. This was done simultaneously and in one test tube. Each sample was distinguished by an index sequence in the second PCR primers following PCR amplification. With our procedure and tips for reducing PCR amplification bias, we were able to analyze 12 genes from 25 clinical samples in one MiSeq run. Moreover, with the certified primers designed by our short program (which detects and avoids common SNPs in gene-specific PCR primers), we used this system for routine genetic testing. Our optimized procedure uses a simple protocol, which can easily be followed by virtually any office medical staff. Because of the small PCR amplification bias, we can analyze several clinical DNA samples simultaneously at low cost and can obtain sufficient read numbers to detect a low level of

  7. Accurate clinical genetic testing for autoinflammatory diseases using the next-generation sequencing platform MiSeq.

    Science.gov (United States)

    Nakayama, Manabu; Oda, Hirotsugu; Nakagawa, Kenji; Yasumi, Takahiro; Kawai, Tomoki; Izawa, Kazushi; Nishikomori, Ryuta; Heike, Toshio; Ohara, Osamu

    2017-03-01

    Autoinflammatory diseases constitute one group of primary immunodeficiency diseases and are generally thought to be caused by mutation of genes responsible for innate immunity, rather than acquired immunity. Mutations related to autoinflammatory diseases occur in 12 genes. For example, low-level somatic mosaic NLRP3 mutations underlie chronic infantile neurologic, cutaneous, articular syndrome (CINCA), also known as neonatal-onset multisystem inflammatory disease (NOMID). In current clinical practice, clinical genetic testing plays an important role in providing patients with quick, definite diagnoses. To increase the availability of such testing, low-cost high-throughput gene-analysis systems are required, ones that not only have the sensitivity to detect even low-level somatic mosaic mutations, but also can operate simply in a clinical setting. To this end, we developed a simple method that employs two-step tailed PCR and an NGS system, the MiSeq platform, to detect mutations in all coding exons of the 12 genes responsible for autoinflammatory diseases. Using this amplicon sequencing system, we amplified a total of 234 amplicons derived from the 12 genes with multiplex PCR. This was done simultaneously and in one test tube. Each sample was distinguished by an index sequence in the second PCR primers following PCR amplification. With our procedure and tips for reducing PCR amplification bias, we were able to analyze 12 genes from 25 clinical samples in one MiSeq run. Moreover, with the certified primers designed by our short program (which detects and avoids common SNPs in gene-specific PCR primers), we used this system for routine genetic testing. Our optimized procedure uses a simple protocol, which can easily be followed by virtually any office medical staff. Because of the small PCR amplification bias, we can analyze several clinical DNA samples simultaneously at low cost and can obtain sufficient read numbers to detect a low level of somatic mosaic mutations.

  8. Estimation of aerobic fitness among young men without exercise test

    Directory of Open Access Journals (Sweden)

    Tanskanen Minna M.

    2015-08-01

    Study aim: to develop and assess the validity of non-exercise methods of predicting VO2max among young male conscripts entering military service, in order to assign them to different physical training groups.

  9. VHTRC experiment for verification test of H∞ reactivity estimation method

    International Nuclear Information System (INIS)

    Fujii, Yoshio; Suzuki, Katsuo; Akino, Fujiyoshi; Yamane, Tsuyoshi; Fujisaki, Shingo; Takeuchi, Motoyoshi; Ono, Toshihiko

    1996-02-01

    This experiment was performed at the VHTRC to acquire data for verifying the H∞ reactivity estimation method. In this report, the experimental method, the measuring circuits and the data processing software are described in detail. (author)

  10. Power plant cost estimates put to the test

    International Nuclear Information System (INIS)

    Crowley, J.H.

    1978-01-01

    The growth in standards for nuclear applications and the impact of these codes and standards on the cost of nuclear power plants is described. The preparation of cost estimates and reasons for apparent discrepancies are discussed. Consistent estimates of nuclear power plant costs have been prepared in the USA for over a decade. They show that the difference in capital costs between nuclear and coal fired plants is narrowing and that when total generating costs are calculated nuclear power is substantially cheaper. (UK)

  11. Estimation of 305 Day Milk Yield from Cumulative Monthly and Bimonthly Test Day Records in Indonesian Holstein Cattle

    Science.gov (United States)

    Rahayu, A. P.; Hartatik, T.; Purnomoadi, A.; Kurnianto, E.

    2018-02-01

    The aims of this study were to estimate the 305 day first lactation milk yield of Indonesian Holstein cattle from cumulative monthly and bimonthly test day records and to analyze its accuracy. The first lactation records of 258 dairy cows from 2006 to 2014, consisting of 2571 monthly (MTDY) and 1281 bimonthly (BTDY) test day yield records, were used. Milk yields were estimated by the regression method. Correlation coefficients between actual and estimated milk yield by cumulative MTDY were 0.70, 0.78, 0.83, 0.86, 0.89, 0.92, 0.94 and 0.96 for 2-9 months, respectively, while those by cumulative BTDY were 0.69, 0.81, 0.87 and 0.92 for 2, 4, 6 and 8 months, respectively. The accuracy of the fitted regression models (R2) increased with the number of cumulative test days used. The use of 5 cumulative MTDY was considered sufficient for estimating 305 day first lactation milk yield, with 80.6% accuracy and a 7% error percentage of estimation. Estimates from MTDY were more accurate than those from BTDY, with a 1.1 to 2% lower error percentage over the same period.
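
    A hedged sketch of the regression step described, fitting 305-day yield on a cumulative test-day total and reporting R² and mean error percentage; the synthetic records below merely stand in for the Indonesian Holstein data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cows = 258
total_305d = rng.normal(4000, 600, n_cows)                # synthetic 305-d yields (kg)
cum_5mo = 0.55 * total_305d + rng.normal(0, 150, n_cows)  # synthetic cumulative 5-month MTDY

# Fit 305-d yield on the cumulative test-day total, as in the study design.
slope, intercept = np.polyfit(cum_5mo, total_305d, 1)
pred = slope * cum_5mo + intercept

ss_res = np.sum((total_305d - pred) ** 2)
ss_tot = np.sum((total_305d - total_305d.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
err_pct = 100 * np.mean(np.abs(pred - total_305d) / total_305d)
print(f"R^2 = {r2:.2f}, mean error = {err_pct:.1f}%")
```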

  12. The Validity of Value-Added Estimates from Low-Stakes Testing Contexts: The Impact of Change in Test-Taking Motivation and Test Consequences

    Science.gov (United States)

    Finney, Sara J.; Sundre, Donna L.; Swain, Matthew S.; Williams, Laura M.

    2016-01-01

    Accountability mandates often prompt assessment of student learning gains (e.g., value-added estimates) via achievement tests. The validity of these estimates has been questioned when performance on tests is low stakes for students. To assess the effects of motivation on value-added estimates, we assigned students to one of three test consequence…

  13. Accurate measurement of the anomalous magnetic moments of the electron and the muon as a special relativity theory test

    International Nuclear Information System (INIS)

    Jurco, B.; Tolar, J.

    1983-01-01

    The precise experimental measurement of the gyromagnetic factor of the electron and the muon also represents a precise test of the validity of the special theory of relativity. The gyromagnetic factor may be measured in two ways: in a magnetic field, the resonance frequency is measured for transitions between Rabi-Landau levels with opposite spin orientations, or the precession of the spin of a lepton moving in the magnetic field is observed. The latter method is theoretically analyzed in great detail and described by equations. The measured values are given according to foreign experiments with an accuracy of 1 per mille. (M.D.)
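
    For reference, the standard textbook relation exploited by the spin-precession method: in a magnetic field B, the spin of a lepton precesses relative to its momentum at the anomalous frequency, so the precession rate measures a = (g - 2)/2 directly (general background, not reproduced from the paper):

```latex
\omega_c = \frac{eB}{\gamma m}, \qquad
\omega_s = \frac{eB}{\gamma m} + a\,\frac{eB}{m}, \qquad
\omega_a \equiv \omega_s - \omega_c = a\,\frac{eB}{m},
\qquad a = \frac{g-2}{2}
```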

  14. Accurate measurement of the anomalous magnetic moments of the electron and the muon as a special relativity theory test

    Energy Technology Data Exchange (ETDEWEB)

    Jurco, B.; Tolar, J. (Ceske Vysoke Uceni Technicke, Prague (Czechoslovakia). Fakulta Jaderna a Fysikalne Inzenyrska)

    1983-04-01

    The precise experimental measurement of the gyromagnetic factor of the electron and the muon also represents a precise test of the validity of the special theory of relativity. The gyromagnetic factor may be measured in two ways: in a magnetic field, the resonance frequency is measured for transitions between Rabi-Landau levels with opposite spin orientations, or the precession of the spin of a lepton moving in the magnetic field is observed. The latter method is theoretically analyzed in great detail and described by equations. The measured values are given according to foreign experiments with an accuracy of 1 per mille.

  15. Estimation of Transport Parameters Using Forced Gradient Tracer Tests in Heterogeneous Aquifers

    National Research Council Canada - National Science Library

    Illangasekare, Tissa

    2003-01-01

    .... The focus was on both reactive and sorptive parameters. The experimental component of the study was conducted in a three-dimensional, intermediate-scale test tank to obtain accurate data on the behavior of nonreactive and sorptive tracers...

  16. Around Semipalatinsk nuclear test site: Progress of dose estimations relevant to the consequences of nuclear tests

    International Nuclear Information System (INIS)

    Stepanenko, Valeriy F.; Hoshi, Masaharu; Bailiff, Ian K.

    2006-01-01

    The paper is an analytical overview of the main results presented at the 3rd Dosimetry Workshop in Hiroshima (9-11 March 2005), where different aspects of dose reconstruction around the Semipalatinsk nuclear test site (SNTS) were discussed and summarized. The results of the international intercomparison of the retrospective luminescence dosimetry (RLD) method for Dolon' village (Kazakhstan) were presented at the Workshop, and good agreement between the dose estimates of the different laboratories from 6 countries (Japan, Russia, USA, Germany, Finland and UK) was noted. The accumulated dose values in brick at a common depth of 10 mm, obtained independently by all participating laboratories, were in good agreement for all four brick samples from Dolon' village, Kazakhstan, with the average value of the local gamma dose due to fallout (near the sampling locations) being about 220 mGy (background dose has been subtracted). Furthermore, using a conversion factor of about 2 to obtain the free-in-air dose, a value of local dose ∼440 mGy is obtained, which supports the results of external dose calculations for Dolon': recently published soil contamination data, archive information and new models were used for refining dose calculations, and the external dose in air for Dolon' village was estimated to be about 500 mGy. The results of electron spin resonance (ESR) dosimetry with tooth enamel have demonstrated notable progress in the application of ESR dosimetry to the problems of dose reconstruction around the Semipalatinsk nuclear test site. At the present moment, dose estimates by the ESR method have become more consistent with calculated values and with retrospective luminescence dosimetry data, but differences between ESR dose estimates and RLD/calculation data were noted. For example, the mean ESR dose for eligible tooth samples from Dolon' village was estimated to be about 140 mGy (above background dose), which is less than dose values obtained

  17. Parameter Estimation And Hypothesis Testing In A Two Epoch Dam ...

    African Journals Online (AJOL)

    Also computed along with the least square solution and statistical testing were the minimum detectable Bias (MDB) and the Bias to Noise Ratio (BNR). All tests and adjustments were carried out using MOVE 3 software along with the LEICA SKI Pro 2.1. From the results of the tests, only observation to Rover station RF 8 ...

  18. Charging and discharging tests for obtaining an accurate dynamic electro-thermal model of high power lithium-ion pack system for hybrid and EV applications

    DEFF Research Database (Denmark)

    Mihet-Popa, Lucian; Camacho, Oscar Mauricio Forero; Nørgård, Per Bromand

    2013-01-01

    This paper presents a battery test platform including two Li-ion batteries designed for hybrid and EV applications, and charging/discharging tests carried out under different operating conditions for developing an accurate dynamic electro-thermal model of a high power Li-ion battery pack system. The aim of the tests has been to study the impact of battery degradation and to find out the dynamic characteristics of the cells, including the nonlinear open circuit voltage, series resistance and parallel transient circuit at different charge/discharge currents and cell temperatures. An equivalent circuit model, based on the runtime battery model and the Thevenin circuit model, with parameters obtained from the tests and depending on SOC, current and temperature, has been implemented in MATLAB/Simulink and Power Factory. A good alignment between simulations and measurements has been found.
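
    A minimal sketch of the equivalent-circuit structure described (runtime model plus a one-RC Thevenin branch); all parameter values below are placeholders rather than the values fitted from the tests:

```python
import numpy as np

def simulate_terminal_voltage(current_a, dt, ocv_of_soc, capacity_ah,
                              r0=0.01, r1=0.005, c1=2000.0, soc0=0.9):
    """Runtime + Thevenin model: V_t = OCV(SOC) - I*R0 - V1, with
    dV1/dt = I/C1 - V1/(R1*C1). Positive current = discharge.
    All R/C values are placeholder parameters, not fitted ones."""
    soc, v1, out = soc0, 0.0, []
    for i in current_a:
        soc -= i * dt / (capacity_ah * 3600.0)   # coulomb counting
        v1 += dt * (i / c1 - v1 / (r1 * c1))     # RC transient (explicit Euler)
        out.append(ocv_of_soc(soc) - i * r0 - v1)
    return np.array(out)

# Placeholder linear OCV curve and a 10 A discharge pulse, 1 s steps.
ocv = lambda soc: 3.0 + 1.2 * soc
i_profile = np.concatenate([np.full(300, 10.0), np.zeros(300)])
v = simulate_terminal_voltage(i_profile, dt=1.0, ocv_of_soc=ocv, capacity_ah=40.0)
print(f"Voltage under load: {v[0]:.3f} V -> {v[299]:.3f} V")
```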

  19. Correction of nutrition test errors for more accurate quantification of the link between dental health and malnutrition.

    Science.gov (United States)

    Dion, Nathalie; Cotart, Jean-Louis; Rabilloud, Muriel

    2007-04-01

    We quantified the link between tooth deterioration and malnutrition in institutionalized elderly subjects, taking into account the major risk factors for malnutrition and adjusting for the measurement error made in using the Mini Nutritional Assessment questionnaire. Data stem from a survey conducted in 2005 in 1094 subjects ≥60 y of age from a large sample of 100 institutions of the Rhône-Alpes region of France. A Bayesian approach was used to quantify the effect of tooth deterioration on malnutrition through a two-level logistic regression. This approach made it possible to take into account the uncertainty on the sensitivity and specificity of the Mini Nutritional Assessment questionnaire and thus to adjust for the measurement error of that test. After adjustment for other risk factors, the risk of malnutrition increased significantly and continuously, by a factor of 1.15 (odds ratio 1.15, 95% credibility interval 1.06-1.25), for every 10-point decrease in masticatory percentage, which is equivalent to the loss of two molars. The strongest factors that augmented the probability of malnutrition were deglutition disorders, depression, and verbal inconsistency. Dependency was also an important factor; the odds of malnutrition nearly doubled for each additional grade of dependency (graded 6 to 1). Diabetes, central neurodegenerative disease, and carcinoma tended to increase the probability of malnutrition, but their effect was not statistically significant. Dental status should be considered a serious risk factor for malnutrition. Regular dental examination and care should preserve functional dental integrity to prevent malnutrition in institutionalized elderly people.

  20. Testing the assumption in ergonomics software that overall shoulder strength can be accurately calculated by treating orthopedic axes as independent.

    Science.gov (United States)

    Hodder, Joanne N; La Delfa, Nicholas J; Potvin, Jim R

    2016-08-01

    To predict shoulder strength, most current ergonomics software assumes independence of the strengths about each of the orthopedic axes. Using this independent axis approach (IAA), the shoulder can be predicted to have strengths as high as the resultant of the maximum moments about any two or three axes. We propose that shoulder strength is not independent between axes, and propose an approach that calculates the weighted average (WAA) between the strengths of the axes involved in the demand. Fifteen female participants performed maximum isometric shoulder exertions with their right arm placed in a rigid adjustable brace affixed to a tri-axial load cell. Maximum exertions were performed in 24 directions: the four primary directions (horizontal flexion, extension, abduction and adduction) and at 15° increments between those axes. Moments were computed and comparisons made between the experimentally collected strengths and those predicted by the IAA and WAA methods. The IAA over-predicted strength in 14 of 20 non-primary exertion directions, while the WAA under-predicted strength in only 2 of these directions. Therefore, it is not valid to assume that shoulder axes are independent when predicting shoulder strengths between two orthopedic axes, and the WAA is an improvement over current methods for the posture tested. Copyright © 2015 Elsevier Ltd. All rights reserved.
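
    A sketch contrasting the two prediction rules for a demand direction at angle theta between two orthopedic axes; the record does not give the exact WAA weighting, so the linear angular weights below are an assumed form:

```python
import math

def predict_iaa(s1, s2, theta_deg):
    """Independent-axis reading: the strength envelope is the rectangle
    with corner (s1, s2); strength along theta is the distance from the
    origin to the rectangle boundary (largest near the 45-degree corner)."""
    t = math.radians(theta_deg)
    candidates = []
    if math.cos(t) > 1e-9:
        candidates.append(s1 / math.cos(t))
    if math.sin(t) > 1e-9:
        candidates.append(s2 / math.sin(t))
    return min(candidates)

def predict_waa(s1, s2, theta_deg):
    """Weighted-average approach with ASSUMED linear angular weights."""
    w = theta_deg / 90.0
    return (1.0 - w) * s1 + w * s2

# Hypothetical primary-axis strengths (Nm) about two orthopedic axes.
s_flex, s_abd = 40.0, 30.0
for th in (0, 30, 45, 60, 90):
    print(f"{th:2d} deg: IAA {predict_iaa(s_flex, s_abd, th):5.1f} Nm, "
          f"WAA {predict_waa(s_flex, s_abd, th):5.1f} Nm")
```

    Between the primary axes the IAA value exceeds both single-axis strengths while the WAA value lies between them, mirroring the over- versus under-prediction pattern reported above.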

  1. Estimation of fatigue characteristics of asphaltic mixes using simple tests

    NARCIS (Netherlands)

    Medani, T.O.; Molenaar, A.A.A.

    2000-01-01

    A simplified procedure for the estimation of fatigue characteristics of asphaltic mixes is presented. The procedure requires the determination of the so-called master curve (i.e. the relationship between the mix stiffness, the loading time and the temperature), the asphalt properties and the mix

  2. Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation

    Science.gov (United States)

    Ross, Steven J.; Mackey, Beth

    2015-01-01

    This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…

  3. Devices used by automated milking systems are similarly accurate in estimating milk yield and in collecting a representative milk sample compared with devices used by farms with conventional milk recording

    NARCIS (Netherlands)

    Kamphuis, Claudia; Dela Rue, B.; Turner, S.A.; Petch, S.

    2015-01-01

    Information on accuracy of milk-sampling devices used on farms with automated milking systems (AMS) is essential for development of milk recording protocols. The hypotheses of this study were (1) devices used by AMS units are similarly accurate in estimating milk yield and in collecting

  4. AX Tank farm closure settlement estimates and soil testing; TOPICAL

    International Nuclear Information System (INIS)

    BECKER, D.L.

    1999-01-01

    This study provides a conservative three-dimensional settlement study of the AX Tank Farm closure with fill materials and a surface barrier. The finite element settlement model constructed included the interaction of four tanks and the surface barrier with the site soil and bedrock. Also addressed are current soil testing techniques suitable for the site soil with recommendations applicable to the AX Tank Farm and the planned cone penetration testing

  5. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    Science.gov (United States)

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  6. Nonlinear Parameter Estimation in Microbiological Degradation Systems and Statistic Test for Common Estimation

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik

    1995-01-01

    Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate...

  7. Hydrometer test for estimation of immunoglobulin concentration in bovine colostrum.

    Science.gov (United States)

    Fleenor, W A; Stott, G H

    1980-06-01

    A practical field method for measuring immunoglobulin concentration in bovine colostrum has been developed from the linear relationship between colostral specific gravity and immunoglobulin concentration. Fourteen colostrums were collected within 24 h postpartum from nursed and unnursed cows and were assayed for specific gravity and major colostral constituents. Additionally, 15 colostrums were collected immediately postpartum prior to suckling and assayed for specific gravity and immunoglobulin concentration. Regression analysis provided an equation to estimate colostral immunoglobulin concentration from the specific gravity of fresh whole colostrum. From this, a colostrometer was developed for practical field use.
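
    A sketch of how such a calibration is applied in the field; the slope and intercept are HYPOTHETICAL placeholders, not the published regression coefficients, and would need recalibration against assayed colostrum samples:

```python
def immunoglobulin_mg_per_ml(specific_gravity, slope=800.0, intercept=-800.0):
    """Linear calibration Ig = slope * SG + intercept, mirroring the
    reported linear relationship between colostral specific gravity and
    immunoglobulin concentration. Coefficients here are placeholders."""
    return slope * specific_gravity + intercept

# Read the colostrometer (specific gravity) and convert to an Ig estimate.
for sg in (1.040, 1.060, 1.080):
    print(f"SG {sg:.3f} -> ~{immunoglobulin_mg_per_ml(sg):.0f} mg/mL Ig")
```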

  8. Spectrally accurate contour dynamics

    International Nuclear Information System (INIS)

    Van Buskirk, R.D.; Marcus, P.S.

    1994-01-01

    We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise-constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use

  9. Estimation of the common cause failure probabilities on the component group with mixed testing scheme

    International Nuclear Information System (INIS)

    Hwang, Meejeong; Kang, Dae Il

    2011-01-01

    Highlights: ► This paper presents a method to estimate the common cause failure probabilities on the common cause component group with mixed testing schemes. ► The CCF probabilities are dependent on the testing schemes such as staggered testing or non-staggered testing. ► There are many CCCGs with specific mixed testing schemes in real plant operation. ► Therefore, a general formula which is applicable to both alternate periodic testing scheme and train level mixed testing scheme was derived. - Abstract: This paper presents a method to estimate the common cause failure (CCF) probabilities on the common cause component group (CCCG) with mixed testing schemes such as the train level mixed testing scheme or the alternate periodic testing scheme. In the train level mixed testing scheme, the components are tested in a non-staggered way within the same train, but the components are tested in a staggered way between the trains. The alternate periodic testing scheme indicates that all components in the same CCCG are tested in a non-staggered way during the planned maintenance period, but they are tested in a staggered way during normal plant operation. Since the CCF probabilities are dependent on the testing schemes such as staggered testing or non-staggered testing, CCF estimators have two kinds of formulas in accordance with the testing schemes. Thus, there are general formulas to estimate the CCF probability on the staggered testing scheme and non-staggered testing scheme. However, in real plant operation, there are many CCCGs with specific mixed testing schemes. Recently, Barros () and Kang () proposed a CCF factor estimation method to reflect the alternate periodic testing scheme and the train level mixed testing scheme. In this paper, a general formula which is applicable to both the alternate periodic testing scheme and the train level mixed testing scheme was derived.

  10. External exposure estimates for individuals near the Nevada Test Site

    International Nuclear Information System (INIS)

    Henderson, R.W.; Smale, R.F.

    1987-01-01

    Individuals living near the Nevada Test Site were exposed to both beta and gamma radiations from fission products and activation products resulting from the atmospheric testing of nuclear devices. These exposures were functions of the amount of material deposited, the time of arrival of the debris, and the amount of shielding afforded by structures. Results are presented for each of nine generic life styles. These are representative of the living patterns of the people residing in the area. For each event at each location for which data exist, a representative of each life style was closely followed for a period of thirty days. The results of these detailed calculations are then extrapolated to the present. 7 refs., 5 figs., 2 tabs

  11. Estimation of γ irradiation induced genetic damage by Ames test

    International Nuclear Information System (INIS)

    Hosoda, Eiko

    1999-01-01

    Mutation by 60Co γ irradiation was studied in five different histidine-requiring auxotrophs of Salmonella typhimurium. The strains TA98 (sensitive to frameshift) and TA100 (sensitive to base-pair substitution) were irradiated (10-84 Gy and 45-317 Gy, respectively) and revertants were counted. TA98 exhibited radiation-induced revertants, 2.8-fold the number of spontaneous revertants, although no significant increase was detected in TA100. Then, three other frameshift-sensitive strains, TA1537, TA1538 and TA94, were irradiated at doses of 61-167 Gy. Only in TA94 did revertants increase, 3.5-fold. Since spontaneous revertants are known to be independent of cell density, a dilution test confirmed that the decrease in bacterial number caused by γ irradiation did not affect the induced revertants. Thus the standard Ames Salmonella assay identified γ irradiation as a mutagenic agent. The mutagenicity of dinitropyrene, a mutagen widely present in food, and the dismutagenicity of the boiling-water-insoluble fraction of Hizikia fusiforme, an edible marine alga, were tested on γ-induced revertant formation in TA98 and TA94. Dinitropyrene synergistically increased γ-induced revertants, and the Hizikia insoluble fraction reduced the synergistic effect of dinitropyrene in a concentration-dependent manner. (author)

  12. Estimation and inference in the same-different test

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Brockhoff, Per B.

    2009-01-01

    Inference for the Thurstonian delta in the same-different protocol via the well-known Wald statistic is shown to be inappropriate in a wide range of situations. We introduce the likelihood root statistic as an alternative to the Wald statistic to produce CIs and p-values for assessing difference as well as similarity. We show that the likelihood root statistic is equivalent to the well-known G(2) likelihood ratio statistic for tests of no difference. As an additional practical tool, we introduce the profile likelihood curve to provide a convenient graphical summary of the information in the data...

  13. On the estimation of durability during thermal fatigue tests

    International Nuclear Information System (INIS)

    Vashunin, A.I.; Kotov, P.I.

    1981-01-01

    It is shown that during thermal fatigue tests under conditions of varying loading rigidity, the value of stored one-sided deformation in the fracture zone tends to the limit value of the material ductility. Holding at T_max in the compression semicycle increases the irreversible deformation by an amount that does not depend on the loading rigidity. It is established that the use of thermal fatigue curves as baselines for determining the resistance to non-isothermal low-cycle fatigue is possible only when the accumulated quasistatic damage is less than 5% of the available ductility

  14. Tritium system test assembly control system cost estimate

    International Nuclear Information System (INIS)

    Stutz, R.A.

    1979-01-01

    The principal objectives of the Tritium Systems Test Assembly (TSTA), which includes the development, demonstration and interfacing of technologies related to the deuterium--tritium fuel cycle for fusion reactor systems, are concisely stated. The various integrated subsystems comprising TSTA and their functions are discussed. Each of the four major subdivisions of TSTA, including the main process system, the environmental and safety systems, supporting systems and the physical plant are briefly discussed. An overview of the Master Data Acquisition and Control System, which will control all functional operation of TSTA, is provided

  15. Building, testing and validating a set of home-made von Frey filaments: a precise, accurate and cost effective alternative for nociception assessment.

    Science.gov (United States)

    de Sousa, Marcelo Victor Pires; Ferraresi, Cleber; de Magalhães, Ana Carolina; Yoshimura, Elisabeth Mateus; Hamblin, Michael R

    2014-07-30

    A von Frey filament (vFF) is a type of aesthesiometer, usually a nylon filament held perpendicular to a base. It can be used in paw withdrawal pain threshold assessment, one of the most popular tests for pain evaluation using animal models. For this test, a set of filaments, each able to exert a different force, is applied to the animal paw, from the weakest to the strongest, until the paw is withdrawn. We made 20 low-cost vFF using nylon filaments of different lengths and constant diameter glued perpendicularly to the ends of popsicle sticks. They were calibrated using a laboratory balance scale. Building and calibrating took around 4 h and confirmed the theoretical prediction that the force exerted is inversely proportional to the length and directly proportional to the width of the filament. The calibration showed that they were precise and accurate. We analyzed the paw withdrawal threshold assessed with the set of home-made vFF and with a high quality commercial set of 5 monofilament vFF (Stoelting, Wood Dale, USA) in two groups (n=5) of healthy mice. The home-made vFF precisely and accurately measured the hind paw withdrawal threshold (20.3±0.9 g). The commercial vFF have different diameters while our set has the same diameter, avoiding the problem of lower sensitivity to larger diameter filaments. Building a set of vFF is easy, cost effective, and depending on the kind of tests, can increase precision and accuracy of animal nociception evaluation. Copyright © 2014 Elsevier B.V. All rights reserved.
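
    A sketch that turns the reported calibration relation (force proportional to filament width and inversely proportional to length) into a design table; the proportionality constant below is fixed from one hypothetical balance reading:

```python
def filament_force_g(length_mm, diameter_mm, c):
    """Empirical calibration model from the reported relation: F = c * d / L."""
    return c * diameter_mm / length_mm

# Calibrate c from one HYPOTHETICAL balance measurement:
# a 0.4 mm filament cut to 20 mm exerts 4.0 g at buckling.
c = 4.0 * 20.0 / 0.4

for L in (10, 15, 20, 30, 40):
    print(f"L = {L:2d} mm -> ~{filament_force_g(L, 0.4, c):.1f} g")
```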

  16. Validation and Refinement of Prediction Models to Estimate Exercise Capacity in Cancer Survivors Using the Steep Ramp Test.

    Science.gov (United States)

    Stuiver, Martijn M; Kampshoff, Caroline S; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J M; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M

    2017-11-01

    To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (VO2peak) and peak power output (Wpeak). Cross-sectional study. Multicenter. Cancer survivors (N=283) in 2 randomized controlled exercise trials. Not applicable. Prediction model accuracy was assessed by intraclass correlation coefficients (ICCs) and limits of agreement (LOA). Multiple linear regression was used for model extension. Clinical performance was judged by the percentage of accurate endurance exercise prescriptions. ICCs of SRT-predicted VO2peak and Wpeak with these values as obtained by the cardiopulmonary exercise test were .61 and .73, respectively, using the previously published prediction models. 95% LOA were ±705 mL/min with a bias of 190 mL/min for VO2peak and ±59 W with a bias of 5 W for Wpeak. Modest improvements were obtained by adding body weight and sex to the regression equation for the prediction of VO2peak (ICC, .73; 95% LOA, ±608 mL/min) and by adding age, height, and sex for the prediction of Wpeak (ICC, .81; 95% LOA, ±48 W). Accuracy of endurance exercise prescription improved from 57% accurate prescriptions to 68% with the new prediction model for Wpeak. Predictions of VO2peak and Wpeak based on the SRT are adequate at the group level, but insufficiently accurate in individual patients. The multivariable prediction model for Wpeak can be used cautiously (eg, supplemented with a Borg score) to aid endurance exercise prescription. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
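
    The agreement statistics quoted (bias and 95% limits of agreement) follow the standard Bland-Altman recipe; a sketch with hypothetical paired Wpeak measurements:

```python
import numpy as np

def bland_altman(reference, predicted):
    """Bias and 95% limits of agreement: mean diff +/- 1.96 * SD(diff)."""
    diff = np.asarray(predicted) - np.asarray(reference)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired data: CPET-measured vs SRT-predicted Wpeak (W).
rng = np.random.default_rng(42)
cpet_wpeak = rng.normal(200, 40, 50)
srt_wpeak = cpet_wpeak + rng.normal(5, 24, 50)
bias, loa = bland_altman(cpet_wpeak, srt_wpeak)
print(f"bias = {bias:.1f} W, 95% LOA = ({loa[0]:.1f}, {loa[1]:.1f}) W")
```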

  17. Lateral Flow Rapid Test for Accurate and Early Diagnosis of Scrub Typhus: A Febrile Illness of Historically Military Importance in the Pacific Rim.

    Science.gov (United States)

    Chao, Chien-Chung; Zhang, Zhiwen; Weissenberger, Giulia; Chen, Hua-Wei; Ching, Wei-Mei

    2017-03-01

    Scrub typhus (ST) is an infection caused by Orientia tsutsugamushi. Historically, ST was ranked as the second most important arthropod-borne medical problem, behind only malaria, during World War II and the Vietnam War. The disease occurs mainly in Southeast Asia and has been shown to emerge and reemerge in new areas, implying increased risk for U.S. military and civilian personnel deployed to these regions. ST can be treated effectively with doxycycline provided the diagnosis is made early, before the development of severe complications. Scrub Typhus Detect is a lateral flow rapid test based on a mixture of recombinant 56-kDa antigens with broad reactivity. The performance of this prototype product was evaluated against the indirect immunofluorescence assay, the serological gold standard. Using 249 prospectively collected samples from Thailand, the sensitivity and specificity for IgM were found to be 100% and 92%, respectively, suggesting a high potential of this product for clinical use. This product will provide user-friendly, rapid, and accurate diagnosis of ST, allowing clinicians to provide timely and accurate treatment of deployed personnel. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
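
    A sketch of the computation behind the quoted sensitivity and specificity, scoring the rapid test against the IFA gold standard; the 2x2 counts are hypothetical numbers consistent with the reported 100%/92% figures:

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, and overall accuracy from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical 2x2 counts for 249 samples versus the IFA gold standard.
se, sp, acc = diagnostic_performance(tp=74, fp=14, fn=0, tn=161)
print(f"Se = {se:.0%}, Sp = {sp:.0%}, accuracy = {acc:.0%}")
```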

  18. Fast and accurate methods for the performance testing of highly-efficient c-Si photovoltaic modules using a 10 ms single-pulse solar simulator and customized voltage profiles

    International Nuclear Information System (INIS)

    Virtuani, A; Rigamonti, G; Friesen, G; Chianese, D; Beljean, P

    2012-01-01

    Performance testing of highly efficient, highly capacitive c-Si modules with pulsed solar simulators requires particular care. These devices in fact usually require a steady-state solar simulator or pulse durations longer than 100–200 ms in order to avoid measurement artifacts. The aim of this work was to validate an alternative method for the testing of highly capacitive c-Si modules using a 10 ms single-pulse solar simulator. Our approach attempts to reconstruct a quasi-steady-state I–V (current–voltage) curve of a highly capacitive device during one single 10 ms flash by applying customized voltage profiles (in place of a conventional V ramp) to the terminals of the device under test. The most promising results were obtained by using V profiles which we name 'dragon-back' (DB) profiles. When compared to the reference I–V measurement (obtained by using a multi-flash approach with approximately 20 flashes), the DB V-profile method provides excellent results, with differences in the estimation of Pmax (as well as of Isc, Voc and FF) below ±0.5%. For the testing of highly capacitive devices the method is accurate, fast (two flashes required, possibly one), cost-effective and has proven its validity with several technologies, making it particularly interesting for in-line testing. (paper)

  19. Cochlear function tests in estimation of speech dynamic range.

    Science.gov (United States)

    Han, Jung Ju; Park, So Young; Park, Shi Nae; Na, Mi Sun; Lee, Philip; Han, Jae Sang

    2016-10-01

    The loss of active cochlear mechanics causes elevated thresholds, loudness recruitment, and reduced frequency selectivity. The problems faced by hearing-impaired listeners are largely related to a reduced dynamic range (DR). The aim of this study was to determine which index of the cochlear function tests correlates best with the DR to speech stimuli. Audiological data on 516 ears with a pure tone average (PTA) of ≤55 dB and a word recognition score of ≥70% were analyzed. PTA, speech recognition threshold (SRT), uncomfortable loudness (UCL), and distortion product otoacoustic emission (DPOAE) were explored as indices of cochlear function. Audiometric configurations were classified. The correlation between each index and the DR was assessed and a multiple regression analysis was done. PTA and SRT demonstrated strong negative correlations with the DR (r = -0.788 and -0.860, respectively), while the DPOAE sum was moderately correlated (r = 0.587). UCLs remained quite constant over the total range of the DR. The regression equation was Y (DR) = 75.238 - 0.719 × SRT (R² = 0.721), so the speech DR can be estimated from the SRT using this equation.

  20. Asphere cross testing: an exercise in uncertainty estimation

    Science.gov (United States)

    Murphy, Paul E.

    2017-10-01

    Aspheric surfaces can provide substantial improvements to optical designs, but they can also be difficult to manufacture cost-effectively. Asphere metrology contributes significantly to this difficulty, especially for high-precision aspheric surfaces. With the advent of computer-controlled fabrication machinery, optical surface quality is chiefly limited by the ability to measure it. Consequently, understanding the uncertainty of surface measurements is of great importance for determining what optical surface quality can be achieved. We measured sample aspheres using multiple techniques: profilometry, null interferometry, and subaperture stitching. We also obtained repeatability and reproducibility (R&R) measurement data by retesting the same aspheres under various conditions. We highlight some of the details associated with the different measurement techniques, especially efforts to reduce bias in the null tests via calibration. We compare and contrast the measurement results, and obtain an empirical view of the measurement uncertainty of the different techniques. We found fair agreement in overall surface form among the methods, but meaningful differences in reproducibility and mid-spatial frequency performance.

  1. Testing an Alternative Method for Estimating the Length of Fungal Hyphae Using Photomicrography and Image Processing.

    Science.gov (United States)

    Shen, Qinhua; Kirschbaum, Miko U F; Hedley, Mike J; Camps Arbestain, Marta

    2016-01-01

    This study aimed to develop and test an unbiased and rapid methodology to estimate the length of external arbuscular mycorrhizal fungal (AMF) hyphae in soil. The traditional visual gridline intersection (VGI) method, which consists in a direct visual examination of the intersections of hyphae with gridlines on a microscope eyepiece after aqueous extraction, membrane-filtration, and staining (e.g., with trypan blue), was refined. For this, (i) images of the stained hyphae were taken by using a digital photomicrography technique to avoid the use of the microscope and the method was referred to as "digital gridline intersection" (DGI) method; and (ii), the images taken in (i) were processed and the hyphal length was measured by using ImageJ software, referred to as the "photomicrography-ImageJ processing" (PIP) method. The DGI and PIP methods were tested using known grade lengths of possum fur. Then they were applied to measure the hyphal lengths in soils with contrasting phosphorus (P) fertility status. Linear regressions were obtained between the known lengths (Lknown) of possum fur and the values determined by using either the DGI (LDGI) (LDGI = 0.37 + 0.97 × Lknown, r2 = 0.86) or PIP (LPIP) methods (LPIP = 0.33 + 1.01 × Lknown, r2 = 0.98). There were no significant (P > 0.05) differences between the LDGI and LPIP values. While both methods provided accurate estimation (slope of regression being 1.0), the PIP method was more precise, as reflected by a higher value of r2 and lower coefficients of variation. The average hyphal lengths (6.5-19.4 m g-1) obtained by the use of these methods were in the range of those typically reported in the literature (3-30 m g-1). Roots growing in P-deficient soil developed 2.5 times as many hyphae as roots growing in P-rich soil (17.4 vs 7.2 m g-1). These tests confirmed that the use of digital photomicrography in conjunction with either the grid-line intersection principle or image processing is a suitable method for the
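
    For context, both the VGI and DGI counts feed the classical line-intercept relation (Newman/Tennant), in which total length is proportional to the number of grid intersections; a hedged sketch with hypothetical field dimensions:

```python
import math

def hyphal_length_mm(n_intersections, grid_area_mm2, total_gridline_mm):
    """Newman/Tennant line-intercept estimate:
    L = (pi/2) * N * (A / H), with N grid intersections counted over an
    observation area A (mm^2) crossed by gridlines of total length H (mm)."""
    return (math.pi / 2) * n_intersections * (grid_area_mm2 / total_gridline_mm)

# Hypothetical field: 1 mm x 1 mm eyepiece grid, 10 + 10 lines => H = 20 mm.
L = hyphal_length_mm(38, grid_area_mm2=1.0, total_gridline_mm=20.0)
print(f"Estimated hyphal length in field: {L:.2f} mm")
```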

  2. Publication Bias Currently Makes an Accurate Estimate of the Benefits of Enrichment Programs Difficult: A Postmortem of Two Meta-Analyses Using Statistical Power Analysis

    Science.gov (United States)

    Warne, Russell T.

    2016-01-01

    Recently Kim (2016) published a meta-analysis on the effects of enrichment programs for gifted students. She found that these programs produced substantial effects for academic achievement (g = 0.96) and socioemotional outcomes (g = 0.55). However, given current theory and empirical research these estimates of the benefits of enrichment programs…

  3. A lake classification concept for a more accurate global estimate of the dissolved inorganic carbon export from terrestrial ecosystems to inland waters

    Science.gov (United States)

    Engel, Fabian; Farrell, Kaitlin J.; McCullough, Ian M.; Scordo, Facundo; Denfeld, Blaize A.; Dugan, Hilary A.; de Eyto, Elvira; Hanson, Paul C.; McClure, Ryan P.; Nõges, Peeter; Nõges, Tiina; Ryder, Elizabeth; Weathers, Kathleen C.; Weyhenmeyer, Gesa A.

    2018-04-01

    The magnitude of lateral dissolved inorganic carbon (DIC) export from terrestrial ecosystems to inland waters strongly influences the estimate of the global terrestrial carbon dioxide (CO2) sink. At present, no reliable number for this export is available, and the few studies estimating the lateral DIC export assume that all lakes on Earth function similarly. However, lakes can function along a continuum from passive carbon transporters (passive open channels) to highly active carbon transformers with efficient in-lake CO2 production and loss. We developed and applied a conceptual model to demonstrate how the assumed function of lakes in carbon cycling can affect calculations of the global lateral DIC export from terrestrial ecosystems to inland waters. Using global data on in-lake CO2 production by mineralization as well as CO2 loss by emission, primary production, and carbonate precipitation in lakes, we estimated that the global lateral DIC export can lie within the range of 0.70 (+0.27/-0.31) to 1.52 (+1.09/-0.90) Pg C yr-1, depending on the assumed function of lakes. Thus, the assumed lake function has a large effect on the calculated lateral DIC export from terrestrial ecosystems to inland waters. We conclude that more robust estimates of CO2 sinks and sources will require the classification of lakes into their predominant function. This functional lake classification concept becomes particularly important for the estimation of future CO2 sinks and sources, since in-lake carbon transformation is predicted to be altered by climate change.

  4. PCR testing can be as accurate as culture for diagnosis of Ichthyophonus hoferi in Yukon River Chinook salmon Oncorhynchus tshawytscha .

    Science.gov (United States)

    Hamazaki, Toshihide; Kahler, Eryn; Borba, Bonnie M; Burton, Tamara

    2013-07-09

    We evaluated the comparability of culture and PCR tests for detecting Ichthyophonus in Yukon River Chinook salmon Oncorhynchus tshawytscha from field samples collected at 3 locations (Emmonak, Chena, and Salcha, Alaska, USA) in 2004, 2005, and 2006. Assuming diagnosis by culture as the 'true' infection status, we calculated the sensitivity (correctly identifying fish positive for Ichthyophonus), specificity (correctly identifying fish negative for Ichthyophonus), and accuracy (correctly identifying both positive and negative fish) of PCR. Regardless of sampling locations and years, sensitivity, specificity, and accuracy exceeded 90%. Estimates of infection prevalence by PCR were similar to those by culture, except for Salcha 2005, where prevalence by PCR was significantly higher than that by culture (p < 0.0001). These results show that the PCR test is comparable to the culture test for diagnosing Ichthyophonus infection.

  5. PG-2 photogrammetric plotter: a rapid and accurate means of mapping surface effects produced by subsurface nuclear testing at the Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Van de Werken, M.G.

    1983-01-01

    Since October 1981, the US Geological Survey has been using the Kern PG-2 photogrammetric plotter to map surface effects using post-test aerial photographs. The main goal of this pilot program was to compare the two mapping methods and to determine if field observations are necessary. Preliminary results indicate that only questionable small-scale features need to be field checked. Mapping on the plotter is highly reliable if aerial photographs obtained immediately after detonation are used. If photography is delayed, surface effects may be obliterated by natural processes and construction activities. Disadvantages to the plotter method relate to the quality and coverage of aerial photographs. The main problem concerns the scale of aerial photographs. Because of the large scale, the photographs lack adequate control points to properly orient the photographs to a map base. In addition, the paper print photographs used were often distorted. Once the problems were recognized and corrected, the method was greatly improved. Generally, the PG-2 offers a precise method for determining the distribution of surface effects

  6. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

    Science.gov (United States)

    Carnegie, Nicole Bohme

    2011-04-15

    The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
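
    The abstract does not reproduce the Karon et al. model itself, but the model-based bootstrap it describes follows a standard parametric-bootstrap pattern: refit the estimator to datasets simulated from the fitted infection and surveillance model, then use the replicates for bias correction and interval construction. A generic sketch, in which fit_incidence and simulate_surveillance are hypothetical stand-ins for the paper's model-specific steps:

    ```python
    import numpy as np

    def model_based_bootstrap(data, fit_incidence, simulate_surveillance,
                              n_boot=2000, seed=0):
        """Generic parametric bootstrap: bias-correct a point estimate and
        build a percentile interval by re-fitting on data simulated from
        the fitted infection/surveillance model."""
        rng = np.random.default_rng(seed)
        theta_hat = fit_incidence(data)            # point estimate on real data
        reps = np.array([fit_incidence(simulate_surveillance(theta_hat, rng))
                         for _ in range(n_boot)])
        bias = reps.mean() - theta_hat             # bootstrap estimate of bias
        corrected = theta_hat - bias
        lo, hi = np.percentile(reps - bias, [2.5, 97.5])
        return corrected, (lo, hi)
    ```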

  7. Testing the sensitivity of terrestrial carbon models using remotely sensed biomass estimates

    Science.gov (United States)

    Hashimoto, H.; Saatchi, S. S.; Meyer, V.; Milesi, C.; Wang, W.; Ganguly, S.; Zhang, G.; Nemani, R. R.

    2010-12-01

    There is a large uncertainty in carbon allocation and biomass accumulation in forest ecosystems. With the recent availability of remotely sensed biomass estimates, we can now test some of the hypotheses commonly implemented in various ecosystem models. We used biomass estimates derived by integrating MODIS, GLAS and PALSAR data to verify above-ground biomass estimates simulated by a number of ecosystem models (CASA, BIOME-BGC, BEAMS, LPJ). This study extends the hierarchical framework (Wang et al., 2010) for diagnosing ecosystem models by incorporating independent estimates of biomass for testing and calibrating respiration, carbon allocation, and turnover algorithms or parameters.

  8. Estimating the four-factor product (ε p P_fnl P_tnl) for the accurate calculation of xenon and samarium reactivities in the Syrian Miniature Neutron Source Reactor

    International Nuclear Information System (INIS)

    Khattab, K.

    2007-01-01

    The modified ¹³⁵Xe equilibrium reactivity in the Syrian Miniature Neutron Source Reactor (MNSR) was calculated first by using the WIMSD4 and CITATION codes to estimate the four-factor product (ε p P_fnl P_tnl). Then, precise calculations of ¹³⁵Xe and ¹⁴⁹Sm concentrations and reactivities were carried out and compared during the reactor operation time and after shutdown. It was found that the ¹³⁵Xe and ¹⁴⁹Sm reactivities did not reach their equilibrium reactivities during the daily operating time of the reactor. The ¹⁴⁹Sm reactivities could be neglected compared to ¹³⁵Xe reactivities during the reactor operating time and after shutdown. (author)

  10. Testing models for estimating the radiation balance at different time scales for Jaboticabal, SP

    Directory of Open Access Journals (Sweden)

    Valquíria de Alencar Beserra

    2012-12-01

    Full Text Available The net radiation (Rn) in agroecosystems is the amount of energy available in the environment for the heating of living organisms, air and soil; the transpiration of animals and plants; photosynthesis; and water evaporation. Rn helps define the type of climate and the weather conditions prevailing in a region, affecting thermal and water availability and the genotype-environment interaction, which ultimately determine the productivity of the agricultural system. Rn is usually used in weather and climate modelling studies. The sustainability and economic viability of livestock activity depend on a positive interaction between animal and environment. Environmental factors such as water, shading, sensible heat exchange (conduction, convection and radiation through the skin) and latent heat losses (evaporation and transpiration), all conditioned by Rn, must be managed to provide the best results. Given the importance of net radiation estimates for agricultural activities, the present study was conducted to develop and test models for accurate and precise estimation of the radiation balance at the daily, ten-day, monthly and seasonal scales for Jaboticabal, SP. We used daily meteorological data from the weather station located in Jaboticabal, SP (21°14′05″ S, 48°17′09″ W, 615 m altitude) at Universidade Estadual Paulista "Júlio de Mesquita Filho" - FCAV/UNESP, over a reference "Bahiagrass" surface, for the period 20/08/2005 to 20/01/2012. The data used were the maximum (Tmax), minimum (Tmin) and mean (Tmed) air temperature; maximum (RHmax), minimum (RHmin) and mean (RHmed) relative humidity; precipitation (mm); average wind speed (m/s); extraterrestrial radiation (Qo); solar radiation (MJ m⁻²); sunshine duration (hours); soil temperature at two depths (Tsoil2cm, Tsoil5cm); and class A pan evaporation (TCA, mm). The measurements taken by the net radiometer were used as the reference for testing the other models. The models tested were those reported by NORMAN et al

  11. Are Treponema pallidum specific rapid and point-of-care tests for syphilis accurate enough for screening in resource limited settings? Evidence from a meta-analysis.

    Directory of Open Access Journals (Sweden)

    Yalda Jafari

    Full Text Available Rapid and point-of-care (POC) tests for syphilis are an invaluable screening tool, yet inadequate evaluation of their diagnostic accuracy against best reference standards limits their widespread global uptake. To fill this gap, a systematic review and meta-analysis was conducted to evaluate the sensitivity and specificity of rapid and POC tests in blood and serum samples against Treponema pallidum (TP) specific reference standards. Five electronic databases (1980-2012) were searched, data were extracted from 33 articles, and Bayesian hierarchical models were fit. In serum samples, against a TP specific reference standard, point estimates with 95% credible intervals (CrI) for the sensitivities of popular tests were: (i) Determine, 90.04% (80.45, 95.21); (ii) SD Bioline, 87.06% (75.67, 94.50); (iii) VisiTect, 85.13% (72.83, 92.57); and (iv) Syphicheck, 74.48% (56.85, 88.44); while specificities were: (i) Syphicheck, 99.14% (96.37, 100); (ii) VisiTect, 96.45% (91.92, 99.29); (iii) SD Bioline, 95.85% (89.89, 99.53); and (iv) Determine, 94.15% (89.26, 97.66). In whole blood samples, sensitivities were: (i) Determine, 86.32% (77.26, 91.70); (ii) SD Bioline, 84.50% (78.81, 92.61); (iii) Syphicheck, 74.47% (63.94, 82.13); and (iv) VisiTect, 74.26% (53.62, 83.68); while specificities were: (i) Syphicheck, 99.58% (98.91, 99.96); (ii) VisiTect, 99.43% (98.22, 99.98); (iii) SD Bioline, 97.95% (92.54, 99.33); and (iv) Determine, 95.85% (92.42, 97.74). Rapid and POC treponemal tests reported sensitivity and specificity estimates comparable to laboratory-based treponemal tests. In resource limited settings, where access to screening is limited and where the risk of patients lost to follow up is high, the introduction of these tests has already been shown to improve access to screening and treatment to prevent stillbirths and neonatal mortality due to congenital syphilis. Based on the evidence, it is concluded that rapid and POC tests are useful in resource limited settings with poor access to laboratories or screening.

  12. Estimating and Testing the Sources of Evoked Potentials in the Brain.

    Science.gov (United States)

    Huizenga, Hilde M.; Molenaar, Peter C. M.

    1994-01-01

    The source of an event-related brain potential (ERP) is estimated from multivariate measures of ERP on the head under several mathematical and physical constraints on the parameters of the source model. Statistical aspects of estimation are discussed, and new tests are proposed. (SLD)

  13. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    Science.gov (United States)

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  14. Evaluation of geostatistical parameters based on well tests; Estimation de parametres geostatistiques a partir de tests de puits

    Energy Technology Data Exchange (ETDEWEB)

    Gauthier, Y.

    1997-10-20

    Geostatistical tools are increasingly used to model permeability fields in subsurface reservoirs, which are treated as realizations of a random field characterized by several geostatistical parameters such as variance and correlation length. The first part of the thesis is devoted to the study of the relations between the transient well pressure (the well test) and the stochastic permeability field, using the apparent permeability concept. The well test performs a moving average of permeability over larger and larger volumes with increasing time. In the second part, the geostatistical parameters are evaluated using well test data; a Bayesian framework is used, and parameters are estimated by the maximum likelihood principle, maximizing the probability density function of the well test data with respect to these parameters. This method, which relies on a fast evaluation of the well test, provides an estimation of the correlation length and the variance over different realizations of a two-dimensional permeability field
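
    The thesis's fast well-test evaluator is not available here, but the maximum-likelihood step it describes has a familiar shape: choose the variance and correlation length that maximize the Gaussian likelihood of the observed data. A simplified sketch assuming a zero-mean field with an exponential covariance (the names, the data, and the covariance choice are illustrative assumptions, not the thesis's actual formulation):

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import cdist

    def neg_log_likelihood(log_params, coords, values):
        """Gaussian negative log-likelihood for a zero-mean field with
        exponential covariance C(h) = sigma2 * exp(-h / ell)."""
        sigma2, ell = np.exp(log_params)           # log-space keeps both positive
        cov = sigma2 * np.exp(-cdist(coords, coords) / ell)
        cov += 1e-8 * np.eye(len(values))          # jitter for numerical stability
        _, logdet = np.linalg.slogdet(cov)
        return 0.5 * (logdet + values @ np.linalg.solve(cov, values))

    rng = np.random.default_rng(1)
    coords = rng.uniform(0, 100, size=(50, 2))     # observation locations
    values = rng.normal(size=50)                   # placeholder log-permeability data
    res = minimize(neg_log_likelihood, x0=[0.0, np.log(10.0)],
                   args=(coords, values), method="Nelder-Mead")
    sigma2_hat, ell_hat = np.exp(res.x)            # estimated variance, correlation length
    ```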

  15. Two-compartment, two-sample technique for accurate estimation of effective renal plasma flow: Theoretical development and comparison with other methods

    International Nuclear Information System (INIS)

    Lear, J.L.; Feyerabend, A.; Gregory, C.

    1989-01-01

    Discordance between effective renal plasma flow (ERPF) measurements from radionuclide techniques that use single versus multiple plasma samples was investigated. In particular, the authors determined whether effects of variations in distribution volume (Vd) of iodine-131 iodohippurate on measurement of ERPF could be ignored, an assumption implicit in the single-sample technique. The influence of Vd on ERPF was found to be significant, a factor indicating an important and previously unappreciated source of error in the single-sample technique. Therefore, a new two-compartment, two-plasma-sample technique was developed on the basis of the observations that while variations in Vd occur from patient to patient, the relationship between intravascular and extravascular components of Vd and the rate of iodohippurate exchange between the components are stable throughout a wide range of physiologic and pathologic conditions. The new technique was applied in a series of 30 studies in 19 patients. Results were compared with those achieved with the reference, single-sample, and slope-intercept techniques. The new two-compartment, two-sample technique yielded estimates of ERPF that more closely agreed with the reference multiple-sample method than either the single-sample or slope-intercept techniques

  16. An accurate density functional theory based estimation of pKa values of polar residues combined with experimental data: from amino acids to minimal proteins.

    Science.gov (United States)

    Matsui, Toru; Baba, Takeshi; Kamiya, Katsumasa; Shigeta, Yasuteru

    2012-03-28

    We report a scheme for estimating the acid dissociation constant (pKa) based on quantum-chemical calculations combined with a polarizable continuum model, where a parameter is determined for small reference molecules. We calculated the pKa values of variously sized molecules ranging from an amino acid to a protein consisting of 300 atoms. This scheme enabled us to derive a semiquantitative pKa value of specific chemical groups and discuss the influence of the surroundings on the pKa values. As applications, we have derived the pKa value of the side chain of an amino acid and almost reproduced the experimental value. By using our computing schemes, we showed the influence of hydrogen bonds on the pKa values in the case of tripeptides, which decreases the pKa value by 3.0 units for serine in comparison with those of the corresponding monopeptides. Finally, with some assumptions, we derived the pKa values of tyrosines and serines in chignolin and a tryptophan cage. We obtained quite different pKa values of adjacent serines in the tryptophan cage; the pKa value of the OH group of Ser13 exposed to bulk water is 14.69, whereas that of Ser14 not exposed to bulk water is 20.80 because of the internal hydrogen bonds.
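
    The working equation is not given in this record, but continuum-solvation pKa schemes of this kind generally rest on the standard relation between the solution-phase deprotonation free energy and the dissociation constant (a generic statement, not the authors' molecule-specific parameterization):

    ```latex
    % Standard thermodynamic relation assumed by continuum-solvation pKa schemes.
    \mathrm{p}K_a \;=\; \frac{\Delta G^{\mathrm{soln}}_{\mathrm{deprot}}}{RT \ln 10}
    ```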

  17. Energy shift estimation of demand response activation on domestic refrigerators – A field test study

    DEFF Research Database (Denmark)

    Lakshmanan, Venkatachalam; Gudmand-Høyer, Kristian; Marinelli, Mattia

    2014-01-01

    This paper presents a method to estimate the amount of energy that can be shifted during demand response (DR) activation on domestic refrigerators. Though there are many methods for DR activation, such as load reduction, load shifting and onsite generation, the method under study is load shifting. Electric heating and cooling equipment such as refrigerators, water heaters and space heaters and coolers are preferred for such DR activation because of their energy storing capacity. Accurate estimation of the available regulating power and energy shift is important for understanding the value of DR activation at any time. In this paper a novel method to estimate the available energy shift from domestic refrigerators using only two measurements, namely fridge cool chamber temperature and compressor power consumption, is proposed, discussed and evaluated.

  18. The Estimation of Knowledge Solidity Based on the Comparative Analysis of Different Test Results

    Directory of Open Access Journals (Sweden)

    Y. K. Khenner

    2012-01-01

    Full Text Available At present, testing techniques for knowledge assessment are widely used in the educational system. However, the method is seriously criticized, including its application in the Unified State Examinations. The research is aimed at studying the limitations of testing techniques. The authors recommend a new way of estimating the solidity of knowledge based on a comparative analysis of the results of different kinds of tests. While testing a large group of students, the authors found that the results of closed and open tests differ substantially. The comparative analysis demonstrates that open tests assess the solidity of knowledge more adequately than closed ones. As the research is based on a single experiment, the authors recommend further application of the method, substantiating the findings concerning the differences in test results, and analyzing the advantages and disadvantages of the tests in question.

  19. Comparison of the Danish step test and the watt-max test for estimation of maximal oxygen uptake

    DEFF Research Database (Denmark)

    Aadahl, Mette; Zacho, Morten; Linneberg, Allan René

    2013-01-01

    Introduction: There is a need for simple and feasible methods for estimation of cardiorespiratory fitness (CRF) in large study populations, as existing methods for valid estimation of maximal oxygen consumption are generally time consuming and relatively expensive to administer. The Danish step... Altogether, 795 eligible participants (response rate 35.8%) performed the watt-max and the Danish step test. Correlation and agreement between the two VO2max test results were explored by Pearson's rho, Bland-Altman plots, weighted Kappa, and gamma coefficients. Results: The correlation between VO2max (ml...

  20. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided and that one can extrapolate expectation values, rather than the wavefunctions, to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
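
    As an illustration of the combination described (a sketch, not the authors' code), the example below solves the 1-D harmonic oscillator with a second-order central-difference Hamiltonian at mesh spacings h and h/2, then applies one Richardson step so the O(h²) error term cancels, leaving an O(h⁴) eigenvalue estimate:

    ```python
    import numpy as np

    def ground_state_energy(n_points, x_max=10.0):
        """Lowest eigenvalue of H = -(1/2) d^2/dx^2 + x^2/2 on [-x_max, x_max],
        discretized with a second-order central-difference Laplacian."""
        x = np.linspace(-x_max, x_max, n_points)
        h = x[1] - x[0]
        diag = 1.0 / h**2 + 0.5 * x**2
        off = -0.5 / h**2 * np.ones(n_points - 1)
        hamiltonian = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
        return np.linalg.eigvalsh(hamiltonian)[0]

    e_h = ground_state_energy(200)        # spacing h
    e_h2 = ground_state_energy(399)       # spacing exactly h/2 on the same interval
    e_extrap = (4.0 * e_h2 - e_h) / 3.0   # Richardson step for an O(h^2) method
    print(e_h, e_h2, e_extrap)            # exact ground-state energy is 0.5
    ```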

  1. Estimation of the defect detection probability for ultrasonic tests on thick sections steel weldments. Technical report

    International Nuclear Information System (INIS)

    Johnson, D.P.; Toomay, T.L.; Davis, C.S.

    1979-02-01

    An inspection uncertainty analysis of published PVRC Specimen 201 data is reported, to obtain an estimate of the probability of recording an indication as a function of imperfection height for ASME Section XI Code ultrasonic inspections of nuclear reactor vessel plate seams, and to demonstrate the advantages of inspection uncertainty analysis over conventional detection/nondetection counting analysis. This analysis found the probability of recording a significant defect with an ASME Section XI Code ultrasonic inspection to be very high, if such a defect should exist in the plate seams of a nuclear reactor vessel. For a one-inch high crack, for example, this analysis gives a best-estimate recording probability of 0.985 and a 90% lower confidence bound recording probability of 0.937. It is also shown that inspection uncertainty analysis gives more accurate estimates, over a much greater flaw size range, than is possible with conventional analysis. There is reason to believe that the estimation procedure used is conservative: the estimation is based on data generated several years ago, on very small defects, in an environment different from the actual in-service inspection environment.
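
    The recording-probability-versus-flaw-height relationship described here is, in the probability-of-detection (POD) literature, commonly modeled as a logistic curve fit to hit/miss outcomes by maximum likelihood; a sketch under that assumption (the data are invented, not the PVRC Specimen 201 measurements):

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    # Invented hit/miss outcomes vs. flaw height (inches), for illustration only.
    height = np.array([0.1, 0.2, 0.3, 0.5, 0.8, 1.0, 1.3, 1.6, 2.0])
    recorded = np.array([0, 0, 1, 0, 1, 1, 1, 1, 1])

    def neg_log_lik(beta):
        p = expit(beta[0] + beta[1] * height)   # logistic POD curve
        return -np.sum(recorded * np.log(p) + (1 - recorded) * np.log(1 - p))

    fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
    b0, b1 = fit.x
    pod_one_inch = expit(b0 + b1 * 1.0)         # recording probability at 1 inch
    ```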

  2. Estimating Full IM240 Emissions from Partial Test Results: Evidence from Arizona.

    Science.gov (United States)

    Ando, Amy W; Harrington, Winston; McConnell, Virginia

    1999-10-01

    The expense and inconvenience of enhanced-vehicle-emissions testing using the full 240-second dynamometer test has led states to search for ways to shorten the test process. In fact, all states that currently use the IM240 allow some type of fast-pass, usually as early in the test as second 31, and Arizona has allowed vehicles to fast-fail after second 93. While these shorter tests save states millions of dollars in inspection lanes and driver costs, there is a loss of information since test results are no longer comparable across vehicles. This paper presents a methodology for estimating full 240-second results from partial-test results for three pollutants: HC, CO, and NOx. If states can convert all tests to consistent IM240 readings, they will be able to better characterize fleet emissions and to evaluate the impact of inspection and maintenance and other programs on emissions over time. Using a random sample of vehicles in Arizona which received full 240-second tests, we use regression analysis to estimate the relationship between emissions at second 240 and emissions at earlier seconds in the test. We examine the influence of other variables such as age, model-year group, and the pollution level itself on this relationship. We also use the estimated coefficients in several applications. First, we try to shed light on the frequent assertion that the results of the dynamometer test provide guidance for vehicle repair of failing vehicles. Using a probit analysis, we find that the probability that a failing vehicle will pass the test on the first retest is greater the longer the test has progressed. Second, we test the accuracy of our estimates for forecasting fleet emissions from partial-test emissions results in Arizona. We find forecasted fleet average emissions to be very close to the actual fleet averages for light-duty vehicles, but not quite as good for trucks, particularly when NOx emissions are forecast.
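
    The paper's fitted coefficients are not reproduced in this record, but the estimation step it describes is an ordinary least-squares regression of second-240 emissions on partial-test readings plus covariates; a sketch with invented stand-in data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Invented stand-ins for the Arizona sample (not the study's data):
    hc_93 = rng.lognormal(mean=0.0, sigma=0.5, size=500)   # HC reading at second 93
    age = rng.integers(1, 20, size=500).astype(float)      # vehicle age covariate
    hc_240 = 0.2 + 0.8 * hc_93 + 0.02 * age + rng.normal(0, 0.1, size=500)

    X = np.column_stack([np.ones(len(hc_93)), hc_93, age])  # design matrix
    beta, *_ = np.linalg.lstsq(X, hc_240, rcond=None)       # OLS coefficients
    hc_240_pred = X @ beta   # full-test estimates from partial-test readings
    ```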

  3. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    International Nuclear Information System (INIS)

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-01-01

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear–quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18–30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8–30.9 Gy) and 22.0 Gy (range, 20.2–26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models traditionally used to estimate spinal cord NTCP
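
    For reference, the LKB model evaluated here is conventionally written as a probit function of the generalized equivalent uniform dose; in the usual notation, with fractional volumes v_i receiving doses D_i (a standard statement, simplifying away the paper's implementation details):

    ```latex
    % Conventional LKB form: the volume parameter n reduces the dose
    % distribution to a gEUD; NTCP is a probit in gEUD with parameters
    % TD50 (50% tolerance dose) and m (inverse slope).
    \mathrm{gEUD} = \Bigl(\sum_i v_i\, D_i^{1/n}\Bigr)^{\!n}, \qquad
    t = \frac{\mathrm{gEUD} - TD_{50}}{m\, TD_{50}}, \qquad
    \mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\, dx
    ```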

  4. Estimating the Proportion of True Null Hypotheses in Multiple Testing Problems

    Directory of Open Access Journals (Sweden)

    Oluyemi Oyeniran

    2016-01-01

    Full Text Available The problem of estimating the proportion, π0, of true null hypotheses in a multiple testing problem is important in cases where large-scale parallel hypothesis tests are performed independently. While π0 is a quantity of interest in its own right in applications, its estimate can also be used for assessing or controlling an overall false discovery rate. In this article, we develop an innovative nonparametric maximum likelihood approach to estimate π0. The nonparametric likelihood is restricted to multinomial models, and an EM algorithm is developed to approximate the estimate of π0. Simulation studies show that the proposed method outperforms other existing methods. Using experimental microarray datasets, we demonstrate that the new method provides satisfactory estimates in practice.
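
    The article's nonparametric-likelihood/EM construction is not reproduced in this record. For orientation, the same quantity is targeted by Storey's simple tail-based estimator, a standard baseline against which new π0 estimators are usually compared (this is a different, well-known method, not the authors'):

    ```python
    import numpy as np

    def storey_pi0(p_values, lam=0.5):
        """Storey's estimator: null p-values are uniform, so the fraction
        above lambda is about pi0 * (1 - lambda), giving
        pi0_hat = #{p > lambda} / (m * (1 - lambda))."""
        p = np.asarray(p_values)
        return min(1.0, np.mean(p > lam) / (1.0 - lam))

    # Example: 90% true nulls (uniform p-values) plus 10% small p-values.
    rng = np.random.default_rng(0)
    p = np.concatenate([rng.uniform(size=900), rng.beta(0.2, 5.0, size=100)])
    print(storey_pi0(p))   # should land near 0.9
    ```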

  5. Testing a statistical method of global mean paleotemperature estimation in a long climate simulation

    Energy Technology Data Exchange (ETDEWEB)

    Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2001-07-01

    Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods, the global mean temperature of the last 600 years has recently been estimated. In this work, this method of reconstruction is tested using data from a very long simulation with a climate model. The testing allows estimation of the errors of the reconstructions as a function of the number of proxy records, and of the time scales at which the estimations are probably reliable. (orig.)

  6. National South African HIV prevalence estimates robust despite substantial test non-participation

    Directory of Open Access Journals (Sweden)

    Guy Harling

    2017-07-01

    Full Text Available Background. South African (SA) national HIV seroprevalence estimates are of crucial policy relevance in the country, and for the worldwide HIV response. However, the most recent nationally representative HIV test survey in 2012 had 22% test non-participation, leaving the potential for substantial bias in current seroprevalence estimates, even after controlling for selection on observed factors. Objective. To re-estimate national HIV prevalence in SA, controlling for bias due to selection on both observed and unobserved factors in the 2012 SA National HIV Prevalence, Incidence and Behaviour Survey. Methods. We jointly estimated regression models for consent to test and HIV status in a Heckman-type bivariate probit framework. As selection variable, we used assigned interviewer identity, a variable known to predict consent but highly unlikely to be associated with interviewees' HIV status. From these models, we estimated the HIV status of interviewed participants who did not test. Results. Of 26 710 interviewed participants who were invited to test for HIV, 21.3% of females and 24.3% of males declined. Interviewer identity was strongly correlated with consent to test for HIV; declining a test was weakly associated with HIV serostatus. Our HIV prevalence estimates were not significantly different from those using standard methods to control for bias due to selection on observed factors: 15.1% (95% confidence interval (CI) 12.1 - 18.6) v. 14.5% (95% CI 12.8 - 16.3) for 15 - 49-year-old males; 23.3% (95% CI 21.7 - 25.8) v. 23.2% (95% CI 21.3 - 25.1) for 15 - 49-year-old females. Conclusion. The most recent SA HIV prevalence estimates are robust under the strongest available test for selection bias due to missing data. Our findings support the reliability of inferences drawn from such data.
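
    In conventional notation, the Heckman-type bivariate probit sketched by the authors lets consent s and serostatus y be driven by correlated latent normals, with non-consenters contributing only the selection margin to the likelihood (a generic statement of the framework, not the paper's exact specification):

    ```latex
    % Selection: s_i^* = z_i'\gamma + u_i ; outcome: y_i^* = x_i'\beta + e_i,
    % with (u_i, e_i) standard bivariate normal, correlation rho.
    % Phi_2 is the bivariate normal CDF and q_i = 2 y_i - 1.
    L(\beta,\gamma,\rho) \;=\;
    \prod_{i:\, s_i=0} \Phi\!\bigl(-z_i'\gamma\bigr)
    \;\prod_{i:\, s_i=1} \Phi_2\!\bigl(q_i\, x_i'\beta,\; z_i'\gamma,\; q_i\,\rho\bigr)
    ```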

  7. Accurate Laser Measurements of the Water Vapor Self-Continuum Absorption in Four Near Infrared Atmospheric Windows. a Test of the MT_CKD Model.

    Science.gov (United States)

    Campargue, Alain; Kassi, Samir; Mondelain, Didier; Romanini, Daniele; Lechevallier, Loïc; Vasilchenko, Semyon

    2017-06-01

    The semi-empirical MT_CKD model of the absorption continuum of water vapor is widely used in atmospheric radiative transfer codes for the atmospheres of Earth and exoplanets, but lacks experimental validation in the atmospheric windows. Recent laboratory measurements by Fourier transform spectroscopy have led to self-continuum cross-sections much larger than the MT_CKD values in the near infrared transparency windows. In the present work, we report on accurate water vapor absorption continuum measurements by Cavity Ring Down Spectroscopy (CRDS) and Optical-Feedback-Cavity Enhanced Laser Spectroscopy (OF-CEAS) at selected spectral points of the transparency windows centered around 4.0, 2.1 and 1.25 μm. The temperature dependence of the absorption continuum at 4.38 μm and 3.32 μm is measured in the 23-39 °C range. The self-continuum water vapor absorption is derived either from the baseline variation of spectra recorded for a series of pressure values over a small spectral interval or from baseline monitoring at fixed laser frequency during pressure ramps. In order to avoid possible bias when approaching the water saturation pressure, the maximum pressure value was limited to about 16 Torr, corresponding to a 75% humidity rate. After subtraction of the local water monomer line contribution, self-continuum cross-sections, C_S, were determined with a few % accuracy from the pressure-squared dependence of the spectral baseline level. Together with our previous CRDS and OF-CEAS measurements in the 2.1 and 1.6 μm windows, the derived water vapor self-continuum provides a unique set of self-continuum cross-sections for a test of the MT_CKD model in four transparency windows. Although showing some important deviations in absolute value (up to a factor of 4 at the center of the 2.1 μm window), our accurate measurements validate the overall frequency dependence of the MT_CKD2.8 model.
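
    The fixed-frequency pressure-ramp analysis described here amounts to fitting the continuum loss against the square of the water vapor number density; a schematic sketch with invented numbers (not the paper's data or unit conventions):

    ```python
    import numpy as np

    K_B = 1.380649e-23   # Boltzmann constant, J/K

    # Invented baseline losses recorded during a pressure ramp at fixed frequency.
    p_torr = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
    alpha = np.array([0.4, 1.6, 3.6, 6.4, 10.1, 14.5, 19.6, 25.7]) * 1e-9  # cm^-1

    t_kelvin = 296.0
    rho = (p_torr * 133.322) / (K_B * t_kelvin)   # number density, molecules / m^3
    # Self-continuum scales as alpha = C_s * rho^2, so C_s is the slope vs. rho^2.
    c_s = np.polyfit(rho**2, alpha, 1)[0]         # units follow those of the inputs
    ```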

  8. An empirical comparison of effective concentration estimators for evaluating aquatic toxicity test responses

    Energy Technology Data Exchange (ETDEWEB)

    Bailer, A.J.; Hughes, M.R.; Denton, D.L.; Oris, J.T.

    2000-01-01

    Aquatic toxicity tests are statistically evaluated either by hypothesis testing procedures to derive a no-observed-effect concentration or by inverting regression models to calculate the concentration associated with a specific reduction from the control response. These latter methods can be described as potency estimation methods. Standard US Environmental Protection Agency (USEPA) potency estimation methods are based on two different techniques. For continuous or count response data, a nominally nonparametric method that assumes monotonically decreasing responses and piecewise linear patterns between successive concentration groups is used. For quantal responses, a probit regression model with a linear dose term is fit. These techniques were compared with a recently developed parametric regression-based estimator, the relative inhibition estimator, RIp. This method is based on fitting generalized linear models, followed by estimation of the concentration associated with a particular decrement relative to control responses. These estimators, with levels of inhibition (p) of 25 and 50%, were applied to a series of chronic toxicity tests in a US EPA Region 9 database of reference toxicity tests. Biological responses evaluated in these toxicity tests included the number of young produced in three broods by the water flea (Ceriodaphnia dubia) and germination success and tube length data from the giant kelp (Macrocystis pyrifera). The greatest discrepancy between the RIp and standard US EPA estimators was observed for C. dubia. The concentration-response pattern for this biological endpoint exhibited nonmonotonicity more frequently than for any of the other endpoints. Future work should consider optimal experimental designs to estimate these quantities, methods for constructing confidence intervals, and simulation studies to explore the behavior of these estimators under known conditions.

  9. Estimation of Fracture Porosity in an Unsaturated Fractured Welded Tuff Using Gas Tracer Testing

    Energy Technology Data Exchange (ETDEWEB)

    B.M. Freifeild

    2001-10-18

    Kinematic fracture porosity is an important hydrologic transport parameter for predicting the potential of rapid contaminant migration through fractured rock. The transport velocity of a solute moving within a fracture network is inversely related to the fracture porosity. Since fracture porosity is often one or two orders of magnitude smaller than matrix porosity, and fracture permeability is often orders of magnitude greater than matrix permeability, solutes may travel significantly faster in the fracture network than in the surrounding matrix. This dissertation introduces a new methodology for conducting gas tracer tests using a field-portable mass spectrometer, along with analytical tools for estimating fracture porosity from the measured tracer concentration breakthrough curves. Field experiments were conducted at Yucca Mountain, Nevada, consisting of air-permeability transient testing and gas-tracer-transport tests. The experiments were conducted from boreholes drilled within an underground tunnel as part of an investigation of rock mass hydrological behavior. Air-permeability pressure transients, recorded during constant mass flux injections, have been analyzed using a numerical inversion procedure to identify fracture permeability and porosity. Dipole gas tracer tests have also been conducted from the same boreholes used for air-permeability testing. Mass breakthrough data have been analyzed using a random walk particle-tracking model, with a dispersivity that is a function of the advective velocity. The estimated fracture porosity using the tracer test and air-injection test data ranges from 0.001 to 0.015. These values are an order of magnitude greater than the values estimated by others using hydraulically estimated fracture apertures. The estimates of porosity made using air-permeability test data are shown to be highly sensitive to formation heterogeneity. Uncertainty analyses performed on the gas tracer test results show high confidence in the parameter

  10. Estimation of fracture porosity in an unsaturated fractured welded tuff using gas tracer testing

    Energy Technology Data Exchange (ETDEWEB)

    Freifeld, Barry Mark [Univ. of California, Berkeley, CA (United States)

    2001-12-01

    Kinematic fracture porosity is an important hydrologic transport parameter for predicting the potential of rapid contaminant migration through fractured rock. The transport velocity of a solute moving within a fracture network is inversely related to the fracture porosity. Since fracture porosity is often one or two orders of magnitude smaller than matrix porosity, and fracture permeability is often orders of magnitude greater than matrix permeability, solutes may travel significantly faster in the fracture network than in the surrounding matrix. This dissertation introduces a new methodology for conducting gas tracer tests using a field-portable mass spectrometer, along with analytical tools for estimating fracture porosity from the measured tracer concentration breakthrough curves. Field experiments were conducted at Yucca Mountain, Nevada, consisting of air-permeability transient testing and gas-tracer-transport tests. The experiments were conducted from boreholes drilled within an underground tunnel as part of an investigation of rock mass hydrological behavior. Air-permeability pressure transients, recorded during constant mass flux injections, have been analyzed using a numerical inversion procedure to identify fracture permeability and porosity. Dipole gas tracer tests have also been conducted from the same boreholes used for air-permeability testing. Mass breakthrough data have been analyzed using a random walk particle-tracking model, with a dispersivity that is a function of the advective velocity. The estimated fracture porosity using the tracer test and air-injection test data ranges from 0.001 to 0.015. These values are an order of magnitude greater than the values estimated by others using hydraulically estimated fracture apertures. The estimates of porosity made using air-permeability test data are shown to be highly sensitive to formation heterogeneity. Uncertainty analyses performed on the gas tracer test results show high confidence in the parameter

  11. Estimation of Fracture Porosity in an Unsaturated Fractured Welded Tuff Using Gas Tracer Testing

    International Nuclear Information System (INIS)

    B.M. Freifeild

    2001-01-01

    Kinematic fracture porosity is an important hydrologic transport parameter for predicting the potential of rapid contaminant migration through fractured rock. The transport velocity of a solute moving within a fracture network is inversely related to the fracture porosity. Since fracture porosity is often one or two orders of magnitude smaller than matrix porosity, and fracture permeability is often orders of magnitude greater than matrix permeability, solutes may travel significantly faster in the fracture network than in the surrounding matrix. This dissertation introduces a new methodology for conducting gas tracer tests using a field-portable mass spectrometer, along with analytical tools for estimating fracture porosity from the measured tracer concentration breakthrough curves. Field experiments were conducted at Yucca Mountain, Nevada, consisting of air-permeability transient testing and gas-tracer-transport tests. The experiments were conducted from boreholes drilled within an underground tunnel as part of an investigation of rock mass hydrological behavior. Air-permeability pressure transients, recorded during constant mass flux injections, have been analyzed using a numerical inversion procedure to identify fracture permeability and porosity. Dipole gas tracer tests have also been conducted from the same boreholes used for air-permeability testing. Mass breakthrough data have been analyzed using a random walk particle-tracking model, with a dispersivity that is a function of the advective velocity. The estimated fracture porosity using the tracer test and air-injection test data ranges from 0.001 to 0.015. These values are an order of magnitude greater than the values estimated by others using hydraulically estimated fracture apertures. The estimates of porosity made using air-permeability test data are shown to be highly sensitive to formation heterogeneity. Uncertainty analyses performed on the gas tracer test results show high confidence in the parameter

  12. Systematic Testing of Belief-Propagation Estimates for Absolute Free Energies in Atomistic Peptides and Proteins.

    Science.gov (United States)

    Donovan-Maiye, Rory M; Langmead, Christopher J; Zuckerman, Daniel M

    2018-01-09

    Motivated by the extremely high computing costs associated with estimates of free energies for biological systems using molecular simulations, we further the exploration of existing "belief propagation" (BP) algorithms for fixed-backbone peptide and protein systems. The precalculation of pairwise interactions among discretized libraries of side-chain conformations, along with representation of protein side chains as nodes in a graphical model, enables direct application of the BP approach, which requires only ∼1 s of single-processor run time after the precalculation stage. We use a "loopy BP" algorithm, which can be seen as an approximate generalization of the transfer-matrix approach to highly connected (i.e., loopy) graphs, and which has previously been applied to protein calculations. We examine the application of loopy BP to several peptides as well as the binding site of the T4 lysozyme L99A mutant. The present study reports on (i) the comparison of the approximate BP results with estimates from unbiased estimators based on the Amber99SB force field; (ii) investigation of the effects of varying library size on BP predictions; and (iii) a theoretical discussion of the discretization effects that can arise in BP calculations. The data suggest that, despite their approximate nature, BP free-energy estimates are highly accurate; indeed, they never fall outside confidence intervals from unbiased estimators for the systems where independent results could be obtained. Furthermore, we find that libraries of sufficiently fine discretization (which diminish library-size sensitivity) can be obtained with standard computing resources in most cases. Altogether, the extremely low computing times and accurate results suggest the BP approach warrants further study.

  13. Monte Carlo comparison of four normality tests using different entropy estimates

    Czech Academy of Sciences Publication Activity Database

    Esteban, M. D.; Castellanos, M. E.; Morales, D.; Vajda, Igor

    2001-01-01

    Vol. 30, No. 4 (2001), pp. 761-785 ISSN 0361-0918 R&D Projects: GA ČR GA102/99/1137 Institutional research plan: CEZ:AV0Z1075907 Keywords: test of normality * entropy test and entropy estimator * table of critical values Subject RIV: BD - Theory of Information Impact factor: 0.153, year: 2001

  14. A multivariate family-based association test using generalized estimating equations : FBAT-GEE

    NARCIS (Netherlands)

    Lange, C; Silverman, SK; Xu; Weiss, ST; Laird, NM

    In this paper we propose a multivariate extension of family-based association tests based on generalized estimating equations. The test can be applied to multiple phenotypes and to phenotypic data obtained in longitudinal studies without making any distributional assumptions for the phenotypic

  15. Gas Cooled Fast Breeder Reactor cost estimate for a circulator test facility (modified HTGR circulator test facility)

    International Nuclear Information System (INIS)

    1979-10-01

    This is a conceptual design cost estimate for a Helium Circulator Test Facility to be located at the General Atomic Company, San Diego, California. The installation costs for the circulator, drive motors, controllers, thermal barrier, and circulator service module are included as part of the construction cost.

  16. Development of an estimation algorithm for loose parts and analysis of impact test data

    International Nuclear Information System (INIS)

    Kim, Jung Soo; Ham, Chang Sik; Jung, Chul Hwan; Hwang, In Koo; Kim, Tak Hwane; Kim, Tae Hwane; Park, Jin Ho

    1999-11-01

    Loose parts are produced by being detached from the structure of the reactor coolant system (RCS) or by entering the RCS from the outside during test operation, refueling, and overhaul periods. These loose parts are carried by the reactor coolant and collide with RCS components. When loose parts occur within the RCS, it is necessary to estimate their impact point and mass. In this report, an analysis algorithm for estimating the impact point and mass of loose parts is developed. The developed algorithm was tested with impact test data from Yonggwang-3. The impact point estimated using the proposed algorithm was within 5 percent of the real test data, and the estimated mass was within a 28 percent error bound using the same unit's data. We analyzed the characteristic frequency of each sensor because this frequency affects the estimation of impact point and mass. The characteristic frequency of the background noise during normal operation was compared with that of the impact test data. The comparison showed that the characteristic frequency bandwidth of the impact test data was lower than that of the background noise during normal operation. By this comparison, the integrity of the sensors and the monitoring system could be checked as well. (author)

  17. Development, test and evaluation of a computerized procedure for using Landsat data to estimate spring small grains acreage

    Science.gov (United States)

    Mohler, R. R. J.; Palmer, W. F.; Smyrski, M. M.; Baker, T. C.; Nazare, C. V.

    1982-01-01

    A number of methods that can provide crop acreage information from multispectral scanner (MSS) data require a comparatively large amount of labor to implement. The present investigation is concerned with a project designed to improve the efficiency of analysis through increased automation. The Caesar technique was developed to realize this objective. The processing rates of the Caesar procedure versus historical state-of-the-art proportion estimation procedures were determined in an experiment. Attention is given to the study site, the aggregation technology, the results of the aggregation test, and questions of error characterization. It is found that the Caesar procedure, which was developed for the spring small grains region of North America, is highly efficient and provides accurate results.

  18. The detection of the methylated Wif-1 gene is more accurate than a fecal occult blood test for colorectal cancer screening

    KAUST Repository

    Amiot, Aurelien; Mansour, Hicham; Baumgaertner, Isabelle; Delchier, Jean-Charles; Tournigand, Christophe; Furet, Jean-Pierre; Carrau, Jean-Pierre; Canoui-Poitrine, Florence; Sobhani, Iradj

    2014-01-01

    Background: The clinical benefit of guaiac fecal occult blood tests (FOBT) is now well established for colorectal cancer screening. Growing evidence has demonstrated that epigenetic modifications and fecal microbiota changes, also known as dysbiosis, are associated with CRC pathogenesis and might be used as surrogate markers of CRC. Patients and Methods: We performed a cross-sectional study that included all consecutive subjects that were referred (from 2003 to 2007) for screening colonoscopies. Prior to colonoscopy, effluents (fresh stools, sera-S and urine-U) were harvested and FOBTs performed. Methylation levels were measured in stools, S and U for 3 genes (Wif1, ALX-4, and Vimentin) selected from a panel of 63 genes; Kras mutations and seven dominant and subdominant bacterial populations in stools were quantified. Calibration was assessed with the Hosmer-Lemeshow chi-square, and discrimination was determined by calculating the C-statistic (Area Under Curve) and Net Reclassification Improvement index. Results: There were 247 individuals (mean age 60.8±12.4 years, 52% of males) in the study group, and 90 (36%) of these individuals were patients with advanced polyps or invasive adenocarcinomas. A multivariate model adjusted for age and FOBT led to a C-statistic of 0.83 [0.77-0.88]. After supplementary sequential (one-by-one) adjustment, Wif-1 methylation (S or U) and fecal microbiota dysbiosis led to increases of the C-statistic to 0.90 [0.84-0.94] (p = 0.02) and 0.81 [0.74-0.86] (p = 0.49), respectively. When adjusted jointly for FOBT and Wif-1 methylation or fecal microbiota dysbiosis, the increase of the C-statistic was even more significant (0.91 and 0.85, p<0.001 and p = 0.10, respectively). Conclusion: The detection of methylated Wif-1 in either S or U has a higher performance accuracy compared to guaiac FOBT for advanced colorectal neoplasia screening. Conversely, fecal microbiota dysbiosis detection was not more accurate. Blood and urine testing could be

  19. The detection of the methylated Wif-1 gene is more accurate than a fecal occult blood test for colorectal cancer screening

    KAUST Repository

    Amiot, Aurelien

    2014-07-15

    Background: The clinical benefit of guaiac fecal occult blood tests (FOBT) is now well established for colorectal cancer screening. Growing evidence has demonstrated that epigenetic modifications and fecal microbiota changes, also known as dysbiosis, are associated with CRC pathogenesis and might be used as surrogate markers of CRC. Patients and Methods: We performed a cross-sectional study that included all consecutive subjects that were referred (from 2003 to 2007) for screening colonoscopies. Prior to colonoscopy, effluents (fresh stools, sera-S and urine-U) were harvested and FOBTs performed. Methylation levels were measured in stools, S and U for 3 genes (Wif1, ALX-4, and Vimentin) selected from a panel of 63 genes; Kras mutations and seven dominant and subdominant bacterial populations in stools were quantified. Calibration was assessed with the Hosmer-Lemeshow chi-square, and discrimination was determined by calculating the C-statistic (Area Under Curve) and Net Reclassification Improvement index. Results: There were 247 individuals (mean age 60.8±12.4 years, 52% of males) in the study group, and 90 (36%) of these individuals were patients with advanced polyps or invasive adenocarcinomas. A multivariate model adjusted for age and FOBT led to a C-statistic of 0.83 [0.77-0.88]. After supplementary sequential (one-by-one) adjustment, Wif-1 methylation (S or U) and fecal microbiota dysbiosis led to increases of the C-statistic to 0.90 [0.84-0.94] (p = 0.02) and 0.81 [0.74-0.86] (p = 0.49), respectively. When adjusted jointly for FOBT and Wif-1 methylation or fecal microbiota dysbiosis, the increase of the C-statistic was even more significant (0.91 and 0.85, p<0.001 and p = 0.10, respectively). Conclusion: The detection of methylated Wif-1 in either S or U has a higher performance accuracy compared to guaiac FOBT for advanced colorectal neoplasia screening. Conversely, fecal microbiota dysbiosis detection was not more accurate. Blood and urine testing could be

  20. The detection of the methylated Wif-1 gene is more accurate than a fecal occult blood test for colorectal cancer screening.

    Directory of Open Access Journals (Sweden)

    Aurelien Amiot

    Full Text Available The clinical benefit of guaiac fecal occult blood tests (FOBT) is now well established for colorectal cancer screening. Growing evidence has demonstrated that epigenetic modifications and fecal microbiota changes, also known as dysbiosis, are associated with CRC pathogenesis and might be used as surrogate markers of CRC. We performed a cross-sectional study that included all consecutive subjects that were referred (from 2003 to 2007) for screening colonoscopies. Prior to colonoscopy, effluents (fresh stools, sera-S and urine-U) were harvested and FOBTs performed. Methylation levels were measured in stools, S and U for 3 genes (Wif1, ALX-4, and Vimentin) selected from a panel of 63 genes; Kras mutations and seven dominant and subdominant bacterial populations in stools were quantified. Calibration was assessed with the Hosmer-Lemeshow chi-square, and discrimination was determined by calculating the C-statistic (Area Under Curve) and Net Reclassification Improvement index. There were 247 individuals (mean age 60.8±12.4 years, 52% of males) in the study group, and 90 (36%) of these individuals were patients with advanced polyps or invasive adenocarcinomas. A multivariate model adjusted for age and FOBT led to a C-statistic of 0.83 [0.77-0.88]. After supplementary sequential (one-by-one) adjustment, Wif-1 methylation (S or U) and fecal microbiota dysbiosis led to increases of the C-statistic to 0.90 [0.84-0.94] (p = 0.02) and 0.81 [0.74-0.86] (p = 0.49), respectively. When adjusted jointly for FOBT and Wif-1 methylation or fecal microbiota dysbiosis, the increase of the C-statistic was even more significant (0.91 and 0.85, p<0.001 and p = 0.10, respectively). The detection of methylated Wif-1 in either S or U has a higher performance accuracy compared to guaiac FOBT for advanced colorectal neoplasia screening. Conversely, fecal microbiota dysbiosis detection was not more accurate. Blood and urine testing could be used in those individuals reluctant to

  1. Life estimation of I and C cable insulation materials based on accelerated life testing

    International Nuclear Information System (INIS)

    Santhosh, T.V.; Ramteke, P.K.; Shrestha, N.B.; Ahirwar, A.K.; Gopika, V.

    2016-01-01

    Accelerated life tests are becoming increasingly popular in today's industry due to the need for obtaining life data quickly and reliably. Life testing of products under higher stress levels, without introducing additional failure modes, can provide significant savings of both time and money. Correct analysis of data gathered via such accelerated life testing will yield parameters and other information for the product's life under use stress conditions. To be of practical use in assessing the operational behaviour of cables in NPPs, laboratory ageing aims to mimic the type of degradation observed under operational conditions. Testing conditions therefore need to be carefully chosen to ensure that the degradation mechanisms occurring in the accelerated tests are similar to those which occur in service. This paper presents the results of an investigation in which elongation-at-break (EAB) measurements were carried out on a typical control cable to predict its mean life at service conditions. A low voltage polyvinyl chloride (PVC) insulated and PVC sheathed control cable, used in NPP instrumentation and control (I and C) applications, was subjected to thermal ageing at three elevated temperatures.
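
    The abstract stops short of the extrapolation step, but thermal ageing data of this kind are typically projected to service temperature with the Arrhenius life-stress model; a sketch under that standard assumption (the activation energy, temperatures, and test life below are invented):

    ```python
    import numpy as np

    K_B_EV = 8.617e-5   # Boltzmann constant, eV/K

    def arrhenius_life(test_life_h, t_test_c, t_use_c, ea_ev):
        """Extrapolate time-to-endpoint (e.g., EAB falling to a threshold)
        from an accelerated test temperature to the service temperature."""
        t_test = t_test_c + 273.15
        t_use = t_use_c + 273.15
        accel = np.exp(ea_ev / K_B_EV * (1.0 / t_use - 1.0 / t_test))
        return test_life_h * accel

    # Invented example: 2000 h to the EAB endpoint at 100 C, Ea = 1.0 eV,
    # extrapolated to a 45 C service environment.
    print(arrhenius_life(2000.0, 100.0, 45.0, 1.0) / 8760.0, "years")
    ```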

  2. Estimation of component failure probability from masked binomial system testing data

    International Nuclear Information System (INIS)

    Tan Zhibin

    2005-01-01

    Component failure probability estimates from the analysis of binomial system testing data are very useful because they reflect the operational failure probability of components in the field, where conditions are similar to the test environment. In practice, this type of analysis is often confounded by the problem of data masking: the status of tested components is unknown. Methods that account for this type of uncertainty are usually computationally intensive and not practical for complex systems. In this paper, we consider masked binomial system testing data and develop a probabilistic model to efficiently estimate component failure probabilities. In the model, all system tests are classified into test categories based on component coverage. Component coverage of test categories is modeled by a bipartite graph. Test category failure probabilities conditional on the status of covered components are defined. An EM algorithm to estimate component failure probabilities is developed based on a simple but powerful concept: equivalent failures and tests. By simulation we not only demonstrate the convergence and accuracy of the algorithm but also show that the probabilistic model is capable of analyzing systems in series, parallel and any other user defined structures. A case study illustrates an application in test case prioritization

  3. Estimation of the common cause failure probabilities of the components under mixed testing schemes

    International Nuclear Information System (INIS)

    Kang, Dae Il; Hwang, Mee Jeong; Han, Sang Hoon

    2009-01-01

    For the case where trains or channels of standby safety systems consisting of more than two redundant components are tested in a staggered manner, the standby safety components within a train can be tested simultaneously or consecutively. In this case, mixed testing schemes, staggered and non-staggered testing schemes, are used for testing the components. Approximate formulas, based on the basic parameter method, were developed for the estimation of the common cause failure (CCF) probabilities of the components under mixed testing schemes. The developed formulas were applied to the four redundant check valves of the auxiliary feed water system as a demonstration study for their appropriateness. For a comparison, we estimated the CCF probabilities of the four redundant check valves for the mixed, staggered, and non-staggered testing schemes. The CCF probabilities of the four redundant check valves for the mixed testing schemes were estimated to be higher than those for the staggered testing scheme, and lower than those for the non-staggered testing scheme.

  4. Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.

    Science.gov (United States)

    Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo

    2012-01-01

    The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
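
    For reference, the simple-random-sampling baseline that the TDS estimator generalizes is the nonparametric (Mann-Whitney) AUC, sketched below with synthetic biomarker values; the paper's estimator additionally reweights by the empirical test-result distribution estimated from the biased sample.

      import numpy as np

      def empirical_auc(y_diseased, y_healthy):
          """P(marker of a diseased subject > marker of a healthy one); ties count 1/2."""
          d = np.asarray(y_diseased)[:, None]
          h = np.asarray(y_healthy)[None, :]
          return float((d > h).mean() + 0.5 * (d == h).mean())

      rng = np.random.default_rng(0)
      auc = empirical_auc(rng.normal(1.0, 1.0, 80),    # diseased (assumed shift)
                          rng.normal(0.0, 1.0, 120))   # non-diseased
      print(f"empirical AUC ~ {auc:.3f}")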

  5. Differences among estimates of critical power and anaerobic work capacity derived from five mathematical models and the three-minute all-out test.

    Science.gov (United States)

    Bergstrom, Haley C; Housh, Terry J; Zuniga, Jorge M; Traylor, Daniel A; Lewis, Robert W; Camic, Clayton L; Schmidt, Richard J; Johnson, Glen O

    2014-03-01

    Estimates of critical power (CP) and anaerobic work capacity (AWC) from the power output vs. time relationship have been derived from various mathematical models. The purpose of this study was to examine estimates of CP and AWC from the multiple work bout, 2- and 3-parameter models, and those from the 3-minute all-out CP (CP3min) test. Nine college-aged subjects performed a maximal incremental test to determine the peak oxygen consumption rate and the gas exchange threshold. On separate days, each subject completed 4 randomly ordered constant power output rides to exhaustion to estimate CP and AWC from 5 regression models (2 linear, 2 nonlinear, and 1 exponential). During the final visit, CP and AWC were estimated from the CP3min test. The nonlinear 3-parameter (Nonlinear-3) model produced the lowest estimate of CP. The exponential (EXP) model and the CP3min test were not statistically different and produced the highest estimates of CP. Critical power estimated from the Nonlinear-3 model was 14% less than that from the EXP model and the CP3min test, and 4-6% less than those from the linear models. Furthermore, the Nonlinear-3 and nonlinear 2-parameter (Nonlinear-2) models produced significantly greater estimates of AWC than did the linear models and the CP3min test. The current findings suggest that the Nonlinear-3 model may provide estimates of CP and AWC that more accurately reflect the asymptote of the power output vs. time relationship, the demarcation of the heavy and severe exercise intensity domains, and anaerobic capabilities than the linear models and the CP3min test.
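
    As an illustration of this model family, the 2-parameter linear work-time model can be fitted in a few lines: total work W = AWC + CP*t is regressed on time to exhaustion, so the slope estimates CP and the intercept estimates AWC. The ride data below are invented, not the study's measurements.

      import numpy as np

      t = np.array([150.0, 300.0, 600.0, 900.0])   # times to exhaustion, s (assumed)
      P = np.array([350.0, 300.0, 260.0, 245.0])   # constant power outputs, W (assumed)
      W = P * t                                    # total work per ride, J

      CP, AWC = np.polyfit(t, W, 1)                # slope = CP, intercept = AWC
      print(f"CP ~ {CP:.0f} W, AWC ~ {AWC / 1000:.1f} kJ")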

  6. An apparatus to estimate the hydrodynamic coefficients of autonomous underwater vehicles using water tunnel testing.

    Science.gov (United States)

    Nouri, N M; Mostafapour, K; Bahadori, R

    2016-06-01

    Hydrodynamic coefficients, or hydrodynamic derivatives, of autonomous underwater vehicles (AUVs) play an important role in their development and maneuverability. The most popular way of estimating these coefficients is to conduct captive model tests, such as straight-line tests and planar motion mechanism (PMM) tests, in towing tanks. This paper aims to develop an apparatus, based on planar water tunnel experiments, to estimate the hydrodynamic derivatives due to an AUV's acceleration and velocity. The capability of implementing straight-line tests and PMM tests using mechanical oscillators located in the flow downstream of the model was considered in the design procedure of the system. The hydrodynamic derivatives resulting from the acceleration and velocity of the AUV model were estimated using the apparatus that we developed. Static and dynamic test results were compared for the same derivatives. The findings showed that the system provided the basis for conducting static tests, i.e., straight-line tests, and dynamic tests that included pure pitch and pure heave. By conducting such tests in a water tunnel, we were able to eliminate errors related to the time limitation of the tests and the effects of surface waves in the towing tank on AUVs with applications in the deep sea.

  7. Historical estimates of external gamma exposure and collective external gamma exposure from testing at the Nevada Test Site. I. Test series through HARDTACK II, 1958

    Energy Technology Data Exchange (ETDEWEB)

    Anspaugh, L.R.; Church, B.W.

    1985-12-01

    In 1959, the Test Manager's Committee to Establish Fallout Doses calculated estimated external gamma exposure at populated locations based upon measurements of external gamma-exposure rate. Using these calculations and estimates of population, we have tabulated the collective estimated external gamma exposures for communities within established fallout patterns. The total collective estimated external gamma exposure is 85,000 person-R. The greatest collective exposures occurred in three general areas: Saint George, Utah; Ely, Nevada; and Las Vegas, Nevada. Three events, HARRY (May 19, 1953), BEE (March 22, 1955), and SMOKY (August 31, 1957), accounted for over half of the total collective estimated external gamma exposure. The bases of the calculational models for external gamma exposure of "infinite exposure," "estimated exposure," and "one year effective biological exposure" are explained. 4 figs., 7 tabs.

  8. Historical estimates of external gamma exposure and collective external gamma exposure from testing at the Nevada Test Site. I. Test series through HARDTACK II, 1958

    International Nuclear Information System (INIS)

    Anspaugh, L.R.; Church, B.W.

    1986-01-01

    In 1959, the Test Manager's Committee to Establish Fallout Doses calculated estimated external gamma exposure at populated locations based upon measurements of external gamma-exposure rate. Using these calculations and estimates of population, we have tabulated the collective estimated external gamma exposures for communities within established fallout patterns. The total collective estimated external gamma exposure is 85,000 person-R. The greatest collective exposures occurred in three general areas: Saint George, UT; Ely, NV; and Las Vegas, NV. Three events, HARRY (19 May 1953), BEE (22 March 1955), and SMOKY (31 August 1957), accounted for more than half the total collective estimated external gamma exposure. The bases of the calculational models for external gamma exposure of infinite exposure, estimated exposure, and 1-yr effective biological exposure are explained

  9. Historical estimates of external gamma exposure and collective external gamma exposure from testing at the Nevada Test Site. I. Test series through HARDTACK II, 1958

    International Nuclear Information System (INIS)

    Anspaugh, L.R.; Church, B.W.

    1985-12-01

    In 1959, the Test Manager's Committee to Establish Fallout Doses calculated estimated external gamma exposure at populated locations based upon measurements of external gamma-exposure rate. Using these calculations and estimates of population, we have tabulated the collective estimated external gamma exposures for communities within established fallout patterns. The total collective estimated external gamma exposure is 85,000 person-R. The greatest collective exposures occurred in three general areas: Saint George, Utah; Ely, Nevada; and Las Vegas, Nevada. Three events, HARRY (May 19, 1953), BEE (March 22, 1955), and SMOKY (August 31, 1957), accounted for over half of the total collective estimated external gamma exposure. The bases of the calculational models for external gamma exposure of "infinite exposure," "estimated exposure," and "one year effective biological exposure" are explained. 4 figs., 7 tabs.

  10. Estimation of the two-dimensional presampled modulation transfer function of digital radiography devices using one-dimensional test objects

    International Nuclear Information System (INIS)

    Wells, Jered R.; Dobbins, James T. III

    2012-01-01

    Purpose: The modulation transfer function (MTF) of medical imaging devices is commonly reported in the form of orthogonal one-dimensional (1D) measurements made near the vertical and horizontal axes with a slit or edge test device. A more complete description is found by measuring the two-dimensional (2D) MTF. Some 2D test devices have been proposed, but there are some issues associated with their use: (1) they are not generally available; (2) they may require many images; (3) the results may have diminished accuracy; and (4) their implementation may be particularly cumbersome. This current work proposes the application of commonly available 1D test devices for practical and accurate estimation of the 2D presampled MTF of digital imaging systems. Methods: Theory was developed and applied to ensure adequate fine sampling of the system line spread function for 1D test devices at orientations other than approximately vertical and horizontal. Methods were also derived and tested for slit nonuniformity correction at arbitrary angle. Techniques were validated with experimental measurements at ten angles using an edge test object and three angles using a slit test device on an indirect-detection flat-panel system [GE Revolution XQ/i (GE Healthcare, Waukesha, WI)]. The 2D MTF was estimated through a simple surface fit with interpolation based on Delaunay triangulation of the 1D edge-based MTF measurements. Validation by synthesis was also performed with simulated images from a hypothetical direct-detection flat-panel device. Results: The 2D MTF derived from physical measurements yielded an average relative precision error of 0.26% for frequencies below the cutoff (2.5 mm⁻¹) and approximate circular symmetry at frequencies below 4 mm⁻¹. While slit analysis generally agreed with the results of edge analysis, the two showed subtle differences at frequencies above 4 mm⁻¹. Slit measurement near 45° revealed radial asymmetry in the MTF resulting from the square
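
    The surface-fitting step described above lends itself to a short sketch: 1D MTF samples measured along several angles are scattered into the (fx, fy) plane and interpolated onto a grid with Delaunay-based linear interpolation (scipy.interpolate.griddata). The radial profile below is synthetic, standing in for measured edge- or slit-based MTFs.

      import numpy as np
      from scipy.interpolate import griddata

      angles = np.deg2rad(np.arange(0, 91, 10))   # assumed measurement angles
      freqs = np.linspace(0.0, 4.0, 40)           # spatial frequency, mm^-1

      # Synthetic 1D MTF at each angle (mildly anisotropic falloff).
      A, F = np.meshgrid(angles, freqs, indexing="ij")
      mtf_1d = np.exp(-((F * (1.0 + 0.1 * np.cos(2 * A))) ** 2) / 4.0)

      # Scatter the 1D measurements into (fx, fy) coordinates, then grid them.
      pts = np.column_stack([(F * np.cos(A)).ravel(), (F * np.sin(A)).ravel()])
      fx, fy = np.meshgrid(np.linspace(0, 4, 81), np.linspace(0, 4, 81))
      mtf_2d = griddata(pts, mtf_1d.ravel(), (fx, fy), method="linear")
      print(np.nanmin(mtf_2d), np.nanmax(mtf_2d))  # NaN outside the sampled quarter-disk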

  11. Estimation of the Altai region population exposure resulting from the nuclear tests at the Semipalatinsk test site

    International Nuclear Information System (INIS)

    Djachenko, V.I.; Gabbasov, M.N.; Laborev, V.M.; Markovtsev, A.S.; Sudakov, V.V.; Volobuyev, N.M.; Zelenov, V.I.; Lagutin, A.A.; Shoikher, J.N.

    1998-01-01

    The historical roots of the reconstruction of doses received by populations from nuclear tests date back to the 1960s, when the world faced a problem of growing contamination by radioactive fallout resulting from atmospheric nuclear tests. Since then, only one aspect of this problem has been properly developed, namely, estimation of public-exposure doses resulting from the global radioactive fallout. Local fallout, which occurred mainly in the territories of the test sites and regions adjacent to their boundaries, was considered and studied as an internal affair of the states. The first steps in creating the above-mentioned methodological basis were taken in Russia, where, by now, the methodology of dose estimation in regions of local radioactive fallout has been determined and acknowledged nationwide as a standard document (Federal Committee on Sanitary Epidemiological Control of RF, 1994). It was this methodology that was used for the calculations and estimation of the doses to the Altai population from the Semipalatinsk Test Site (STS). (orig./GL)

  12. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    International Nuclear Information System (INIS)

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A.; Caloba, L.P.; Mery, D.

    2004-01-01

    This work estimates the accuracy of classification of the main classes of weld defects detected by radiographic testing: undercut, lack of penetration, porosity, slag inclusion, crack and lack of fusion. To carry out this work, non-linear pattern classifiers were developed using neural networks, as large a set of radiographic patterns as possible was used, and statistical inference techniques based on random selection of samples with and without replacement (bootstrap) were applied in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)
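
    A minimal sketch of the bootstrap accuracy estimate described above, with a generic synthetic feature matrix standing in for the radiographic pattern features: each replicate resamples the patterns with replacement, trains a small neural-network classifier, and scores it on the out-of-bag patterns.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.neural_network import MLPClassifier

      X, y = make_classification(n_samples=400, n_features=10, n_classes=3,
                                 n_informative=6, random_state=0)
      rng = np.random.default_rng(0)
      scores = []
      for _ in range(30):                              # bootstrap replicates
          idx = rng.integers(0, len(X), len(X))        # sample with replacement
          oob = np.setdiff1d(np.arange(len(X)), idx)   # out-of-bag patterns
          clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500,
                              random_state=0).fit(X[idx], y[idx])
          scores.append(clf.score(X[oob], y[oob]))
      print(f"bootstrap accuracy ~ {np.mean(scores):.2f} +/- {np.std(scores):.2f}")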

  13. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A. [Federal Univ. of Rio de Janeiro, Dept., of Metallurgical and Materials Engineering, Rio de Janeiro (Brazil); Caloba, L.P. [Federal Univ. of Rio de Janeiro, Dept., of Electrical Engineering, Rio de Janeiro (Brazil); Mery, D. [Pontificia Unversidad Catolica de Chile, Escuela de Ingenieria - DCC, Dept. de Ciencia de la Computacion, Casilla, Santiago (Chile)

    2004-07-01

    This work estimates the accuracy of classification of the main classes of weld defects detected by radiographic testing: undercut, lack of penetration, porosity, slag inclusion, crack and lack of fusion. To carry out this work, non-linear pattern classifiers were developed using neural networks, as large a set of radiographic patterns as possible was used, and statistical inference techniques based on random selection of samples with and without replacement (bootstrap) were applied in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)

  14. A probabilistic method for testing and estimating selection differences between populations.

    Science.gov (United States)

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-12-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that log odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximately follow a normal distribution, and the variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimates. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences, thereby supplying a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated the differences in selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
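
    The core statistic is easy to sketch: the log odds ratio of allele frequencies between two populations, tested against a normal reference. The count-based (Woolf) standard error below is a textbook stand-in for the paper's genome-wide variance calibration, and the allele counts are invented.

      import numpy as np
      from scipy.stats import norm

      d1, a1 = 180, 220   # derived / ancestral allele counts, population 1 (assumed)
      d2, a2 = 95, 305    # derived / ancestral allele counts, population 2 (assumed)

      log_or = np.log((d1 / a1) / (d2 / a2))    # proportional to the selection difference
      se = np.sqrt(1/d1 + 1/a1 + 1/d2 + 1/a2)   # Woolf SE (stand-in for the paper's variance)
      z = log_or / se
      p = 2 * norm.sf(abs(z))
      print(f"log OR = {log_or:.3f}, z = {z:.2f}, p = {p:.2g}")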

  15. Estimating the true accuracy of diagnostic tests for dengue infection using bayesian latent class models.

    Directory of Open Access Journals (Sweden)

    Wirichada Pan-ngum

    Full Text Available Accuracy of rapid diagnostic tests for dengue infection has been repeatedly estimated by comparing those tests with reference assays. We hypothesized that those estimates might be inaccurate if the accuracy of the reference assays is not perfect. Here, we investigated this using statistical modeling. Data from a cohort study of 549 patients suspected of dengue infection presenting at Colombo North Teaching Hospital, Ragama, Sri Lanka, which described the application of our reference assay (a combination of dengue IgM antibody capture ELISA and IgG antibody capture ELISA) and of three rapid diagnostic tests (Panbio NS1 antigen, IgM antibody and IgG antibody rapid immunochromatographic cassette tests), were re-evaluated using Bayesian latent class models (LCMs). The estimated sensitivity and specificity of the reference assay were 62.0% and 99.6%, respectively. The prevalence of dengue infection (24.3%) and the sensitivities and specificities of the Panbio NS1 (45.9% and 97.9%), IgM (54.5% and 95.5%) and IgG (62.1% and 84.5%) tests, as estimated by Bayesian LCMs, were significantly different from those estimated under the assumption that the reference assay was perfect. Sensitivity, specificity, PPV and NPV for a combination of NS1, IgM and IgG cassette tests on admission samples were 87.0%, 82.8%, 62.0% and 95.2%, respectively. Our reference assay is an imperfect gold standard. In our setting, the combination of NS1, IgM and IgG rapid diagnostic tests could be used on admission to rule out dengue infection with a high level of accuracy (NPV 95.2%). Further evaluation of rapid diagnostic tests for dengue infection should include the use of appropriate statistical models.

  16. Estimation of post-test probabilities by residents: Bayesian reasoning versus heuristics?

    Science.gov (United States)

    Hall, Stacey; Phang, Sen Han; Schaefer, Jeffrey P; Ghali, William; Wright, Bruce; McLaughlin, Kevin

    2014-08-01

    Although the process of diagnosing invariably begins with a heuristic, we encourage our learners to support their diagnoses by analytical cognitive processes, such as Bayesian reasoning, in an attempt to mitigate the effects of heuristics on diagnosing. There are, however, limited data on the use and impact of Bayesian reasoning on the accuracy of disease probability estimates. In this study our objective was to explore whether Internal Medicine residents use a Bayesian process to estimate disease probabilities by comparing their disease probability estimates to literature-derived Bayesian post-test probabilities. We gave 35 Internal Medicine residents four clinical vignettes in the form of a referral letter and asked them to estimate the post-test probability of the target condition in each case. We then compared these to literature-derived probabilities. For each vignette the estimated probability was significantly different from the literature-derived probability. For the two cases with low literature-derived probability our participants significantly overestimated the probability of these target conditions being the correct diagnosis, whereas for the two cases with high literature-derived probability the estimated probability was significantly lower than the calculated value. Our results suggest that residents generate inaccurate post-test probability estimates. Possible explanations for this include ineffective application of Bayesian reasoning, attribute substitution whereby a complex cognitive task is replaced by an easier one (e.g., a heuristic), or systematic rater bias, such as central tendency bias. Further studies are needed to identify the reasons for the inaccuracy of disease probability estimates and to explore ways of improving accuracy.
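
    The Bayesian calculation against which such estimates are benchmarked is compact: convert the pre-test probability to odds, multiply by the likelihood ratio of the test result, and convert back. The numbers below are illustrative, not from the study.

      def post_test_probability(pre_test_prob, likelihood_ratio):
          """Post-test probability via Bayes' theorem in odds form."""
          pre_odds = pre_test_prob / (1.0 - pre_test_prob)
          post_odds = pre_odds * likelihood_ratio
          return post_odds / (1.0 + post_odds)

      # e.g., a 20% pre-test probability and a positive test with LR+ = 8
      print(f"{post_test_probability(0.20, 8.0):.2f}")   # -> 0.67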

  17. A Review of Material Properties Estimation Using Eddy Current Testing and Capacitor Imaging

    Directory of Open Access Journals (Sweden)

    Mohd. Amri Yunus

    2009-01-01

    Full Text Available The non-destructive testing applications based on inductive (eddy current testing) and capacitive sensors are widely used for imaging of material properties. The simple structure, safety, low cost, non-contact operation and good response at medium-range frequencies of these sensors make them preferable for use in industry. The aim of this study is to discuss material property estimation applications using eddy current testing and capacitive sensing. The basic operation of eddy current testing and capacitive sensing, with examples of applications in non-destructive testing, is discussed. The recent advancements of eddy current testing and capacitive testing in imaging techniques are then presented in this paper.

  18. A summary of estimated doses to members of the public from atmospheric nuclear tests at the Nevada test site

    International Nuclear Information System (INIS)

    Simon, S.L.; Bouville, A.; Luckyanov, N.; Miller, C.W.; Beck, H.L.; Anspaugh, L.R.

    2002-01-01

    This paper discusses estimates of radiation dose to representative members of the public of the United States (U.S.) from atmospheric nuclear tests conducted from 1951 through 1962 at the Nevada Test Site. The estimates provided here summarize five studies conducted over the past two decades. From those studies, an estimate of the average deposition of 137Cs within each of the more than 3,000 counties across the country has been derived, as well as doses to representative persons in each county and to specific subpopulations. The years of the largest contributions to the collective external dose were 1952, 1953, and 1957. Those years accounted for about 70% of the 84,000 person-Gy received by the U.S. public. Irradiation of the thyroid gland of members of the U.S. public was also a consequence of dispersion of radioiodine in the fallout. Thyroid doses varied by location and by birth year. The population-weighted thyroid doses for a child born in 1951 and for an adult in 1951 were 30 and 5 mGy, respectively. Maps are provided to show the geographic distribution of 137Cs as well as the average thyroid dose received in each county from the Nevada tests. (author)

  19. Further Evaluation of Covariate Analysis using Empirical Bayes Estimates in Population Pharmacokinetics: the Perception of Shrinkage and Likelihood Ratio Test.

    Science.gov (United States)

    Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose

    2017-01-01

    Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayesian estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for the LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches reported previously and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate acting on clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause of the decrease in power or the inflated false positive rate, although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis compared to the LRT. We propose a three-step covariate modeling approach for population PK analysis that utilizes the advantages of EBEs while overcoming their shortcomings, which allows not only markedly reducing the run time for population PK analysis, but also providing more accurate covariate tests.

  20. A novel method for estimating soil precompression stress from uniaxial confined compression tests

    DEFF Research Database (Denmark)

    Lamandé, Mathieu; Schjønning, Per; Labouriau, Rodrigo

    2017-01-01

    The concept of precompression stress is used for estimating soil strength of relevance to field traffic. It represents the maximum stress experienced by the soil. The most recently developed fitting method to estimate precompression stress (Gompertz) is based on the assumption of an S-shaped stress-strain curve. Stress-strain curves were obtained by performing uniaxial, confined compression tests on undisturbed soil cores for three soil types at three soil water potentials. The new method performed better than the Gompertz fitting method in estimating precompression stress. The values of precompression stress obtained from the new method were linearly related to the maximum stress experienced by the soil samples prior to the uniaxial, confined compression test at each soil condition, with a slope close to 1. Precompression stress determined with the new method was not related to soil type or dry bulk density.

  1. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  2. Accuracy of Herdsmen Reporting versus Serologic Testing for Estimating Foot-and-Mouth Disease Prevalence

    Science.gov (United States)

    Handel, Ian G.; Tanya, Vincent N.; Hamman, Saidou M.; Nfon, Charles; Bergman, Ingrid E.; Malirat, Viviana; Sorensen, Karl J.; Bronsvoort, Barend M. de C.

    2014-01-01

    Herdsman-reported disease prevalence is widely used in veterinary epidemiologic studies, especially for diseases with visible external lesions; however, the accuracy of such reports is rarely validated. Thus, we used latent class analysis in a Bayesian framework to compare the sensitivity and specificity of herdsman reporting with virus neutralization testing and use of 3 nonstructural protein ELISAs for estimates of foot-and-mouth disease (FMD) prevalence on the Adamawa plateau of Cameroon in 2000. Herdsman-reported estimates in this FMD-endemic area were comparable to those obtained from serologic testing. To harness this cost-effective resource for monitoring emerging infectious diseases, we suggest that estimates of the sensitivity and specificity of herdsman reporting be made in parallel with serologic surveys of other animal diseases. PMID:25417556

  3. Validity of 20-metre multi stage shuttle run test for estimation of ...

    African Journals Online (AJOL)

    Validity of the 20-metre multi-stage shuttle run test for estimation of maximum oxygen uptake in Indian male university students. P Chatterjee, AK Banerjee, P Debnath, P Bas, B Chatterjee. Abstract: No abstract available. South African Journal for Physical, Health Education, Recreation and Dance, Vol. 12(4) 2006: pp. 461-467.

  4. Estimating Rates of Fault Insertion and Test Effectiveness in Software Systems

    Science.gov (United States)

    Nikora, A.; Munson, J.

    1998-01-01

    In developing a software system, we would like to estimate the total number of faults inserted into a software system, the residual fault content of that system at any given time, and the efficacy of the testing activity in executing the code containing the newly inserted faults.

  5. Development of 1-Mile Walk Tests to Estimate Aerobic Fitness in Children

    Science.gov (United States)

    Sung, Hoyong; Collier, David N.; DuBose, Katrina D.; Kemble, C. David; Mahar, Matthew T.

    2018-01-01

    To examine the reliability and validity of 1-mile walk tests for estimation of aerobic fitness (VO[subscript 2max]) in 10- to 13-year-old children and to cross-validate previously published equations. Participants (n = 61) walked 1-mile on two different days. Self-reported physical activity, demographic variables, and aerobic fitness were used in…

  6. Efficacy of generic allometric equations for estimating biomass: a test in Japanese natural forests.

    Science.gov (United States)

    Ishihara, Masae I; Utsugi, Hajime; Tanouchi, Hiroyuki; Aiba, Masahiro; Kurokawa, Hiroko; Onoda, Yusuke; Nagano, Masahiro; Umehara, Toru; Ando, Makoto; Miyata, Rie; Hiura, Tsutom

    2015-07-01

    Accurate estimation of tree and forest biomass is key to evaluating forest ecosystem functions and the global carbon cycle. Allometric equations that estimate tree biomass from a set of predictors, such as stem diameter and tree height, are commonly used. Most allometric equations are site specific, usually developed from a small number of trees harvested in a small area, and are either species specific or ignore interspecific differences in allometry. Due to lack of site-specific allometries, local equations are often applied to sites for which they were not originally developed (foreign sites), sometimes leading to large errors in biomass estimates. In this study, we developed generic allometric equations for aboveground biomass and component (stem, branch, leaf, and root) biomass using large, compiled data sets of 1203 harvested trees belonging to 102 species (60 deciduous angiosperm, 32 evergreen angiosperm, and 10 evergreen gymnosperm species) from 70 boreal, temperate, and subtropical natural forests in Japan. The best generic equations provided better biomass estimates than did local equations that were applied to foreign sites. The best generic equations included explanatory variables that represent interspecific differences in allometry in addition to stem diameter, reducing error by 4-12% compared to the generic equations that did not include the interspecific difference. Different explanatory variables were selected for different components. For aboveground and stem biomass, the best generic equations had species-specific wood specific gravity as an explanatory variable. For branch, leaf, and root biomass, the best equations had functional types (deciduous angiosperm, evergreen angiosperm, and evergreen gymnosperm) instead of functional traits (wood specific gravity or leaf mass per area), suggesting importance of other traits in addition to these traits, such as canopy and root architecture. Inclusion of tree height in addition to stem diameter improved
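
    A species-level equation of the simplest form used in this literature, ln(M) = a + b*ln(D), can be fitted as below, with the standard log-bias correction applied on back-transformation. The diameters and biomasses are invented; the study's best generic equations add wood specific gravity or functional type as further predictors.

      import numpy as np

      D = np.array([5.0, 8.0, 12.0, 20.0, 35.0, 50.0])       # stem diameter, cm (assumed)
      M = np.array([4.1, 14.0, 40.0, 150.0, 620.0, 1500.0])  # aboveground biomass, kg (assumed)

      b, a = np.polyfit(np.log(D), np.log(M), 1)
      resid = np.log(M) - (a + b * np.log(D))
      cf = np.exp(resid.var(ddof=2) / 2.0)                   # log-bias correction factor

      def predict_biomass(d_cm):
          return cf * np.exp(a + b * np.log(d_cm))

      print(f"b ~ {b:.2f}; predicted biomass at D = 25 cm: {predict_biomass(25.0):.0f} kg")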

  7. An Estimator of Mutual Information and its Application to Independence Testing

    Directory of Open Access Journals (Sweden)

    Joe Suzuki

    2016-03-01

    Full Text Available This paper proposes a novel estimator of mutual information for discrete and continuous variables. The main feature of this estimator is that it is zero for a large sample size n if and only if the two variables are independent. The estimator can be used to construct several histograms, compute estimates of mutual information, and choose the maximum value. We prove that the number of histograms constructed has an upper bound of O(log n) and apply this fact to the search. We compare the performance of the proposed estimator with an estimator of the Hilbert-Schmidt independence criterion (HSIC), though the proposed method is based on the minimum description length (MDL) principle and the HSIC provides a statistical test. The proposed method completes the estimation in O(n log n) time, whereas the HSIC kernel computation requires O(n^3) time. We also present examples in which the HSIC fails to detect independence but the proposed method successfully detects it.
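
    The plug-in histogram estimate of mutual information that underlies such an estimator is shown below; the paper's method additionally applies an MDL penalty and maximizes over the O(log n) candidate binnings, which this sketch omits. Data are synthetic.

      import numpy as np

      def histogram_mi(x, y, bins=10):
          """Plug-in MI (in nats) from a joint histogram of x and y."""
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy /= pxy.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

      rng = np.random.default_rng(1)
      x = rng.normal(size=2000)
      print(histogram_mi(x, 0.8 * x + rng.normal(size=2000)))  # dependent: MI > 0
      print(histogram_mi(x, rng.normal(size=2000)))            # independent: MI ~ 0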

  8. Excess cases of prostate cancer and estimated overdiagnosis associated with PSA testing in East Anglia

    Science.gov (United States)

    Pashayan, N; Powles, J; Brown, C; Duffy, S W

    2006-01-01

    This study aimed to estimate the extent of 'overdiagnosis' of prostate cancer attributable to prostate-specific antigen (PSA) testing in the Cambridge area between 1996 and 2002. Overdiagnosis was defined conceptually as detection of prostate cancer through PSA testing that otherwise would not have been diagnosed within the patient's lifetime. Records of PSA tests in Addenbrookes Hospital were linked to prostate cancer registrations by NHS number. Differences in prostate cancer registration rates between those receiving and not receiving prediagnosis PSA tests were calculated. The proportion of men aged 40 years or over with a prediagnosis PSA test increased from 1.4% to 5.2% from 1996 to 2002. The rate of diagnosis of prostate cancer was 45% higher (rate ratio (RR) = 1.45, 95% confidence interval (CI) 1.02–2.07) in men with a history of prediagnosis PSA testing. Assuming average lead times of 5 to 10 years, 40–64% of the PSA-detected cases were estimated to be overdiagnosed. In East Anglia, from 1996 to 2000, a 1.6% excess of cases was associated with PSA testing (around a quarter of the 5.3% excess incidence observed in East Anglia over that period). Further quantification of the overdiagnosis will result from continued surveillance and from linkage of incidence to testing in other hospitals. PMID:16832417

  9. Conditional estimation of local pooled dispersion parameter in small-sample RNA-Seq data improves differential expression test.

    Science.gov (United States)

    Gim, Jungsoo; Won, Sungho; Park, Taesung

    2016-10-01

    High throughput sequencing technology in transcriptomics studies contributes to the understanding of gene regulation mechanisms and their cellular function, but also increases the need for accurate statistical methods to assess quantitative differences between experiments. Many methods have been developed to account for the specifics of count data: non-normality, a dependence of the variance on the mean, and small sample size. Among these specifics, the small number of samples in typical experiments remains a challenge. Here we present a method for differential analysis of count data, using conditional estimation of local pooled dispersion parameters. A comprehensive evaluation of our proposed method in the context of differential gene expression analysis, using both simulated and real data sets, shows that the proposed method is more powerful than other existing methods while controlling the false discovery rate. By introducing conditional estimation of local pooled dispersion parameters, we overcome the limitation of low power and enable a powerful quantitative analysis focused on differential expression testing with small numbers of samples.

  10. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  11. Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.

    Science.gov (United States)

    Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J

    2009-11-01

    Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
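
    The RBN workflow sketched here regresses the test score on demographics plus estimated IQ in a healthy sample and then benchmarks a new examinee by the standardized residual; all data, coefficients, and the examinee below are simulated for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 500
      age = rng.uniform(20, 80, n)
      sex = rng.integers(0, 2, n).astype(float)
      educ = rng.uniform(8, 20, n)
      iq = rng.normal(100, 15, n)                  # estimated "premorbid" IQ
      score = (60 - 0.2 * age + 1.5 * sex + 0.8 * educ
               + 0.15 * iq + rng.normal(0, 5, n))  # simulated test scores

      X = np.column_stack([np.ones(n), age, sex, educ, iq])
      beta, *_ = np.linalg.lstsq(X, score, rcond=None)
      rmse = np.sqrt(np.mean((score - X @ beta) ** 2))

      # Hypothetical examinee: 65-year-old woman, 12 years of education, IQ 110
      x_new = np.array([1.0, 65.0, 1.0, 12.0, 110.0])
      z = (55.0 - x_new @ beta) / rmse             # observed score of 55 (assumed)
      print(f"expected {x_new @ beta:.1f}, z = {z:.2f}")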

  12. Estimation of the level of anxiety in rats: differences in results of open-field test, elevated plus-maze test, and Vogel's conflict test.

    Science.gov (United States)

    Sudakov, S K; Nazarova, G A; Alekseeva, E V; Bashkatova, V G

    2013-07-01

    We compared individual anxiety assessed by three standard tests, the open-field test, the elevated plus-maze test, and the Vogel conflict drinking test, in the same animals. No significant correlations between the main anxiety parameters were found in these three experimental models. Groups of high-anxiety and low-anxiety rats were formed on the basis of a single parameter by selecting the two extreme groups (10%). It was found that none of the tests alone could be used for reliable estimation of individual anxiety in rats. The individual anxiety level could be determined with a high degree of confidence only in high-anxiety and low-anxiety rats demonstrating behavioral parameters above or below the mean values in all tests used. Therefore, several tests should be used for evaluation of individual anxiety or sensitivity to emotional stress.

  13. Using latent class analysis to estimate the test characteristics of the γ-interferon test, the single intradermal comparative tuberculin test and a multiplex immunoassay under Irish conditions

    DEFF Research Database (Denmark)

    Clegg, Tracy A.; Duignan, Anthony; Whelan, Clare

    2011-01-01

    Considerable effort has been devoted to improving the existing diagnostic tests for bovine tuberculosis (the single intradermal comparative tuberculin test [SICTT] and the γ-interferon assay [γ-IFN]) and to developing new tests. Previously, the diagnostic characteristics (sensitivity, specificity) have been estimated in populations with defined infection status. However, these approaches can be problematic as there may be few herds in Ireland where freedom from infection is guaranteed. We used latent class models to estimate the diagnostic characteristics of existing (SICTT and γ-IFN) and new (multiplex immunoassay [Enferplex-TB]) diagnostic tests under Irish field conditions where true disease status was unknown. The study population consisted of herds recruited in areas with no known TB problems (2197 animals) and herds experiencing a confirmed TB breakdown (2740 animals). A Bayesian model was developed

  14. Lord's Wald Test for Detecting Dif in Multidimensional Irt Models: A Comparison of Two Estimation Approaches

    Science.gov (United States)

    Lee, Soo; Suh, Youngsuk

    2018-01-01

    Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…

  15. Estimation of radionuclide ingestion: Lessons from dose reconstruction for fallout from the Nevada Test Site

    International Nuclear Information System (INIS)

    Breshears, D.D.; Whicker, F.W.; Kirchner, T.B.; Anspaugh, L.R.

    1994-01-01

    The United States conducted atmospheric testing of nuclear devices at the Nevada Test Site from 1951 through 1963. In 1979 the U.S. Department of Energy established the Off-Site Radiation Exposure Review Project (ORERP) to compile a data base related to health effects from nuclear testing and to reconstruct doses to the public residing off of the Nevada Test Site. This project is the most comprehensive dose reconstruction project to date, and, since similar assessments are currently underway at several other locations within and outside the U.S., lessons from ORERP can be valuable. A major component of dose reconstruction is estimation of the dose from radionuclide ingestion. The PATHWAY food-chain model was developed to estimate the amount of radionuclides ingested. For agricultural components of the human diet, PATHWAY predicts radionuclide concentrations and quantities ingested. To improve accuracy and model credibility, four components of model analysis were conducted: estimation of the uncertainty in model predictions; estimation of the sensitivity of model predictions to input parameters; testing of model predictions against independent data (validation); and comparison of PATHWAY predictions with those from other models. These results identified strengths and weaknesses in the model and aided in establishing the confidence associated with model predictions, which is a critical component of risk assessment and dose reconstruction. For fallout from the Nevada Test Site, by far the largest internal doses were received by the thyroid. However, the predicted number of fatal cancers from ingestion dose was generally much smaller than the number predicted from external dose. The number of fatal cancers predicted from ingestion dose was also orders of magnitude below the normal projected cancer rate. Several lessons were learned during the study that are relevant to other dose reconstruction efforts.

  16. Estimation of integrity of cast-iron cask against impact due to free drop test, (1)

    International Nuclear Information System (INIS)

    Itoh, Chihiro

    1988-01-01

    Ductile cast iron is being examined for use in shipping and storage casks from an economic point of view. However, ductile cast iron is generally considered to be a brittle material. Therefore, it is very important to estimate the integrity of a cast iron cask against brittle failure due to impact loads in the 9 m drop test and the 1 m drop test onto a pin. An F.E.M. analysis which takes the nonlinearity of the materials into account, together with an estimation of brittle failure by the method proposed in this report, was carried out. From the analysis, it is clear that the critical flaw depth (the minimum depth to initiate brittle failure) is 21.1 mm and 13.1 mm in the case of the 9 m drop test and the 1 m drop test onto a pin, respectively. Flaws of these depths can be detected by ultrasonic testing. The cask is therefore assured against brittle failure due to impact loads in the 9 m drop test and the 1 m drop test onto a pin. (author)

  17. Combined use of heat and saline tracer to estimate aquifer properties in a forced gradient test

    Science.gov (United States)

    Colombani, N.; Giambastiani, B. M. S.; Mastrocicco, M.

    2015-06-01

    Electrolytic tracers are usually employed for subsurface characterization, but the interpretation of tracer test data collected by low-cost techniques, such as electrical conductivity logging, can be biased by cation exchange reactions. To characterize the aquifer transport properties, a saline and heat forced-gradient test was employed. The field site, located near Ferrara (Northern Italy), is a well-characterized site, which covers an area of 200 m² and is equipped with a grid of 13 monitoring wells. A two-well (injection and pumping) system was employed to perform the forced-gradient test, and a straddle packer was installed in the injection well to avoid in-well artificial mixing. Continuous simultaneous monitoring of hydraulic head, electrical conductivity and temperature within the wells yielded a robust dataset, which was then used to accurately simulate injection conditions, to calibrate a 3D transient flow and transport model, and to obtain aquifer properties at small scale. The transient groundwater flow and solute-heat transport model was built using SEAWAT. The significance of the results was further investigated by comparing them with previously published column experiments and a natural-gradient tracer test performed in the same field. The test procedure shown here provides a fast and low-cost technique to characterize coarse-grained aquifer properties, although some limitations can be highlighted, such as the small value of the dispersion coefficient compared to values obtained by the natural-gradient tracer test, or the fast depletion of the heat signal due to high thermal diffusivity.

  18. Test suite for image-based motion estimation of the brain and tongue

    Science.gov (United States)

    Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.

    2017-03-01

    Noninvasive analysis of motion has important uses as a qualitative marker for organ function and for validating biomechanical computer simulations against experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adapting an "image synthesis" test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head-brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error. Furthermore, when evaluating non-rigid deformation, the results suggest that

  19. Lead coolant test facility systems design, thermal hydraulic analysis and cost estimate

    Energy Technology Data Exchange (ETDEWEB)

    Khericha, Soli, E-mail: slk2@inel.gov [Battelle Energy Alliance, LLC, Idaho National Laboratory, Idaho Falls, ID 83415 (United States); Harvego, Edwin; Svoboda, John; Evans, Robert [Battelle Energy Alliance, LLC, Idaho National Laboratory, Idaho Falls, ID 83415 (United States); Dalling, Ryan [ExxonMobil Gas and Power Marketing, Houston, TX 77069 (United States)

    2012-01-15

    The Idaho National Laboratory prepared preliminary technical and functional requirements (T and FR), a thermal hydraulic design and a cost estimate for a lead coolant test facility. The purpose of this small-scale facility is to simulate lead-cooled fast reactor (LFR) coolant flow in an open-lattice geometry core using seven electrical rods and liquid lead or lead-bismuth eutectic coolant. Based on a review of current world lead and lead-bismuth test facilities and the research needs listed in the Generation IV Roadmap, five broad areas of requirements were identified: (1) develop and demonstrate the feasibility of a submerged heat exchanger; (2) develop and demonstrate open-lattice flow in an electrically heated core; (3) develop and demonstrate chemistry control; (4) demonstrate safe operation; and (5) provide for future testing. This paper discusses the preliminary design of the systems, the thermal hydraulic analysis, and the simplified cost estimate. The facility thermal hydraulic design is based on a maximum simulated core power of 420 kW, using seven electrical heater rods, and an average linear heat generation rate of 300 W/cm. The core inlet temperature for liquid lead or Pb/Bi eutectic is 420 °C. The design includes approximately seventy-five data measurements, such as pressure, temperature, and flow rates. The preliminary estimated cost of construction of the facility is $3.7M (in 2006 $). It is also estimated that the facility will require two years to be constructed and made ready for operation.

  20. Information content of slug tests for estimating hydraulic properties in realistic, high-conductivity aquifer scenarios

    Science.gov (United States)

    Cardiff, Michael; Barrash, Warren; Thoma, Michael; Malama, Bwalya

    2011-06-01

    Summary: A recently developed unified model for partially-penetrating slug tests in unconfined aquifers (Malama et al., in press) provides a semi-analytical solution for the aquifer response at the wellbore in the presence of inertial effects and wellbore skin, and is able to model the full range of responses from overdamped/monotonic to underdamped/oscillatory. While the model provides a unifying framework for realistically analyzing slug tests in aquifers (with the ultimate goal of determining aquifer properties such as hydraulic conductivity K and specific storage Ss), it is currently unclear whether the parameters of this model can be well identified without significant prior information and, thus, what degree of information content can be expected from such slug tests. In this paper, we examine the information content of slug tests in realistic field scenarios with respect to estimating aquifer properties, through analysis of both numerical experiments and field datasets. First, through numerical experiments using Markov Chain Monte Carlo methods for gauging parameter uncertainty and identifiability, we find that: (1) as noted by previous researchers, estimation of aquifer storage parameters using slug test data is highly unreliable and subject to significant uncertainty; (2) joint estimation of aquifer and skin parameters contributes to significant uncertainty in both unless prior knowledge is available; and (3) similarly, without prior information, joint estimation of both aquifer radial and vertical conductivity may be unreliable. These results have significant implications for the types of information that must be collected prior to slug test analysis in order to obtain reliable aquifer parameter estimates. For example, plausible estimates of aquifer anisotropy ratios and bounds on wellbore skin K should be obtained a priori, if possible. Secondly, through analysis of field data - consisting of over 2500 records from partially-penetrating slug tests in a

  1. A Comparison of the Approaches of Generalizability Theory and Item Response Theory in Estimating the Reliability of Test Scores for Testlet-Composed Tests

    Science.gov (United States)

    Lee, Guemin; Park, In-Yong

    2012-01-01

    Previous assessments of the reliability of test scores for testlet-composed tests have indicated that item-based estimation methods overestimate reliability. This study was designed to address issues related to the extent to which item-based estimation methods overestimate the reliability of test scores composed of testlets and to compare several…

  2. Transmissivity and storage coefficient estimates from slug tests, Naval Air Warfare Center, West Trenton, New Jersey

    Science.gov (United States)

    Fiore, Alex R.

    2014-01-01

    Slug tests were conducted on 56 observation wells open to bedrock at the former Naval Air Warfare Center (NAWC) in West Trenton, New Jersey. Aquifer transmissivity (T) and storage coefficient (S) values for most wells were estimated from slug-test data using the Cooper-Bredehoeft-Papadopulos method. Test data from three wells exhibited fast, underdamped water-level responses and were analyzed with the Butler high-K method. The range of T at NAWC was approximately 0.07 to 10,000 square feet per day. At 11 wells, water levels did not change measurably after 20 minutes following slug insertion; transmissivity at these 11 wells was estimated to be less than 0.07 square feet per day. The range of S was approximately 10⁻¹⁰ to 0.01, with a mode of 10⁻¹⁰. Water-level responses for tests at three wells fit poorly to the type curves of both methods, indicating that these methods were not appropriate for adequately estimating T and S from those data.

  3. Testing the Efficacy of Alcohol Labels with Standard Drink Information and National Drinking Guidelines on Consumers' Ability to Estimate Alcohol Consumption.

    Science.gov (United States)

    Hobin, Erin; Vallance, Kate; Zuo, Fei; Stockwell, Tim; Rosella, Laura; Simniceanu, Alice; White, Christine; Hammond, David

    2018-01-01

    Despite the introduction of national drinking guidelines in Canada, there is limited public knowledge of them and low understanding of 'standard drinks (SDs)' which limits the likelihood of guidelines affecting drinking behaviour. This study tests the efficacy of alcohol labels with SD information and Canada's Low-Risk Drinking Guidelines (LRDGs) as compared to %ABV labels on consumers' ability to estimate alcohol intake. It also examines the label size and format that best supports adults' ability to make informed drinking choices. This research consisted of a between-groups experiment (n = 2016) in which participants each viewed one of six labels. Using an online survey, participants viewed an alcohol label and were asked to estimate: (a) the amount in a SD; (b) the number of SDs in an alcohol container and (c) the number of SDs to consume to reach the recommended daily limit in Canada's LRDG. Results indicated that labels with SD and LRDG information facilitated more accurate estimates of alcohol consumption and awareness of safer drinking limits across different beverage types (12.6% to 58.9% increase in accuracy), and labels were strongly supported among the majority (66.2%) of participants. Labels with SD and LRDG information constitute a more efficacious means of supporting accurate estimates of alcohol consumption than %ABV labels, and provide evidence to inform potential changes to alcohol labelling regulations. Further research testing labels in real-world settings is needed. Results indicate that the introduction of enhanced alcohol labels combining standard drink information and national drinking guidelines may be an effective way to improve drinkers' ability to accurately assess alcohol consumption and monitor intake relative to guidelines. Overall support for enhanced labels suggests probable acceptability of introduction at a population level. © The Author 2017. Medical Council on Alcohol and Oxford University Press. All rights reserved.

  4. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.

    Science.gov (United States)

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene

    2016-04-30

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given that no positive test has been obtained prior to the start of that interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.
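
    A minimal forward simulation of this kind of two-level structure can make the hierarchy concrete. The sketch below is illustrative only: the year count, incidence, testing rate, and AIDS-progression probability are assumed values, not the authors' estimates, and the real model infers these rates from diagnosis counts rather than simulating them.

      import numpy as np

      rng = np.random.default_rng(42)
      years = 10
      lam = np.full(years, 1000.0)    # assumed HIV incidence per year (level one)
      p_test = 0.25                   # assumed annual HIV testing rate
      p_aids = 0.10                   # assumed annual probability of progressing to AIDS

      hiv_diag = np.zeros(years, dtype=int)   # AIDS-free HIV diagnoses per year
      aids_diag = np.zeros(years, dtype=int)  # AIDS diagnoses per year

      for t in range(years):
          n = rng.poisson(lam[t])             # level one: latent infections in year t
          for _ in range(n):
              for s in range(t, years):       # level two: yearly competing outcomes
                  if rng.random() < p_aids:   # progresses, diagnosed at the AIDS stage
                      aids_diag[s] += 1
                      break
                  if rng.random() < p_test:   # seeks a test while still AIDS-free
                      hiv_diag[s] += 1
                      break

      print(hiv_diag, aids_diag)  # the counts a fitting procedure would try to reproduce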

  5. Estimation of maximum credible atmospheric radioactivity concentrations and dose rates from nuclear tests

    International Nuclear Information System (INIS)

    Telegadas, K.

    1979-01-01

    A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, initial vertical radioactivity distribution, time after detonation, and rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)

  6. A test and re-estimation of Taylor's empirical capacity-reserve relationship

    Science.gov (United States)

    Long, K.R.

    2009-01-01

    In 1977, Taylor proposed a constant elasticity model relating capacity choice in mines to reserves. A test of this model using a very large (n = 1,195) dataset confirms its validity but obtains significantly different estimated values for the model coefficients. Capacity is somewhat inelastic with respect to reserves, with an elasticity of 0.65 estimated for open-pit plus block-cave underground mines and 0.56 for all other underground mines. These new estimates should be useful for capacity determinations in scoping studies and as a starting point for feasibility studies. The results are robust over a wide range of deposit types, deposit sizes, and time, consistent with physical constraints on mine capacity that are largely independent of technology. © 2009 International Association for Mathematical Geology.
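
    Taylor's constant elasticity model is a power law, capacity = a·reserves^b, so b can be estimated as the slope of an ordinary least-squares line in log-log space. A minimal sketch of that re-estimation step, using made-up mine data rather than the study's n = 1,195 dataset:

      import numpy as np

      reserves = np.array([1e6, 5e6, 2e7, 1e8, 5e8])        # tonnes (illustrative)
      capacity = np.array([9e2, 2.5e3, 7e3, 2.4e4, 6.5e4])  # tonnes/day (illustrative)

      # fit log C = b log R + log a; b is the capacity-reserve elasticity
      b, log_a = np.polyfit(np.log(reserves), np.log(capacity), 1)
      print(f"elasticity b = {b:.2f}, scale a = {np.exp(log_a):.3g}")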

  7. Development of a Reference Data Set (RDS) for dental age estimation (DAE) and testing of this with a separate Validation Set (VS) in a southern Chinese population.

    Science.gov (United States)

    Jayaraman, Jayakumar; Wong, Hai Ming; King, Nigel M; Roberts, Graham J

    2016-10-01

    Many countries have recently experienced a rapid increase in the demand for forensic age estimates of unaccompanied minors. Hong Kong is a major tourist and business center where there has been an increase in the number of people intercepted with false travel documents. An accurate estimation of age is only possible when the reference dataset for age estimation has been derived from the corresponding ethnic population. Thus, the aim of this study was to develop and validate a Reference Data Set (RDS) for dental age estimation for southern Chinese. A total of 2306 subjects were selected from the patient archives of a large dental hospital and the chronological age for each subject was recorded. This age was assigned to each specific stage of dental development for each tooth to create a RDS. To validate this RDS, a further 484 subjects were randomly chosen from the patient archives and their dental age was assessed based on the scores from the RDS. Dental age was estimated using a meta-analysis command corresponding to a random-effects statistical model. Chronological age (CA) and Dental Age (DA) were compared using the paired t-test. The overall difference between the chronological and dental age (CA-DA) was 0.05 years (2.6 weeks) for males and 0.03 years (1.6 weeks) for females. The paired t-test indicated that there was no statistically significant difference between the chronological and dental age (p > 0.05). The validated southern Chinese reference dataset based on dental maturation accurately estimated the chronological age. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
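
    The validation step reduces to a paired comparison of chronological age (CA) and dental age (DA). A minimal sketch of that test (the ages below are hypothetical, not taken from the study's 484 validation subjects):

      import numpy as np
      from scipy import stats

      ca = np.array([8.2, 9.5, 11.1, 12.7, 14.3])  # chronological ages, years (hypothetical)
      da = np.array([8.3, 9.4, 11.2, 12.5, 14.4])  # dental ages from the RDS (hypothetical)

      t_stat, p_value = stats.ttest_rel(ca, da)    # paired t-test on CA - DA
      print(f"mean CA-DA = {np.mean(ca - da):.2f} y, p = {p_value:.3f}")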

  8. ELF-test less accurately identifies liver cirrhosis diagnosed by liver stiffness measurement in non-Asian women with chronic hepatitis B

    NARCIS (Netherlands)

    Harkisoen, S.; Boland, G. J.; van den Hoek, J. A. R.; van Erpecum, K. J.; Hoepelman, A. I. M.; Arends, J. E.

    2014-01-01

    The enhanced liver fibrosis test (ELF-test) has been validated for several hepatic diseases. However, its performance in chronic hepatitis B virus (CHB) infected patients is uncertain. This study investigates the diagnostic value of the ELF test for cirrhosis identified by liver stiffness

  9. Estimation of failure criteria in multivariate sensory shelf life testing using survival analysis.

    Science.gov (United States)

    Giménez, Ana; Gagliardi, Andrés; Ares, Gastón

    2017-09-01

    For most food products, shelf life is determined by changes in their sensory characteristics. A predetermined increase or decrease in the intensity of a sensory characteristic has frequently been used to signal that a product has reached the end of its shelf life. Considering that all attributes change simultaneously, the concept of multivariate shelf life allows a single measurement of deterioration that takes into account all these sensory changes at a certain storage time. The aim of the present work was to apply survival analysis to estimate failure criteria in multivariate sensory shelf life testing using two case studies, hamburger buns and orange juice, by modelling the relationship between consumers' rejection of the product and the deterioration index estimated using PCA. In both studies, a panel of 13 trained assessors evaluated the samples using descriptive analysis whereas a panel of 100 consumers answered a "yes" or "no" question regarding intention to buy or consume the product. PC1 explained the great majority of the variance, indicating that all sensory characteristics evolved similarly with storage time. Thus, PC1 could be regarded as an index of sensory deterioration, and a single failure criterion could be estimated through survival analysis for 25% and 50% consumer rejection. The proposed approach based on multivariate shelf life testing may increase the accuracy of shelf life estimations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD

    Directory of Open Access Journals (Sweden)

    Yuri Gulbin

    2011-05-01

    Full Text Available The paper considers the problem of validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of expected grain size distribution testing on the basis of intersection size histogram data. In order to review these questions, computer modeling was used to compare size distributions obtained stereologically with those of three-dimensional model aggregates of grains with a specified shape and random size. Results of simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in the estimating and testing procedures enable grain size distributions to be unfolded more efficiently.
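
    The unfolding problem can be posed as a triangular linear system built from the classical probabilities for random plane sections of spheres. The sketch below is a generic matrix formulation under that assumption (Saltykov's original back-substitution scheme uses fixed tabulated coefficients), with a synthetic round trip standing in for real intersection histograms:

      import numpy as np

      def section_kernel(edges):
          """K[i, j] = P(section diameter falls in class i | sphere diameter D_j),
          for spheres cut by uniformly random planes (classical stereology result)."""
          n = len(edges) - 1
          K = np.zeros((n, n))
          for j in range(n):
              D = edges[j + 1]                 # take class upper edge as sphere size
              for i in range(j + 1):
                  d1, d2 = min(edges[i], D), min(edges[i + 1], D)
                  K[i, j] = np.sqrt(1 - (d1 / D) ** 2) - np.sqrt(1 - (d2 / D) ** 2)
          return K

      edges = np.linspace(0.0, 1.0, 11)        # 10 size classes (arbitrary units)
      D = edges[1:]
      K = section_kernel(edges)

      nv_true = np.exp(-((D - 0.5) ** 2) / 0.02)   # synthetic 3-D size distribution
      na = (K * D) @ nv_true                   # expected 2-D intersection counts
      nv_est = np.linalg.solve(K * D, na)      # unfold (K is triangular: back-substitution)
      print(np.allclose(nv_est, nv_true))      # ill-conditioning shows up once na is noisy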

  11. Testing Black Market vs. Official PPP: A Pooled Mean Group Estimation Approach

    OpenAIRE

    Goswami, Gour Gobinda; Hossain, Mohammad Zariab

    2013-01-01

    Testing purchasing power parity (PPP) using black market exchange rate data has gained popularity in recent times. It is claimed that black market exchange rate data support PPP more often than official exchange rate data. In this study, to assess both the long-run stability of the exchange rate and the short-run dynamics, we employ Pooled Mean Group (PMG) Estimation developed by Pesaran et al. (1999) on eight groups of countries based on different criteria. Using the famous Reinhart and ...

  12. WTA estimates using the method of paired comparison: tests of robustness

    Science.gov (United States)

    Patricia A. Champ; John B. Loomis

    1998-01-01

    The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...

  13. CASPER: Embedding Power Estimation and Hardware-Controlled Power Management in a Cycle-Accurate Micro-Architecture Simulation Platform for Many-Core Multi-Threading Heterogeneous Processors

    Directory of Open Access Journals (Sweden)

    Arun Ravindran

    2012-02-01

    Full Text Available Despite the promising performance improvement observed in emerging many-core architectures in high performance processors, high power consumption prohibitively affects their use and marketability in the low-energy sectors, such as embedded processors, network processors and application specific instruction processors (ASIPs. While most chip architects design power-efficient processors by finding an optimal power-performance balance in their design, some use sophisticated on-chip autonomous power management units, which dynamically reduce the voltage or frequencies of idle cores and hence extend battery life and reduce operating costs. For large scale designs of many-core processors, a holistic approach integrating both these techniques at different levels of abstraction can potentially achieve maximal power savings. In this paper we present CASPER, a robust instruction trace driven cycle-accurate many-core multi-threading micro-architecture simulation platform where we have incorporated power estimation models of a wide variety of tunable many-core micro-architectural design parameters, thus enabling processor architects to explore a sufficiently large design space and achieve power-efficient designs. Additionally CASPER is designed to accommodate cycle-accurate models of hardware controlled power management units, enabling architects to experiment with and evaluate different autonomous power-saving mechanisms to study the run-time power-performance trade-offs in embedded many-core processors. We have implemented two such techniques in CASPER–Chipwide Dynamic Voltage and Frequency Scaling, and Performance Aware Core-Specific Frequency Scaling, which show average power savings of 35.9% and 26.2% on a baseline 4-core SPARC based architecture respectively. This power saving data accounts for the power consumption of the power management units themselves. The CASPER simulation platform also provides users with complete support of SPARCV9

  14. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

    Full Text Available ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for the other, starting from an initial sample size of 10 plants and adding five plants at a time. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-samplings with replacement. The sample size for each correlation coefficient was determined when the amplitude of the 95% confidence interval was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation. Accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with the size and number of fruits per plant are estimated with less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato, with a 95% confidence interval amplitude of 0.4, it is necessary to sample 275 plants in a 250m² greenhouse, and 200 plants in a 200m² greenhouse.
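
    The stopping rule described (increase n until the bootstrap 95% confidence interval of r is no wider than 0.4) is straightforward to sketch. The data below are synthetic stand-ins for the uniformity-trial measurements, with a deliberately weak linear relation:

      import numpy as np

      rng = np.random.default_rng(1)

      def ci_width(x, y, n, B=3000):
          """Width of the bootstrap 95% CI of Pearson r for resamples of size n."""
          idx = rng.integers(0, len(x), size=(B, n))
          xs, ys = x[idx], y[idx]
          xm = xs.mean(axis=1, keepdims=True)
          ym = ys.mean(axis=1, keepdims=True)
          num = ((xs - xm) * (ys - ym)).sum(axis=1)
          den = np.sqrt(((xs - xm) ** 2).sum(axis=1) * ((ys - ym) ** 2).sum(axis=1))
          lo, hi = np.percentile(num / den, [2.5, 97.5])
          return hi - lo

      # synthetic plant data with a weak linear relation (population r around 0.3)
      x = rng.normal(size=500)
      y = 0.3 * x + rng.normal(size=500)

      for n in range(10, 500, 5):              # planned sample sizes, steps of five plants
          if ci_width(x, y, n) <= 0.4:
              print("required sample size:", n)
              break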

  15. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce the cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from previous literature. We have devised a simple nomogram that yields a statistically valid sample size for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and 95% confidence level using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can be easily used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with required precision and 95% confidence level. Sample sizes at the 90% and 99% confidence levels, respectively, can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75. A nomogram instantly provides the required number of subjects by just moving the ruler and can be repeatedly used without redoing the calculations. It can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing set-ups and applies only when both the diagnostic test and the gold standard results are dichotomous.
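
    The sample size behind such a nomogram can also be computed directly. The sketch below assumes the standard proportion-based expressions (anticipated sensitivity or specificity, absolute precision, and prevalence scaling the effective denominator); the example numbers are arbitrary:

      import numpy as np
      from scipy.stats import norm

      def n_sensitivity(se, precision, prevalence, conf=0.95):
          z = norm.ppf(1 - (1 - conf) / 2)     # 1.96 at the 95% level
          return int(np.ceil(z ** 2 * se * (1 - se) / (precision ** 2 * prevalence)))

      def n_specificity(sp, precision, prevalence, conf=0.95):
          z = norm.ppf(1 - (1 - conf) / 2)
          return int(np.ceil(z ** 2 * sp * (1 - sp) / (precision ** 2 * (1 - prevalence))))

      # anticipated Se = 90%, absolute precision 5%, disease prevalence 20%
      print(n_sensitivity(0.90, 0.05, 0.20))
      # the 0.70 and 1.75 multipliers quoted above are approximately
      # (1.645/1.96)**2 and (2.576/1.96)**2, i.e. the 90% and 99% z-ratios squared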

  16. The Selvester QRS Score is more accurate than Q waves and fragmented QRS complexes using the Mason-Likar configuration in estimating infarct volume in patients with ischemic cardiomyopathy.

    Science.gov (United States)

    Carey, Mary G; Luisi, Andrew J; Baldwa, Sunil; Al-Zaiti, Salah; Veneziano, Marc J; deKemp, Robert A; Canty, John M; Fallavollita, James A

    2010-01-01

    Infarct volume independently predicts cardiovascular events. Fragmented QRS complexes (fQRS) may complement Q waves for identifying infarction; however, their utility in advanced coronary disease is unknown. We tested whether fQRS could improve the electrocardiographic prediction of infarct volume by positron emission tomography in 138 patients with ischemic cardiomyopathy (ejection fraction, 0.27 ± 0.09). Indices of infarction (pathologic Q waves, fQRS, and Selvester QRS Score) were analyzed by blinded observers. In patients with QRS duration less than 120 milliseconds, the number of leads with pathologic Q waves (mean, 1.6 ± 1.7) correlated weakly with infarct volume (r = 0.30). fQRS did not improve the Q-wave prediction of infarct volume, but the Selvester Score was more accurate. Published by Elsevier Inc.

  17. Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors.

    Directory of Open Access Journals (Sweden)

    Umair Khalil

    Full Text Available Exponential Smooth Transition Autoregressive (ESTAR) models can capture non-linear adjustment of deviations from equilibrium conditions, which may explain the economic behavior of many variables that appear non-stationary from a linear viewpoint. Many researchers employ the Kapetanios test, which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained critical values for both the finite-variance and infinite-variance cases; however, Cook's test statistics are oversized. Researchers have found that using conventional tests is dangerous, the best performer among these being an HCCME. The oversizing of LM tests can be reduced by employing fixed-design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper, the size of the Kapetanios test statistic employing heteroscedasticity-consistent covariance matrices is derived, and results are reported for various sample sizes in which the size distortion is reduced. The properties of ESTAR model estimates are also investigated when the errors are assumed non-normal. We compare results obtained by nonlinear least squares fitting with those of quantile regression fitting in the presence of outliers, with the error distribution taken to be a t-distribution, for various sample sizes.
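
    For reference, the Kapetanios-type statistic is the t-ratio on δ in the auxiliary regression Δy_t = δ·y_{t-1}³ + e_t. A minimal sketch follows; the critical value quoted in the comment is the approximate 5% value for demeaned data from the original Kapetanios-Shin-Snell tables and should be treated as an assumption to verify there:

      import numpy as np

      def kss_stat(y, demean=True):
          """t-statistic for delta in  dy_t = delta * y_{t-1}**3 + e_t."""
          if demean:
              y = y - y.mean()
          dy, x = np.diff(y), y[:-1] ** 3
          delta = (x @ dy) / (x @ x)
          resid = dy - delta * x
          se = np.sqrt(resid @ resid / (len(dy) - 1) / (x @ x))
          return delta / se

      rng = np.random.default_rng(0)
      y = np.cumsum(rng.normal(size=500))      # data generated under the unit-root null
      stat = kss_stat(y)
      print(stat, stat < -2.93)                # reject the unit root if below ~ -2.93 (5%)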

  18. A statistical characterization of the finger tapping test: modeling, estimation, and applications.

    Science.gov (United States)

    Austin, Daniel; McNames, James; Klein, Krystal; Jimison, Holly; Pavel, Misha

    2015-03-01

    Sensory-motor performance is indicative of both cognitive and physical function. The Halstead-Reitan finger tapping test is a measure of sensory-motor speed commonly used to assess function as part of a neuropsychological evaluation. Despite the widespread use of this test, the underlying motor and cognitive processes driving tapping behavior during the test are not well characterized or understood. This lack of understanding may make clinical inferences from test results about health or disease state less accurate because important aspects of the task such as variability or fatigue are unmeasured. To overcome these limitations, we enhanced the tapper with a sensor that enables us to more fully characterize all the aspects of tapping. This modification enabled us to decompose the tapping performance into six component phases and represent each phase with a set of parameters having clear functional interpretation. This results in a set of 29 total parameters for each trial, including change in tapping over time, and trial-to-trial and tap-to-tap variability. These parameters can be used to more precisely link different aspects of cognition or motor function to tapping behavior. We demonstrate the benefits of this new instrument with a simple hypothesis-driven trial comparing single and dual-task tapping.

  19. Estimation of maximal oxygen uptake without exercise testing in Korean healthy adult workers.

    Science.gov (United States)

    Jang, Tae-Won; Park, Shin-Goo; Kim, Hyoung-Ryoul; Kim, Jung-Man; Hong, Young-Seoub; Kim, Byoung-Gwon

    2012-08-01

    Maximal oxygen uptake is generally accepted as the most valid and reliable index of cardiorespiratory fitness and functional aerobic capacity. The exercise test for measuring maximal oxygen uptake is unsuitable for screening tests in public health examinations, because of the potential risks of exercise exertion and time demands. We designed this study to determine whether work-related physical activity is a potential predictor of maximal oxygen uptake, and to develop a maximal oxygen uptake equation using a non-exercise regression model for the cardiorespiratory fitness test in Korean adult workers. Study subjects were adult workers of small-sized companies in Korea. Subjects with a history of disease such as hypertension, diabetes, asthma and angina were excluded. In total, 217 adult subjects (113 men aged 21-55 years and 104 women aged 20-64 years) were included. A self-report questionnaire survey was conducted on the study subjects, and the maximal oxygen uptake of each subject was measured with the exercise test. Statistical analysis was carried out to develop an equation for estimating maximal oxygen uptake. The predictors for estimating maximal oxygen uptake included age, gender, body mass index, smoking, leisure-time physical activity and the factors representing work-related physical activity. Work-related physical activity was identified as a predictor of maximal oxygen uptake. Moreover, the equation showed high validity according to the statistical analysis. The equation for estimating maximal oxygen uptake developed in the present study could be used as a screening test for assessing cardiorespiratory fitness in Korean adult workers.

  20. Tracer test method and process data reconciliation based on VDI 2048. Comparison of two methods for highly accurate determination of feedwater massflow at NPP Beznau

    International Nuclear Information System (INIS)

    Hungerbuehler, T.; Langenstein, M.

    2007-01-01

    The feedwater mass flow is the key measured variable used to determine the thermal reactor output in a nuclear power plant. Usually this parameter is recorded via venturi nozzles or orifice plates. The problem with both principles of measurement, however, is that an accuracy better than 1% cannot be reached. In order to make more accurate statements about the feedwater amounts recirculated in the water-steam cycle, tracer measurements that offer an accuracy of up to 0.2% are used. In the NPP Beznau both methods have been used in parallel to determine the feedwater flow rates in 2004 (unit 1) and 2005 (unit 2). Comparison of the results shows that a high level of agreement is obtained between the results of the reconciliation and the results of the tracer measurements. As a result of the findings of this comparison, a high level of acceptance of process data reconciliation based on VDI 2048 was achieved. (orig.)
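
    Process data reconciliation in the VDI 2048 sense is constrained weighted least squares: measured values m with covariance S are corrected so that the balance equations A·x = 0 hold exactly, via x = m - S·Aᵀ·(A·S·Aᵀ)⁻¹·A·m. A minimal sketch with a single, purely illustrative mass balance (feedwater flow equals steam flow):

      import numpy as np

      def reconcile(m, S, A):
          """Constrained WLS correction: minimize (x-m)' S^-1 (x-m) s.t. A x = 0."""
          lam = np.linalg.solve(A @ S @ A.T, A @ m)
          return m - S @ A.T @ lam

      m = np.array([100.0, 98.5])              # measured feedwater and steam flow, kg/s
      S = np.diag([1.0 ** 2, 0.8 ** 2])        # measurement variances (illustrative)
      A = np.array([[1.0, -1.0]])              # balance constraint: feedwater - steam = 0

      print(reconcile(m, S, A))                # reconciled flows, equal by construction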

  1. Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

    Science.gov (United States)

    McCrink, Matthew Henry

    This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight-tests. Therefore, detailed propulsion and navigation system analyses are presented to validate the flight testing methodology. Propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method. Additional corrections are added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation, and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial quality data for assessing overall system performance. Using this tool, the uncertainty in vehicle state estimation based on a range of sensors, and vehicle operational environments is

  2. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    Science.gov (United States)

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
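
    Approach (ii), for example, amounts to pooling the differences between the laboratory's proficiency results and the participant consensus means. A minimal sketch with illustrative blood-alcohol values (the RMS-of-differences form is one common convention, not necessarily the article's exact recipe):

      import numpy as np

      lab = np.array([0.081, 0.079, 0.083, 0.080])        # this lab's PT results, g/dL
      consensus = np.array([0.080, 0.080, 0.082, 0.081])  # participant means per round

      d = lab - consensus
      u = np.sqrt(np.mean(d ** 2))       # RMS difference as the uncertainty component
      print(f"u = {u:.4f} g/dL, expanded U (k=2) = {2 * u:.4f} g/dL")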

  3. A Low-Cost Automated Test Column to Estimate Soil Hydraulic Characteristics in Unsaturated Porous Media

    Directory of Open Access Journals (Sweden)

    J. Salas-García

    2017-01-01

    Full Text Available The estimation of soil hydraulic properties in the vadose zone has some issues, such as accuracy, acquisition time, and cost. In this study, an inexpensive automated test column (ATC) was developed to characterize water flow in a homogeneous unsaturated porous medium by the simultaneous estimation of three hydraulic state variables: water content, matric potential, and water flow rates. The ATC includes five electrical resistance probes, two minitensiometers, and a drop counter, which were tested with infiltration tests using the Hydrus-1D model. The results show that the calibrations of the electrical resistance probes reasonably match those of similar studies, and the maximum calibration error of the tensiometers was 4.6% with respect to the full range. Data measured by the drop counter installed in the ATC exhibited high consistency with the electrical resistance probes, which provides an independent verification of the model and allows an evaluation of the water mass balance. The study results show good performance of the model against the infiltration tests, which suggests that the methodology developed in this study is robust. This system could be extended for use in low-budget, large-scale field experiments, where soil moisture may be correlated with resistivity changes.

  4. Comparison between SAR Soil Moisture Estimates and Hydrological Model Simulations over the Scrivia Test Site

    Directory of Open Access Journals (Sweden)

    Alberto Pistocchi

    2013-10-01

    Full Text Available In this paper, the results of a comparison between the soil moisture content (SMC) estimated from C-band SAR, the SMC simulated by a hydrological model, and the SMC measured on ground are presented. The study was carried out in an agricultural test site located in North-west Italy, in the Scrivia river basin. The hydrological model used for the simulations consists of a one-layer soil water balance model, which was found to be able to partially reproduce the soil moisture variability, retaining at the same time simplicity and effectiveness in describing the topsoil. SMC estimates were derived from the application of a retrieval algorithm, based on an Artificial Neural Network approach, to a time series of ENVISAT/ASAR images acquired over the Scrivia test site. The core of the algorithm was represented by a set of ANNs able to deal with the different SAR configurations in terms of polarizations and available ancillary data. In the case of crop-covered soils, the effect of vegetation was accounted for using NDVI information or, if available, the cross-polarized channel. The algorithm results showed some ability in retrieving SMC, with RMSE generally <0.04 m³/m³ and very low bias (i.e., <0.01 m³/m³), except for the case of VV polarized SAR images: in this case, the obtained RMSE was somewhat higher than 0.04 m³/m³ (≤0.058 m³/m³). The algorithm was implemented within the framework of an ESA project concerning the development of an operative algorithm for the SMC retrieval from Sentinel-1 data. The algorithm should take into account the GMES requirements of SMC accuracy (≤5% in volume), spatial resolution (≤1 km) and timeliness (3 h from observation). The SMC estimated by the SAR algorithm, the SMC estimated by the hydrological model, and the SMC measured on ground were found to be in good agreement. The hydrological model simulations were performed at two soil depths: 30 and 5 cm and showed that the 30 cm simulations indicated, as expected, SMC

  5. [A study of biochemical method for urine test based on color difference estimation].

    Science.gov (United States)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Zhou, Fengkun

    2008-02-01

    The biochemical analysis of urine is an important inspection and diagnosis method in hospitals. Conventional urine analysis covers mainly colorimetric visual appraisal and automated detection; the colorimetric visual technique has been largely superseded, and automated detection is the method adopted in hospitals. However, the price of a urine biochemical analyzer on the market is around twenty thousand RMB yuan (¥20,000), which puts it out of reach of ordinary families. A computer vision system is not subject to a person's physiological and psychological influences, so its appraisal standard is objective and stable. Therefore, based on color theory, we established a computer vision system that can carry out the collection, management, display, and appraisal of the color difference between a standard threshold color and the color of urine test paper after reaction with the urine sample, so that the severity of an illness can be judged accurately. In this paper, we introduce this new urine test biochemical analysis method, which can be popularized for home use. Experimental results show that the method is easy to use and cost-effective. It can realize monitoring over a whole course of illness and can find extensive applications.
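
    The core computation in such a system is a color difference between the reacted test pad and a standard threshold color, typically in CIELAB space. A minimal sketch using the CIE76 metric; the Lab values are hypothetical, and the camera-RGB-to-Lab calibration step is not shown:

      import numpy as np

      def delta_e76(lab1, lab2):
          """CIE76 color difference: Euclidean distance in CIELAB space."""
          return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

      threshold = (62.0, 15.0, 30.0)   # standard threshold color (hypothetical L*, a*, b*)
      pad = (60.5, 17.2, 28.9)         # measured color of the reacted pad (hypothetical)

      print(delta_e76(threshold, pad)) # smaller distance = closer to this threshold level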

  6. Estimation of In Situ Stresses with Hydro-Fracturing Tests and a Statistical Method

    Science.gov (United States)

    Lee, Hikweon; Ong, See Hong

    2018-03-01

    At great depths, where borehole-based field stress measurements such as hydraulic fracturing are challenging due to difficult downhole conditions or prohibitive costs, in situ stresses can be indirectly estimated using wellbore failures such as borehole breakouts and/or drilling-induced tensile failures detected by an image log. As part of such efforts, a statistical method has been developed in which borehole breakouts detected on an image log are used for this purpose (Song et al. in Proceedings on the 7th international symposium on in situ rock stress, 2016; Song and Chang in J Geophys Res Solid Earth 122:4033-4052, 2017). The method employs a grid-searching algorithm in which the least and maximum horizontal principal stresses (Sh and SH) are varied, and the corresponding simulated depth-related breakout width distribution as a function of the breakout angle (θB = 90° - half of the breakout width) is compared to that observed along the borehole to determine a set of Sh and SH having the lowest misfit between them. An important advantage of the method is that Sh and SH can be estimated simultaneously in vertical wells. To validate the statistical approach, the method is applied to a vertical hole where a set of field hydraulic fracturing tests have been carried out. The stress estimations using the proposed method were found to be in good agreement with the results interpreted from the hydraulic fracturing test measurements.
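
    A stripped-down version of such a grid search can be written against a simplified Kirsch-type forward model for a vertical wellbore. Everything numeric below (wellbore pressure, rock strength, observed angle, search ranges) is an illustrative assumption, and the forward model is a textbook simplification rather than the authors' formulation:

      import numpy as np

      def theta_b(SH, Sh, Pw, ucs):
          """Breakout angle (deg) from the Kirsch hoop stress at the borehole wall:
          s_tt(t) = SH + Sh - 2(SH - Sh)cos(2t) - Pw, t measured from the SH azimuth;
          breakout where s_tt >= ucs, and theta_B = 90 deg - half the breakout width."""
          k = (SH + Sh - Pw - ucs) / (2.0 * (SH - Sh))
          if k >= 1.0:
              return 0.0                       # wall fails all around
          if k <= -1.0:
              return 90.0                      # no breakout (zero width)
          return float(np.degrees(0.5 * np.arccos(k)))

      obs = 55.0                               # observed mean theta_B, deg (illustrative)
      Pw, ucs = 30.0, 120.0                    # assumed pressure and strength, MPa

      best = min(((theta_b(SH, Sh, Pw, ucs) - obs) ** 2, Sh, SH)
                 for Sh in np.arange(35.0, 60.0, 0.5)
                 for SH in np.arange(Sh + 1.0, 90.0, 0.5))
      print("best-fit Sh, SH:", best[1], best[2])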

  7. Sex Differences in Fluid Reasoning: Manifest and Latent Estimates from the Cognitive Abilities Test

    Directory of Open Access Journals (Sweden)

    Joni M. Lakin

    2014-06-01

    Full Text Available The size and nature of sex differences in cognitive ability continue to be a source of controversy. Conflicting findings result from the selection of measures, samples, and methods used to estimate sex differences. Existing sex-differences work on the Cognitive Abilities Test (CogAT) has analyzed manifest variables, leaving open questions about sex differences in latent narrow cognitive abilities and the underlying broad ability of fluid reasoning (Gf). This study attempted to address these questions. A confirmatory bifactor model was used to estimate Gf and three residual narrow ability factors (verbal, quantitative, and figural). We found that latent mean differences were larger than manifest estimates for all three narrow abilities. However, mean differences in Gf were trivial, consistent with previous research. In estimating group variances, the Gf factor showed substantially greater male variability (around 20% greater). The narrow abilities varied: verbal reasoning showed small variability differences while quantitative and figural showed substantial differences in variance (up to 60% greater). These results add precision and nuance to the study of the variability and masking hypothesis.

  8. A new technique for testing distribution of knowledge and to estimate sampling sufficiency in ethnobiology studies.

    Science.gov (United States)

    Araújo, Thiago Antonio Sousa; Almeida, Alyson Luiz Santos; Melo, Joabe Gomes; Medeiros, Maria Franco Trindade; Ramos, Marcelo Alves; Silva, Rafael Ricardo Vasconcelos; Almeida, Cecília Fátima Castelo Branco Rangel; Albuquerque, Ulysses Paulino

    2012-03-15

    We propose a new quantitative measure that enables the researcher to make decisions and test hypotheses about the distribution of knowledge in a community and to estimate the richness and sharing of information among informants. In our study, this measure has two levels of analysis: intracultural and intrafamilial. Using data collected in northeastern Brazil, we evaluated how these new estimators of richness and sharing behave for different categories of use. We observed trends in the distribution of the characteristics of informants. We were also able to evaluate how outliers interfere with these analyses and how other analyses may be conducted using these indices, such as determining the distance between the knowledge of a community and that of experts, as well as showing the importance of these individuals' knowledge of biological resources to the community. One of the primary applications of these indices is to supply the researcher with an objective tool to evaluate the scope and behavior of the collected data.

  9. Dual rapid lateral flow immunoassay fingerstick whole-blood testing for syphilis and HIV infections is acceptable and accurate, Port-au-Prince, Haiti.

    Science.gov (United States)

    Bristow, Claire C; Severe, Linda; Pape, Jean William; Javanbakht, Marjan; Lee, Sung-Jae; Comulada, Warren Scott; Klausner, Jeffrey D

    2016-06-18

    Dual rapid tests for HIV and syphilis infections allow for detection of HIV infection and syphilis at the point-of-care. Those tests have been evaluated in laboratory settings and show excellent performance but have not been evaluated in the field. We evaluated the field performance of the SD BIOLINE HIV/Syphilis Duo test in Port-au-Prince, Haiti using whole blood fingerprick specimens. GHESKIO (Haitian Study Group for Kaposi's Sarcoma and Opportunistic Infections) clinic attendees 18 years of age or older were invited to participate. Venipuncture blood specimens were used for reference testing with standard commercially available tests for HIV and syphilis in Haiti. The sensitivity and specificity of the Duo test compared to the reference standard were calculated. The exact binomial method was used to determine 95 % confidence intervals (CI). Of 298 study participants, 237 (79.5 %) were female, of which 49 (20.7 %) were pregnant. For the HIV test component, the sensitivity and specificity were 99.2 % (95 % CI: 95.8 %, 100 %) and 97.0 % (95 % CI: 93.2 %, 99.0 %), respectively; and for the syphilis component were 96.5 % (95 % CI: 91.2 %, 99.0 %) and 90.8 % (95 % CI: 85.7 %, 94.6 %), respectively. In pregnant women, the sensitivity and specificity of the HIV test component were 93.3 % (95 % CI: 68.0 %, 99.8 %) and 94.1 % (95 % CI: 80.3 %, 99.3 %), respectively; and for the syphilis component were 100 % (95 % CI: 81.5 %, 100 %) and 96.8 % (95 % CI: 83.3 %, 99.9 %), respectively. The Standard Diagnostics BIOLINE HIV/Syphilis Duo dual test performed well in a field setting in Haiti and should be considered for wider use.
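
    The exact binomial (Clopper-Pearson) intervals quoted can be reproduced from the beta distribution. The counts below are hypothetical values chosen only to illustrate the calculation, not the study's actual 2x2 tables:

      from scipy.stats import beta

      def clopper_pearson(k, n, conf=0.95):
          """Exact binomial CI for k successes out of n trials."""
          a = (1 - conf) / 2
          lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
          hi = beta.ppf(1 - a, k + 1, n - k) if k < n else 1.0
          return lo, hi

      k, n = 118, 119                  # hypothetical true positives / reference positives
      print(f"sensitivity = {k / n:.3f}, 95% CI = {clopper_pearson(k, n)}")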

  10. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    The Poisson distribution is a discrete distribution for count data with a single parameter that defines both the mean and the variance. Poisson regression therefore assumes that the mean and variance are equal (equidispersion). Nonetheless, some count data do not satisfy this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion causes underestimated standard errors and, in turn, incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution. When over-dispersion is present, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a single global model for all locations. On the other hand, each location has different geographic, social, cultural, and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function for each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimates of the GWBPIGR model are obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing of the GWBPIGR model is carried out by the Maximum Likelihood Ratio Test (MLRT) method.

  11. Induction of the acrosome reaction test to in vitro estimate embryo production in Nelore cattle

    Directory of Open Access Journals (Sweden)

    M.Z. Costa

    2010-08-01

    Full Text Available The effectiveness of the induction of the acrosome reaction (AR) test as a parameter to estimate in vitro embryo production (IVP) in the Nelore breed, and the AR pattern by the Trypan Blue/Giemsa (TB) stain, were evaluated. Frozen semen samples from ten Nelore bulls were submitted to AR induction and were also evaluated for cleavage and blastocyst rates. The treatments utilized for AR induction were: control (TALP medium), TH (TALP medium + 10μg heparin), TL (TALP medium + 100μg lysophosphatidylcholine) and THL (TALP medium + 10μg heparin + 100μg lysophosphatidylcholine). Sperm acrosomal status and viability were evaluated by TB staining at 0 and after 4h of incubation at 38°C. The results obtained for AR presented a significant difference (P<0.05) in the percentage of acrosome-reacted live sperm after 4h of incubation in the treatments that received heparin. The cleavage and blastocyst rates were 60% and 38%, respectively, and a significant difference was observed among bulls (P<0.05). A satisfactory model was found to estimate the cleavage and blastocyst rates from the AR induction test. Therefore, it can be concluded that the AR induction test is a valuable tool to predict IVP in the Nelore breed.

  12. Heritability estimates for Mycobacterium avium subspecies paratuberculosis status of German Holstein cows tested by fecal culture.

    Science.gov (United States)

    Küpper, J; Brandt, H; Donat, K; Erhardt, G

    2012-05-01

    The objective of this study was to estimate the genetic manifestation of Mycobacterium avium ssp. paratuberculosis (MAP) infection in German Holstein cows. Incorporated into this study were 11,285 German Holstein herd book cows classified as MAP-positive or MAP-negative animals using fecal culture results, originating from 15 farms in Thuringia, Germany, involved in a voluntary paratuberculosis control program from 2008 to 2009. The frequency of MAP-positive animals per farm ranged from 2.7 to 67.6%. The fixed effects of farm and lactation number had a highly significant effect on MAP status. An increase in the frequency of positive animals from the first to the third lactation could be observed. Threshold animal and sire models with sire relationship were used as statistical models to estimate genetic parameters. Heritability estimates of fecal culture varied from 0.157 to 0.228. To analyze the effect of prevalence on genetic parameter estimates, the total data set was divided into 2 subsets of data: farms with prevalence rates below 10% and those above 10%. The data set with prevalence above 10% shows higher heritability estimates in both models compared with the data set with prevalence below 10%. For all data sets, the sire model shows higher heritabilities than the equivalent animal model. This study demonstrates that genetic variation exists in dairy cattle for paratuberculosis infection susceptibility and, furthermore, leads to the conclusion that MAP detection by fecal culture shows a higher genetic background than ELISA test results. In conclusion, fecal culture seems to be a better trait to control the disease, as well as an appropriate feature for further genomic analyses to detect MAP-associated chromosome regions. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  13. Functional loading test for expert estimation of health candidates in cosmonauts (the innovative approach)

    Science.gov (United States)

    Voronkov, Yury; Skedina, Marina; Degterenkova, Natalia; Stepanova, Galina

    Long stays aboard the International Space Station demand increased medical control of cosmonauts' health during selection. Various parameters of the cardiovascular system (CVS) undergo significant changes during adaptation to space flight (the ascent to orbit), directly under conditions of weightlessness, and during readaptation to the terrestrial environment. The CVS is a sensitive indicator of the adaptation reaction of the whole organism. Therefore, much attention is given to research on CVS regulation, its capacity to adapt to various stress conditions, and the detection of pre-nosological changes in the mechanisms of its regulation. One informative method for detecting problems in CVS regulation is the passive orthostatic (tilt) test. This work was designed to study the regulation of hemodynamics during a passive orthostatic test. Twenty-one practically healthy people aged 18 to 36 underwent the test. During the test the following parameters were registered: 12-lead ECG and BP, parameters of the myocardium by means of the "CardioVisor-06" (CV) device, and the condition of the microcirculatory bloodstream (MCB), assessed by means of the ultrasonic high-frequency dopplerograph "Minimax-Doppler-K" with a 20 MHz sensor. The impedance method of rheoencephalography (REG), by means of the "Encephalan-EEGR-13103" device, was used to study cerebral blood circulation. All subjects had normal ECG parameters during the test. However, analysis of the CV, REG and MCB data showed high tolerance of the test in 14 subjects. In the other 7 subjects, the dynamics of the parameters during the test reflected problems in particular parts of the CVS regulation mechanism. Changes in REG and ultrasound parameters in 4 test subjects reflected a hypotensive reaction. Arteriolar tone in the carotid and vertebral artery systems decreased by 15.3% and 55.2%, respectively. The parameters of MCB: average speed, vascular tone and peripheric

  14. Accurate and fast creep test for viscoelastic fluids using disk-probe-type and quadrupole-arrangement-type electromagnetically spinning systems

    Science.gov (United States)

    Hirano, Taichi; Sakai, Keiji

    2017-07-01

    Viscoelasticity is a unique characteristic of soft materials and describes its dynamic response to mechanical stimulations. A creep test is an experimental method for measuring the strain ratio/rate against an applied stress, thereby assessing the viscoelasticity of the materials. We propose two advanced experimental systems suitable for the creep test, adopting our original electromagnetically spinning (EMS) technique. This technique can apply a constant torque by a noncontact mechanism, thereby allowing more sensitive and rapid measurements. The viscosity and elasticity of a semidilute wormlike micellar solution were determined using two setups, and the consistency between the results was assessed.

  15. Identification of Radiation Effects on Carcinogenic Food Estimated by Ames Test

    International Nuclear Information System (INIS)

    Afifi, M.; Eid, I.; El - Nagdy, M.; Zaher, R.; Abd El-Karem, H.; Abd EL Karim, A.

    2016-01-01

    A major concern in studies related to carcinogenesis is exposure to the exogenous carcinogens that may occur in food in both natural and polluted human environments. The purpose of the present study is to examine food products with the Ames test to determine whether they are carcinogenic, and then to expose them to gamma radiation to assess the effect of radiation as a treatment. In this study, food samples were examined with the Ames test (Salmonella typhimurium mutagenicity test) to determine whether a food product could be carcinogenic or highly mutagenic. Testing of chemicals for mutagenicity is based on the knowledge that a substance which is mutagenic in the bacterium is more likely than not to be a carcinogen in laboratory animals and thus, by extension, to present a risk of cancer to humans. Food products that showed mutagenicity were then exposed to gamma radiation at different doses to examine its effect. The study assessed the effect of γ radiation on carcinogenic food using the Ames test in the following steps: first, screen food with the Ames test using Salmonella typhimurium strains, where the colony count per plate for each food sample shows whether the food is slightly mutagenic, highly mutagenic, or carcinogenic; second, if a food is highly mutagenic or carcinogenic, with a high number of colonies per plate, expose it to different doses of radiation (the applied doses in this study were 0, 2.5, 5, and 10 kGy); third, detect the effect of radiation on the food samples with the Ames test after irradiation. The study shows that mutagenic and carcinogenic food products identified by the Ames test could be treated by irradiation.

  16. Estimate of correlated and uncorrelated uncertainties associated with performance tests of activity meters

    International Nuclear Information System (INIS)

    Sousa, C.H.S.; Teixeira, G.J.; Peixoto, J.G.P.

    2014-01-01

    Activity meters should undergo performance tests to verify their functionality, as per technical recommendations. This study estimated the uncorrelated expanded uncertainties associated with the results of performance tests conducted on three instruments: two detectors with ionization chambers and one with Geiger-Mueller tubes. For this, we used a standard reference source certified by the National Institute of Technology and Standardization. The methodology of this research was based on the protocols listed in the technical document of the International Atomic Energy Agency. Two quantities were then correlated, showing real correlation and improving the expanded uncertainty by 3.7%. (author)

  17. Implementation and Test of On-line Embedded Grid Impedance Estimation for PV-inverters

    DEFF Research Database (Denmark)

    Asiminoaei, Lucian; Teodorescu, Remus; Blaabjerg, Frede

    2004-01-01

    to evaluate the grid impedance directly by the PV-inverter, providing a fast and low-cost implementation. This principle theoretically provides a correct result for the grid impedance, but when used in the context of PV integration, different implementation issues strongly affect the quality… of the results. This paper presents a new impedance estimation method, including the typical implementation problems encountered, and it also presents the adopted solutions for on-line grid impedance measurement. Practical tests on an existing PV-inverter validate the chosen solutions…

  18. Estimation of carrier mobility at organic semiconductor/insulator interface using an asymmetric capacitive test structure

    Directory of Open Access Journals (Sweden)

    Rajesh Agarwal

    2016-04-01

    Full Text Available Mobility of carriers at the organic/insulator interface is crucial to the performance of organic thin film transistors. The present work describes the estimation of mobility using admittance measurements performed on an asymmetric capacitive test structure. Besides the advantage of simplicity, it is shown that at low frequencies the measured capacitance comes from a large area of the channel, making the capacitance-voltage characteristics insensitive to contact resistances. 2-D numerical simulation and experimental results obtained with the pentacene/poly(4-vinylphenol) system are presented to illustrate the operation and advantages of the proposed technique.

  19. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    Science.gov (United States)

    Nair, S. P.; Righetti, R.

    2015-05-01

    Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require the fitting of temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationship for tissues undergoing creep compression is non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method to provide non-linear LSE parameter estimate reliability, which we call Resimulation of Noise (RoN). RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While the RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
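
    The idea can be sketched in a few lines: fit the model, estimate the residual noise level, then repeatedly add fresh synthetic noise to the fitted curve and refit, taking the spread of the refitted parameters as the precision estimate. The exponential model and all values below are illustrative, not the authors' strain model:

      import numpy as np
      from scipy.optimize import curve_fit

      def model(t, a, tau):
          return a * (1.0 - np.exp(-t / tau))      # generic rise with a time constant

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 200)
      y = model(t, 1.0, 2.0) + rng.normal(0.0, 0.05, t.size)

      p_fit, _ = curve_fit(model, t, y, p0=(1.0, 1.0))       # initial LSE fit
      sigma = (y - model(t, *p_fit)).std(ddof=2)             # residual noise level

      taus = []
      for _ in range(200):                                   # resimulation of noise
          y_sim = model(t, *p_fit) + rng.normal(0.0, sigma, t.size)
          p, _ = curve_fit(model, t, y_sim, p0=p_fit)
          taus.append(p[1])

      print(f"tau = {p_fit[1]:.3f} +/- {np.std(taus):.3f}")  # RoN precision estimate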

  20. Testing a Nested Skills Model of the Relations among Invented Spelling, Accurate Spelling, and Word Reading, from Kindergarten to Grade 1

    Science.gov (United States)

    Sénéchal, Monique

    2017-01-01

    The goal was to assess the role of invented spelling to subsequent reading and spelling as proposed by the Nested Skills Model of Early Literacy Acquisition. 107 English-speaking children were tested at the beginning of kindergarten and grade 1, and at the end of grade 1. The findings provided support for the proposed model. First, the role played…

  1. A method for the estimation of dual transmissivities from slug tests

    Science.gov (United States)

    Wolny, Filip; Marciniak, Marek; Kaczmarek, Mariusz

    2018-03-01

    Aquifer homogeneity is usually assumed when interpreting the results of pumping and slug tests, although aquifers are essentially heterogeneous. The aim of this study is to present a method of determining the transmissivities of dual-permeability water-bearing formations based on slug tests such as the pressure-induced permeability test. A bi-exponential rate-of-rise curve is typically observed during many of these tests conducted in heterogeneous formations. The work involved analyzing curves deviating from the exponential rise recorded at the Belchatow Lignite Mine in central Poland, where a significant number of permeability tests have been conducted. In most cases, bi-exponential movement was observed in piezometers with a screen installed in layered sediments, each with a different hydraulic conductivity, or in fissured rock. The possibility to identify the flow properties of these geological formations was analyzed. For each piezometer installed in such formations, a set of two transmissivity values was calculated piecewise based on the interpretation algorithm of the pressure-induced permeability test—one value for the first (steeper) part of the obtained rate-of-rise curve, and a second value for the latter part of the curve. The results of transmissivity estimation for each piezometer are shown. The discussion presents the limitations of the interpretational method and suggests future modeling plans.
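
    The central fitting step is easy to sketch: a bi-exponential recovery is fitted to the normalized rate-of-rise curve, and each time constant is then converted piecewise into its own transmissivity by the test's interpretation algorithm (that conversion is not shown; the data below are synthetic):

      import numpy as np
      from scipy.optimize import curve_fit

      def biexp(t, a1, tau1, a2, tau2):
          return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 300.0, 120)                       # seconds
      h = biexp(t, 0.6, 8.0, 0.4, 60.0) + rng.normal(0.0, 0.01, t.size)

      p, _ = curve_fit(biexp, t, h, p0=(0.5, 5.0, 0.5, 50.0),
                       bounds=(0.0, [1.0, 1e3, 1.0, 1e4]))
      print(f"tau1 = {p[1]:.1f} s (steep part), tau2 = {p[3]:.1f} s (tail)")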

  2. When Is Network Lasso Accurate?

    Directory of Open Access Journals (Sweden)

    Alexander Jung

    2018-01-01

    Full Text Available The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method makes it possible to learn graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee the network Lasso for a particular loss function to deliver an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
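
    For scalar graph signals, the network Lasso objective is a least-squares fit on the sampled nodes plus a weighted total-variation penalty over the edges. A minimal sketch on a toy chain graph, using cvxpy for brevity (the graph, weights, and samples are illustrative):

      import numpy as np
      import cvxpy as cp

      n = 5
      edges = [(i, i + 1) for i in range(n - 1)]   # chain graph
      w = np.ones(len(edges))                      # edge weights
      samples = {0: 1.0, 2: 1.2, 4: 3.0}           # noisy values on the sampled nodes

      x = cp.Variable(n)
      fit = cp.sum_squares(cp.hstack([x[i] - y for i, y in samples.items()]))
      tv = cp.sum(cp.hstack([w[k] * cp.abs(x[i] - x[j])
                             for k, (i, j) in enumerate(edges)]))

      cp.Problem(cp.Minimize(fit + 0.5 * tv)).solve()
      print(np.round(x.value, 2))                  # recovered signal on all nodes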

  3. An engineering method for estimating notch-size effect in fatigue tests on steel

    Science.gov (United States)

    Kuhn, Paul; Hardrath, Herbert F

    1952-01-01

    Neuber's proposed method of calculating a practical factor of stress concentration for parts containing notches of arbitrary size depends on the knowledge of a "new material constant" which can be established only indirectly. In this paper, the new constant has been evaluated for a large variety of steels from fatigue tests reported in the literature, attention being confined to stresses near the endurance limit. Reasonably satisfactory results were obtained with the assumption that the constant depends only on the tensile strength of the steel. Even in cases where the notches were cracks of which only the depth was known, reasonably satisfactory agreement was found between calculated and experimental factors. It is also shown that the material constant can be used in an empirical formula to estimate the size effect on unnotched specimens tested in bending fatigue.
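
    In later literature, the relation evaluated in this report is commonly summarized as Kf = 1 + (Kt - 1)/(1 + sqrt(rho'/rho)), with rho' the material constant and rho the notch root radius; the sketch below assumes that simplified form (flank-angle correction omitted) and uses illustrative numbers, not values from the report.

```python
# Sketch of the fatigue notch factor computed from Neuber's material
# constant; simplified form and numbers are illustrative.
import math

def fatigue_notch_factor(kt, notch_radius_mm, neuber_const_mm):
    """Kf = 1 + (Kt - 1) / (1 + sqrt(rho' / rho))."""
    return 1.0 + (kt - 1.0) / (1.0 + math.sqrt(neuber_const_mm / notch_radius_mm))

# Sharp notches (small rho) feel less than the full elastic concentration Kt.
print(fatigue_notch_factor(kt=3.0, notch_radius_mm=0.25, neuber_const_mm=0.5))  # ~1.83
```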

  4. A pdf-Free Change Detection Test Based on Density Difference Estimation.

    Science.gov (United States)

    Bu, Li; Alippi, Cesare; Zhao, Dongbin

    2018-02-01

    The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability-density-function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured, by adopting a reservoir sampling mechanism. The thresholds required to detect a change are derived automatically once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness of the proposed method in terms of both detection promptness and accuracy.
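
    The least squares density-difference estimator at the core of such a test has a closed form that can be sketched briefly; the snippet below computes an L2 density-difference score between two sample windows with Gaussian centers. Kernel width, regularization, and data are illustrative, and the online reservoir-sampling and thresholding machinery of the test is omitted.

```python
# Core of a least-squares density-difference (LSDD) estimate between two
# sample windows; sigma and lambda are illustrative.
import numpy as np

def lsdd(X, Y, sigma=0.5, lam=1e-3):
    C = np.vstack([X, Y])                              # Gaussian kernel centers
    d = C.shape[1]
    D2 = ((C[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    # H[l,l'] = integral of the product of two Gaussian kernels (closed form)
    H = (np.pi * sigma**2) ** (d / 2) * np.exp(-D2 / (4 * sigma**2))
    def kmean(Z):                                      # mean kernel vector
        dz = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        return np.exp(-dz / (2 * sigma**2)).mean(0)
    h = kmean(X) - kmean(Y)
    theta = np.linalg.solve(H + lam * np.eye(len(C)), h)
    return 2 * h @ theta - theta @ H @ theta           # L2 distance estimate

rng = np.random.default_rng(3)
same = lsdd(rng.normal(0, 1, (100, 2)), rng.normal(0, 1, (100, 2)))
diff = lsdd(rng.normal(0, 1, (100, 2)), rng.normal(1, 1, (100, 2)))
print(f"no change: {same:.3f}  change: {diff:.3f}")    # change scores higher
```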

  5. In Vitro Tests for Aerosol Deposition. V: Using Realistic Testing to Estimate Variations in Aerosol Properties at the Trachea.

    Science.gov (United States)

    Wei, Xiangyin; Hindle, Michael; Delvadia, Renishkumar R; Byron, Peter R

    2017-10-01

    The dose and aerodynamic particle size distribution (APSD) of drug aerosols exiting models of the mouth and throat (MT) during a realistic inhalation profile (IP) may be estimated in vitro and designated Total Lung Dose, TLD(in vitro), and APSD(TLD, in vitro), respectively. These aerosol characteristics likely define the drug's regional distribution in the lung. A general method was evaluated to enable the simultaneous determination of TLD(in vitro) and APSD(TLD, in vitro) for budesonide aerosols exiting small, medium and large VCU-MT models. Following calibration of the modified next generation pharmaceutical impactor (NGI) at 140 L/min, variations in aerosol dose and size exiting MT were determined from Budelin® Novolizer® across the IPs reported by Newman et al., who assessed drug deposition from this inhaler by scintigraphy. Values for TLD(in vitro) from the test inhaler determined by the general method were found to be statistically comparable to those using a filter capture method. Using new stage cutoffs determined by calibration of the modified NGI at 140 L/min, APSD(TLD, in vitro) profiles and mass median aerodynamic diameters at the MT exit (MMAD(TLD, in vitro)) were determined as functions of MT geometric size across Newman's IPs. The range of mean values (n ≥ 5) for TLD(in vitro) and MMAD(TLD, in vitro) for this inhaler extended from 6.2 to 103.0 μg (3.1%-51.5% of label claim) and from 1.7 to 3.6 μm, respectively. The method enables reliable determination of TLD(in vitro) and APSD(TLD, in vitro) for aerosols likely to enter the trachea of test subjects in the clinic. By simulating realistic IPs and testing in different MT models, the effects of major variables on TLD(in vitro) and APSD(TLD, in vitro) may be studied using the general method described in this study.

  6. Multidetector row computed tomography may accurately estimate plaque vulnerability. Does MDCT accurately estimate plaque vulnerability? (Pro)

    International Nuclear Information System (INIS)

    Komatsu, Sei; Imai, Atsuko; Kodama, Kazuhisa

    2011-01-01

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time course. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. (author)

  7. Population-based Tay-Sachs screening among Ashkenazi Jewish young adults in the 21st century: Hexosaminidase A enzyme assay is essential for accurate testing.

    Science.gov (United States)

    Schneider, Adele; Nakagawa, Sachiko; Keep, Rosanne; Dorsainville, Darnelle; Charrow, Joel; Aleck, Kirk; Hoffman, Jodi; Minkoff, Sherman; Finegold, David; Sun, Wei; Spencer, Andrew; Lebow, Johannah; Zhan, Jie; Apfelroth, Stephen; Schreiber-Agus, Nicole; Gross, Susan

    2009-11-01

    Tay-Sachs disease (TSD) carrier screening, initiated in the 1970s, has reduced the birth-rate of Ashkenazi Jews with TSD worldwide by 90%. Recently, several nationwide programs have been established that provide carrier screening for the updated panel of Jewish genetic diseases on college campuses and in Jewish community settings. The goals of this study were to determine the performance characteristics of clinical TSD testing in college- and community-based screening programs and to determine if molecular testing alone is adequate in those settings. Clinical data for TSD testing were retrospectively anonymized and subsequently analyzed for 1,036 individuals who participated in these programs. The performance characteristics of the serum and the platelet Hexosaminidase assays were compared, and also correlated with the results of targeted DNA analysis. The serum assay identified 29 carriers and the platelet assay identified 35 carriers for carrier rates of 1/36 and 1/29, respectively. One hundred sixty-nine samples (16.3%) were inconclusive by serum assay in marked contrast to four inconclusive samples (0.4%) by the platelet assay. Molecular analysis alone would have missed four of the 35 carriers detected by the platelet assay, yielding a false negative rate of 11.4% with a sensitivity of 88.6%. Based on the results of this study, platelet assay was superior to serum with a minimal inconclusive rate. Due to changing demographics of the Ashkenazi Jewish population, molecular testing alone in the setting of broad-based population screening programs is not sufficient, and biochemical analysis should be the assay of choice. Copyright 2009 Wiley-Liss, Inc.

  8. Estimation of genetic parameters for test day records of dairy traits in the first three lactations

    Directory of Open Access Journals (Sweden)

    Ducrocq Vincent

    2005-05-01

    Application of test-day models for the genetic evaluation of dairy populations requires the solution of large mixed-model equations. The size of the (co)variance matrices required with such models can be reduced through the use of their first eigenvectors. Here, the first two eigenvectors of the (co)variance matrices estimated for dairy traits in first lactation were used as covariables to jointly estimate genetic parameters of the first three lactations. These eigenvectors appear to be similar across traits and have a biological interpretation, one being related to the level of production and the other to persistency. Furthermore, they explain more than 95% of the total genetic variation. Variances and heritabilities obtained with this model were consistent with previous studies. High correlations were found among production levels in different lactations. Persistency measures were less correlated. Genetic correlations between the second and third lactations were close to one, indicating that these can be considered the same trait. Genetic correlations within lactation were high, except between extreme parts of the lactation. This study shows that the use of eigenvectors can reduce the rank of (co)variance matrices in the test-day model and can provide consistent genetic parameters.
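
    The rank-reduction idea can be illustrated numerically: build a (co)variance matrix over lactation stages from a "level" component and a "persistency" component, and check how much variation its two leading eigenvectors capture. The matrix below is fabricated for illustration only.

```python
# Sketch of the rank-reduction idea: leading eigenvectors of a made-up
# genetic (co)variance matrix over lactation stages.
import numpy as np

stages = np.linspace(0, 1, 10)                 # standardized days in milk
level = np.ones(10)                            # "production level" pattern
persist = stages - stages.mean()               # "persistency" gradient
G = (4.0 * np.outer(level, level) + 1.5 * np.outer(persist, persist)
     + 0.05 * np.eye(10))

vals, vecs = np.linalg.eigh(G)
vals, vecs = vals[::-1], vecs[:, ::-1]         # sort descending
explained = vals[:2].sum() / vals.sum()
print(f"first two eigenvectors explain {explained:.1%} of genetic variance")
# vecs[:, 0] is flat (production level); vecs[:, 1] changes sign (persistency)
```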

  9. Forensic individual age estimation with DNA: From initial approaches to methylation tests.

    Science.gov (United States)

    Freire-Aradas, A; Phillips, C; Lareu, M V

    2017-07-01

    Individual age estimation is a key factor in forensic science analysis that can provide very useful information applicable to criminal, legal, and anthropological investigations. Forensic age inference was initially based on morphological inspection or radiography and only later began to adopt molecular approaches. However, a lack of accuracy or technical problems hampered the introduction of these DNA-based methodologies in casework analysis. A turning point occurred when the epigenetic signature of DNA methylation was observed to gradually change during an individual's lifespan. In the last four years, the number of publications reporting DNA methylation age-correlated changes has gradually risen, and the forensic community now has a range of age methylation tests applicable to forensic casework. Most forensic age predictor models have been developed based on blood DNA samples, but additional tissues are now also being explored. This review assesses the most widely adopted genes harboring methylation sites, detection technologies, statistical age-predictive analyses, and potential causes of variation in age estimates. Despite the need for further work to improve predictive accuracy and to establish a broader range of tissues for which tests can analyze the most appropriate methylation sites, several forensic age predictors have now been reported that provide consistency in their prediction accuracies (predictive error of ±4 years); this makes them compelling tools with the potential to contribute key information to help guide criminal investigations. Copyright © 2017 Central Police University.
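
    As a schematic of how such predictors work (not a published model), the sketch below fits a linear age predictor to simulated CpG beta-values; real forensic predictors use specific age-correlated loci and typically penalized or more elaborate regression.

```python
# Sketch of a methylation-based age predictor on simulated CpG beta-values.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n, n_cpg = 150, 5
age = rng.uniform(18, 80, n)
# Simulate CpG beta-values drifting linearly with age plus noise.
slopes = rng.uniform(-0.004, 0.004, n_cpg)
betas = np.clip(0.5 + np.outer(age, slopes) + rng.normal(0, 0.02, (n, n_cpg)), 0, 1)

model = LinearRegression().fit(betas, age)
pred = model.predict(betas)
mae = np.mean(np.abs(pred - age))
print(f"training MAE: {mae:.1f} years")   # reported predictors reach ~±4 y
```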

  10. Validation of differential gene expression algorithms: Application comparing fold-change estimation to hypothesis testing

    Directory of Open Access Journals (Sweden)

    Bickel David R

    2010-01-01

    performance. The posterior predictive assessment corroborates these findings. Conclusions Algorithms for detecting differential gene expression may be compared by estimating each algorithm's error in predicting expression ratios, whether such ratios are defined across microarray channels or between two independent groups. According to two distinct estimators of prediction error, algorithms using hierarchical models outperform the other algorithms of the study. The fact that fold-change shrinkage performed as well as conventional model selection criteria calls for investigating algorithms that combine the strengths of significance testing and fold-change estimation.

  11. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  12. Accuracy of a Classical Test Theory-Based Procedure for Estimating the Reliability of a Multistage Test. Research Report. ETS RR-17-02

    Science.gov (United States)

    Kim, Sooyeon; Livingston, Samuel A.

    2017-01-01

    The purpose of this simulation study was to assess the accuracy of a classical test theory (CTT)-based procedure for estimating the alternate-forms reliability of scores on a multistage test (MST) having 3 stages. We generated item difficulty and discrimination parameters for 10 parallel, nonoverlapping forms of the complete 3-stage test and…

  13. Distribution of base rock depth estimated from Rayleigh wave measurement by forced vibration tests

    International Nuclear Information System (INIS)

    Hiroshi Hibino; Toshiro Maeda; Chiaki Yoshimura; Yasuo Uchiyama

    2005-01-01

    This paper presents an application of Rayleigh wave methods at a real site, performed to determine the spatial distribution of base rock depth below the ground surface. At a site in the Sagami Plain in Japan, boring investigations indicated that the depth to base rock is up to 10 m. An accurate picture of the base rock depth distribution was needed for pile design and construction. In order to measure Rayleigh wave phase velocity, forced vibration tests were conducted with a 500 N vertical shaker and linear arrays of three vertical sensors situated at several points in two zones around the edges of the site. Inversion analysis of the soil profile was then carried out by genetic algorithm, matching the measured Rayleigh wave phase velocity with its computed counterpart. The distribution of base rock depth obtained from the inversion was consistent with the roughly estimated inclination of the base rock obtained from the boring tests; that is, the base rock is shallow near the edge of the site and gradually deepens toward its center. By the inversion analysis, the depth of base rock was determined as 5 m to 6 m at the edge of the site and 10 m at its center. The distribution of base rock depth determined by this method showed good agreement at most of the points where boring investigations were performed. As a result, it was confirmed that forced vibration tests using Rayleigh wave methods can be a practical technique for estimating surface soil profiles to depths of up to 10 m. (authors)

  14. Combining multiple hypothesis testing and affinity propagation clustering leads to accurate, robust and sample size independent classification on gene expression data

    Directory of Open Access Journals (Sweden)

    Sakellariou Argiris

    2012-10-01

    Background A feature selection method for microarray gene expression data should be independent of platform, disease and dataset size. Our hypothesis is that among the statistically significant ranked genes in a gene list, there should be clusters of genes that share similar biological functions related to the investigated disease. Thus, instead of keeping N top-ranked genes, it would be more appropriate to define and keep a number of gene cluster exemplars. Results We propose a hybrid FS method (mAP-KL), which combines multiple hypothesis testing and the affinity propagation (AP) clustering algorithm along with the Krzanowski & Lai cluster quality index, to select a small yet informative subset of genes. We applied mAP-KL to real microarray data, as well as to simulated data, and compared its performance against 13 other feature selection approaches. Across a variety of diseases and numbers of samples, mAP-KL presents competitive classification results, particularly in neuromuscular diseases, where its overall AUC score was 0.91. Furthermore, mAP-KL generates concise yet biologically relevant and informative N-gene expression signatures, which can serve as a valuable tool for diagnostic and prognostic purposes, as well as a source of potential disease biomarkers in a broad range of diseases. Conclusions mAP-KL is a data-driven and classifier-independent hybrid feature selection method, which applies to any disease classification problem based on microarray data, regardless of the available samples. Combining multiple hypothesis testing and AP leads to subsets of genes which classify unknown samples from both small and large patient cohorts with high accuracy.
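
    A stripped-down sketch of the pipeline is shown below: genes are ranked by a t-test, the top N are clustered with affinity propagation, and the exemplars form the signature. Data are simulated, and the Krzanowski & Lai step that fixes the number of clusters is omitted.

```python
# Sketch of an mAP-KL-style pipeline: hypothesis-test ranking followed by
# affinity propagation clustering; exemplars form the gene signature.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(5)
n_genes, n_per_group = 500, 20
cases = rng.normal(0, 1, (n_per_group, n_genes))
controls = rng.normal(0, 1, (n_per_group, n_genes))
cases[:, :30] += 1.5                       # 30 truly differential genes

_, pvals = ttest_ind(cases, controls, axis=0)
top = np.argsort(pvals)[:100]              # keep N top-ranked genes

# Cluster gene expression profiles; exemplars form the signature.
profiles = np.vstack([cases, controls])[:, top].T      # genes x samples
ap = AffinityPropagation(random_state=0).fit(profiles)
signature = top[ap.cluster_centers_indices_]
print(f"{len(signature)} exemplar genes selected:", signature[:10])
```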

  15. MCPerm: a Monte Carlo permutation method for accurately correcting the multiple testing in a meta-analysis of genetic association studies.

    Directory of Open Access Journals (Sweden)

    Yongshuai Jiang

    Traditional permutation (TradPerm) tests are usually considered the gold standard for multiple testing corrections. However, they can be difficult to complete for meta-analyses of genetic association studies based on multiple single nucleotide polymorphism loci, as they depend on individual-level genotype and phenotype data to perform random shuffles, which are not easy to obtain. Most meta-analyses have therefore been performed using summary statistics from previously published studies. To carry out a permutation using only genotype counts without changing the size of the TradPerm P-value, we developed a Monte Carlo permutation (MCPerm) method. First, for each study included in the meta-analysis, we used a two-step hypergeometric distribution to generate a random number of genotypes in cases and controls. We then carried out a meta-analysis using these random genotype data. Finally, we obtained the corrected permutation P-value of the meta-analysis by repeating the entire process N times. We used five real datasets and five simulation datasets to evaluate the MCPerm method, and our results showed the following: (1) MCPerm requires only the summary statistics of the genotype, without the need for individual-level data; (2) genotype counts generated by our two-step hypergeometric distributions had the same distributions as genotype counts generated by shuffling; (3) MCPerm had almost exactly the same permutation P-values as TradPerm (r = 0.999; P < 2.2e-16); (4) the calculation speed of MCPerm is much faster than that of TradPerm. In summary, MCPerm appears to be a viable alternative to TradPerm, and we have developed it as a freely available R package at CRAN: http://cran.r-project.org/web/packages/MCPerm/index.html.
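
    The two-step hypergeometric shuffle is easy to sketch for a single study: genotype totals are split between cases and controls without individual-level data. The counts below are invented for illustration.

```python
# Sketch of MCPerm's two-step hypergeometric shuffle for one study.
import numpy as np

rng = np.random.default_rng(6)
n_aa, n_ab, n_bb = 120, 230, 150          # genotype totals (cases + controls)
n_case = 200
n_total = n_aa + n_ab + n_bb

def one_permutation():
    # Step 1: how many AA genotypes land in the case group.
    aa_case = rng.hypergeometric(n_aa, n_total - n_aa, n_case)
    # Step 2: split AB among the remaining case slots; BB follows by subtraction.
    ab_case = rng.hypergeometric(n_ab, n_total - n_aa - n_ab, n_case - aa_case)
    bb_case = n_case - aa_case - ab_case
    return aa_case, ab_case, bb_case      # control counts follow by subtraction

perms = [one_permutation() for _ in range(5)]
print(perms)   # feed each permuted table into the meta-analysis statistic
```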

  16. Possibility of use of plant test systems for estimation of degree of risk at radiation influence

    International Nuclear Information System (INIS)

    Gogebashvili, M.E.; Ivanishvili, N.I.

    2010-01-01

    …The biological model is, in essence, a form of expert judgment for estimating risk under radiation exposure. From the standpoint of practical application of this test system, it is therefore important to know how strongly this influence is modified by various concomitant factors. Overall, the research performed shows that this model can serve as a convenient test system for studying the delayed effects of radiation and for determining the degree of risk involved in their formation.

  17. Considerations about expected a posteriori estimation in adaptive testing: adaptive a priori, adaptive correction for bias, and adaptive integration interval.

    Science.gov (United States)

    Raiche, Gilles; Blais, Jean-Guy

    2009-01-01

    In a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimate one or a combination of the following strategies: the adaptive correction for bias proposed by Bock and Mislevy (1982), an adaptive a priori estimate, and an adaptive integration interval.
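
    For concreteness, a minimal EAP computation on a quadrature grid is sketched below for 2PL items, with the "adaptive a priori" idea mimicked by recentering the prior at a provisional estimate. Item parameters and responses are invented.

```python
# Sketch of an expected a posteriori (EAP) proficiency estimate for 2PL items.
import numpy as np

theta = np.linspace(-4, 4, 81)                      # quadrature points
a = np.array([1.2, 0.8, 1.5, 1.0])                  # discriminations
b = np.array([-0.5, 0.3, 0.0, 1.0])                 # difficulties
u = np.array([1, 1, 0, 1])                          # observed responses

def eap(prior_mean=0.0, prior_sd=1.0):
    prior = np.exp(-0.5 * ((theta - prior_mean) / prior_sd) ** 2)
    p = 1 / (1 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
    like = np.prod(np.where(u[:, None] == 1, p, 1 - p), axis=0)
    post = like * prior
    return (theta * post).sum() / post.sum()        # posterior mean

print(f"EAP with fixed prior:    {eap():.3f}")
print(f"EAP with adaptive prior: {eap(prior_mean=eap()):.3f}")
```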

  18. Testing an inversion method for estimating electron energy fluxes from all-sky camera images

    Directory of Open Access Journals (Sweden)

    N. Partamies

    2004-06-01

    An inversion method for reconstructing the precipitating electron energy flux from a set of multi-wavelength digital all-sky camera (ASC) images has recently been developed. Preliminary tests suggested that the inversion is able to reconstruct the position and energy characteristics of the aurora with reasonable accuracy. This study carries out a thorough testing of the method and makes a few improvements to its emission physics equations. We compared the precipitating electron energy fluxes as estimated by the inversion method to the energy flux data recorded by the Defense Meteorological Satellite Program (DMSP) satellites during four passes over auroral structures. When the aurorae appear very close to the local zenith, the fluxes inverted from the blue (427.8 nm) filtered ASC images, or from blue and green line (557.7 nm) images together, give the best agreement with the measured flux values. The fluxes inverted from green line images alone are clearly larger than the measured ones. Closer to the horizon, the quality of the inversion results from blue images deteriorates to the level of the ones from green images. In addition to the satellite data, the precipitating electron energy fluxes were estimated from the electron density measurements of the EISCAT Svalbard Radar (ESR). These energy flux values were compared to those of the inversion method applied to over 100 ASC images recorded at the nearby ASC station in Longyearbyen. The energy fluxes deduced from these two types of data are in general of the same order of magnitude. In 35% of all of the blue and green image inversions the relative errors were less than 50%, and in 90% of the blue and green image inversions less than 100%. This kind of systematic testing of the inversion method is the first step toward using all-sky camera images in the way in which global UV images have recently been used to estimate the energy fluxes. The advantages of ASCs, compared to the space-borne imagers, are

  19. An accurate and affordable test for the rapid diagnosis of sickle cell disease could revolutionize the outlook for affected children born in resource-limited settings.

    Science.gov (United States)

    Williams, Thomas N

    2015-09-23

    Each year, at least 280,000 children are born with sickle cell disease (SCD) in resource-limited settings. For cost, logistic and political reasons, the availability of SCD testing is limited in such settings and consequently 50-90 % of affected children die undiagnosed before their fifth birthday. The recent development of a point of care method for the diagnosis of SCD - the Sickle SCAN™ device - could afford such children the prompt access to appropriate services that has transformed the outlook for affected children in resource-rich areas. In research published in BMC Medicine, Kanter and colleagues describe a small but carefully conducted study involving 208 children and adults, in which they found that by using Sickle SCAN™ it was possible to diagnose the common forms of SCD with 99 % sensitivity and 99 % specificity, in under 5 minutes. If repeatable both in newborn babies and under real-life conditions, and if marketed at an affordable price, Sickle SCAN™ could revolutionize the survival prospects for children born with SCD in resource-limited areas.Please see related article: http://dx.doi.org/10.1186/s12916-015-0473-6.

  20. The maximum Number of parameters for the Hausman Test When the Estimators are from Different Sets of Equations

    NARCIS (Netherlands)

    K. Nawata (Kazumitsu); M.J. McAleer (Michael)

    2013-01-01

    Hausman (1978) developed a widely-used model specification test that has passed the test of time. The test is based on two estimators, one being consistent under the null hypothesis but inconsistent under the alternative, and the other being consistent under both the

  2. Numerical model for the thermal yield estimation of unglazed photovoltaic-thermal collectors using indoor solar simulator testing

    NARCIS (Netherlands)

    Katiyar, M.; van Balkom, M.W.; Rindt, C.C.M.; de Keizer, C.; Zondag, H.A.

    2017-01-01

    It is a common practice to test solar thermal and photovoltaic-thermal (PVT) collectors outdoors. This requires testing over several weeks to account for different weather conditions encountered throughout the year, which is costly and time consuming. The outcome of these tests is an estimation of

  3. Estimating prevalence and diagnostic test characteristics of bovine cysticercosis in Belgium in the absence of a 'gold standard' reference test using a Bayesian approach.

    Science.gov (United States)

    Jansen, Famke; Dorny, Pierre; Gabriël, Sarah; Eichenberger, Ramon Marc; Berkvens, Dirk

    2018-04-30

    A Bayesian model was developed to estimate values for the prevalence and diagnostic test characteristics of bovine cysticercosis (Taenia saginata) by combining results of four imperfect tests. Samples from 612 bovine carcases that were found negative for cysticercosis during routine meat inspection, collected at three Belgian slaughterhouses, underwent enhanced meat inspection (additional incisions in the heart), dissection of the predilection sites, the B158/B60 Ag-ELISA, and the ES Ab-ELISA. This Bayesian approach allows for the combination of prior expert opinion with experimental data to estimate the true prevalence of bovine cysticercosis in the absence of a gold standard test. A first model (based on a multinomial distribution and including all possible interactions between the individual tests) required estimation of 31 parameters, while only allowing for 15 parameters to be estimated. Including prior expert information about specificity and sensitivity resulted in an optimal model with a reduction of the number of parameters to be estimated to 8. The estimated bovine cysticercosis prevalence was 33.9% (95% credibility interval: 27.7-44.4%), while the apparent prevalence based on meat inspection is only 0.23%. The test performances were estimated as follows (sensitivity (Se) - specificity (Sp)): enhanced meat inspection (Se 2.87% - Sp 100%), dissection of predilection sites (Se 69.8% - Sp 100%), Ag-ELISA (Se 26.9% - Sp 99.4%), Ab-ELISA (Se 13.8% - Sp 92.9%). Copyright © 2018 Elsevier B.V. All rights reserved.
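
    The Bayesian latent-class model itself is not reproduced here; as a much simpler single-test illustration of why apparent prevalence understates true prevalence, the Rogan-Gladen correction is sketched below using the study's Ag-ELISA sensitivity and specificity. The apparent prevalence value is back-calculated for illustration, not a reported figure.

```python
# Rogan-Gladen correction: true prevalence from apparent prevalence given
# test sensitivity and specificity. Se/Sp are the Ag-ELISA values from the
# study; the apparent prevalence is invented for illustration.
def rogan_gladen(apparent, se, sp):
    """True prevalence from apparent prevalence under misclassification."""
    return (apparent + sp - 1.0) / (se + sp - 1.0)

print(f"{rogan_gladen(0.095, se=0.269, sp=0.994):.1%}")  # ~34%
```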

  4. Estimation of Genetic Parameters for First Lactation Monthly Test-day Milk Yields using Random Regression Test Day Model in Karan Fries Cattle

    Directory of Open Access Journals (Sweden)

    Ajay Singh

    2016-06-01

    A single-trait linear mixed random regression test-day model was applied for the first time to analyze first-lactation monthly test-day milk yield records in Karan Fries cattle. The test-day milk yield data were modeled using a random regression model (RRM), considering different orders of Legendre polynomial for the additive genetic effect (4th order) and the permanent environmental effect (5th order). Data pertaining to 1,583 lactation records spread over a period of 30 years were recorded and analyzed in the study. The variance components, heritability and genetic correlations among test-day milk yields were estimated using the RRM. RRM heritability estimates of test-day milk yield varied from 0.11 to 0.22 across test-day records. The estimates of genetic correlations between different test-day milk yields ranged from 0.01 (between test-day 1 [TD-1] and TD-11) to 0.99 (between TD-4 and TD-5). The magnitude of the genetic correlations between test-day milk yields decreased as the interval between test-days increased, and adjacent test-days had higher correlations. Additive genetic and permanent environment variances were higher for test-day milk yields at both ends of lactation. The residual variance was observed to be lower than the permanent environment variance for all test-day milk yields.
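
    The Legendre covariates that drive such a model are straightforward to construct; the sketch below builds the basis matrices for the orders quoted in the abstract (un-normalized polynomials, monthly test days assumed) without fitting the full mixed model.

```python
# Sketch of the Legendre covariates used in a random regression test-day
# model: days in milk standardized to [-1, 1], one column per order.
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, order):
    """Rows: test days; columns: Legendre polynomials 0..order."""
    x = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1   # -> [-1, 1]
    return np.column_stack(
        [legendre.legval(x, [0] * k + [1]) for k in range(order + 1)])

dim = np.arange(5, 306, 30)                 # monthly test days (5..305 DIM)
Z_genetic = legendre_covariates(dim, order=4)
Z_pe = legendre_covariates(dim, order=5)
print(Z_genetic.shape, Z_pe.shape)          # (11, 5) (11, 6)
```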

  5. Estimation of transfused red cell survival using an enzyme-linked antiglobulin test

    International Nuclear Information System (INIS)

    Kickler, T.S.; Smith, B.; Bell, W.; Drew, H.; Baldwin, M.; Ness, P.M.

    1985-01-01

    An enzyme-linked antiglobulin test (ELAT) method was developed to estimate the survival of transfused red cells. This procedure is based on a principle analogous to that of the Ashby technique, where antigenically distinct red cells are transfused and their survival studied. The authors compared the ELAT survival to the chromium-51 (51Cr) method in four patients. Three patients with hypoproliferative anemias showed T1/2 by ELAT of 17.5, 18, and 17 days versus 18.5, 20, and 19 days by the 51Cr method. A fourth patient with traumatic cardiac hemolysis had two studies performed. In this case, the ELAT showed a T1/2 of 10 and 8.1 days while the 51Cr T1/2 values were 11 and 10.5 days. The ELAT method for measuring red cell survival yielded data which agreed closely with the results of the 51Cr method. Although 51Cr is the accepted method for red cell survival, the ELAT method can be used to estimate transfused red cell survival

  6. Orion Exploration Flight Test 1 (EFT-1) Best Estimated Trajectory Development

    Science.gov (United States)

    Holt, Greg N.; Brown, Aaron

    2016-01-01

    The Orion Exploration Flight Test 1 (EFT-1) mission successfully flew on Dec 5, 2014 atop a Delta IV Heavy launch vehicle. The goal of Orion's maiden flight was to stress the system by placing an uncrewed vehicle on a high-energy trajectory replicating conditions similar to those that would be experienced when returning from an asteroid or a lunar mission. The Orion navigation team combined all trajectory data from the mission into a Best Estimated Trajectory (BET) product. There were significant challenges in data reconstruction and many lessons were learned for future missions. The team used an estimation filter incorporating radar tracking, onboard sensors (Global Positioning System and Inertial Measurement Unit), and day-of-flight weather balloons to evaluate the true trajectory flown by Orion. Data were published for the entire Orion EFT-1 flight, plus objects jettisoned during entry such as the Forward Bay Cover. The BET customers include approximately 20 disciplines within Orion who will use the information for evaluating vehicle performance and influencing future design decisions.

  7. A feasibility test to estimate the duration of phytoextraction of heavy metals from polluted soils.

    Science.gov (United States)

    Japenga, J; Koopmans, G F; Song, J; Römkens, P F A M

    2007-01-01

    The practical applicability of heavy metal (HM) phytoextraction depends heavily on its duration. Phytoextraction duration is the main cost factor for phytoextraction, covering both the recurring operating costs and the cost of the soil having no economic value while phytoextraction is under way. An experiment is described here, meant as a preliminary feasibility test before starting a phytoextraction scheme in practice, to obtain a more realistic estimate of the phytoextraction duration for a specific HM-polluted soil. In the experiment, HM-polluted soil is mixed at different ratios with unpolluted soil of comparable composition to mimic the gradual decrease of the HM content in the target HM-polluted soil during phytoextraction. After equilibrating the soil mixtures, one cropping cycle is carried out with the plant species of interest. At harvest, the adsorbed HM contents in the soil and the HM contents in the plant shoots are determined. The adsorbed HM contents in the soil are then related to the HM contents in the plant shoots by a log-log linear relationship that can be used to estimate the phytoextraction duration of a specific HM-polluted soil. This article describes and evaluates the merits of such a feasibility experiment. Potential drawbacks regarding the accuracy of the described approach are discussed, and a greenhouse-to-field extrapolation procedure is proposed.
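
    Once the log-log relationship is fitted, turning it into a duration estimate is a simple iteration, as sketched below; the coefficients, yield, soil mass, and target are invented for illustration, not values from the study.

```python
# Sketch: annual cropping cycles remove shoot HM predicted from the current
# soil content via a fitted log-log relation. All numbers are invented.
import math

a, b = 1.2, 0.9              # log10(plant mg/kg) = a + b * log10(soil mg/kg)
yield_t_ha = 10.0            # annual shoot dry-matter yield, t/ha
soil_mass_t_ha = 2000.0      # plough-layer soil mass, t/ha
soil, target = 300.0, 100.0  # current and target soil HM, mg/kg

years = 0
while soil > target and years < 200:
    plant = 10 ** (a + b * math.log10(soil))       # shoot HM, mg/kg
    removed = plant * yield_t_ha / soil_mass_t_ha  # mg/kg soil per year
    soil -= removed
    years += 1
print(f"estimated duration: {years} years")
```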

  8. Estimation of cytogenetic risk in the process of non-destructive testing of welds

    Energy Technology Data Exchange (ETDEWEB)

    Fucic, A; Garaj-Vrhovac, V; Kubelka, D [Inst. for Medical Research and Occupational Health, Zagreb (Croatia)]; Novakovic, M [Ecotec, Zagreb (Croatia)]

    1997-12-31

    The estimation of dose based on chromosomal aberration analysis is a reliable and generally accepted method, and it indicates genome damage earlier than any other method used in medicine. However, according to available literature data, in cases of overexposure of radiographers detected by film dosimeter, often only skin changes are diagnosed, even without haematological analysis. Since no biodosimetric study so far provides data on genome damage in radiographers caused by combined exposure to gamma irradiation and ultrasound, the aim of this study was to compare the effects of exposure to ionizing radiation alone and combined with the application of ultrasound during the process of weld testing. It can be concluded that in cases of combined occupational exposure, the estimation of the dose received by radiographers using film dosimetry should be accompanied by cytogenetic monitoring, because a personal dosimeter for ultrasound has not yet been constructed. In order to minimize health risk, biomonitoring can detect a possible synergistic action of ultrasound and ionizing radiation that is not measurable by any physical method. (author).

  9. Estimation of axial diffusion processes by analog Monte-Carlo: theory, tests and examples

    International Nuclear Information System (INIS)

    Milgram, M.S.

    1997-01-01

    With the advent of fast, reasonably inexpensive computer hardware, it has become possible to follow the histories of several million particles and tally quantities such as currents and fluxes in a finite reactor region using analog Monte-Carlo. Here use is made of this new capability to demonstrate that it is possible to test various approximations that cumulatively are known as the axial diffusion approximation in a realistic, heterogenous reactor lattice cell. From this, it proves possible to extract excellent estimates of the homogenized diffusion coefficient in few energy groups and lattice sub-regions for further comparison with deterministic methods of deriving the same quantity. The breakdown of the diffusion approximation near the endpoints of the axial lattice cell, as well as in the moderator at certain energies, can be observed. (Author)

  10. Reliability of using nondestructive tests to estimate compressive strength of building stones and bricks

    Directory of Open Access Journals (Sweden)

    Ali Abd Elhakam Aliabdo

    2012-09-01

    This study aims to investigate the relationships between Schmidt hardness rebound number (RN) and ultrasonic pulse velocity (UPV) versus the compressive strength (fc) of stones and bricks. Four types of rock (marble, pink limestone, white limestone and basalt) and two types of brick (burned bricks and lime-sand bricks) were studied. Linear and non-linear models were proposed. High correlations were found between RN and UPV versus compressive strength. Validation of the proposed models was assessed using other specimens of each material. Linear models for each material showed better correlations than non-linear models. A general model between RN and the compressive strength of the tested stones and bricks showed a high correlation, with a regression coefficient R2 value of 0.94. Estimation of the compressive strength of the studied stones and bricks using their rebound number and ultrasonic pulse velocity in a combined method was generally more reliable than using the rebound number or ultrasonic pulse velocity alone.
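
    The "combined method" amounts to a two-predictor regression, sketched below with fabricated data points solely to show the fitting step.

```python
# Sketch: regress compressive strength on both rebound number and pulse
# velocity. Data points are fabricated for illustration.
import numpy as np

rn = np.array([32, 38, 45, 50, 55, 60], dtype=float)      # rebound number
upv = np.array([3.1, 3.5, 4.0, 4.3, 4.7, 5.0])            # km/s
fc = np.array([18, 26, 38, 47, 58, 70], dtype=float)      # MPa

X = np.column_stack([np.ones_like(rn), rn, upv])
coef, *_ = np.linalg.lstsq(X, fc, rcond=None)
pred = X @ coef
r2 = 1 - ((fc - pred) ** 2).sum() / ((fc - fc.mean()) ** 2).sum()
print(f"fc = {coef[0]:.1f} + {coef[1]:.2f}*RN + {coef[2]:.1f}*UPV,  R2 = {r2:.2f}")
```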

  11. A test of the citrate method of PMI estimation from skeletal remains.

    Science.gov (United States)

    Wilson, Sarah J; Christensen, Angi M

    2017-01-01

    Citrate content in bone has been shown to be associated with the postmortem interval (PMI), with citrate decreasing after death as a function of time. Here we test this method using porcine ribs for the period of 1-165 days after death, and also assess citrate content and variation in samples placed into two different postmortem environments (terrestrial and aquatic). Higher citrate variation, lower citrate recovery, and a weaker association with time were found in this study as compared to others. Citrate content, however, was found to decrease with increasing PMI, and the method was found to be easy and inexpensive to apply. No significant differences were found in citrate loss between terrestrial and aquatic environments. Although more research is needed, citrate content appears to be a promising new approach to estimating PMI from skeletal remains. Published by Elsevier B.V.

  12. Defect Shape Recovering by Parameter Estimation Arising in Eddy Current Testing

    International Nuclear Information System (INIS)

    Kojima, Fumio

    2003-01-01

    This paper is concerned with a computational method for recovering the crack shape in steam generator tubes of nuclear plants. Problems of shape identification are discussed as they arise in the characterization of a structural defect in a conductor using data from eddy current inspection. A surface defect on the generator tube can be detected as a probe impedance trajectory by scanning with a pancake-type coil. First, a mathematical model of the inspection process is derived from Maxwell's equations. Second, the input-output relation is given by an approximate model built on the hybrid use of the finite element and boundary element methods. In that model, the crack shape is characterized by the unknown coefficients of the B-spline function which approximates the crack shape geometry. Finally, a parameter estimation technique is proposed for recovering the crack shape using data from the probe coil. The computational method was successfully tested against laboratory data

  13. An electromyographic-based test for estimating neuromuscular fatigue during incremental treadmill running

    International Nuclear Information System (INIS)

    Camic, Clayton L; Kovacs, Attila J; Hill, Ethan C; Calantoni, Austin M; Yemm, Allison J; Enquist, Evan A; VanDusseldorp, Trisha A

    2014-01-01

    The purposes of the present study were twofold: (1) to determine whether the model used for estimating the physical working capacity at the fatigue threshold (PWCFT) from electromyographic (EMG) amplitude data during incremental cycle ergometry could be applied to treadmill running to derive a new neuromuscular fatigue threshold for running, and (2) to compare the running velocities associated with the PWCFT, ventilatory threshold (VT), and respiratory compensation point (RCP). Fifteen college-aged subjects (21.5 ± 1.3 y, 68.7 ± 10.5 kg, 175.9 ± 6.7 cm) performed an incremental treadmill test to exhaustion, with bipolar surface EMG signals recorded from the vastus lateralis. There were significant (p < 0.05) mean differences in running velocities between the VT (11.3 ± 1.3 km h−1) and PWCFT (14.0 ± 2.3 km h−1), and between the VT and RCP (14.0 ± 1.8 km h−1), but not between the PWCFT and RCP. The findings of the present study indicated that the PWCFT model can be applied to a single continuous, incremental treadmill test to estimate the maximal running velocity that can be maintained prior to the onset of neuromuscular fatigue. In addition, these findings suggested that the PWCFT, like the RCP, may be used to differentiate the heavy from the severe domain of exercise intensity. (paper)

  14. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in the StatXact, SAS, and R PropCI package), and (4) the bootstrapping technique. Results The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease, and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0,0.073), StatXact (0,0.064), SAS Score method (0,0.057), R PropCI (0,0.062), and bootstrap (0,0.048). Similar trends were observed for other sample sizes. Conclusions When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and accompanying free open-source R package were developed to yield realistic estimates easily. This
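
    The original implementation is the R package bootLR; the sketch below mirrors the core idea in Python for the abstract's worked example (100 diseased, 100 non-diseased, specificity 60%).

```python
# Sketch of the bootstrapping idea for a negative-LR CI when observed
# sensitivity is 100% (mirroring the bootLR approach).
import numpy as np

rng = np.random.default_rng(7)
n_dis, n_nondis, spec = 100, 100, 0.60

# Lowest population sensitivity whose *median* sample sensitivity is 100%:
# P(X = n) = p**n >= 0.5  ->  p = 0.5 ** (1/n)   (~0.9931 for n = 100)
p_low = 0.5 ** (1.0 / n_dis)

neg_lr = []
for _ in range(20000):
    sens = rng.binomial(n_dis, p_low) / n_dis
    sp = rng.binomial(n_nondis, spec) / n_nondis
    if sp > 0:                                  # avoid division by zero
        neg_lr.append((1 - sens) / sp)
lo, hi = np.percentile(neg_lr, [2.5, 97.5])
print(f"negative LR 95% CI: ({lo:.3f}, {hi:.3f})")   # upper bound near 0.05
```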

  15. Possibility of use of plant test systems for estimation of degree of risk at radiation influence

    International Nuclear Information System (INIS)

    Gogebashvili, M.E; Ivanishvili, N.I.

    2011-01-01

    …(in some cases, frequency of occurrence) of the negative event (irradiation), and the damage (number of deviations from the norm or deadly outcomes) at event realization. The biological model offered by us is, in essence, a form of expert judgment for estimating risk under radiation exposure. From the standpoint of practical application of this test system, it is therefore important to know how strongly this influence is modified by various concomitant factors. Overall, the research performed shows that this model can serve as a convenient test system for studying the delayed effects of radiation and for determining the degree of risk involved in their formation.

  16. Teleseismic Lg of Semipalatinsk and Novaya Zemlya Nuclear Explosions Recorded by the GRF (Gräfenberg) Array: Comparison with Regional Lg (BRV) and their Potential for Accurate Yield Estimation

    Science.gov (United States)

    Schlittenhardt, J.

    A comparison of regional and teleseismic log rms (root-mean-square) Lg amplitude measurements has been made for 14 underground nuclear explosions from the East Kazakh test site recorded both by the BRV (Borovoye) station in Kazakhstan and the GRF (Gräfenberg) array in Germany. The log rms Lg amplitudes observed at the BRV regional station at a distance of 690 km and at the teleseismic GRF array at a distance exceeding 4700 km show very similar relative values (standard deviation 0.048 magnitude units) for underground explosions of different sizes at the Shagan River test site. This result, as well as the comparison of BRV rms Lg magnitudes (calculated from the log rms amplitudes using an appropriate calibration) with magnitude determinations for P waves from global seismic networks (standard deviation 0.054 magnitude units), points to a high precision in estimating the relative source sizes of explosions from Lg-based single-station data. Similar results were also obtained by other investigators (Patton, 1988; Ringdal et al., 1992) using Lg data from different stations at different distances. Additionally, GRF log rms Lg and P-coda amplitude measurements were made for a larger dataset from Novaya Zemlya and East Kazakh explosions, supplemented with mb(Lg) amplitude measurements using a modified version of Nuttli's (1973, 1986a) method. From this test of the relative performance of the three different magnitude scales, it was found that the Lg- and P-coda-based magnitudes performed equally well, whereas the modified Nuttli mb(Lg) magnitudes show greater scatter when compared to the worldwide mb reference magnitudes. Whether this result indicates that the rms amplitude measurements are superior to the zero-to-peak amplitude measurement of a single cycle used for the modified Nuttli method, however, cannot be finally assessed, since the calculated mb(Lg) magnitudes are only preliminary until appropriate attenuation corrections are available for the

  17. Production loss due to new subclinical mastitis in Dutch dairy cows estimated with a test-day model

    NARCIS (Netherlands)

    Halasa, T.; Nielen, M.; Roos, de S.; Hoorne, van R.; Jong, de G.; Lam, T.J.G.M.; Werven, van T.; Hogeveen, H.

    2009-01-01

    Milk, fat, and protein loss due to a new subclinical mastitis case may be economically important, and the objective of this study was to estimate this loss. The loss was estimated based on test-day (TD) cow records collected over a 1-yr period from 400 randomly selected Dutch dairy herds. After

  18. Effectiveness of Item Response Theory (IRT) Proficiency Estimation Methods under Adaptive Multistage Testing. Research Report. ETS RR-15-11

    Science.gov (United States)

    Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry

    2015-01-01

    The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…

  19. In-situ gas hydrate saturation estimated from various well logs at the Mount Elbert Gas Hydrate Stratigraphic Test Well, Alaska North Slope

    Science.gov (United States)

    Lee, M.W.; Collett, T.S.

    2011-01-01

    In 2006, the U.S. Geological Survey (USGS) completed detailed analysis and interpretation of available 2-D and 3-D seismic data and proposed a viable method for identifying sub-permafrost gas hydrate prospects within the gas hydrate stability zone in the Milne Point area of northern Alaska. To validate the predictions of the USGS and to acquire critical reservoir data needed to develop a long-term production testing program, a well was drilled at the Mount Elbert prospect in February 2007. Numerous well log data and cores were acquired to estimate in-situ gas hydrate saturations and reservoir properties. Gas hydrate saturations were estimated from various well logs such as nuclear magnetic resonance (NMR), P- and S-wave velocity, and electrical resistivity logs, along with pore-water salinity. Gas hydrate saturations from the NMR log agree well with those estimated from P- and S-wave velocity data. Because of the low salinity of the connate water and the low formation temperature, the resistivity of the connate water is comparable to that of shale. Therefore, the effect of clay should be accounted for to accurately estimate gas hydrate saturations from the resistivity data. Two highly gas hydrate-saturated intervals are identified: an upper ~43 ft zone with an average gas hydrate saturation of 54% and a lower ~53 ft zone with an average gas hydrate saturation of 50%; both zones reach a maximum of about 75% saturation. © 2009.
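
    The resistivity route mentioned above commonly runs through Archie's relation; the sketch below applies it with illustrative constants and log readings. As the abstract notes, low-salinity connate water calls for an additional clay correction that this plain form ignores.

```python
# Sketch of a resistivity-based hydrate saturation via Archie's relation,
# S_h = 1 - [(a * Rw) / (phi**m * Rt)]**(1/n). Constants and readings are
# illustrative, not values from the study.
def hydrate_saturation(rt, rw, phi, a=1.0, m=2.0, n=2.0):
    sw = ((a * rw) / (phi ** m * rt)) ** (1.0 / n)   # water saturation
    return 1.0 - min(sw, 1.0)                        # hydrate fills the rest

# Hypothetical deep-resistivity readings across a hydrate-bearing interval:
for rt in (3.0, 20.0, 60.0):                         # ohm-m
    print(f"Rt = {rt:5.1f} ohm-m -> S_h = {hydrate_saturation(rt, rw=1.2, phi=0.35):.2f}")
```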

  20. Estimation of In Situ Stress and Permeability from an Extended Leak-off Test

    Science.gov (United States)

    Nghiep Quach, Quoc; Jo, Yeonguk; Chang, Chandong; Song, Insun

    2016-04-01

    Among the many parameters needed to analyze the variety of geomechanical problems related to subsurface CO2 storage projects, two important ones are the in situ stress state and the permeability of the storage reservoirs and cap rocks. In situ stress is needed for investigating the potential risk of fault slip in the reservoir system, and permeability is needed for assessing reservoir flow characteristics and the sealing capability of cap rocks. We used an extended leak-off test (XLOT), which is often routinely conducted to assess borehole/casing integrity as well as the fracture gradient, to estimate both the in situ least principal stress magnitude and the in situ permeability at a CO2 storage test site offshore southeast Korea. The XLOT was conducted at a casing shoe depth (700 m below seafloor) within the cap rock consisting of mudstone, approximately 50 m above the interface between the cap rock and the storage reservoir. The test depth was cement-grouted and left for 4 days to cure. The hole was then drilled below the casing shoe to create a 4 m open-hole interval at the bottom. Water was injected at an approximately constant flow rate into the bottom interval through the casing using a hydraulic pump, during which pressure and flow rate were recorded continuously at the surface. The interval pressure (P) increased linearly with time (t) as water was injected. At some point, the slope of the P-t curve deviated from the linear trend, which indicates leak-off. Pressure reached its peak upon formation breakdown, followed by a gradual pressure decrease. Soon after the formation breakdown, the hole was shut in by pump shut-off, from which we determined the instantaneous shut-in pressure (ISIP). The ISIP was taken to be the magnitude of the in situ least principal stress (S3), which was determined to be 12.1 MPa. This value is lower than the lithostatic vertical stress, indicating that S3 is the least horizontal principal stress. The determined S3 magnitude will be used to characterize the

  2. Performing Pumping Test Data Analysis Applying Cooper-Jacob’s Method for Estimating of the Aquifer Parameters

    Directory of Open Access Journals (Sweden)

    Dana Khider Mawlood

    2016-06-01

    Single-well tests are more common than aquifer tests with an observation well, since the advantage of a single-well test is that the pumping test can be conducted on the production well in the absence of an observation well. One kind of single-well test, the step-drawdown test, is used to determine the efficiency and specific capacity of the well; however, in the case of a single-well test it is possible to estimate transmissivity, while the other parameter, storativity, is overestimated. The aim of this study is therefore to analyze data from four pumping tests located in the KAWRGOSK area using Cooper-Jacob's (1946) time-drawdown approximation of the Theis method to estimate the aquifer parameters, and also to determine the reasons affecting the reliability of the storativity value and the important practical aspects behind it.
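
    The Cooper-Jacob straight-line analysis referred to here reduces to fitting drawdown against log time, as sketched below with synthetic data; note how the storativity comes out physically implausible, illustrating the overestimation the study discusses for single-well tests.

```python
# Sketch of the Cooper-Jacob (1946) straight-line analysis: fit drawdown
# against log10(time) and convert slope and intercept to T and S. Data are
# synthetic; for a single-well test r is only an effective well radius.
import numpy as np

Q = 0.01                                   # pumping rate, m^3/s
r = 0.15                                   # effective radius, m
t = np.array([60, 120, 300, 600, 1800, 3600.0])    # s
s = np.array([0.52, 0.61, 0.73, 0.82, 0.96, 1.05]) # drawdown, m

slope, intercept = np.polyfit(np.log10(t), s, 1)   # ds per log10 cycle
T = 2.3 * Q / (4 * np.pi * slope)                  # transmissivity, m^2/s
t0 = 10 ** (-intercept / slope)                    # zero-drawdown intercept, s
S = 2.25 * T * t0 / r**2                           # storativity (dimensionless)
# S comes out implausibly large here, echoing the overestimation the
# abstract describes for single-well tests.
print(f"T = {T:.2e} m^2/s, S = {S:.2e}")
```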

  3. Constraints on LISA Pathfinder's Self-Gravity: Design Requirements, Estimates and Testing Procedures

    Science.gov (United States)

    Armano, M.; Audley, H.; Auger, G.; Baird, J.; Binetruy, P.; Born, M.; Bortoluzzi, M.; Brandt, Nico; Bursi, Alessandro; Slutsky, J.; et al.

    2016-01-01

    The LISA Pathfinder satellite was launched on 3 December 2015 toward the Sun-Earth first Lagrangian point (L1), where the LISA Technology Package (LTP), the main science payload, will be tested. The LTP achieves measurements of the differential acceleration of free-falling test masses (TMs) with sensitivity below 3 × 10^-14 m s^-2 Hz^-1/2 within the 1-30 mHz frequency band in one dimension. The spacecraft itself is responsible for the dominant differential gravitational field acting on the two TMs. Such a force interaction could contribute a significant amount of noise and thus threaten the achievement of the targeted free-fall level. We prevented this by balancing the gravitational forces to the sub-nm s^-2 level, guided by a protocol based on measurements of the position and mass of all parts that constitute the satellite, via finite element calculation tool estimates. In this paper, we introduce the gravitational balance requirements and design, and then discuss our predictions for the balance that will be achieved in flight.

  4. An estimation method for the measurement of ultraviolet radiation during nondestructive testing

    Science.gov (United States)

    Hosseinipanah, M.; Movafeghi, A.; Farvadin, D.

    2018-04-01

    Dye penetrant testing and magnetic particle testing are among the conventional NDT methods. For increased sensitivity, fluorescent dyes and particles can be used with ultraviolet (black) lights. UV flaw-detection lights have different spectra; with the help of photo-filters, the output light is confined to the UV-A and visible zones. UV-A light can be harmful to human eyes under some conditions. In this research, UV intensity and spectrum were obtained with a radio-spectrometer for two different UV flaw-detector lighting systems. According to standards such as ASTM E709, UV intensity must be at least 10 W/m2 at a distance of 30 cm. Based on our measurements, some lamps did not achieve this. On the other hand, the intensity and effective intensity of UV lights must remain below certain limits to prevent damage to unprotected eyes. NDT centers usually use some type of UV-measuring device. A method for estimating the effective intensity of UV light is proposed in this research.

  5. Estimation of RPV material embrittlement for Ukrainian NPP based on surveillance test data

    International Nuclear Information System (INIS)

    Revka, V.; Chyrko, L.; Chaikovsky, Yu.; Gulchuk, Yu.

    2012-01-01

    The WWER-1000 RPV material embrittlement has been evaluated using surveillance test data for a nuclear power plant operating in Ukraine. The RPV materials were studied after neutron (E > 0.5 MeV) irradiation up to a fluence of 22.9 × 10²² m⁻². Fracture toughness tests were performed using pre-cracked Charpy specimens for the beltline materials (base and weld metal). The maximum shift of the T₀ reference temperature is 44 °C. The radiation embrittlement rate, A_F, of the RPV materials was estimated using standard and reconstituted specimens. A comparison of the A_F values has shown good agreement between the specimen sets before and after reconstitution, both for base and weld metal. Furthermore, no nickel effect was revealed for the studied materials: in spite of the high nickel content, the radiation embrittlement rate for weld metal is no higher than for base metal with low nickel content. Fracture toughness analysis has shown that the Master Curve shape describes the temperature dependence of K_Jc values well. However, a higher scatter of K_Jc values is observed in comparison to the 95% tolerance bounds. (author)

  6. Development and testing of transfer functions for generating quantitative climatic estimates from Australian pollen data

    Science.gov (United States)

    Cook, Ellyn J.; van der Kaars, Sander

    2006-10-01

    We review attempts to derive quantitative climatic estimates from Australian pollen data, including the climatic envelope, climatic indicator and modern analogue approaches, and outline the need to pursue alternatives for use as input to, or validation of, simulations by models of past, present and future climate patterns. To this end, we have constructed and tested modern pollen-climate transfer functions for mainland southeastern Australia and Tasmania using the existing southeastern Australian pollen database, and for northern Australia using a new pollen database we are developing. After testing for statistical significance, 11 parameters were selected for mainland southeastern Australia, seven for Tasmania and six for northern Australia. The functions are based on weighted-averaging partial least squares regression, and their predictive ability was evaluated against modern observational climate data using leave-one-out cross-validation. Functions for summer, annual and winter rainfall and temperatures are most robust for southeastern Australia, while in Tasmania functions for minimum temperature of the coldest period, mean winter and mean annual temperature are the most reliable. In northern Australia, annual and summer rainfall and annual and summer moisture indices are the strongest. The validation of all functions means they can be applied to Quaternary pollen records from these three areas with confidence.
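
    The paper's transfer functions use weighted-averaging partial least squares (WA-PLS) evaluated by leave-one-out cross-validation. The sketch below illustrates the evaluation loop with ordinary PLS regression from scikit-learn as a stand-in for WA-PLS, on synthetic data; RMSEP and r² are the usual summary statistics:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(0)
    X = rng.random((60, 30))                       # modern pollen data: 60 sites, 30 taxa
    y = 5 + 20 * X[:, 0] + rng.normal(0, 1, 60)    # synthetic climate parameter

    loo, preds = LeaveOneOut(), np.empty_like(y)
    for train, test in loo.split(X):               # refit with each site held out once
        model = PLSRegression(n_components=3).fit(X[train], y[train])
        preds[test] = model.predict(X[test]).ravel()

    rmsep = np.sqrt(np.mean((preds - y) ** 2))     # prediction error under LOO-CV
    r2 = np.corrcoef(preds, y)[0, 1] ** 2
    print(f"RMSEP = {rmsep:.2f}, r^2 = {r2:.2f}")
    ```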

  7. Design-related bias in estimates of accuracy when comparing imaging tests: examples from breast imaging research

    International Nuclear Information System (INIS)

    Houssami, Nehmat; Ciatto, Stefano

    2010-01-01

    This work highlights how design-related factors can bias estimates of test accuracy in comparative imaging research. We chose two design factors, the selection of eligible subjects and the reference standard, to examine the effect of design limitations on estimates of accuracy. Estimates of sensitivity in a study of the comparative accuracy of mammography and ultrasound differed according to how subjects were selected. Comparison of a new imaging test with an existing test should distinguish whether the new test is to be used as a replacement for, or as an adjunct to, the conventional test, to guide the method of subject selection. The quality of the reference standard, examined in a meta-analysis of preoperative breast MRI, varied across studies and was associated with estimates of incremental accuracy. Potential solutions for dealing with the reference standard are outlined for settings where an ideal reference standard may not be available in all subjects. These examples from breast imaging research demonstrate that design-related bias, when comparing a new imaging test with a conventional imaging test, may skew accuracy in a direction that favours the new test, by overestimating the accuracy of the new test or by underestimating that of the conventional test. (orig.)

  8. Estimation of Cadmium uptake by tobacco plants from laboratory leaching tests.

    Science.gov (United States)

    Marković, Jelena P; Jović, Mihajlo D; Smičiklas, Ivana D; Šljivić-Ivanović, Marija Z; Smiljanić, Slavko N; Onjia, Antonije E; Popović, Aleksandar R

    2018-03-21

    The objective of the present study was to determine the impact of cadmium (Cd) concentration in the soil on its uptake by tobacco plants, and to compare the ability of diverse extraction procedures to determine Cd bioavailability and to predict soil-to-plant transfer and Cd plant concentrations. The pseudo-total digestion procedure, a modified Tessier sequential extraction and six standard single-extraction tests for estimating metal mobility and bioavailability were used for the leaching of Cd from a native soil, as well as from samples artificially contaminated over a wide range of Cd concentrations. The results of the various leaching tests were compared with each other, as well as with the amounts of Cd taken up by tobacco plants in pot experiments. In the native soil sample, most of the Cd was found in fractions not readily available under natural conditions, but with increasing pollution level, the Cd amounts in readily available forms increased. With increasing concentrations of Cd in the soil, the quantity of pollutant taken up by tobacco also increased, while the transfer factor (TF) decreased. Linear and non-linear empirical models were developed for predicting the uptake of Cd by tobacco plants based on the results of selected leaching tests. The non-linear equations for the ISO 14870 (diethylenetriaminepentaacetic acid extraction - DTPA), ISO/TS 21268-2 (CaCl₂ leaching procedure) and US EPA 1311 (toxicity characteristic leaching procedure - TCLP) single-step extractions, and for the sum of the first two fractions of the sequential extraction, exhibited the best correlation with the experimentally determined concentrations of Cd in plants over the entire range of pollutant concentrations. This approach can improve and facilitate the assessment of human exposure to Cd through tobacco smoking, but may also have wider applicability in predicting soil-to-plant transfer.
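
    As one hedged illustration of the kind of non-linear empirical model the abstract mentions (the paper's actual equations are not reproduced here), a power-law fit of plant Cd against single-extraction Cd also reproduces the falling transfer factor, since TF = a·x^(b-1) decreases when b < 1:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical paired data: Cd extracted by a single-step test (mg/kg soil)
    # vs. Cd measured in tobacco leaves (mg/kg dry matter). Values are made up.
    cd_extract = np.array([0.05, 0.1, 0.5, 1.0, 2.5, 5.0, 10.0])
    cd_plant   = np.array([0.30, 0.5, 1.6, 2.5, 4.8, 7.5, 11.8])

    def power_model(x, a, b):              # one common non-linear uptake form
        return a * np.power(x, b)

    (a, b), _ = curve_fit(power_model, cd_extract, cd_plant, p0=(1.0, 0.8))
    print(f"Cd_plant ~ {a:.2f} * Cd_extract^{b:.2f}")

    # The decreasing transfer factor TF = Cd_plant / Cd_extract follows from b < 1:
    print(cd_plant / cd_extract)
    ```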

  9. A Study to Estimate the Effectiveness of Visual Testing Training for Aviation Maintenance Management

    Science.gov (United States)

    Law, Lewis Lyle

    2007-01-01

    The Air Commerce Act of 1926 set the beginning for standards in aviation maintenance. Even after deregulation in the late 1970s, maintenance standards and requirements still have not moved far from their initial criteria. After a potential candidate completes Federal Aviation Administration training prerequisites, they may test for their Airframe and Powerplant (A&P) certificate. After performing maintenance in the aviation industry for a minimum of three years, the technician may then test for their Inspection Authorization (IA). After receiving their Airframe and Powerplant certificate, a technician is said to have a license to perform. At no time within the three years to eligibility for Inspection Authorization are they required to attend higher-level inspection training. What a technician learns in the aviation maintenance industry is handed down from a seasoned technician to the new hire or is developed from lessons learned on the job. Only in Europe have the Joint Aviation Authorities (JAA) required higher-level training for their aviation maintenance technicians in order to control maintenance-related accidents (Lu, 2005). Throughout the 1990s both the General Accounting Office (GAO) and the National Transportation Safety Board (NTSB) made public that the FAA is historically understaffed (GAO, 1996). In a safety recommendation the NTSB stated "The Safety Board continues to lack confidence in the FAA's commitment to provide effective quality assurance and safety oversight of the ATC system (NTSB, 1990)." The Federal Aviation Administration (FAA) has been known to be proactive in creating safer skies. With such reports, one would expect the FAA also to be proactive in developing more stringent inspection training for aviation maintenance technicians. The purpose of this study is to estimate the effectiveness of higher-level inspection training, such as Visual Testing (VT) for aviation maintenance technicians, to improve the safety of aircraft and to make

  10. THE IMPORTANCE OF THE STANDARD SAMPLE FOR ACCURATE ESTIMATION OF THE CONCENTRATION OF NET ENERGY FOR LACTATION IN FEEDS ON THE BASIS OF GAS PRODUCED DURING THE INCUBATION OF SAMPLES WITH RUMEN LIQUOR

    Directory of Open Access Journals (Sweden)

    T ŽNIDARŠIČ

    2003-10-01

    The aim of this work was to examine the necessity of using a standard sample in the Hohenheim gas test. During a three-year period, 24 runs of forage samples were incubated with rumen liquor in vitro. Besides the forage samples, the standard hay sample provided by the Hohenheim University (HFT-99) was included in the experiment. Half of the runs were incubated with rumen liquor of cattle and half with rumen liquor of sheep. Gas produced during the 24 h incubation of the standard sample was measured and compared to the declared value of sample HFT-99. Besides HFT-99, 25 test samples with known digestibility coefficients determined in vivo were included in the experiment. Based on the gas production of HFT-99, it was found that the donor animal (cattle or sheep) did not significantly affect the activity of rumen liquor (41.4 vs. 42.2 ml of gas per 200 mg dry matter, P>0.1). Neither were the differences between years (41.9, 41.2 and 42.3 ml of gas per 200 mg dry matter, P>0.1) significant. However, a variability of about 10% (from 38.9 to 43.7 ml of gas per 200 mg dry matter) was observed between runs. In the present experiment, the gas production of HFT-99 was about 6% lower than the value obtained by the Hohenheim University (41.8 vs. 44.43 ml per 200 mg dry matter), indicating a systematic error between the laboratories. In the case of the twenty-five test samples, correction on the basis of the standard sample reduced the average difference of the in vitro estimates of net energy for lactation (NEL) from the in vivo determined values. It was concluded that, due to variation between runs and systematic differences in rumen liquor activity between the two laboratories, the results of the Hohenheim gas test have to be corrected on the basis of the standard sample.
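
    One plausible reading of "corrected on the basis of the standard sample" is a proportional rescaling by the declared-to-measured ratio of HFT-99; this is an assumption about the exact form of the correction, sketched below with the numbers quoted in the abstract:

    ```python
    declared_hft99 = 44.43   # ml gas / 200 mg DM, value assigned by Hohenheim
    measured_hft99 = 41.8    # mean gas production of HFT-99 in the local lab

    def correct(gas_ml_per_200mg, measured_std=measured_hft99,
                declared_std=declared_hft99):
        """Rescale a test-sample gas volume by the declared-to-measured ratio of
        the standard, removing run- and lab-level offsets (assumed correction)."""
        return gas_ml_per_200mg * declared_std / measured_std

    print(correct(38.0))   # a raw test-sample reading of 38.0 ml becomes ~40.4 ml
    ```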

  11. Estimating Hydraulic Conductivities in a Fractured Shale Formation from Pressure Pulse Testing and 3d Modeling

    Science.gov (United States)

    Courbet, C.; DICK, P.; Lefevre, M.; Wittebroodt, C.; Matray, J.; Barnichon, J.

    2013-12-01

    In the framework of its research on the deep disposal of radioactive waste in shale formations, the French Institute for Radiological Protection and Nuclear Safety (IRSN) has developed a large array of in situ programs concerning the confining properties of shales in its underground research laboratory at Tournemire (SW France). One of its aims is to evaluate the occurrence and the processes controlling radionuclide migration through the host rock, from the disposal system to the biosphere. Past research programs carried out at Tournemire covered mechanical, hydro-mechanical and physico-chemical properties of the Tournemire shale as well as water chemistry and the long-term behaviour of the host rock. Studies show that fluid circulation in the undisturbed matrix is very slow (hydraulic conductivity of 10⁻¹⁴ to 10⁻¹⁵ m s⁻¹). However, recent work related to the occurrence of small-scale fractures and clay-rich fault gouges indicates that fluid circulation may have been significantly modified in the vicinity of such features. To assess the transport properties associated with such faults, IRSN designed a series of in situ and laboratory experiments to evaluate the contribution of both diffusive and advective processes to water and solute flux through a clay-rich fault zone (fault core and damaged zone) and in an undisturbed shale formation. As part of these studies, Modular Mini-Packer System (MMPS) hydraulic testing was conducted in multiple boreholes to characterize hydraulic conductivities within the formation. Pressure data collected during the hydraulic tests were analyzed using the nSIGHTS (n-dimensional Statistical Inverse Graphical Hydraulic Test Simulator) code to estimate hydraulic conductivity and formation pressures of the tested intervals. Preliminary results indicate hydraulic conductivities of 5 × 10⁻¹² m s⁻¹ in the fault core and damaged zone and 10⁻¹⁴ m s⁻¹ in the adjacent undisturbed shale. Furthermore, when compared with neutron porosity data from borehole

  12. Application of Integral Pumping Tests to estimate the influence of losing streams on groundwater quality

    Science.gov (United States)

    Leschik, S.; Musolff, A.; Reinstorf, F.; Strauch, G.; Schirmer, M.

    2009-05-01

    Urban streams receive effluents from wastewater treatment plants and untreated wastewater during combined sewer overflow events. In the case of losing streams, substances which originate from wastewater can reach the groundwater and deteriorate its quality. The estimation of mass flow rates M_ex from losing streams to the groundwater is important to support groundwater management strategies, but is a challenging task. Variable inflow of wastewater with time-dependent concentrations of wastewater constituents causes a variable water composition in urban streams. Heterogeneities in the structure of the streambed and the connected aquifer lead, in combination with this variable water composition, to heterogeneous concentration patterns of wastewater constituents in the vicinity of urban streams. Groundwater investigation methods based on conventional point sampling may yield unreliable results under these conditions. Integral Pumping Tests (IPT) can overcome the problem of heterogeneous concentrations in an aquifer by increasing the sampled volume. Long-duration pumping (several days) and simultaneous sampling yields reliable average concentrations C_av and mass flow rates M_cp for virtual control planes perpendicular to the natural flow direction. We applied the IPT method to estimate M_ex for a stream section in Leipzig (Germany). The investigated stream is strongly influenced by combined sewer overflow events. Four pumping wells were installed up- and downstream of the stream section and operated for a period of five days. The study focused on four inorganic (potassium, chloride, nitrate and sulfate) and two organic (caffeine and technical nonylphenol) wastewater constituents with different transport properties. The obtained concentration-time series were used in combination with a numerical flow model to estimate M_cp for the respective wells. The difference of the M_cp values between up- and downstream wells yields M_ex of wastewater constituents that increase

  13. Validation and Refinement of Prediction Models to Estimate Exercise Capacity in Cancer Survivors Using the Steep Ramp Test

    NARCIS (Netherlands)

    Stuiver, Martijn M.; Kampshoff, Caroline S.; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J. M.; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M.

    2017-01-01

    Objective: To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (VO2peak) and peak power output (W-peak).

  14. Development of the Korean Adult Reading Test (KART) to estimate premorbid intelligence in dementia patients.

    Directory of Open Access Journals (Sweden)

    Dahyun Yi

    We aimed to develop a word-reading test for Korean-speaking adults using irregularly pronounced words that would be useful for estimating premorbid intelligence. A linguist who specialized in Korean phonology selected 94 words that have an irregular relationship between orthography and phonology. Sixty cognitively normal elderly (CN) subjects and 31 patients with Alzheimer's disease (AD) were asked to read the words aloud and were administered the Wechsler Adult Intelligence Scale, 4th edition, Korean version (K-WAIS-IV). Among the 94 words, 50 words that did not show a significant difference between the CN and AD groups were selected and constituted the KART. Using the 30-subject CN calculation group (CNc), a linear regression equation was obtained in which the observed full-scale IQ (FSIQ) was regressed on the reading errors of the KART, with education included as an additional variable. When the regression equation computed from the CNc was applied to the 30 CN individuals of the validation group (CNv), the predicted FSIQ adequately fit the observed FSIQ (R² = 0.63). In addition, an independent-sample t-test showed that the KART-predicted IQs were not significantly different between the CNv and AD groups, whereas the performance of the AD group was significantly worse on the observed IQs. An extended validation of the KART was then performed with a separate sample consisting of 84 CN subjects, 56 elderly with mild cognitive impairment (MCI) and 43 AD patients, who were administered comprehensive neuropsychological assessments in addition to the KART. When the equation obtained from the CNc was applied to the extended validation sample, the KART-predicted IQs of the AD, MCI and CN groups did not significantly differ, whereas their current global cognition scores differed significantly between the groups. In conclusion, the results support the validity of KART-predicted IQ as an index of premorbid IQ in individuals with AD.
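
    The prediction equation is an ordinary linear regression of observed FSIQ on KART reading errors, with education as an additional predictor. A minimal sketch with made-up calibration data (the paper's coefficients are not reproduced):

    ```python
    import numpy as np

    # Hypothetical calibration data for a 30-subject CNc group: KART reading
    # errors, years of education, and observed K-WAIS-IV FSIQ (all made up).
    rng = np.random.default_rng(1)
    errors = rng.integers(0, 40, 30)
    edu = rng.integers(6, 19, 30)
    fsiq = 130 - 0.9 * errors + 1.2 * edu + rng.normal(0, 5, 30)

    # FSIQ regressed on reading errors, with education as an additional variable:
    X = np.column_stack([np.ones(30), errors, edu])
    beta, *_ = np.linalg.lstsq(X, fsiq, rcond=None)

    def kart_predicted_iq(n_errors, years_edu):
        return beta @ np.array([1.0, n_errors, years_edu])

    print(f"predicted premorbid FSIQ: {kart_predicted_iq(12, 16):.1f}")
    ```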

  15. Accurate quantum chemical calculations

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  16. Testing and estimating time-varying elasticities of Swiss gasoline demand

    International Nuclear Information System (INIS)

    Neto, David

    2012-01-01

    This paper is intended to test and estimate time-varying elasticities for gasoline demand in Switzerland. For this purpose, a smooth time-varying cointegrating parameters model is investigated in order to describe smooth mutations of the Swiss gasoline demand. The methodology, based on Chebyshev polynomials, is rigorously outlined. Our empirical findings indicate that the time-invariance assumption does not hold for long-run price and income elasticities. Furthermore, they highlight that gasoline demand passed through periods of sensitivity and non-sensitivity with respect to the price. Our empirical statements are of great importance for assessing the performance of a gasoline tax as an instrument of CO₂ reduction policy. Indeed, such an instrument can contribute to reducing emissions of greenhouse gases only if demand is not fully inelastic with respect to the price. Our results suggest that such a carbon tax would not always be suitable, since the price elasticity is found to be unstable over time and not always significant.
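
    The time-varying cointegrating coefficients are built from Chebyshev polynomials in rescaled time. A sketch of how such a basis enters the design matrix; the variable names and the choice of k are illustrative, not the paper's specification:

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    T = 120                              # e.g. a quarterly sample
    tau = np.linspace(-1, 1, T)          # time rescaled to the Chebyshev domain

    # Basis of the first k Chebyshev polynomials over the sample; a time-varying
    # elasticity is written as beta(t) = sum_j c_j * T_j(tau_t).
    k = 4
    basis = C.chebvander(tau, k - 1)     # shape (T, k)

    # With log price ln_p as a regressor, the time-varying term enters the
    # regression as k interaction columns ln_p * T_j(tau); testing
    # c_1 = ... = c_{k-1} = 0 is a test of time-invariance of the elasticity.
    ln_p = np.random.default_rng(2).normal(size=T)   # placeholder price series
    design_cols = basis * ln_p[:, None]
    print(design_cols.shape)             # (120, 4) columns to add to the model
    ```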

  17. Two-phase flow patterns recognition and parameters estimation through natural circulation test loop image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Mesquita, R.N.; Libardi, R.M.P.; Masotti, P.H.F.; Sabundjian, G.; Andrade, D.A.; Umbehaun, P.E.; Torres, W.M.; Conti, T.N.; Macedo, L.A. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil). Nuclear Engineering Center], e-mail: rnavarro@ipen.br

    2009-07-01

    Visualization of natural circulation test loop cycles is used to study two-phase flow patterns associated with phase transients and static flow instabilities. Experimental studies on natural circulation flow were originally related to accident and transient simulations for light-water-cooled nuclear reactor systems. In this regime, fluid circulation is mainly caused by a driving force ('thermal head') which arises from density differences due to the temperature gradient. The natural circulation phenomenon has been important for providing residual heat removal in cases of loss of pump power or plant shutdown in nuclear power plant accidents, and the new generation of compact nuclear reactors includes natural circulation of the refrigerant fluid as a safety mechanism in their designs. Two-phase flow patterns have been studied for many decades, and the related instabilities have recently been the object of special attention. The experimental facility is an all-glass loop of cylindrical tubes which contains about twelve liters of demineralized water, a heat source consisting of an electrical-resistance immersion heater controlled by a Variac, and a helicoidal heat exchanger working as the cold source. Data are obtained through thermocouples distributed over the loop and CCD cameras. Artificial-intelligence-based algorithms are used to improve (bubble) border detection and pattern recognition, in order to estimate and characterize phase-transition patterns and correlate them with the periodic static instability (chugging) cycle observed in this circuit. Most initial results show good agreement with previous numerical studies in this same facility. (author)

  18. Two-phase flow patterns recognition and parameters estimation through natural circulation test loop image analysis

    International Nuclear Information System (INIS)

    Mesquita, R.N.; Libardi, R.M.P.; Masotti, P.H.F.; Sabundjian, G.; Andrade, D.A.; Umbehaun, P.E.; Torres, W.M.; Conti, T.N.; Macedo, L.A.

    2009-01-01

    Visualization of natural circulation test loop cycles is used to study two-phase flow patterns associated with phase transients and static flow instabilities. Experimental studies on natural circulation flow were originally related to accident and transient simulations for light-water-cooled nuclear reactor systems. In this regime, fluid circulation is mainly caused by a driving force ('thermal head') which arises from density differences due to the temperature gradient. The natural circulation phenomenon has been important for providing residual heat removal in cases of loss of pump power or plant shutdown in nuclear power plant accidents, and the new generation of compact nuclear reactors includes natural circulation of the refrigerant fluid as a safety mechanism in their designs. Two-phase flow patterns have been studied for many decades, and the related instabilities have recently been the object of special attention. The experimental facility is an all-glass loop of cylindrical tubes which contains about twelve liters of demineralized water, a heat source consisting of an electrical-resistance immersion heater controlled by a Variac, and a helicoidal heat exchanger working as the cold source. Data are obtained through thermocouples distributed over the loop and CCD cameras. Artificial-intelligence-based algorithms are used to improve (bubble) border detection and pattern recognition, in order to estimate and characterize phase-transition patterns and correlate them with the periodic static instability (chugging) cycle observed in this circuit. Most initial results show good agreement with previous numerical studies in this same facility. (author)

  19. Estimation of an Examinee's Ability in the Web-Based Computerized Adaptive Testing Program IRT-CAT

    Directory of Open Access Journals (Sweden)

    Yoon-Hwan Lee

    2006-11-01

    We developed a program to estimate an examinee's ability in order to provide freely available access to a web-based computerized adaptive testing (CAT) program. We used PHP and JavaScript as the programming languages, PostgreSQL as the database management system on an Apache web server, and Linux as the operating system. A system which allows for user input, searching within inputted items, and test creation was constructed. We performed ability estimation on each test based on a Rasch model and two- or three-parameter logistic models. Our system provides an algorithm for web-based CAT, replacing previous personal-computer-based ones, and makes it possible to estimate an examinee's ability immediately at the end of the test.
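
    For the logistic models mentioned here, ability is typically estimated by maximizing the likelihood of the observed response pattern. A Newton-Raphson sketch for the two-parameter logistic model is shown below; the item parameters are made up, and the update diverges for all-correct or all-wrong patterns, which real CAT systems handle separately:

    ```python
    import numpy as np

    def estimate_theta(responses, a, b, n_iter=20):
        """Maximum-likelihood ability estimate under a 2-parameter logistic model.

        responses : 0/1 vector of item scores
        a, b      : item discrimination and difficulty vectors
        """
        u = np.asarray(responses, dtype=float)
        theta = 0.0
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(correct | theta)
            grad = np.sum(a * (u - p))                   # d logL / d theta
            info = np.sum(a**2 * p * (1 - p))            # Fisher information
            theta += grad / info                         # Newton-Raphson step
        return theta, 1.0 / np.sqrt(info)                # estimate and its SE

    a = np.array([1.0, 1.4, 0.8, 1.2, 1.1])              # illustrative items
    b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])
    print(estimate_theta([1, 1, 1, 0, 0], a, b))
    ```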

  20. Development of the town data base: Estimates of exposure rates and times of fallout arrival near the Nevada Test Site

    International Nuclear Information System (INIS)

    Thompson, C.B.; McArthur, R.D.; Hutchinson, S.W.

    1994-09-01

    As part of the U.S. Department of Energy's Off-Site Radiation Exposure Review Project, the time of fallout arrival and the H+12 exposure rate were estimated for populated locations in Arizona, California, Nevada, and Utah that were affected by fallout from one or more nuclear tests at the Nevada Test Site. Estimates of exposure rate were derived from measured values recorded before and after each test by fallout monitors in the field. The estimate for a given location was obtained by retrieving from a data base all measurements made in the vicinity, decay-correcting them to H+12, and calculating an average. Estimates were also derived from maps produced after most events that show isopleths of exposure rate and time of fallout arrival. Both sets of isopleths on these maps were digitized, and kriging was used to interpolate values at the nodes of a 10-km grid covering the pattern. The values at any location within the grid were then estimated from the values at the surrounding grid nodes. Estimates of dispersion (standard deviation) were also calculated. The Town Data Base contains the estimates for all combinations of location and nuclear event for which the estimated mean H+12 exposure rate was greater than three times background. A listing of the data base is included as an appendix. The information was used by other project task groups to estimate the radiation dose that off-site populations and individuals may have received as a result of exposure to fallout from Nevada nuclear tests
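
    Decay-correcting a field reading to H+12 requires a fallout decay law; the classic approximation is the t^-1.2 power law, used here as an assumption since the abstract does not state the project's exact correction:

    ```python
    import numpy as np

    def to_h12(exposure_rate, t_hours, n=1.2):
        """Decay-correct a fallout exposure-rate reading taken t_hours after the
        shot to H+12, assuming the usual R(t) ~ t^-n power law with n = 1.2
        (an assumption here; the project's actual correction may differ)."""
        return exposure_rate * (t_hours / 12.0) ** n

    # Hypothetical readings (mR/h) near one location at various times after a test:
    readings = np.array([55.0, 30.0, 8.0])
    times_h  = np.array([6.0, 10.0, 30.0])
    h12 = to_h12(readings, times_h)
    print(h12, h12.mean())   # individual H+12 estimates and their average
    ```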

  1. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    [...] to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared [...] of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical Near-Field Antenna Test Facility, it was concluded that the three latter factors are particularly important [...] using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured-beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility [...]

  2. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-02-12

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and in aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units are uniformly subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. Hydraulic-conductivity estimates simulated with AnalyzeHOLE for lithologic units across screened and cased intervals are as much as 100 times smaller than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable and, therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water-transmitting potential of carbonate-rock units relative

  3. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    Science.gov (United States)

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and in aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units are uniformly subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. Hydraulic-conductivity estimates simulated with AnalyzeHOLE for lithologic units across screened and cased intervals are as much as 100 times smaller than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable and, therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water-transmitting potential of carbonate-rock units relative
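
    Calibrating 50-150 interval conductivities against far fewer observations is an ill-posed inverse problem; Tikhonov regularization, as used here, stabilizes it by penalizing differences between adjacent intervals. A toy sketch with a linear stand-in for the AnalyzeHOLE forward model (all matrices and data below are synthetic):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Toy forward model: observed borehole responses are a linear mixing of the
    # (log10) hydraulic conductivities of depth intervals.
    rng = np.random.default_rng(3)
    n_params, n_obs = 60, 25
    A = rng.random((n_obs, n_params))
    k_true = rng.normal(-7, 1.5, n_params)            # log10 K, m/s
    obs = A @ k_true + rng.normal(0, 0.05, n_obs)

    alpha = 1.0                                       # regularization weight

    def residuals(k):
        misfit = A @ k - obs                          # data misfit
        smooth = alpha * np.diff(k)                   # Tikhonov: penalize jumps
        return np.concatenate([misfit, smooth])       # between adjacent intervals

    fit = least_squares(residuals, x0=np.full(n_params, -7.0))
    print(np.round(fit.x[:5], 2))                     # first few log10 K estimates
    ```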

  4. Bayesian estimation of test characteristics of real-time PCR, bacteriological culture and California mastitis test for diagnosis of intramammary infections with Staphylococcus aureus in dairy cattle at routine milk recordings.

    Science.gov (United States)

    Mahmmod, Yasser S; Toft, Nils; Katholm, Jørgen; Grønbæk, Carsten; Klaas, Ilka C

    2013-11-01

    Danish farmers can order a real-time PCR mastitis diagnostic test on routinely taken cow-level samples from milk recordings. Validation of its performance in comparison to conventional mastitis diagnostics under field conditions is essential for efficient control of intramammary infections (IMI) with Staphylococcus aureus (S. aureus). Therefore, the objective of this study was to estimate the sensitivity (Se) and specificity (Sp) of real-time PCR, bacterial culture (BC) and the California mastitis test (CMT) for the diagnosis of naturally occurring IMI with S. aureus in routinely collected milk samples, using latent class analysis (LCA) to avoid the assumption of a perfect reference test. Using systematic random sampling, a total of 609 lactating dairy cows were selected from 6 dairy herds with a bulk tank milk PCR cycle threshold (Ct) value ≤39 for S. aureus. At routine milk recordings, automatically obtained cow-level (composite) milk samples were analyzed by PCR, and at the same milking 2436 quarter milk samples were collected aseptically for BC and CMT. Results showed that 140 cows (23%) were positive for S. aureus IMI by BC, while 170 cows (28%) were positive by PCR. Estimates of Se and Sp for PCR were higher than the estimates for BC and CMT. Se(CMT) was higher than Se(BC); however, Sp(BC) was higher than Sp(CMT). Se(PCR) was 91%, while Se(BC) was 53% and Se(CMT) was 61%. Sp(PCR) was 99%, while Sp(BC) was 89% and Sp(CMT) was 65%. In conclusion, PCR performs better than the conventional diagnostic tests (BC and CMT), suggesting its usefulness as a routine test for accurate diagnosis of S. aureus IMI in dairy cows at routine milk recordings. The use of LCA provided estimates of the test characteristics of two current diagnostic tests (BC, CMT) and a novel technique (real-time PCR) for diagnosing S. aureus IMI under field conditions at routine milk recordings in Denmark.
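
    The paper estimates Se and Sp with Bayesian latent class analysis; as a simpler frequentist cousin, a three-test latent class model under conditional independence can be fit by expectation-maximization. The counts below are synthetic, not the study's data:

    ```python
    import numpy as np
    from itertools import product

    # Counts of the 8 possible (test1, test2, test3) result patterns across
    # animals (made-up numbers for illustration only).
    patterns = np.array(list(product([1, 0], repeat=3)))        # 8 x 3
    counts = np.array([80, 15, 12, 23, 9, 30, 55, 276], dtype=float)

    pi, se, sp = 0.3, np.array([.8, .6, .7]), np.array([.9, .85, .7])
    for _ in range(500):                                        # EM iterations
        # E-step: P(infected | pattern) under conditional independence
        p_pos = np.prod(np.where(patterns == 1, se, 1 - se), axis=1)
        p_neg = np.prod(np.where(patterns == 1, 1 - sp, sp), axis=1)
        w = pi * p_pos / (pi * p_pos + (1 - pi) * p_neg)
        # M-step: update prevalence, sensitivities, specificities
        n_pos = np.sum(counts * w)
        pi = n_pos / counts.sum()
        se = (counts * w) @ patterns / n_pos
        sp = (counts * (1 - w)) @ (1 - patterns) / (counts.sum() - n_pos)

    print(f"prevalence={pi:.2f}, Se={np.round(se, 2)}, Sp={np.round(sp, 2)}")
    ```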

  5. A test of alternative estimators for volume at time 1 from remeasured point samples

    Science.gov (United States)

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1993-01-01

    Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293). takes advantage of additional sample...

  6. A review of the effects on IRT item parameter estimates with a focus on misbehaving common items in test equating

    Directory of Open Access Journals (Sweden)

    Michalis P Michaelides

    2010-10-01

    Many studies have investigated the topic of change or drift in item parameter estimates in the context of Item Response Theory. Content effects, such as instructional variation and curricular emphasis, as well as context effects, such as the wording, position, or exposure of an item, have been found to impact item parameter estimates. The issue becomes more critical when items with estimates exhibiting differential behavior across test administrations are used as common items for deriving equating transformations. This paper reviews the types of effects on IRT item parameter estimates and focuses on the impact of misbehaving or aberrant common items on equating transformations. Implications relating to test validity and the judgmental nature of the decision to keep or discard aberrant common items are discussed, with recommendations for future research into more informed and formal ways of dealing with misbehaving common items.

  7. A Review of the Effects on IRT Item Parameter Estimates with a Focus on Misbehaving Common Items in Test Equating.

    Science.gov (United States)

    Michaelides, Michalis P

    2010-01-01

    Many studies have investigated the topic of change or drift in item parameter estimates in the context of item response theory (IRT). Content effects, such as instructional variation and curricular emphasis, as well as context effects, such as the wording, position, or exposure of an item, have been found to impact item parameter estimates. The issue becomes more critical when items with estimates exhibiting differential behavior across test administrations are used as common items for deriving equating transformations. This paper reviews the types of effects on IRT item parameter estimates and focuses on the impact of misbehaving or aberrant common items on equating transformations. Implications relating to test validity and the judgmental nature of the decision to keep or discard aberrant common items are discussed, with recommendations for future research into more informed and formal ways of dealing with misbehaving common items.

  8. Predictive value of preoperative tests in estimating difficult intubation in patients who underwent direct laryngoscopy in ear, nose, and throat surgery

    Directory of Open Access Journals (Sweden)

    Osman Karakus

    2015-04-01

    BACKGROUND AND OBJECTIVES: The predictive value of preoperative tests in estimating difficult intubation may differ in laryngeal pathologies. Patients who had undergone direct laryngoscopy (DL) were reviewed, and the predictive value of preoperative tests in estimating difficult intubation was investigated. METHODS: Preoperative and intraoperative anesthesia record forms and the computerized system of the hospital were screened. RESULTS: A total of 2611 patients were assessed. In 7.4% of the patients, difficult intubations were detected. Difficult intubations were encountered in some of the patients with Mallampati scoring (MS) system Class 4 (50%), Cormack-Lehane classification (CLS) Grade 4 (95.7%), previous knowledge of difficult airway (86.2%), restricted neck movements (cervical ROM) (75.8%), short thyromental distance (TMD) (81.6%) and vocal cord mass (49.5%), as indicated in parentheses (p < 0.0001). MS had a low sensitivity, while restricted cervical ROM, presence of a vocal cord mass, short thyromental distance and MS each had a relatively higher positive predictive value. The incidence of difficult intubations increased 6.159- and 1.736-fold with each level of increase in CLS grade and MS class, respectively. When all tests were considered in combination, difficult intubation could be classified accurately in 96.3% of cases. CONCLUSION: Test results predicting difficult intubations in cases with DL largely overlapped with the results reported in the literature for patient populations in general. Differences in some test results compared with those of the general population might stem from concomitant underlying laryngeal pathological conditions in patient populations with difficult intubation.

  9. [Predictive value of preoperative tests in estimating difficult intubation in patients who underwent direct laryngoscopy in ear, nose, and throat surgery].

    Science.gov (United States)

    Karakus, Osman; Kaya, Cengiz; Ustun, Faik Emre; Koksal, Ersin; Ustun, Yasemin Burcu

    2015-01-01

    The predictive value of preoperative tests in estimating difficult intubation may differ in laryngeal pathologies. Patients who had undergone direct laryngoscopy (DL) were reviewed, and the predictive value of preoperative tests in estimating difficult intubation was investigated. Preoperative and intraoperative anesthesia record forms and the computerized system of the hospital were screened. A total of 2611 patients were assessed. In 7.4% of the patients, difficult intubations were detected. Difficult intubations were encountered in some of the patients with Mallampati scoring (MS) system Class 4 (50%), Cormack-Lehane classification (CLS) Grade 4 (95.7%), previous knowledge of difficult airway (86.2%), restricted neck movements (cervical ROM) (75.8%), short thyromental distance (TMD) (81.6%) and vocal cord mass (49.5%), as indicated in parentheses (p<0.0001). MS had a low sensitivity, while restricted cervical ROM, presence of a vocal cord mass, short thyromental distance and MS each had a relatively higher positive predictive value. The incidence of difficult intubations increased 6.159- and 1.736-fold with each level of increase in CLS grade and MS class, respectively. When all tests were considered in combination, difficult intubation could be classified accurately in 96.3% of cases. Test results predicting difficult intubations in cases with DL largely overlapped with the results reported in the literature for patient populations in general. Differences in some test results compared with those of the general population might stem from concomitant underlying laryngeal pathological conditions in patient populations with difficult intubation.

  10. Estimation of uncertainty of a reference material for proficiency testing for the determination of total mercury in fish in nature

    International Nuclear Information System (INIS)

    Santana, L V; Sarkis, J E S; Ulrich, J C; Hortellani, M A

    2015-01-01

    We provide uncertainty estimates for the homogeneity and stability studies of a reference material used in a proficiency test for the determination of total mercury in fresh fish muscle tissue. Stability was estimated by linear regression and homogeneity by ANOVA. The results indicate that the reference material is both homogeneous and chemically stable over the short term. The total mercury concentration of the muscle tissue, with expanded uncertainty, was 0.294 ± 0.089 μg g⁻¹.
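
    Homogeneity by ANOVA typically follows the ISO Guide 35 scheme: the between-bottle standard deviation is extracted from the between- and within-bottle mean squares. A sketch with simulated replicate measurements (not the study's data):

    ```python
    import numpy as np

    # Hypothetical Hg results (ug/g) for 3 replicates from each of 10 bottles:
    rng = np.random.default_rng(4)
    data = rng.normal(0.294, 0.02, size=(10, 3))

    m, n = data.shape
    grand = data.mean()
    ms_between = n * np.sum((data.mean(axis=1) - grand) ** 2) / (m - 1)
    ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (m * (n - 1))

    # Between-bottle standard deviation, the homogeneity contribution to the
    # uncertainty budget (clipped at zero if MS_between < MS_within):
    s_bb = np.sqrt(max(ms_between - ms_within, 0.0) / n)
    print(f"MS_between={ms_between:.5f}, MS_within={ms_within:.5f}, s_bb={s_bb:.4f}")
    ```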

  11. The estimation of hemodynamic signals measured by fNIRS response to cold pressor test

    Science.gov (United States)

    Ansari, M. A.; Fazliazar, E.

    2018-04-01

    The estimation of cerebral hemodynamic signals has an important role in monitoring the stage of neurological diseases. Functional near-infrared spectroscopy (fNIRS) can be used for monitoring brain activity. fNIRS utilizes light in the near-infrared spectrum (650-1000 nm) to study the response of the brain vasculature to changes in neural activity, called neurovascular coupling, within the cortex when cognitive activation occurs. Neurovascular coupling may be disrupted in pathological brain conditions; therefore, fNIRS can also be used to diagnose such conditions or to monitor the efficacy of related treatments. The cold pressor test (CPT), in which the dominant hand or foot is immersed in ice water, can induce cortical activity, and the perception of pain induced by the CPT can be related to cortical neurovascular coupling. Hence, the variation of cortical hemodynamic signals during the CPT can serve as an indicator for studying neurovascular coupling. Here, we study the effect of pain induced by the CPT on the temporal variation of the concentrations of oxyhemoglobin [HbO2] and deoxyhemoglobin [Hb] in healthy brains. We use fNIRS data collected on the forehead during a CPT from 11 healthy subjects, and the averaged data are compared with post-stimulus pain rating scores. The results show that the variations of [Hb] and [HbO2] are positively correlated with the self-reported scores during the CPT, indicating that fNIRS can potentially be applied to study the decoupling of neurovascular processes in pathological brain conditions.
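
    Converting fNIRS optical-density changes at two wavelengths into Δ[HbO2] and Δ[Hb] is usually done with the modified Beer-Lambert law. The extinction coefficients and pathlength below are rough illustrative numbers, not values from the paper:

    ```python
    import numpy as np

    # Modified Beer-Lambert law at two wavelengths: dOD = (E @ [dHbO2, dHb]) * L,
    # where L = source-detector separation x differential pathlength factor.
    E = np.array([[0.69, 1.55],    # 760 nm: [eps_HbO2, eps_Hb], 1/(mM*cm) (rough)
                  [1.20, 0.79]])   # 850 nm: likewise illustrative
    L = 3.0 * 6.0                  # 3 cm separation x DPF of 6 -> effective cm

    d_od = np.array([0.010, 0.018])              # measured optical-density changes
    d_hbo2, d_hb = np.linalg.solve(E * L, d_od)  # invert the 2x2 system (mM)
    print(f"d[HbO2] = {d_hbo2*1000:.3f} uM, d[Hb] = {d_hb*1000:.3f} uM")
    ```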

  12. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.

  13. Spot Variance Path Estimation and its Application to High Frequency Jump Testing

    NARCIS (Netherlands)

    Bos, C.S.; Janus, P.; Koopman, S.J.

    2012-01-01

    This paper considers spot variance path estimation from datasets of intraday high-frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects, and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to

  14. Estimation of diagnostic performance of dementia screening tests: Mini-Mental State Examination, Mini-Cog, Clock Drawing test and Ascertain Dementia 8 questionnaire.

    Science.gov (United States)

    Yang, Li; Yan, Jing; Jin, Xiaoqing; Jin, Yu; Yu, Wei; Xu, Shanhu; Wu, Haibin; Xu, Ying; Liu, Caixia

    2017-05-09

    Dementia is one of the leading causes of dependence in the elderly. This study was conducted to estimate the diagnostic performance of dementia screening tests, including the Mini-Mental State Examination (MMSE), Mini-Cog, Clock Drawing Test (CDT) and Ascertain Dementia 8 questionnaire (AD8), using Bayesian models. A total of 2015 participants aged 65 years or more in eastern China were enrolled. The four screening tests were administered and scored by specifically trained psychiatrists. The prior information on the sensitivity and specificity of each screening test was updated via Bayes' theorem to a posterior distribution, and the results were compared with estimates based on the National Institute on Aging-Alzheimer's Association criteria (NIA-AA). The diagnostic characteristics of the Mini-Cog, including sensitivity, specificity, PPV, NPV and especially the Youden index, performed well, even better than combinations of several screening tests. The Mini-Cog, with its excellent screening characteristics and short administration time, could be considered for use as a screening test to help detect patients with cognitive impairment or dementia early, and the Bayesian method was shown to be a suitable tool for evaluating dementia screening tests.

  15. Estimate of whole body doses for Lynette Tew and Becky Farnsworth from Nevada Test Site local fallout

    International Nuclear Information System (INIS)

    Anspaugh, L.R.; Ng, Y.C.

    1985-01-01

    Lynette Tew and Becky Farnsworth are decedents whose relatives are litigants in Timothy vs. US. The litigants allege that the decedents were harmed by radiation doses received as a result of local fallout from the testing of nuclear weapons at the Nevada Test Site. We have calculated a best estimate of the whole-body dose received by each decedent from external exposure and from the ingestion of radionuclides with food. In each case the dose via ingestion is trivial compared to the external dose. For Lynette Tew the dose estimate is 0.28 rads; for Becky Farnsworth it is 0.0035 rads. 23 references, 4 tables

  16. Monte Carlo Method to Study Properties of Acceleration Factor Estimation Based on the Test Results with Varying Load

    Directory of Open Access Journals (Sweden)

    N. D. Tiannikova

    2014-01-01

    G.D. Kartashov has developed a technique for determining the functions that scale rapid-test results to the normal mode. Its feature is preliminary testing of products from one lot, including tests in alternating modes. The standard procedure of preliminary tests is as follows: n groups of products with m elements each start being tested in the normal mode and, after a failure of one product in a group, the remaining products are tested in the accelerated mode. In addition to tests in the alternating mode, tests in the constantly normal mode are conducted as well. The acceleration factor of rapid tests for this type of product, identical for any lot, is determined from such testing results of products from the same lot. A drawback of this technique is that tests have to be conducted in the alternating mode until all products fail, which is not always possible. To avoid this shortcoming, the Renyi criterion is offered: it allows the scaling functions to be determined from right-censored data, giving the opportunity to stop testing before all products have failed. In this work, statistical modeling of the acceleration factor estimation obtained through Renyi statistics minimization is implemented by the Monte Carlo method. The results of the modeling show that the acceleration factor estimate obtained through Renyi statistics minimization is acceptable for rather large n, but for small sample sizes a systematic bias of the acceleration factor estimate, which decreases as n grows, is observed for both distributions (exponential and Weibull). Therefore, the paper also presents calculated correction factors for the cases of the exponential and Weibull distributions.
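
    The abstract's point, that a small-sample estimator of the acceleration factor carries a systematic bias that shrinks as n grows, is easy to reproduce by Monte Carlo with a deliberately naive ratio-of-means estimator (not Kartashov's technique or the Renyi-based estimator):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    K_true = 3.0            # true acceleration factor: accelerated lifetimes are
    scale_normal = 100.0    # 3x shorter than normal-mode lifetimes

    def estimate_K(n):
        """One naive K estimate: ratio of mean lifetimes in the two modes."""
        normal = rng.exponential(scale_normal, n)
        accel = rng.exponential(scale_normal / K_true, n)
        return normal.mean() / accel.mean()

    for n in (5, 10, 50, 500):                  # sample size per mode
        est = np.array([estimate_K(n) for _ in range(20000)])
        print(f"n={n:4d}  mean K-hat={est.mean():.3f}  bias={est.mean()-K_true:+.3f}")
    ```

    For exponential lifetimes the expected value of this ratio is K·n/(n-1), so the positive bias visibly decays as n grows, mirroring the behavior described above.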

  17. The impact of interpreted flow regimes during constant head injection tests on the estimated transmissivity from injection tests and difference flow logging

    Energy Technology Data Exchange (ETDEWEB)

    Hjerne, Calle; Ludvigsson, Jan-Erik; Harrstroem, Johan [Geosigma AB, Uppsala (Sweden)

    2013-04-15

    A large number of constant head injection tests were carried out in the site investigation at Forsmark using the Pipe String System, PSS3. During the original evaluation of the tests the dominating transient flow regimes during both the injection and recovery period were interpreted together with estimation of hydraulic parameters. The flow regimes represent different flow and boundary conditions during the tests. Different boreholes or borehole intervals may display different distributions of flow regimes. In some boreholes good agreement was obtained between the results of the injection tests and difference flow logging with Posiva flow log (PFL) but in other boreholes significant discrepancies were found. The main objective of this project is to study the correlation between transient flow regimes from the injection tests and other borehole features such as transmissivity, depth, geology, fracturing etc. Another subject studied is whether observed discrepancies between estimated transmissivity from difference flow logging and injection tests can be correlated to interpreted flow regimes. Finally, a detailed comparison between transient and stationary evaluation of transmissivity from the injection tests in relation to estimated transmissivity from PFL tests in corresponding sections is made. Results from previous injection tests in 5 m sections in boreholes KFM04, KFM08A and KFM10A were used. Only injection tests above the (test-specific) measurement limit regarding flow rate are included in the analyses. For all of these tests transient flow regimes were interpreted. In addition, results from difference flow logging in the corresponding 5 m test sections were used. Finally, geological data of fractures together with rock and fracture zone properties have been used in the correlations. Flow regimes interpreted from the injection period of the tests are generally used in the correlations but deviations between the interpreted flow regimes from the injection and

  18. Polish Adult Reading Test (PART) - construction of Polish test for estimating the level of premorbid intelligence in schizophrenia.

    Science.gov (United States)

    Karakuła-Juchnowicz, Hanna; Stecka, Mariola

    2017-08-29

    In view of the unavailability in Poland of standardized methods to measure premorbid IQ, the aim of this work was to develop a Polish test for assessing the premorbid level of intelligence - PART (Polish Adult Reading Test) - and to measure its psychometric properties, such as validity and reliability, as well as to standardize it in a group of schizophrenia patients. The principles of the PART construction were based on the idea of the worldwide popular National Adult Reading Test by Hazel Nelson. The research comprised a group of 122 subjects (65 schizophrenia patients and 57 healthy people), aged 18-60 years, matched for age and gender. PART appears to be a method with high internal consistency and reliability, as measured by test-retest and inter-rater reliability, and a method with acceptable diagnostic and prognostic validity. The standardized procedures of PART have been investigated and described. Considering the psychometric values of PART and the short time of its administration, the test may be a useful diagnostic instrument for assessing the premorbid level of intelligence in schizophrenia patients.

  19. VHTRC experiment for verification test of H∞ reactivity estimation method

    Energy Technology Data Exchange (ETDEWEB)

    Fujii, Yoshio; Suzuki, Katsuo; Akino, Fujiyoshi; Yamane, Tsuyoshi; Fujisaki, Shingo; Takeuchi, Motoyoshi; Ono, Toshihiko [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1996-02-01

    This experiment was performed at the VHTRC to acquire data for verifying the H∞ reactivity estimation method. In this report, the experimental method, the measuring circuits and the data processing software are described in detail. (author)

  20. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  1. HIV testing during pregnancy: use of secondary data to estimate 2006 test coverage and prevalence in Brazil

    Directory of Open Access Journals (Sweden)

    Célia Landmann Szwarcwald

    This paper describes a methodological proposal based on secondary data and the main results of the HIV-Sentinel Study among childbearing women, carried out in Brazil during 2006. A probabilistic sample of childbearing women was selected in two stages. In the first stage, 150 health establishments were selected, stratified by municipality size (<50,000; 50,000-399,999; 400,000+). In the second stage, 100-120 women were selected systematically. Data collection was based on HIV-test results registered in prenatal cards and in hospital records. The analysis focused on the coverage of HIV testing during pregnancy and the HIV prevalence rate. Logistic regression models were used to test inequalities in HIV-testing coverage during pregnancy by macro-region of residence, municipality size, race, educational level and age group. The study included 16,158 women. Results were consistent with previous studies based on primary data collection. Among the women receiving prenatal care with HIV-test results registered in their prenatal cards, HIV prevalence was 0.41%. Coverage of HIV testing during pregnancy was 62.3% in the country as a whole, but ranged from 40.6% in the Northeast to 85.8% in the South. Significant differences according to race, educational level and municipality size were also found. The proposed methodology is low-cost, easy to apply, and permits identification of problems in routine service provision, in addition to monitoring compliance with Ministry of Health recommendations for prenatal care.

  2. Estimating diagnostic test accuracies for Brachyspira hyodysenteriae accounting for the complexities of population structure in food animals.

    Directory of Open Access Journals (Sweden)

    Sonja Hartnack

    For swine dysentery, which is caused by Brachyspira hyodysenteriae infection and is an economically important disease in intensive pig production systems worldwide, a perfect or error-free diagnostic test ("gold standard") is not available. In the absence of a gold standard, Bayesian latent class modelling is a well-established methodology for robust diagnostic test evaluation. In contrast to risk factor studies in food animals, where adjustment for within-group correlations is both usual and required for good statistical practice, diagnostic test evaluation studies rarely take such clustering aspects into account, which can result in misleading results. The aim of the present study was to estimate the test accuracies of a PCR originally designed for use as a confirmatory test, displaying a high diagnostic specificity, and of cultural examination for B. hyodysenteriae. This estimation was conducted based on results of 239 samples from 103 herds originating from routine diagnostic sampling. Using Bayesian latent class modelling comprising a hierarchical beta-binomial approach (which allowed prevalence to vary across individual herds as a herd-level random effect), robust estimates for the sensitivities of PCR and culture, as well as for the specificity of PCR, were obtained. The estimated diagnostic sensitivities (95% CI) of PCR and culture were 73.2% (62.3; 82.9) and 88.6% (74.9; 99.3), respectively. The estimated specificity of the PCR was 96.2% (90.9; 99.8). For test evaluation studies, a Bayesian latent class approach is well suited for addressing the considerable complexities of population structure in food animals.
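    A stripped-down sketch of a Bayesian latent class model of this general (Hui-Walter) type for two tests in PyMC, assuming conditional independence and omitting the paper's herd-level random effect for brevity; the cross-classified counts and priors are invented:

```python
# Two-test latent class model without a gold standard (single population).
import numpy as np
import pymc as pm

# Counts for (PCR+,cult+), (PCR+,cult-), (PCR-,cult+), (PCR-,cult-); made up.
counts = np.array([60, 10, 15, 154])

with pm.Model():
    prev = pm.Beta("prev", 1, 1)
    se_pcr, sp_pcr = pm.Beta("se_pcr", 2, 1), pm.Beta("sp_pcr", 2, 1)
    se_cul, sp_cul = pm.Beta("se_cul", 2, 1), pm.Beta("sp_cul", 2, 1)

    # Cell probabilities, marginalizing the latent true disease status.
    p = pm.math.stack([
        prev * se_pcr * se_cul + (1 - prev) * (1 - sp_pcr) * (1 - sp_cul),
        prev * se_pcr * (1 - se_cul) + (1 - prev) * (1 - sp_pcr) * sp_cul,
        prev * (1 - se_pcr) * se_cul + (1 - prev) * sp_pcr * (1 - sp_cul),
        prev * (1 - se_pcr) * (1 - se_cul) + (1 - prev) * sp_pcr * sp_cul,
    ])
    pm.Multinomial("obs", n=counts.sum(), p=p, observed=counts)
    trace = pm.sample(1000, tune=1000)
```

    In the paper's hierarchical extension, a single `prev` would be replaced by herd-specific prevalences drawn from a beta hyperprior, which is what makes the herd-level random effect explicit.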

  3. On the Estimation of Disease Prevalence by Latent Class Models for Screening Studies Using Two Screening Tests with Categorical Disease Status Verified in Test Positives Only

    Science.gov (United States)

    Chu, Haitao; Zhou, Yijie; Cole, Stephen R.; Ibrahim, Joseph G.

    2010-01-01

    To evaluate the probabilities of a disease state, ideally all subjects in a study should be diagnosed by a definitive diagnostic or gold standard test. However, since definitive diagnostic tests are often invasive and expensive, it is generally unethical to apply them to subjects whose screening tests are negative. In this article, we consider latent class models for screening studies with two imperfect binary diagnostic tests and a definitive categorical disease status measured only for those with at least one positive screening test. Specifically, we discuss one conditionally independent and three homogeneous conditionally dependent latent class models and assess the impact of misspecification of the dependence structure on the estimation of disease category probabilities using frequentist and Bayesian approaches. Interestingly, the three homogeneous dependent models can provide identical goodness-of-fit but substantively different estimates for a given study. However, the parametric form of the assumed dependence structure itself is not “testable” from the data, and thus the dependence structure modeling considered here can only be viewed as a sensitivity analysis concerning a more complicated non-identifiable model potentially involving a heterogeneous dependence structure. Furthermore, we discuss Bayesian model averaging, together with its limitations, as an alternative way to partially address this particularly challenging problem. The methods are applied to two cancer screening studies, and simulations are conducted to evaluate the performance of these methods. In summary, further research is needed to reduce the impact of model misspecification on the estimation of disease prevalence in such settings. PMID:20191614
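    A toy numeric illustration (values invented, not from the paper) of why the dependence structure matters: with the same marginal sensitivities, a conditional covariance term shifts the joint cell probabilities that a latent class model must explain.

```python
# Joint test-result probabilities among the diseased, with and without
# a conditional covariance term (cov = 0.0 is conditional independence).
import numpy as np

se1, se2 = 0.85, 0.75
for cov in (0.0, 0.05):
    cells = [se1 * se2 + cov,              # both tests positive
             se1 * (1 - se2) - cov,        # test 1 positive only
             (1 - se1) * se2 - cov,        # test 2 positive only
             (1 - se1) * (1 - se2) + cov]  # both tests negative
    print(f"cov={cov}: {np.round(cells, 3)}  (sum={sum(cells):.1f})")
```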

  4. An Extended Quadratic Frobenius Primality Test with Average and Worst Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2003-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t...... for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin
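    EQFT itself is intricate; for orientation only, here is a compact sketch of the classical Miller-Rabin test that EQFT extends, whose per-round error probability is at most 1/4, versus the far smaller per-round bound cited above.

```python
# Standard probabilistic Miller-Rabin primality test (not EQFT).
import random

def miller_rabin(n: int, t: int = 20) -> bool:
    """Return True if n is probably prime after t rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):        # handle small factors directly
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                     # write n - 1 = d * 2^s, d odd
        d, s = d // 2, s + 1
    for _ in range(t):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                  # a witnesses compositeness
    return True

print(miller_rabin(2**61 - 1))            # Mersenne prime -> True
```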