WorldWideScience

Sample records for measured providing estimates

  1. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standards data; (2) estimate random error variances from replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time.

  2. 49 CFR 375.409 - May household goods brokers provide estimates?

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false May household goods brokers provide estimates? 375... Estimating Charges § 375.409 May household goods brokers provide estimates? A household goods broker must not... there is a written agreement between the broker and you, the carrier, adopting the broker's estimate as...

  3. Using ²¹⁰Pb measurements to estimate sedimentation rates on river floodplains

    International Nuclear Information System (INIS)

    Du, P.; Walling, D.E.

    2012-01-01

    Growing interest in the dynamics of floodplain evolution and the important role of overbank sedimentation on river floodplains as a sediment sink has focused attention on the need to document contemporary and recent rates of overbank sedimentation. The potential for using the fallout radionuclides ¹³⁷Cs and excess ²¹⁰Pb to estimate medium-term (10–10² years) sedimentation rates on river floodplains has attracted increasing attention. Most studies that have successfully used fallout radionuclides for this purpose have focused on the use of ¹³⁷Cs. However, the use of excess ²¹⁰Pb potentially offers a number of advantages over ¹³⁷Cs measurements. Most existing investigations that have used excess ²¹⁰Pb measurements to document sedimentation rates have, however, focused on lakes rather than floodplains and the transfer of the approach, and particularly the models used to estimate the sedimentation rate, to river floodplains involves a number of uncertainties, which require further attention. This contribution reports the results of an investigation of overbank sedimentation rates on the floodplains of several UK rivers. Sediment cores were collected from seven floodplain sites representative of different environmental conditions and located in different areas of England and Wales. Measurements of excess ²¹⁰Pb and ¹³⁷Cs were made on these cores. The ²¹⁰Pb measurements have been used to estimate sedimentation rates and the results obtained by using different models have been compared. The ¹³⁷Cs measurements have also been used to provide an essentially independent time marker for validation purposes. In using the ²¹⁰Pb measurements, particular attention was directed to the problem of obtaining reliable estimates of the supported and excess or unsupported components of the total ²¹⁰Pb activity of sediment samples. Although there was a reasonable degree of consistency between the estimates of sedimentation rate provided by the ¹³⁷Cs and excess ²¹⁰Pb

  4. Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Habte, A.; Sengupta, M.; Reda, I.; Andreas, A.; Konings, J.

    2014-11-01

    Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the International Bureau of Weights and Measures Guide to the Expression of Uncertainty in Measurement (GUM). Standardized analysis based on these procedures ensures that the uncertainty quoted is well documented.
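
    As a rough illustration of the GUM-style workflow the preprint describes, a combined standard uncertainty can be formed as the root-sum-of-squares of individual components and then expanded with a coverage factor. The component names and values in this sketch are hypothetical placeholders, not figures from the paper.

import math

# Hypothetical standard uncertainty components for a radiometer calibration,
# all expressed in percent of reading (illustrative values only).
components = {
    "reference_irradiance": 0.35,   # calibration reference
    "data_acquisition":     0.10,   # voltmeter / datalogger
    "zero_offset":          0.15,   # thermal offset correction
    "temperature_response": 0.20,   # sensor temperature dependence
}

# GUM: combined standard uncertainty as root-sum-of-squares, assuming
# uncorrelated inputs and sensitivity coefficients of 1.
u_c = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (roughly 95% coverage).
U = 2.0 * u_c
print(f"combined: {u_c:.2f} %, expanded (k=2): {U:.2f} %")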

  5. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variance to characterize the measurement error structure with biases varying over time is presented.

  6. Providing low-budget estimations of carbon sequestration and greenhouse gas emissions in agricultural wetlands

    International Nuclear Information System (INIS)

    Lloyd, Colin R; Rebelo, Lisa-Maria; Max Finlayson, C

    2013-01-01

    The conversion of wetlands to agriculture through drainage and flooding, and the burning of wetland areas for agriculture, have important implications for greenhouse gas (GHG) production and changing carbon stocks. However, the estimation of net GHG changes from mitigation practices in agricultural wetlands is complex compared to dryland crops. Agricultural wetlands have more complicated carbon and nitrogen cycles, with both above- and below-ground processes and export of carbon via vertical and horizontal movement of water through the wetland. This letter reviews current research methodologies for estimating greenhouse gas production and provides guidance on the provision of robust estimates of carbon sequestration and greenhouse gas emissions in agricultural wetlands through the use of low-cost, reliable and sustainable measurement, modelling and remote sensing applications. The guidance is highly applicable to, and aimed at, wetlands such as those in the tropics and sub-tropics, where complex research infrastructure may not exist, or agricultural wetlands located in remote regions, where frequent visits by monitoring scientists prove difficult. In conclusion, the proposed measurement-modelling approach provides guidance on an affordable solution for mitigation and for investigating the consequences of wetland agricultural practice on GHG production, ecological resilience and possible changes to agricultural yields, variety choice and farming practice. (letter)

  7. Sex estimation from sternal measurements using multidetector computed tomography.

    Science.gov (United States)

    Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Bilgili, Mustafa Gokhan; Solmaz, Dilek; Erdil, Irem; Can, Ismail Ozgur

    2014-12-01

    We aimed to show the utility and reliability of sternal morphometric analysis for sex estimation. Sex estimation is a very important step in forensic identification, and skeletal surveys are the main methods used in sex estimation studies. Morphometric analysis of the sternum may provide highly accurate data for sex discrimination. In this study, morphometric analysis of the sternum was evaluated in 1 mm chest computed tomography scans for sex estimation. Four hundred forty-three subjects (202 female, 241 male; mean age: 44 ± 8.1 years [range: 30-60 years]) were included in the study. Manubrium length (ML), mesosternum length (MSL), Sternebra 1 width (S1W), and Sternebra 3 width (S3W) were measured, and the sternal index (SI) was also calculated. Differences between the sexes were evaluated by Student's t-test. Predictive factors of sex were determined by discrimination analysis and receiver operating characteristic (ROC) analysis. Male sternal measurements were significantly higher than those of females. In discrimination analysis, MSL had a high accuracy rate, 80.2% in females and 80.9% in males, and also the best sensitivity (75.9%) and specificity (87.6%) values. Accuracy rates were above 80% in three stepwise discrimination analyses for both sexes. Stepwise 1 (ML, MSL, S1W, S3W) had the highest accuracy rate among the stepwise discrimination analyses, 86.1% in females and 83.8% in males. Our study showed that morphometric computed tomography analysis of the sternum might provide important information for sex estimation.

  8. Do group-specific equations provide the best estimates of stature?

    Science.gov (United States)

    Albanese, John; Osley, Stephanie E; Tuck, Andrew

    2016-04-01

    An estimate of stature can be used by a forensic anthropologist with the preliminary identification of an unknown individual when human skeletal remains are recovered. Fordisc is a computer application that can be used to estimate stature; like many other methods, it requires the user to assign an unknown individual to a specific group defined by sex, race/ancestry, and century of birth before an equation is applied. The assumption is that a group-specific equation controls for group differences and should provide the best results most often. In this paper we assess the utility and benefits of using group-specific equations to estimate stature using Fordisc. Using the maximum length of the humerus and the maximum length of the femur from individuals with documented stature, we address the question: do sex-, race/ancestry- and century-specific stature equations provide the best results when estimating stature? The data for our sample of 19th Century White males (n=28) were entered into Fordisc and stature was estimated using 22 different equation options for a total of 616 trials: 19th and 20th Century Black males, 19th and 20th Century Black females, 19th and 20th Century White females, 19th and 20th Century White males, 19th and 20th Century any, and 20th Century Hispanic males. The equations were assessed for utility in any one case (how many times the estimated range bracketed the documented stature) and in aggregate using 1-way ANOVA and other approaches. The group-specific equation that should have provided the best results was outperformed by several other equations for both the femur and the humerus. These results suggest that group-specific equations do not provide better results for estimating stature, while at the same time being more difficult to apply, because an unknown must be allocated to a given group before stature can be estimated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. An extended set-value observer for position estimation using single range measurements

    DEFF Research Database (Denmark)

    Marcal, Jose; Jouffroy, Jerome; Fossen, Thor I.

    The ability of estimating the position of an underwater vehicle from single range measurements is important in applications where one transducer marks an important geographical point, when there is a limitation in the size or cost of the vehicle, or when there is a failure in a system of transponders. The knowledge of the bearing of the vehicle and the range measurements from a single location can provide a solution which is sensitive to the trajectory that the vehicle is following, since there is no complete constraint on the position estimate with a single beacon. In this paper the observability of the system is briefly discussed and an extended set-valued observer is presented, with some discussion about the effect of the measurement noise on the final solution. This observer estimates bounds on the errors assuming that the exogenous signals are bounded, providing a safe region...

  10. Using ²¹⁰Pb measurements to estimate sedimentation rates on river floodplains.

    Science.gov (United States)

    Du, P; Walling, D E

    2012-01-01

    Growing interest in the dynamics of floodplain evolution and the important role of overbank sedimentation on river floodplains as a sediment sink has focused attention on the need to document contemporary and recent rates of overbank sedimentation. The potential for using the fallout radionuclides ¹³⁷Cs and excess ²¹⁰Pb to estimate medium-term (10-10² years) sedimentation rates on river floodplains has attracted increasing attention. Most studies that have successfully used fallout radionuclides for this purpose have focused on the use of ¹³⁷Cs. However, the use of excess ²¹⁰Pb potentially offers a number of advantages over ¹³⁷Cs measurements. Most existing investigations that have used excess ²¹⁰Pb measurements to document sedimentation rates have, however, focused on lakes rather than floodplains and the transfer of the approach, and particularly the models used to estimate the sedimentation rate, to river floodplains involves a number of uncertainties, which require further attention. This contribution reports the results of an investigation of overbank sedimentation rates on the floodplains of several UK rivers. Sediment cores were collected from seven floodplain sites representative of different environmental conditions and located in different areas of England and Wales. Measurements of excess ²¹⁰Pb and ¹³⁷Cs were made on these cores. The ²¹⁰Pb measurements have been used to estimate sedimentation rates and the results obtained by using different models have been compared. The ¹³⁷Cs measurements have also been used to provide an essentially independent time marker for validation purposes. In using the ²¹⁰Pb measurements, particular attention was directed to the problem of obtaining reliable estimates of the supported and excess or unsupported components of the total ²¹⁰Pb activity of sediment samples. Although there was a reasonable degree of consistency between the estimates of sedimentation rate provided by

  11. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    Science.gov (United States)

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  12. The uncertainties in estimating measurement uncertainties

    International Nuclear Information System (INIS)

    Clark, J.P.; Shull, A.H.

    1994-01-01

    All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper discusses the concepts of measurement, measurement errors (accuracy or bias, and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus an uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample having a matrix that is significantly different from that of the calibration standards). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement itself. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements are reviewed in this report and recommendations made for improving measurement uncertainties.

  13. Adaptive measurement selection for progressive damage estimation

    Science.gov (United States)

    Zhou, Wenfan; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Chattopadhyay, Aditi; Peralta, Pedro

    2011-04-01

    Noise and interference in sensor measurements degrade the quality of data and have a negative impact on the performance of structural damage diagnosis systems. In this paper, a novel adaptive measurement screening approach is presented to automatically select the most informative measurements and use them intelligently for structural damage estimation. The method is implemented efficiently in a sequential Monte Carlo (SMC) setting using particle filtering. The noise suppression and improved damage estimation capability of the proposed method are demonstrated by an application to the problem of estimating progressive fatigue damage in an aluminum compact-tension (CT) sample using noisy PZT sensor measurements.
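
    A minimal sketch of the sequential Monte Carlo idea described above: a bootstrap particle filter in which implausible measurements are screened out before the weight update. The damage-growth model, measurement model, and screening rule are invented placeholders, not the authors' formulation.

import numpy as np

rng = np.random.default_rng(0)
n_particles = 1000
particles = rng.uniform(0.0, 1.0, n_particles)   # initial guesses of the damage state

def propagate(x):
    # Placeholder damage-growth model: slow monotone growth plus process noise.
    return x + 0.01 * (1.0 + x) + rng.normal(0.0, 0.005, x.shape)

def likelihood(z, x, sigma=0.05):
    # Placeholder measurement model: sensor reads the damage state plus Gaussian noise.
    return np.exp(-0.5 * ((z - x) / sigma) ** 2)

def step(particles, measurements):
    particles = propagate(particles)
    weights = np.ones_like(particles)
    for z in measurements:
        lik = likelihood(z, particles)
        # Simple screening proxy: skip a measurement that is implausible under
        # essentially all particles (treated as noise or interference).
        if lik.mean() > 1e-3:
            weights *= lik
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)  # resample
    return particles[idx]

particles = step(particles, measurements=[0.12, 0.95, 0.13])
print("damage estimate:", particles.mean())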

  14. Extrapolated HPGe efficiency estimates based on a single calibration measurement

    International Nuclear Information System (INIS)

    Winn, W.G.

    1994-01-01

    Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε₀ of the base sample of volume V₀. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V₀, and vice versa. Extrapolation of high and low efficiency estimates ε_h and ε_L provides an average estimate ε = ½[ε_h + ε_L] ± ½[ε_h - ε_L], where the uncertainty Δε = ½[ε_h - ε_L] brackets the limits for a maximum possible error. Both ε_h and ε_L diverge from ε₀ as V deviates from V₀, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε.
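
    A small numeric sketch of the bracketing rule quoted above: the central efficiency estimate is the mean of the high and low extrapolations, and half their difference brackets the maximum possible error. The numbers are hypothetical.

def bracketed_efficiency(eps_high, eps_low):
    """Combine high/low extrapolated efficiencies into a central estimate
    with a maximum-possible-error bracket, as described in the abstract."""
    eps = 0.5 * (eps_high + eps_low)
    d_eps = 0.5 * (eps_high - eps_low)
    return eps, d_eps

# Hypothetical extrapolations from a single measured base-geometry efficiency
# of 0.020 to a larger sample volume: 0.017 (optimistic) and 0.013 (conservative).
eps, d_eps = bracketed_efficiency(0.017, 0.013)
print(f"efficiency = {eps:.4f} +/- {d_eps:.4f}")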

  15. Uncertainty estimation with a small number of measurements, part II: a redefinition of uncertainty and an estimator method

    Science.gov (United States)

    Huang, Hening

    2018-01-01

    This paper is the second (Part II) in a series of two papers (Part I and Part II). Part I quantitatively discussed the fundamental limitations of the t-interval method for uncertainty estimation with a small number of measurements. This paper (Part II) reveals that the t-interval is an ‘exact’ answer to a wrong question; it is actually misused in uncertainty estimation. This paper proposes a redefinition of uncertainty, based on the classical theory of errors and the theory of point estimation, and a modification of the conventional approach to estimating measurement uncertainty. It also presents an asymptotic procedure for estimating the z-interval. The proposed modification is to replace the t-based uncertainty with an uncertainty estimator (mean- or median-unbiased). The uncertainty estimator method is an approximate answer to the right question in uncertainty estimation. The modified approach provides realistic estimates of uncertainty, regardless of whether the population standard deviation is known or unknown, or whether the sample size is small or large. As an application example of the modified approach, this paper presents a resolution to the Du-Yang paradox (i.e. Paradox 2), one of the three paradoxes caused by the misuse of the t-interval in uncertainty estimation.
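
    To make the contrast concrete, the sketch below compares the conventional t-based expanded uncertainty of a small sample mean with a point estimate built from a mean-unbiased estimate of the population standard deviation (the standard c4 correction). Using c4 as a stand-in for the paper's uncertainty estimator is only an assumption for illustration; the data are invented.

import numpy as np
from scipy.stats import t
from scipy.special import gammaln

def c4(n):
    # Unbiasing constant for the sample standard deviation of a normal sample: E[s] = c4 * sigma.
    return np.sqrt(2.0 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))

x = np.array([9.8, 10.1, 10.3, 9.9])          # hypothetical repeated measurements
n, s = len(x), x.std(ddof=1)

# Conventional 95% t-interval half-width for the mean.
t_half_width = t.ppf(0.975, n - 1) * s / np.sqrt(n)

# Mean-unbiased estimate of sigma, used here as a simple point uncertainty estimator.
sigma_hat = s / c4(n)

print(f"t-interval half-width:    {t_half_width:.3f}")
print(f"unbiased sigma / sqrt(n): {sigma_hat / np.sqrt(n):.3f}")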

  16. A direct-measurement technique for estimating discharge-chamber lifetime. [for ion thrusters]

    Science.gov (United States)

    Beattie, J. R.; Garvin, H. L.

    1982-01-01

    The use of short-term measurement techniques for predicting the wearout of ion thrusters resulting from sputter-erosion damage is investigated. The laminar-thin-film technique is found to provide high precision erosion-rate data, although the erosion rates are generally substantially higher than those found during long-term erosion tests, so that the results must be interpreted in a relative sense. A technique for obtaining absolute measurements is developed using a masked-substrate arrangement. This new technique provides a means for estimating the lifetimes of critical discharge-chamber components based on direct measurements of sputter-erosion depths obtained during short-duration (approximately 1 hr) tests. Results obtained using the direct-measurement technique are shown to agree with sputter-erosion depths calculated for the plasma conditions of the test. The direct-measurement approach is found to be applicable to both mercury and argon discharge-plasma environments and will be useful for estimating the lifetimes of inert gas and extended performance mercury ion thrusters currently under development.

  17. The estimation of differential counting measurements of positive quantities with relatively large statistical errors

    International Nuclear Information System (INIS)

    Vincent, C.H.

    1982-01-01

    Bayes' principle is applied to the differential counting measurement of a positive quantity in which the statistical errors are not necessarily small in relation to the true value of the quantity. The methods of estimation derived are found to give consistent results and to avoid the anomalous negative estimates sometimes obtained by conventional methods. One of the methods given provides a simple means of deriving the required estimates from conventionally presented results and appears to have wide potential applications. Both methods provide the actual posterior probability distribution of the quantity to be measured. A particularly important potential application is the correction of counts on low radioactivity samples for background. (orig.)
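
    A minimal sketch of the kind of Bayesian calculation the abstract alludes to: with a Gaussian likelihood for the net (sample minus background) count and a flat prior restricted to non-negative values, the posterior is a normal distribution truncated at zero, whose mean never goes negative even when the conventional net-count estimate does. The counts below are invented, and the flat-prior choice is an assumption, not necessarily the paper's.

from math import sqrt
from scipy.stats import norm

def positive_posterior_mean(net, sigma):
    """Posterior mean of a non-negative quantity given a Gaussian likelihood
    centred at `net` with standard deviation `sigma` and a flat prior on [0, inf):
    the mean of a normal distribution truncated at zero."""
    a = net / sigma
    return net + sigma * norm.pdf(a) / norm.cdf(a)

# Hypothetical low-activity sample: gross = 45 counts, background = 52 counts.
gross, background = 45, 52
net = gross - background                 # conventional estimate: -7 (negative)
sigma = sqrt(gross + background)         # Poisson counting uncertainty
print(f"conventional: {net}, Bayesian posterior mean: {positive_posterior_mean(net, sigma):.2f}")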

  18. Composite Measures of Health Care Provider Performance: A Description of Approaches

    Science.gov (United States)

    Shwartz, Michael; Restuccia, Joseph D; Rosen, Amy K

    2015-01-01

    Context Since the Institute of Medicine’s 2001 report Crossing the Quality Chasm, there has been a rapid proliferation of quality measures used in quality-monitoring, provider-profiling, and pay-for-performance (P4P) programs. Although individual performance measures are useful for identifying specific processes and outcomes for improvement and tracking progress, they do not easily provide an accessible overview of performance. Composite measures aggregate individual performance measures into a summary score. By reducing the amount of data that must be processed, they facilitate (1) benchmarking of an organization’s performance, encouraging quality improvement initiatives to match performance against high-performing organizations, and (2) profiling and P4P programs based on an organization’s overall performance. Methods We describe different approaches to creating composite measures, discuss their advantages and disadvantages, and provide examples of their use. Findings The major issues in creating composite measures are (1) whether to aggregate measures at the patient level through all-or-none approaches or the facility level, using one of the several possible weighting schemes; (2) when combining measures on different scales, how to rescale measures (using z scores, range percentages, ranks, or 5-star categorizations); and (3) whether to use shrinkage estimators, which increase precision by smoothing rates from smaller facilities but also decrease transparency. Conclusions Because provider rankings and rewards under P4P programs may be sensitive to both context and the data, careful analysis is warranted before deciding to implement a particular method. A better understanding of both when and where to use composite measures and the incentives created by composite measures are likely to be important areas of research as the use of composite measures grows. PMID:26626986

  19. Individualized estimation of human core body temperature using noninvasive measurements.

    Science.gov (United States)

    Laxminarayan, Srinivas; Rakesh, Vineet; Oyama, Tatsuya; Kazman, Josh B; Yanovich, Ran; Ketko, Itay; Epstein, Yoram; Morrison, Shawnda; Reifman, Jaques

    2018-06-01

    A rising core body temperature (Tc) during strenuous physical activity is a leading indicator of heat-injury risk. Hence, a system that can estimate Tc in real time and provide early warning of an impending temperature rise may enable proactive interventions to reduce the risk of heat injuries. However, real-time field assessment of Tc requires impractical invasive technologies. To address this problem, we developed a mathematical model that describes the relationships between Tc and noninvasive measurements of an individual's physical activity, heart rate, and skin temperature, and two environmental variables (ambient temperature and relative humidity). A Kalman filter adapts the model parameters to each individual and provides real-time personalized Tc estimates. Using data from three distinct studies, comprising 166 subjects who performed treadmill and cycle ergometer tasks under different experimental conditions, we assessed model performance via the root mean squared error (RMSE). The individualized model yielded an overall average RMSE of 0.33 (SD = 0.18)°C, allowing us to reach the same conclusions in each study as those obtained using the Tc measurements. Furthermore, for 22 unique subjects whose Tc exceeded 38.5°C, a potential lower Tc limit of clinical relevance, the average RMSE decreased to 0.25 (SD = 0.20)°C. Importantly, these results remained robust in the presence of simulated real-world operational conditions, yielding no more than 16% worse RMSEs when measurements were missing (40%) or laden with added noise. Hence, the individualized model provides a practical means to develop an early warning system for reducing heat-injury risk. NEW & NOTEWORTHY A model that uses an individual's noninvasive measurements and environmental variables can continually "learn" the individual's heat-stress response by automatically adapting the model parameters on the fly to provide real-time individualized core body temperature estimates. This
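
    A heavily simplified sketch of the estimation idea: a scalar Kalman filter that treats core temperature as the hidden state, uses a heart-rate-driven drift as the process model, and takes skin temperature as the noisy observation. The model structure, coefficients, and noise levels are invented placeholders, not the published individualized model.

import numpy as np

def kalman_tc(hr, t_skin, tc0=37.0, q=0.01, r=0.25):
    """Scalar Kalman filter: hidden state = core body temperature (deg C).
    hr and t_skin are sequences of heart rate (bpm) and skin temperature (deg C)."""
    tc, p = tc0, 0.5
    estimates = []
    for hr_k, ts_k in zip(hr, t_skin):
        # Predict: hypothetical linear drift of core temperature with heart rate.
        tc_pred = tc + 0.0005 * (hr_k - 60.0)
        p_pred = p + q
        # Update: skin temperature modelled as core temperature minus a fixed
        # offset (placeholder observation model) plus measurement noise.
        innovation = ts_k - (tc_pred - 4.0)
        k = p_pred / (p_pred + r)
        tc = tc_pred + k * innovation
        p = (1.0 - k) * p_pred
        estimates.append(tc)
    return np.array(estimates)

# Hypothetical 5-minute snippet sampled once per minute.
tc_hat = kalman_tc(hr=[70, 95, 120, 140, 150], t_skin=[33.1, 33.4, 33.9, 34.3, 34.6])
print(np.round(tc_hat, 2))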

  20. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  1. Measurement Model Nonlinearity in Estimation of Dynamical Systems

    Science.gov (United States)

    Majji, Manoranjan; Junkins, J. L.; Turner, J. D.

    2012-06-01

    The role of nonlinearity of the measurement model and its interactions with the uncertainty of measurements and geometry of the problem is studied in this paper. An examination of the transformations of the probability density function in various coordinate systems is presented for several astrodynamics applications. Smooth and analytic nonlinear functions are considered for the studies on the exact transformation of uncertainty. Special emphasis is given to understanding the role of change of variables in the calculus of random variables. The transformation of probability density functions through mappings is shown to provide insight into understanding the evolution of uncertainty in nonlinear systems. Examples are presented to highlight salient aspects of the discussion. A sequential orbit determination problem is analyzed, where the transformation formula provides useful insights for making the choice of coordinates for estimation of dynamic systems.
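
    The exact transformation of uncertainty discussed above is the standard change-of-variables rule for probability densities. For an invertible, smooth measurement map y = g(x) it reads (the textbook identity, not a formula quoted from the paper):

\[
p_Y(\mathbf{y}) \;=\; p_X\!\bigl(g^{-1}(\mathbf{y})\bigr)\,
\left|\det \frac{\partial g^{-1}(\mathbf{y})}{\partial \mathbf{y}}\right| ,
\]

    so that a density that is Gaussian in one coordinate system (for example Cartesian position) is generally non-Gaussian after a nonlinear mapping such as conversion to range and bearing.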

  2. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    Energy Technology Data Exchange (ETDEWEB)

    Bruschewski, Martin; Schiffer, Heinz-Peter [Technische Universitaet Darmstadt, Institute of Gas Turbines and Aerospace Propulsion, Darmstadt (Germany); Freudenhammer, Daniel [Technische Universitaet Darmstadt, Institute of Fluid Mechanics and Aerodynamics, Center of Smart Interfaces, Darmstadt (Germany); Buchenberg, Waltraud B. [University Medical Center Freiburg, Medical Physics, Department of Radiology, Freiburg (Germany); Grundmann, Sven [University of Rostock, Institute of Fluid Mechanics, Rostock (Germany)

    2016-05-15

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75% is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented. (orig.)

  3. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    Science.gov (United States)

    Bruschewski, Martin; Freudenhammer, Daniel; Buchenberg, Waltraud B.; Schiffer, Heinz-Peter; Grundmann, Sven

    2016-05-01

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75 % is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented.

  4. Propagation of measurement accuracy to biomass soft-sensor estimation and control quality.

    Science.gov (United States)

    Steinwandter, Valentin; Zahel, Thomas; Sagmeister, Patrick; Herwig, Christoph

    2017-01-01

    In biopharmaceutical process development and manufacturing, the online measurement of biomass and derived specific turnover rates is a central task for physiologically monitoring and controlling the process. However, hard-type sensors such as dielectric spectroscopy, broth fluorescence, or permittivity measurement harbor various disadvantages. Therefore, soft-sensors, which use measurements of the off-gas stream and substrate feed to reconcile turnover rates and provide an online estimate of the biomass formation, are smart alternatives. For the reconciliation procedure, mass and energy balances are used together with accuracy estimations of the measured conversion rates, which were so far arbitrarily chosen and static over the entire process. In this contribution, we present a novel strategy within the soft-sensor framework (named adaptive soft-sensor) to propagate uncertainties from measurements to conversion rates and demonstrate the benefits: for industrially relevant conditions, the error of the resulting estimated biomass formation rate and specific substrate consumption rate could be decreased by 43% and 64%, respectively, compared to traditional soft-sensor approaches. Moreover, we present a generic workflow to determine the required raw-signal accuracy to obtain predefined accuracies of soft-sensor estimations. Thereby, appropriate measurement devices and maintenance intervals can be selected. Furthermore, using this workflow, we demonstrate that the estimation accuracy of the soft-sensor can be additionally and substantially increased.
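
    A minimal sketch of the propagation step the abstract describes: first-order (linear) propagation of raw-signal variances through the Jacobian of the conversion-rate calculation. The Jacobian entries and signal accuracies below are hypothetical.

import numpy as np

# Hypothetical Jacobian of the conversion rates (rows: biomass formation rate,
# specific substrate consumption rate) with respect to the raw measurements
# (columns: O2 off-gas fraction, CO2 off-gas fraction, feed rate).
J = np.array([[0.8, -0.3, 0.1],
              [0.2,  0.1, 0.9]])

# Hypothetical standard deviations of the raw signals.
sigma_raw = np.array([0.02, 0.03, 0.01])
cov_raw = np.diag(sigma_raw ** 2)

# First-order propagation: Sigma_rates = J * Sigma_raw * J^T.
cov_rates = J @ cov_raw @ J.T
print("standard deviations of the estimated rates:", np.sqrt(np.diag(cov_rates)))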

  5. Pollutant Flux Estimation in an Estuary: Comparison between Model and Field Measurements

    Directory of Open Access Journals (Sweden)

    Yen-Chang Chen

    2014-08-01

    This study proposes a framework for estimating pollutant flux in an estuary. An efficient method is applied to estimate the flux of pollutants in an estuary. A gauging station network in the Danshui River estuary is established to measure water quality and discharge data based on the efficient method. A boat mounted with an acoustic Doppler profiler (ADP) traverses the river along a preselected path that is normal to the streamflow to measure the velocities, water depths and water quality for calculating pollutant flux. To characterize the estuary and to provide the basis for the pollutant flux estimation model, data for complete tidal cycles are collected. The discharge estimation model applies the maximum velocity and water level to estimate the mean velocity and cross-sectional area, respectively. Thus, the pollutant flux of the estuary can be easily computed as the product of the mean velocity, cross-sectional area and pollutant concentration. The good agreement between the observed and estimated pollutant fluxes of the Danshui River estuary shows that the pollutant fluxes measured by the conventional and the efficient methods are not fundamentally different. The proposed method is cost-effective and reliable. It can be used to estimate pollutant flux in an estuary accurately and efficiently.
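
    The flux computation itself is the simple product stated in the abstract; a sketch with invented numbers:

# Pollutant flux = mean velocity * cross-sectional area * pollutant concentration.
# All values below are invented for illustration.
mean_velocity = 0.45        # m/s, estimated from the measured maximum velocity
cross_section = 820.0       # m^2, estimated from the measured water level
concentration = 12.0e-3     # kg/m^3 (12 mg/L)

flux = mean_velocity * cross_section * concentration   # kg/s
print(f"instantaneous pollutant flux: {flux:.2f} kg/s")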

  6. Real-time measurements and their effects on state estimation of distribution power system

    DEFF Research Database (Denmark)

    Han, Xue; You, Shi; Thordarson, Fannar

    2013-01-01

    This paper aims at analyzing the potential value of using different real-time metering and measuring instruments applied in the low voltage distribution networks for state estimation. An algorithm is presented to evaluate different combinations of metering data using a tailored state estimator. It is followed by a case study based on the proposed algorithm. A real distribution grid feeder with different types of meters installed either in the cabinets or at the customer side is selected for simulation and analysis. Standard load templates are used to initiate the state estimation. The deviations between the estimated values (voltage and injected power) and the measurements are applied to evaluate the accuracy of the estimated grid states. Eventually, some suggestions are provided for the distribution grid operators on placing the real-time meters in the distribution grid.

  7. Attitude and gyro bias estimation by the rotation of an inertial measurement unit

    International Nuclear Information System (INIS)

    Wu, Zheming; Sun, Zhenguo; Zhang, Wenzeng; Chen, Qiang

    2015-01-01

    In navigation applications, the presence of an unknown bias in the measurement of rate gyros is a key performance-limiting factor. In order to estimate the gyro bias and improve the accuracy of attitude measurement, we propose a new method which uses the rotation of an inertial measurement unit, independent of the rigid body motion. By actively changing the orientation of the inertial measurement unit (IMU), the proposed method generates sufficient relations between the gyro bias and the tilt angle (roll and pitch) error via rigid body dynamics, and the gyro bias, including the bias that causes the heading error, can be estimated and compensated. The rotation of the inertial measurement unit makes the gravity vector measured from the IMU change continuously in the body-fixed frame. By theoretically analyzing the mathematical model, the convergence of the attitude and gyro bias to the true values is proven. The proposed method provides a good attitude estimation using only measurements from an IMU, when other sensors such as magnetometers and GPS are unreliable. The performance of the proposed method is illustrated under realistic robotic motions and the results demonstrate an improvement in the accuracy of the attitude estimation. (paper)

  8. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

    Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a coefficient of variation squared (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr²) for comparison with our estimate of noise-free or 'true' heterogeneity (CVt²). We found that CVt² was only 5.4% higher than CVr². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using ¹³NN-saline injection. The mean CVt² was 0.10 (range: 0.03-0.30), while the mean CV² including noise was 0.24 (range: 0.10-0.59). CVt² was on average 41.5% of the CV² measured including noise (range: 17.8-71.2%). The reproducibility of CVt² was evaluated using three repeated PET scans from five subjects. Individual CVt² were within 16% of each subject's mean and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt² in PET scans, and may be useful for similar statistical problems in experimental data.
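
    The estimation idea reduces to a linear regression of the observed (normalized) variance against the reciprocal of the number of averaged measurements, with the intercept taken as the noise-free value. A sketch with synthetic data (the noise model and numbers are invented):

import numpy as np

rng = np.random.default_rng(1)
true_cv2, noise_cv2 = 0.10, 0.15      # synthetic "true" and single-measurement noise CV^2

# Simulated observed CV^2 for data averaged over n repeated measurements:
# CV2_obs(n) ~ CV2_true + noise_cv2 / n, plus a little scatter.
n_values = np.array([1, 2, 4, 8, 16])
cv2_obs = true_cv2 + noise_cv2 / n_values + rng.normal(0.0, 0.002, n_values.size)

# Linear fit of CV2_obs against 1/n; the intercept estimates the noise-free CV^2.
slope, intercept = np.polyfit(1.0 / n_values, cv2_obs, 1)
print(f"estimated noise-free CV^2: {intercept:.3f} (true value {true_cv2})")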

  9. A brute-force spectral approach for wave estimation using measured vessel motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik D.; Brodtkorb, Astrid H.; Sørensen, Asgeir J.

    2018-01-01

    The article introduces a spectral procedure for sea state estimation based on measurements of motion responses of a ship in a short-crested seaway. The procedure relies fundamentally on the wave buoy analogy, but the wave spectrum estimate is obtained in a direct - brute-force - approach, and the procedure is simple in its mathematical formulation. The actual formulation extends another recent work by including vessel advance speed and short-crested seas. Due to its simplicity, the procedure is computationally efficient, providing wave spectrum estimates in the order of a few seconds, and the estimation procedure will therefore be appealing for applications related to real-time, onboard control and decision support systems for safe and efficient marine operations. The procedure's performance is evaluated by use of numerical simulation of motion measurements, and it is shown that accurate wave...

  10. The Unscented Kalman Filter estimates the plasma insulin from glucose measurement.

    Science.gov (United States)

    Eberle, Claudia; Ament, Christoph

    2011-01-01

    Understanding the simultaneous interaction within the glucose and insulin homeostasis in real time is very important for clinical treatment as well as for research issues. Until now, only plasma glucose concentrations can be measured in real time. To support a secure, effective and rapid treatment, e.g. of diabetes, a real-time estimation of plasma insulin would be of great value. A novel approach using an Unscented Kalman Filter that provides an estimate of the current plasma insulin concentration is presented, which operates on the measurement of the plasma glucose and Bergman's Minimal Model of the glucose insulin homeostasis. We can prove that process observability is obtained in this case. Hence, a successful estimator design is possible. Since the process is nonlinear, we have to consider estimates that are not normally distributed. The symmetric Unscented Kalman Filter (UKF) performs best compared to other estimator approaches such as the Extended Kalman Filter (EKF), the simplex Unscented Kalman Filter (UKF), and the Particle Filter (PF). The symmetric UKF algorithm is applied to the plasma insulin estimation. It shows better results compared to the direct (open loop) estimation that uses a model of the insulin subsystem. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
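
    For context, one common statement of Bergman's minimal model, which the abstract names as the process model behind the filter, is sketched below. The parameter values are generic placeholders, and plasma insulin I(t) is treated as a known input here purely to illustrate the forward model that sigma points would be propagated through.

import numpy as np

def minimal_model_step(G, X, I, dt, p1=0.028, p2=0.025, p3=1.3e-5, Gb=4.5, Ib=15.0):
    """One Euler step of a common form of Bergman's minimal model.
    G: plasma glucose (mmol/L), X: remote insulin action (1/min),
    I: plasma insulin (mU/L). Parameter values are illustrative only."""
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (I - Ib)
    return G + dt * dG, X + dt * dX

# Hypothetical forward simulation with constant insulin, e.g. to generate the
# sigma-point predictions inside an unscented Kalman filter.
G, X = 9.0, 0.0
for _ in range(60):                      # sixty one-minute steps
    G, X = minimal_model_step(G, X, I=40.0, dt=1.0)
print(f"glucose after 1 h: {G:.2f} mmol/L")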

  11. Estimating Jupiter’s Gravity Field Using Juno Measurements, Trajectory Estimation Analysis, and a Flow Model Optimization

    International Nuclear Information System (INIS)

    Galanti, Eli; Kaspi, Yohai; Durante, Daniele; Finocchiaro, Stefano; Iess, Luciano

    2017-01-01

    The upcoming Juno spacecraft measurements have the potential of improving our knowledge of Jupiter’s gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially regarding the Jovian flow structure and its depth, which can influence the measured gravity field. In this study we propose a new iterative method for the estimation of the Jupiter gravity field, using a simulated Juno trajectory, a trajectory estimation model, and an adjoint-based inverse model for the flow dynamics. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that this method can fit some of the gravitational harmonics better to the “measured” harmonics, mainly because of the added information from the dynamical model, which includes the flow structure. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity harmonics estimated from the Juno and Cassini radio science experiments.

  12. Estimating Jupiter’s Gravity Field Using Juno Measurements, Trajectory Estimation Analysis, and a Flow Model Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Galanti, Eli; Kaspi, Yohai [Department of Earth and Planetary Sciences, Weizmann Institute of Science, Rehovot (Israel); Durante, Daniele; Finocchiaro, Stefano; Iess, Luciano, E-mail: eli.galanti@weizmann.ac.il [Dipartimento di Ingegneria Meccanica e Aerospaziale, Sapienza Universita di Roma, Rome (Italy)

    2017-07-01

    The upcoming Juno spacecraft measurements have the potential of improving our knowledge of Jupiter’s gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially regarding the Jovian flow structure and its depth, which can influence the measured gravity field. In this study we propose a new iterative method for the estimation of the Jupiter gravity field, using a simulated Juno trajectory, a trajectory estimation model, and an adjoint-based inverse model for the flow dynamics. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that this method can fit some of the gravitational harmonics better to the “measured” harmonics, mainly because of the added information from the dynamical model, which includes the flow structure. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity harmonics estimated from the Juno and Cassini radio science experiments.

  13. Estimation of Fuzzy Measures Using Covariance Matrices in Gaussian Mixtures

    Directory of Open Access Journals (Sweden)

    Nishchal K. Verma

    2012-01-01

    This paper presents a novel computational approach for estimating fuzzy measures directly from a Gaussian mixture model (GMM). The mixture components of the GMM provide the membership functions for the input-output fuzzy sets. By treating the consequent part as a function of fuzzy measures, its coefficients are derived from the covariance matrices found directly from the GMM, and the defuzzified output is constructed from both the premise and consequent parts of the nonadditive fuzzy rules, taking the form of a Choquet integral. The computational burden involved with the solution of the λ-measure is minimized using the Q-measure. The fuzzy model whose fuzzy measures were computed using covariance matrices found in the GMM has been successfully applied to two benchmark problems and one real-time electric load data set of an Indian utility. The performance of the resulting model in many experimental studies, including the above-mentioned application, is found to be better than or comparable to recently available fuzzy models. The main contribution of this paper is the efficient estimation of fuzzy measures directly from the covariance matrices found in the GMM, avoiding the considerable computational burden of learning them iteratively and of solving polynomial equations of the order of the number of input-output variables.

  14. Attitude Estimation of Skis in Ski Jumping Using Low-Cost Inertial Measurement Units

    Directory of Open Access Journals (Sweden)

    Xiang Fang

    2018-02-01

    This paper presents an approach to estimate the attitude of skis for an entire ski jump using wearable, MEMS-based, low-cost Inertial Measurement Units (IMUs). First of all, a kinematic attitude model based on rigid-body dynamics and a sensor error model considering bias and scale factor error are established. Then, an extended Rauch-Tung-Striebel (RTS) smoother is used to combine measurement data provided by both the gyroscope and the magnetometer to achieve an attitude estimation. Moreover, parameters for the bias and scale factor error in the sensor error model and the initial attitude are determined via a maximum-likelihood-based parameter estimation algorithm. By implementing this approach, an attitude estimation of the skis is achieved without further sensor calibration. Finally, results based on both simulated reference data and real experimental measurement data are presented, which prove the practicability and validity of the proposed approach.

  15. Annual sediment flux estimates in a tidal strait using surrogate measurements

    Science.gov (United States)

    Ganju, N.K.; Schoellhamer, D.H.

    2006-01-01

    Annual suspended-sediment flux estimates through Carquinez Strait (the seaward boundary of Suisun Bay, California) are provided based on surrogate measurements for advective, dispersive, and Stokes drift flux. The surrogates are landward watershed discharge, suspended-sediment concentration at one location in the Strait, and the longitudinal salinity gradient. The first two surrogates substitute for tidally averaged discharge and velocity-weighted suspended-sediment concentration in the Strait, thereby providing advective flux estimates, while Stokes drift is estimated with suspended-sediment concentration alone. Dispersive flux is estimated using the product of longitudinal salinity gradient and the root-mean-square value of velocity-weighted suspended-sediment concentration as an added surrogate variable. Cross-sectional measurements validated the use of surrogates during the monitoring period. During high freshwater flow advective and dispersive flux were in the seaward direction, while landward dispersive flux dominated and advective flux approached zero during low freshwater flow. Stokes drift flux was consistently in the landward direction. Wetter than average years led to net export from Suisun Bay, while dry years led to net sediment import. Relatively low watershed sediment fluxes to Suisun Bay contribute to net export during the wet season, while gravitational circulation in Carquinez Strait and higher suspended-sediment concentrations in San Pablo Bay (seaward end of Carquinez Strait) are responsible for the net import of sediment during the dry season. Annual predictions of suspended-sediment fluxes, using these methods, will allow for a sediment budget for Suisun Bay, which has implications for marsh restoration and nutrient/contaminant transport. These methods also provide a general framework for estimating sediment fluxes in estuarine environments, where temporal and spatial variability of transport are large. © 2006 Elsevier Ltd. All rights reserved.

  16. Uncertainty estimation of ultrasonic thickness measurement

    International Nuclear Information System (INIS)

    Yassir Yassen, Abdul Razak Daud; Mohammad Pauzi Ismail; Abdul Aziz Jemain

    2009-01-01

    The most important factor that should be taken into consideration when selecting an ultrasonic thickness measurement technique is its reliability. Only when the uncertainty of a measurement result is known can it be judged whether the result is adequate for the intended purpose. The objective of this study is to model the ultrasonic thickness measurement function, to identify the most contributing input uncertainty components, and to estimate the uncertainty of the ultrasonic thickness measurement results. We assumed that five error sources contribute significantly to the final error: calibration velocity, transit time, zero offset, measurement repeatability and resolution. By applying the propagation of uncertainty law to the model function, a combined uncertainty of the ultrasonic thickness measurement was obtained. In this study the modeling function of ultrasonic thickness measurement was derived. By using this model, the estimation of the uncertainty of the final output result was found to be reliable. It was also found that the most contributing input uncertainty components are calibration velocity, transit time linearity and zero offset. (author)

  17. Seasonal estimates of riparian evapotranspiration using remote and in situ measurements

    Science.gov (United States)

    Goodrich, D.C.; Scott, R.; Qi, J.; Goff, B.; Unkrich, C.L.; Moran, M.S.; Williams, D.; Schaeffer, S.; Snyder, K.; MacNish, R.; Maddock, T.; Pool, D.; Chehbouni, A.; Cooper, D.I.; Eichinger, W.E.; Shuttleworth, W.J.; Kerr, Y.; Marsett, R.; Ni, W.

    2000-01-01

    In many semi-arid basins during extended periods when surface snowmelt or storm runoff is absent, groundwater constitutes the primary water source for human habitation, agriculture and riparian ecosystems. Utilizing regional groundwater models in the management of these water resources requires accurate estimates of basin boundary conditions. A critical groundwater boundary condition that is closely coupled to atmospheric processes and is typically known with little certainty is seasonal riparian evapotranspiration (ET). This quantity can often be a significant factor in the basin water balance in semi-arid regions yet is very difficult to estimate over a large area. Better understanding and quantification of seasonal, large-area riparian ET is a primary objective of the Semi-Arid Land-Surface-Atmosphere (SALSA) Program. To address this objective, a series of interdisciplinary experimental campaigns were conducted in 1997 in the San Pedro Basin in southeastern Arizona. The riparian system in this basin is primarily made up of three vegetation communities: mesquite (Prosopis velutina), sacaton grasses (Sporobolus wrightii), and a cottonwood (Populus fremontii)/willow (Salix goodingii) forest gallery. Micrometeorological measurement techniques were used to estimate ET from the mesquite and grasses. These techniques could not be utilized to estimate fluxes from the cottonwood/willow (C/W) forest gallery due to the height (20-30 m) and non-uniform linear nature of the forest gallery. Short-term (2-4 days) sap flux measurements were made to estimate canopy transpiration over several periods of the riparian growing season. Simultaneous remote sensing measurements were used to spatially extrapolate tree and stand measurements. Scaled C/W stand-level sap flux estimates were utilized to calibrate a Penman-Monteith model to enable temporal extrapolation between synoptic measurement periods. With this model and set of measurements, seasonal riparian vegetation water use

  18. Real-Time Aerodynamic Parameter Estimation without Air Flow Angle Measurements

    Science.gov (United States)

    Morelli, Eugene A.

    2010-01-01

    A technique for estimating aerodynamic parameters in real time from flight data without air flow angle measurements is described and demonstrated. The method is applied to simulated F-16 data, and to flight data from a subscale jet transport aircraft. Modeling results obtained with the new approach using flight data without air flow angle measurements were compared to modeling results computed conventionally using flight data that included air flow angle measurements. Comparisons demonstrated that the new technique can provide accurate aerodynamic modeling results without air flow angle measurements, which are often difficult and expensive to obtain. Implications for efficient flight testing and flight safety are discussed.

  19. Accuracy of the visual estimation method as a predictor of food intake in Alzheimer's patients provided with different types of food.

    Science.gov (United States)

    Amano, Nobuko; Nakamura, Tomiyo

    2018-02-01

    The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake through estimation of plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under the routine conditions of a hospital setting. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease. The patients were provided different therapeutic diets presented in various food types. This study was performed between February and April 2013, and 82 patients with Alzheimer's disease were included. Plate waste was evaluated for the 3 main daily meals, for a total of 21 days, 7 consecutive days during each of the 3 months, originating a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between both methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered for analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement consisted of 3.8% underestimation and 17.8% overestimation. Cronbach's α (0.60) indicated that the reliability of the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. The successful implementation and use of the method depends upon adequate training and motivation of the nurses and care staff involved. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.

  20. Integrating field plots, lidar, and landsat time series to provide temporally consistent annual estimates of biomass from 1990 to present

    Science.gov (United States)

    Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan. Huang

    2015-01-01

    We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...

  1. Influence of the statistical distribution of bioassay measurement errors on the intake estimation

    International Nuclear Information System (INIS)

    Lee, T. Y; Kim, J. K

    2006-01-01

    The purpose of this study is to provide guidance for selecting an error distribution by analyzing the influence of the statistical distribution assumed for bioassay measurement errors on intake estimation. For this purpose, intakes were estimated by the maximum likelihood method for cases in which the error distribution was assumed to be normal or lognormal, and the intakes estimated under the two distributions were compared. According to the results of this study, when the measurement results for lung retention are somewhat greater than the limit of detection, the distribution type has negligible influence on the results. For measurements of the daily excretion rate, however, the results obtained under the lognormal assumption were 10% higher than those obtained under the normal assumption. In view of these facts, where the uncertainty is governed by counting statistics the distribution type has no practical influence on intake estimation, whereas where other uncertainty components predominate it is clearly desirable to estimate the intake assuming a lognormal distribution.
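
    For a simple retention model the comparison above reduces to two closed-form maximum-likelihood estimators. The sketch below is illustrative only (not the authors' code): the single-exponential retention function, the measurement values and the equal-weight assumption are all hypothetical.

    ```python
    import numpy as np

    # Hypothetical single-exponential retention/excretion function r(t) and bioassay data;
    # in practice r(t) would come from an ICRP biokinetic model.
    def retention(t_days, half_life=30.0):
        return np.exp(-np.log(2.0) * t_days / half_life)

    t = np.array([1.0, 7.0, 30.0, 90.0])      # days after intake
    m = np.array([120.0, 95.0, 55.0, 14.0])   # measured activity (Bq), hypothetical
    r = retention(t)

    # Normal (additive) errors with equal variances: least squares gives the ML intake.
    intake_normal = np.sum(m * r) / np.sum(r ** 2)

    # Lognormal (multiplicative) errors: the ML intake is the geometric mean of m_i / r(t_i).
    intake_lognormal = np.exp(np.mean(np.log(m / r)))

    print(f"intake, normal errors:    {intake_normal:7.1f} Bq")
    print(f"intake, lognormal errors: {intake_lognormal:7.1f} Bq")
    ```

    With multiplicative (lognormal) errors the small, late measurements carry relatively more weight, which is why the two estimates can diverge when excretion data span several orders of magnitude.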

  2. Uncertainty Measures of Regional Flood Frequency Estimators

    DEFF Research Database (Denmark)

    Rosbjerg, Dan; Madsen, Henrik

    1995-01-01

    Regional flood frequency models have different assumptions regarding homogeneity and inter-site independence. Thus, uncertainty measures of T-year event estimators are not directly comparable. However, having chosen a particular method, the reliability of the estimate should always be stated, e...

  3. Poverty among Foster Children: Estimates Using the Supplemental Poverty Measure

    Science.gov (United States)

    Pac, Jessica; Waldfogel, Jane; Wimer, Christopher

    2017-01-01

    We use data from the Current Population Survey and the new Supplemental Poverty Measure (SPM) to provide estimates for poverty among foster children over the period 1992 to 2013. These are the first large-scale national estimates for foster children who are not included in official poverty statistics. Holding child and family demographics constant, foster children have a lower risk of poverty than other children. Analyzing income in detail suggests that foster care payments likely play an important role in reducing the risk of poverty in this group. In contrast, we find that children living with grandparents have a higher risk of poverty than other children, even after taking demographics into account. Our estimates suggest that this excess risk is likely linked to their lower likelihood of receiving foster care or other income supports. PMID:28659651

  4. Estimation of measurement variance in the context of environment statistics

    Science.gov (United States)

    Maiti, Pulakesh

    2015-02-01

    The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics are required to produce higher quality statistical information, for which timely, reliable and comparable data are needed. Lack of proper and uniform definitions and of unambiguous classifications poses serious problems for procuring qualitative data, and these problems cause measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling considered is two-stage sampling.

  5. Location Estimation using Delayed Measurements

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Nørgård, Peter Magnus

    1998-01-01

    When combining data from various sensors it is vital to acknowledge possible measurement delays. Furthermore, the sensor fusion algorithm, often a Kalman filter, should be modified in order to handle the delay. The paper examines different possibilities for handling delays and applies a new technique to a sensor fusion system for estimating the location of an autonomous guided vehicle. The system fuses encoder and vision measurements in an extended Kalman filter. Results from experiments in a real environment are reported.

  6. Time-dependent inversion estimates of global biomass-burning CO emissions using Measurement of Pollution in the Troposphere (MOPITT) measurements

    Science.gov (United States)

    Arellano, Avelino F.; Kasibhatla, Prasad S.; Giglio, Louis; van der Werf, Guido R.; Randerson, James T.; Collatz, G. James

    2006-05-01

    We present an inverse-modeling analysis of CO emissions using column CO retrievals from the Measurement of Pollution in the Troposphere (MOPITT) instrument and a global chemical transport model (GEOS-CHEM). We first focus on the information content of MOPITT CO column retrievals in terms of constraining CO emissions associated with biomass burning and fossil fuel/biofuel use. Our analysis shows that seasonal variation of biomass-burning CO emissions in Africa, South America, and Southeast Asia can be characterized using monthly mean MOPITT CO columns. For the fossil fuel/biofuel source category the derived monthly mean emission estimates are noisy even when the error statistics are accurately known, precluding a characterization of seasonal variations of regional CO emissions for this source category. The derived estimate of CO emissions from biomass burning in southern Africa during the June-July 2000 period is significantly higher than the prior estimate (prior, 34 Tg; posterior, 13 Tg). We also estimate that emissions are higher relative to the prior estimate in northern Africa during December 2000 to January 2001 and lower relative to the prior estimate in Central America and Oceania/Indonesia during April-May and September-October 2000, respectively. While these adjustments provide better agreement of the model with MOPITT CO column fields and with independent measurements of surface CO from National Oceanic and Atmospheric Administration Climate Monitoring and Diagnostics Laboratory at background sites in the Northern Hemisphere, some systematic differences between modeled and measured CO fields persist, including model overestimation of background surface CO in the Southern Hemisphere. Characterizing and accounting for underlying biases in the measurement model system are needed to improve the robustness of the top-down estimates.

  7. Smile line assessment comparing quantitative measurement and visual estimation.

    Science.gov (United States)

    Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie

    2011-02-01

    Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
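
    The agreement statistic reported above is Cohen's kappa; a minimal, self-contained version (with made-up 3-grade smile line ratings) is sketched below.

    ```python
    import numpy as np

    def cohens_kappa(rater_a, rater_b, n_categories):
        """Unweighted Cohen's kappa for two raters assigning categorical grades."""
        confusion = np.zeros((n_categories, n_categories))
        for a, b in zip(rater_a, rater_b):
            confusion[a, b] += 1
        confusion /= confusion.sum()
        p_observed = np.trace(confusion)                                    # observed agreement
        p_expected = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1))  # chance agreement
        return (p_observed - p_expected) / (1.0 - p_expected)

    # Hypothetical 3-grade scores (0 = low, 1 = average, 2 = high) from two methods
    visual     = [2, 1, 1, 0, 2, 1, 0, 2, 1, 1]
    quantified = [2, 1, 2, 0, 2, 1, 0, 1, 1, 1]
    print(f"kappa = {cohens_kappa(visual, quantified, 3):.2f}")
    ```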

  8. Optimal Measurements for Simultaneous Quantum Estimation of Multiple Phases.

    Science.gov (United States)

    Pezzè, Luca; Ciampini, Mario A; Spagnolo, Nicolò; Humphreys, Peter C; Datta, Animesh; Walmsley, Ian A; Barbieri, Marco; Sciarrino, Fabio; Smerzi, Augusto

    2017-09-29

    A quantum theory of multiphase estimation is crucial for quantum-enhanced sensing and imaging and may link quantum metrology to more complex quantum computation and communication protocols. In this Letter, we tackle one of the key difficulties of multiphase estimation: obtaining a measurement which saturates the fundamental sensitivity bounds. We derive necessary and sufficient conditions for projective measurements acting on pure states to saturate the ultimate theoretical bound on precision given by the quantum Fisher information matrix. We apply our theory to the specific example of interferometric phase estimation using photon number measurements, a convenient choice in the laboratory. Our results thus introduce concepts and methods relevant to the future theoretical and experimental development of multiparameter estimation.
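
    For reference, the precision bound the Letter refers to can be written compactly (standard notation, not quoted from the paper):

    ```latex
    % Multiparameter quantum Cramer-Rao bound: for \nu independent repetitions and
    % quantum Fisher information matrix F_Q, any unbiased estimator of the phases satisfies
    \operatorname{Cov}(\hat{\boldsymbol{\theta}}) \;\geq\; \frac{1}{\nu}\, F_Q^{-1}(\boldsymbol{\theta}),
    % and a projective measurement is optimal when its classical Fisher information matrix
    % attains this limit, i.e. F_C(\boldsymbol{\theta}) = F_Q(\boldsymbol{\theta}).
    ```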

  9. Optimal Measurements for Simultaneous Quantum Estimation of Multiple Phases

    Science.gov (United States)

    Pezzè, Luca; Ciampini, Mario A.; Spagnolo, Nicolò; Humphreys, Peter C.; Datta, Animesh; Walmsley, Ian A.; Barbieri, Marco; Sciarrino, Fabio; Smerzi, Augusto

    2017-09-01

    A quantum theory of multiphase estimation is crucial for quantum-enhanced sensing and imaging and may link quantum metrology to more complex quantum computation and communication protocols. In this Letter, we tackle one of the key difficulties of multiphase estimation: obtaining a measurement which saturates the fundamental sensitivity bounds. We derive necessary and sufficient conditions for projective measurements acting on pure states to saturate the ultimate theoretical bound on precision given by the quantum Fisher information matrix. We apply our theory to the specific example of interferometric phase estimation using photon number measurements, a convenient choice in the laboratory. Our results thus introduce concepts and methods relevant to the future theoretical and experimental development of multiparameter estimation.

  10. Effect of Smart Meter Measurements Data On Distribution State Estimation

    DEFF Research Database (Denmark)

    Pokhrel, Basanta Raj; Nainar, Karthikeyan; Bak-Jensen, Birgitte

    2018-01-01

    Smart distribution grids with renewable energy based generators and demand response resources (DRR) require accurate state estimators for real time control. Distribution grid state estimators are normally based on accumulated smart meter measurements. However, an increase of measurements in the physical grid can place significant stress not only on the communication infrastructure but also on the control algorithms. This paper aims to propose a methodology to analyze the real time smart meter data needed from low voltage distribution grids and their applicability in distribution state estimation...

  11. Front-Crawl Instantaneous Velocity Estimation Using a Wearable Inertial Measurement Unit

    Directory of Open Access Journals (Sweden)

    Kamiar Aminian

    2012-09-01

    Full Text Available Monitoring performance is a crucial task for elite sports during both training and competition. Velocity is the key parameter of performance in swimming, but swimming performance evaluation remains immature due to the complexities of measurement in water. The purpose of this study is to use a single inertial measurement unit (IMU) to estimate front crawl velocity. Thirty swimmers, equipped with an IMU on the sacrum, each performed four different velocity trials of 25 m in ascending order. A tethered speedometer was used as the velocity measurement reference. Deployment of biomechanical constraints of front crawl locomotion and a change detection framework on the acceleration signal paved the way for a drift-free integration of forward acceleration using the IMU to estimate the swimmer's velocity. A difference of 0.6 ± 5.4 cm·s−1 in mean cycle velocity and an RMS difference of 11.3 cm·s−1 in instantaneous velocity estimation were observed between the IMU and the reference. The most important contribution of the study is a new practical tool for objective evaluation of swimming performance. A single body-worn IMU provides timely feedback for coaches and sport scientists without any complicated setup or restraining the swimmer's natural technique.
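
    A highly simplified sketch of the drift-free integration idea (not the authors' algorithm): within each detected stroke cycle the forward acceleration is de-biased before integration, and the integrated velocity is centred on an independently known mean cycle velocity. The sampling rate, signals and cycle boundaries below are hypothetical.

    ```python
    import numpy as np

    def cycle_velocity(acc_forward, fs, cycle_bounds, v_mean):
        """Integrate forward acceleration within each stroke cycle, removing the
        per-cycle acceleration bias so that sensor drift does not accumulate.
        acc_forward  : forward acceleration in the swimming direction (m/s^2)
        fs           : sampling frequency (Hz)
        cycle_bounds : sample indices marking stroke-cycle boundaries
        v_mean       : mean cycle velocity (m/s), e.g. from lap length / lap time
        """
        dt = 1.0 / fs
        v = np.zeros_like(acc_forward)
        for start, end in zip(cycle_bounds[:-1], cycle_bounds[1:]):
            a = acc_forward[start:end] - np.mean(acc_forward[start:end])  # de-bias per cycle
            dv = np.cumsum(a) * dt                                        # within-cycle fluctuation
            v[start:end] = v_mean + dv - np.mean(dv)                      # centre on mean velocity
        return v

    # Hypothetical 1 s of data at 100 Hz containing two 0.5 s stroke cycles
    fs = 100
    t = np.arange(0, 1.0, 1.0 / fs)
    acc = 0.8 * np.sin(2 * np.pi * 2.0 * t) + 0.05        # oscillation plus a small sensor bias
    v_inst = cycle_velocity(acc, fs, [0, 50, 100], v_mean=1.3)
    print(f"instantaneous velocity: {v_inst.min():.2f} to {v_inst.max():.2f} m/s")
    ```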

  12. Scaling measurements of metabolism in stream ecosystems: challenges and approaches to estimating reaeration

    Science.gov (United States)

    Bowden, W. B.; Parker, S.; Song, C.

    2016-12-01

    Stream ecologists have used various formulations of an oxygen budget approach as a surrogate to measure "whole-stream metabolism" (WSM) of carbon in rivers and streams. Improvements in sensor technologies that provide reliable, high-frequency measurements of dissolved oxygen concentrations in adverse field conditions have made it much easier to acquire the basic data needed to estimate WSM in remote locations over long periods (weeks to months). However, accurate estimates of WSM require reliable measurements or estimates of the reaeration coefficient (k). Small errors in estimates of k can lead to large errors in estimates of gross ecosystem production and ecosystem respiration, and thus in the magnitude of the biological flux of CO2 to or from streams. This is an especially challenging problem in unproductive, oligotrophic streams. Unfortunately, current methods to measure reaeration directly (gas evasion) are expensive, labor-intensive, and time-consuming. As a consequence, there is a substantial mismatch between the time steps at which we can measure reaeration versus most of the other variables required to calculate WSM. As part of the NSF Arctic Long-Term Ecological Research Project we have refined methods to measure WSM in Arctic streams and found a good relationship between measured k values and those calculated by the Energy Dissipation Model (EDM). Other researchers have also noted that this equation works well for both low- and high-order streams. The EDM depends on stream slope (relatively constant) and velocity (which is related to discharge or stage). These variables are easy to measure and can be used to estimate k at high frequency (minutes) over large areas (river networks). As a key part of the NSF MacroSystems Biology SCALER project we calculated WSM for multiple reaches in nested stream networks in six biomes across the United States and Australia. We calculated k by the EDM and fitted k via a Bayesian model for WSM. The relationships between
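
    The Energy Dissipation Model mentioned above takes a very simple form: k is proportional to the rate of energy loss, i.e. slope times velocity. The sketch below assumes a placeholder escape coefficient that would in practice be calibrated against measured k values, as the abstract describes.

    ```python
    def reaeration_edm(slope, velocity, escape_coeff=0.08):
        """Energy Dissipation Model: k (1/day) proportional to the energy dissipation rate.
        slope        : water-surface slope (m/m)
        velocity     : mean stream velocity (m/s)
        escape_coeff : site-calibrated escape coefficient (1/m) -- placeholder value
        slope * velocity is the elevation loss per second; 86400 converts it to per day.
        """
        return escape_coeff * slope * velocity * 86400.0

    # Hypothetical steep headwater reach: 2% slope, 0.3 m/s mean velocity
    print(f"k ~ {reaeration_edm(0.02, 0.3):.0f} per day")
    ```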

  13. Ice thickness measurements and volume estimates for glaciers in Norway

    Science.gov (United States)

    Andreassen, Liss M.; Huss, Matthias; Melvold, Kjetil; Elvehøy, Hallgeir; Winsvold, Solveig H.

    2014-05-01

    Whereas glacier areas in many mountain regions around the world are now well surveyed using optical satellite sensors and available in digital inventories, measurements of ice thickness are sparse in comparison and a global dataset does not exist. Since the 1980s ice thickness measurements have been carried out by ground penetrating radar on many glaciers in Norway, often as part of contract work for hydropower companies with the aim of calculating hydrological divides of ice caps. Measurements have been conducted on numerous glaciers, covering the largest ice caps as well as a few smaller mountain glaciers. However, so far no ice volume estimate for Norway has been derived from these measurements. Here, we give an overview of ice thickness measurements in Norway, and use a distributed model to interpolate and extrapolate the data to provide an ice volume estimate of all glaciers in Norway. We also compare the results to various volume-area/thickness-scaling approaches, using values from the literature as well as scaling constants obtained from the ice thickness measurements in Norway. Glacier outlines from a Landsat-derived inventory from 1999-2006 together with a national digital elevation model were used as input data for the ice volume calculations. The inventory covers all glaciers in mainland Norway and consists of 2534 glaciers (3143 glacier units) covering an area of 2692 km2 ± 81 km2. To calculate the ice thickness distribution of glaciers in Norway we used a distributed model which estimates the surface mass balance distribution, calculates the volumetric balance flux and converts it into thickness using the flow law for ice. We calibrated this model with ice thickness data for Norway, mainly by adjusting the mass balance gradient. Model results generally agree well with the measured values; however, larger deviations were found for some glaciers. The total ice volume of Norway was estimated to be 275 km3 ± 30 km3. From the ice thickness data set we selected
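
    For comparison, the volume-area scaling referred to above has the simple power-law form V = c·A^γ. The constants in the sketch below are commonly quoted literature values for valley glaciers, not the Norway-calibrated ones.

    ```python
    def scaled_volume_km3(area_km2, c=0.034, gamma=1.375):
        """Volume-area scaling V = c * A**gamma, with V in km^3 and A in km^2.
        c and gamma here are widely used literature constants for valley glaciers;
        the study derives region-specific constants from the thickness measurements."""
        return c * area_km2 ** gamma

    # Toy inventory of three glaciers (areas in km^2, hypothetical)
    areas = [0.5, 12.0, 80.0]
    print(f"scaled total volume: {sum(scaled_volume_km3(a) for a in areas):.1f} km^3")
    ```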

  14. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

    Full Text Available State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noise have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noise. Using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate and quantitatively characterize the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) pros and cons of typical SOC estimators in their robustness and reliability; (3) guidelines for requirements on battery system identification and sensor selection.

  15. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and to high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for estimating it. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the searching algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less

  16. Measuring Provider Performance for Physicians Participating in the Merit-Based Incentive Payment System.

    Science.gov (United States)

    Squitieri, Lee; Chung, Kevin C

    2017-07-01

    In 2017, the Centers for Medicare and Medicaid Services began requiring all eligible providers to participate in the Quality Payment Program or face financial reimbursement penalty. The Quality Payment Program outlines two paths for provider participation: the Merit-Based Incentive Payment System and Advanced Alternative Payment Models. For the first performance period beginning in January of 2017, the Centers for Medicare and Medicaid Services estimates that approximately 83 to 90 percent of eligible providers will not qualify for participation in an Advanced Alternative Payment Model and therefore must participate in the Merit-Based Incentive Payment System program. The Merit-Based Incentive Payment System path replaces existing quality-reporting programs and adds several new measures to evaluate providers using four categories of data: (1) quality, (2) cost/resource use, (3) improvement activities, and (4) advancing care information. These categories will be combined to calculate a weighted composite score for each provider or provider group. Composite Merit-Based Incentive Payment System scores based on 2017 performance data will be used to adjust reimbursed payment in 2019. In this article, the authors provide relevant background for understanding value-based provider performance measurement. The authors also discuss Merit-Based Incentive Payment System reporting requirements and scoring methodology to provide plastic surgeons with the necessary information to critically evaluate their own practice capabilities in the context of current performance metrics under the Quality Payment Program.

  17. West African donkey's liveweight estimation using body measurements

    Directory of Open Access Journals (Sweden)

    Pierre Claver Nininahazwe

    2017-10-01

    Full Text Available Aim: The objective of this study was to determine a formula for estimating the liveweight of West African donkeys. Materials and Methods: Liveweight and a total of 6 body measurements were recorded for 1352 donkeys from Burkina Faso, Mali, Niger, and Senegal. The correlations between liveweight and body measurements were determined, and the body measurements most correlated with liveweight were used to establish regression lines. Results: The average weight of a West African donkey was 126.0±17.1 kg, with an average height at the withers of 99.5±3.67 cm; its body length was 104.4±6.53 cm, and its heart girth (HG) was 104.4±6.53 cm. After analyzing the various regression lines and correlations, it was found that the HG could best estimate the liveweight of West African donkeys by the simple linear regression method. Indeed, the liveweight (LW) showed the best correlation with the HG (R2=0.81). The following formulas (Equations 1 and 2) could be used to estimate the LW of West African donkeys. Equation 1: Estimated LW (kg) = 2.55 x HG (cm) - 153.49; Equation 2: Estimated LW (kg) = HG (cm)^2.68 / 2312.44. Conclusion: The above formulas could be used to manufacture a weighing tape to be utilized by veterinary clinicians and farmers to estimate a donkey's weight for medication and adjustment of load.
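
    The two published equations translate directly into a small helper; the heart girth value used below is simply the study's reported average.

    ```python
    def liveweight_linear(heart_girth_cm):
        """Equation 1: Estimated LW (kg) = 2.55 * HG (cm) - 153.49"""
        return 2.55 * heart_girth_cm - 153.49

    def liveweight_power(heart_girth_cm):
        """Equation 2: Estimated LW (kg) = HG (cm)**2.68 / 2312.44"""
        return heart_girth_cm ** 2.68 / 2312.44

    hg = 104.4  # average heart girth reported in the study (cm)
    print(f"Eq. 1: {liveweight_linear(hg):.1f} kg,  Eq. 2: {liveweight_power(hg):.1f} kg")
    ```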

  18. Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.

    Science.gov (United States)

    Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja

    2015-06-01

    Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimate measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated measurement uncertainty of the concentration of glucose according to CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach. We identified the sources of uncertainty and made an uncertainty budget and assessed the measurement functions. We determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. Then, we estimated and corrected systematic bias using certified reference material of glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up approach and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches were approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
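
    The bottom-up combination step described above follows the usual GUM recipe: independent standard uncertainties are combined in quadrature and expanded with a coverage factor k = 2. The component values in the sketch below are hypothetical, not those of the glucose example.

    ```python
    import math

    # Hypothetical standard uncertainty components for a glucose result (mmol/L)
    components = {
        "calibrator assigned value": 0.04,
        "within-laboratory reproducibility": 0.07,
        "sample handling / pipetting": 0.02,
    }

    u_combined = math.sqrt(sum(u ** 2 for u in components.values()))  # combine in quadrature
    U_expanded = 2.0 * u_combined                                     # coverage factor k = 2
    print(f"u_c = {u_combined:.3f} mmol/L, U (k=2) = +/-{U_expanded:.2f} mmol/L")
    ```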

  19. Online wave estimation using vessel motion measurements

    DEFF Research Database (Denmark)

    H. Brodtkorb, Astrid; Nielsen, Ulrik D.; J. Sørensen, Asgeir

    2018-01-01

    In this paper, a computationally efficient online sea state estimation algorithm is proposed for estimation of the on-site sea state. The algorithm finds the wave spectrum estimate from motion measurements in heave, roll and pitch by iteratively solving a set of linear equations. The main vessel parameters and motion transfer functions are required as input. Apart from this the method is signal-based, with no assumptions on the wave spectrum shape, and as a result it is computationally efficient. The algorithm is implemented in a dynamic positioning (DP) control system, and tested through simulations.

  20. Subjective Quality Measurement of Speech Its Evaluation, Estimation and Applications

    CERN Document Server

    Kondo, Kazuhiro

    2012-01-01

    It is becoming crucial to accurately estimate and monitor speech quality in various ambient environments to guarantee high quality speech communication. This practical hands-on book shows speech intelligibility measurement methods so that the readers can start measuring or estimating speech intelligibility of their own system. The book also introduces subjective and objective speech quality measures, and describes in detail speech intelligibility measurement methods. It introduces a diagnostic rhyme test which uses rhyming word-pairs, and includes: An investigation into the effect of word familiarity on speech intelligibility. Speech intelligibility measurement of localized speech in virtual 3-D acoustic space using the rhyme test. Estimation of speech intelligibility using objective measures, including the ITU standard PESQ measures, and automatic speech recognizers.

  1. Joint release rate estimation and measurement-by-measurement model correction for atmospheric radionuclide emission in nuclear accidents: An application to wind tunnel experiments.

    Science.gov (United States)

    Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng

    2018-03-05

    The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and the extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Metric Indices for Performance Evaluation of a Mixed Measurement based State Estimator

    Directory of Open Access Journals (Sweden)

    Paula Sofia Vide

    2013-01-01

    Full Text Available With the development of synchronized phasor measurement technology in recent years, the use of PMU measurements to improve state estimation performance has gained great interest due to their synchronized characteristics and high data transmission speed. The ability of Phasor Measurement Units (PMU) to directly measure the system state is a key advantage over the SCADA measurement system. PMU measurements are superior to conventional SCADA measurements in terms of resolution and accuracy. Since the majority of measurements in existing estimators come from the conventional SCADA measurement system, which is unlikely to be fully replaced by PMUs in the near future, state estimators including both phasor and conventional SCADA measurements are being considered. In this paper, a mixed measurement (SCADA and PMU measurements) state estimator is proposed. Several useful measures for evaluating various aspects of the performance of the mixed measurement state estimator are proposed and explained. State estimator validity, performance and characteristics of the results on the IEEE 14 bus test system and the IEEE 30 bus test system are presented.

  3. [Measurement and estimation methods and research progress of snow evaporation in forests].

    Science.gov (United States)

    Li, Hui-Dong; Guan, De-Xin; Jin, Chang-Jie; Wang, An-Zhi; Yuan, Feng-Hui; Wu, Jia-Bing

    2013-12-01

    Accurate measurement and estimation of snow evaporation (sublimation) in forests is one of the important issues in understanding snow surface energy and water balance, and it is also an essential part of regional hydrological and climate models. This paper summarized the measurement and estimation methods of snow evaporation in forests and made a comprehensive evaluation of their applicability, covering mass-balance methods (snow water equivalent method, comparative measurements of snowfall and through-snowfall, snow evaporation pan, lysimeter, weighing of cut trees, weighing of interception on the crown, and the gamma-ray attenuation technique) and micrometeorological methods (Bowen-ratio energy-balance method, Penman combination equation, aerodynamic method, surface temperature technique and eddy covariance method). This paper also reviewed the progress of snow evaporation research in different forests and its influencing factors. Finally, considering the deficiencies of past research, an outlook for snow evaporation research in forests was presented, hoping to provide a reference for related research in the future.

  4. Underwater Acoustic Measurements to Estimate Wind and Rainfall in the Mediterranean Sea

    Directory of Open Access Journals (Sweden)

    Sara Pensieri

    2015-01-01

    Full Text Available Oceanic ambient noise measurements can be analyzed to obtain qualitative and quantitative information about wind and rainfall phenomena over the ocean, filling the existing gap in reliable meteorological observations at sea. The Ligurian Sea Acoustic Experiment was designed to collect long-term synergistic observations from a passive acoustic recorder and surface sensors (i.e., a buoy-mounted rain gauge and anemometer) and weather radar to support error analysis of the rainfall rate and wind speed quantification techniques developed in past studies. The study period included a combination of high and low wind and rainfall episodes and two storm events that caused two floods in the vicinity of La Spezia and in the city of Genoa in 2011. The availability of high resolution in situ meteorological data allows improving the data processing technique to detect, and especially to provide effective estimates of, wind and rainfall at sea. Results show a very good correspondence between estimates provided by the passive acoustic recorder algorithm and in situ observations for both rainfall and wind phenomena, and demonstrate the potential of using measurements provided by passive acoustic instruments in the open sea for early warning of approaching coastal storms, which for the Mediterranean coastal areas constitute one of the main causes of recurrent floods.

  5. Mathematical modeling for corrosion environment estimation based on concrete resistivity measurement directly above reinforcement

    International Nuclear Information System (INIS)

    Lim, Young-Chul; Lee, Han-Seung; Noguchi, Takafumi

    2009-01-01

    This study aims to formulate a resistivity model whereby the concrete resistivity, which expresses the environment of the steel reinforcement, can be directly estimated and evaluated from measurements taken immediately above the reinforcement, as a method of evaluating corrosion deterioration in reinforced concrete structures. It also aims to provide a theoretical ground for the feasibility of durability evaluation by electrical non-destructive techniques with no need for chipping of the cover concrete. This Resistivity Estimation Model (REM), a mathematical model based on the mirror method, combines conventional four-electrode measurement of resistivity with geometric parameters including cover depth, bar diameter, and electrode intervals. The model was verified by comparing estimates obtained with it at areas directly above reinforcement against resistivity measurements at areas unaffected by reinforcement; the two results correlated strongly, supporting the validity of the model. It is expected to be applicable to laboratory study and field diagnosis regarding reinforcement corrosion. (author)
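
    The conventional four-electrode measurement that the REM builds on is commonly performed in a Wenner configuration; a sketch of the apparent-resistivity calculation (with placeholder readings, and without the mirror-method geometric correction described in the abstract) is given below.

    ```python
    import math

    def wenner_apparent_resistivity(voltage_v, current_a, spacing_m):
        """Apparent resistivity (ohm*m) from a four-electrode Wenner measurement:
        rho = 2 * pi * a * V / I, with equal electrode spacing a."""
        return 2.0 * math.pi * spacing_m * voltage_v / current_a

    # Hypothetical surface reading taken directly above a reinforcing bar
    rho = wenner_apparent_resistivity(voltage_v=0.5, current_a=0.001, spacing_m=0.05)
    print(f"apparent resistivity above the bar: {rho:.0f} ohm*m")
    ```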

  6. Covariance-Based Estimation from Multisensor Delayed Measurements with Random Parameter Matrices and Correlated Noises

    Directory of Open Access Journals (Sweden)

    R. Caballero-Águila

    2014-01-01

    Full Text Available The optimal least-squares linear estimation problem is addressed for a class of discrete-time multisensor linear stochastic systems subject to randomly delayed measurements with different delay rates. For each sensor, a different binary sequence is used to model the delay process. The measured outputs are perturbed by both random parameter matrices and one-step autocorrelated and cross-correlated noises. Using an innovation approach, computationally simple recursive algorithms are obtained for the prediction, filtering, and smoothing problems, without requiring full knowledge of the state-space model generating the signal process, but only the information provided by the delay probabilities and the mean and covariance functions of the processes (signal, random parameter matrices, and noises) involved in the observation model. The accuracy of the estimators is measured by their error covariance matrices, which allow us to analyze the estimator performance in a numerical simulation example that illustrates the feasibility of the proposed algorithms.

  7. Visual estimation versus gravimetric measurement of postpartum blood loss: a prospective cohort study.

    Science.gov (United States)

    Al Kadri, Hanan M F; Al Anazi, Bedayah K; Tamim, Hani M

    2011-06-01

    One of the major problems in the international literature is how to measure postpartum blood loss with accuracy. We aimed in this research to assess the accuracy of visual estimation of postpartum blood loss (by each of two main health-care providers) compared with the gravimetric calculation method. We carried out a prospective cohort study at King Abdulaziz Medical City, Riyadh, Saudi Arabia between 1 November 2009 and 31 December 2009. All women who were admitted to the labor and delivery suite and delivered vaginally were included in the study. Postpartum blood loss was visually estimated by the attending physician and obstetrics nurse and then objectively calculated by a gravimetric machine. Comparison between the three methods of blood loss calculation was carried out. A total of 150 patients were included in this study. There was a significant difference between the gravimetrically calculated blood loss and both health-care providers' estimations, with a tendency to underestimate the loss by about 30%. The background and seniority of the assessing health-care provider did not affect the accuracy of the estimation. The corrected incidence of postpartum hemorrhage in Saudi Arabia was found to be 1.47%. Health-care providers tend to underestimate the volume of postpartum blood loss by about 30%. Training and continuous auditing of the diagnosis of postpartum hemorrhage is needed to avoid missing cases and thus prevent associated morbidity and mortality.

  8. Physical Activity in Vietnam: Estimates and Measurement Issues.

    Science.gov (United States)

    Bui, Tan Van; Blizzard, Christopher Leigh; Luong, Khue Ngoc; Truong, Ngoc Le Van; Tran, Bao Quoc; Otahal, Petr; Srikanth, Velandai; Nelson, Mark Raymond; Au, Thuy Bich; Ha, Son Thai; Phung, Hai Ngoc; Tran, Mai Hoang; Callisaya, Michele; Gall, Seana

    2015-01-01

    Our aims were to provide the first national estimates of physical activity (PA) for Vietnam, and to investigate issues affecting their accuracy. Measurements were made using the Global Physical Activity Questionnaire (GPAQ) on a nationally-representative sample of 14706 participants (46.5% males, response 64.1%) aged 25-64 years selected by multi-stage stratified cluster sampling. Approximately 20% of Vietnamese people had no measureable PA during a typical week, but 72.9% (men) and 69.1% (women) met WHO recommendations for PA by adults for their age. On average, 52.0 (men) and 28.0 (women) Metabolic Equivalent Task (MET)-hours/week (largely from work activities) were reported. Work and total PA were higher in rural areas and varied by season. Less than 2% of respondents provided incomplete information, but an additional one-in-six provided unrealistically high values of PA. Those responsible for reporting errors included persons from rural areas and all those with unstable work patterns. Box-Cox transformation (with an appropriate constant added) was the most successful method of reducing the influence of large values, but energy-scaled values were most strongly associated with pathophysiological outcomes. Around seven-in-ten Vietnamese people aged 25-64 years met WHO recommendations for total PA, which was mainly from work activities and higher in rural areas. Nearly all respondents were able to report their activity using the GPAQ, but with some exaggerated values and seasonal variation in reporting. Data transformation provided plausible summary values, but energy-scaling fared best in association analyses.
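
    The Box-Cox step mentioned above can be illustrated in a few lines; the weekly MET-hour values and the added constant below are made up.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical weekly totals (MET-hours/week); zeros are the ~20% with no measurable PA
    met_hours = np.array([0.0, 0.0, 4.0, 12.0, 28.0, 40.0, 52.0, 90.0, 160.0, 400.0])

    shift = 1.0                                   # constant added so zero values can be transformed
    transformed, lam = stats.boxcox(met_hours + shift)
    print(f"fitted lambda = {lam:.2f}")
    print(np.round(transformed, 2))
    ```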

  9. Physical Activity in Vietnam: Estimates and Measurement Issues.

    Directory of Open Access Journals (Sweden)

    Tan Van Bui

    Full Text Available Our aims were to provide the first national estimates of physical activity (PA) for Vietnam, and to investigate issues affecting their accuracy. Measurements were made using the Global Physical Activity Questionnaire (GPAQ) on a nationally-representative sample of 14706 participants (46.5% males, response 64.1%) aged 25-64 years selected by multi-stage stratified cluster sampling. Approximately 20% of Vietnamese people had no measureable PA during a typical week, but 72.9% (men) and 69.1% (women) met WHO recommendations for PA by adults for their age. On average, 52.0 (men) and 28.0 (women) Metabolic Equivalent Task (MET)-hours/week (largely from work activities) were reported. Work and total PA were higher in rural areas and varied by season. Less than 2% of respondents provided incomplete information, but an additional one-in-six provided unrealistically high values of PA. Those responsible for reporting errors included persons from rural areas and all those with unstable work patterns. Box-Cox transformation (with an appropriate constant added) was the most successful method of reducing the influence of large values, but energy-scaled values were most strongly associated with pathophysiological outcomes. Around seven-in-ten Vietnamese people aged 25-64 years met WHO recommendations for total PA, which was mainly from work activities and higher in rural areas. Nearly all respondents were able to report their activity using the GPAQ, but with some exaggerated values and seasonal variation in reporting. Data transformation provided plausible summary values, but energy-scaling fared best in association analyses.

  10. Lake Evaporation in a Hyper-Arid Environment, Northwest of China—Measurement and Estimation

    OpenAIRE

    Xiao Liu; Jingjie Yu; Ping Wang; Yichi Zhang; Chaoyang Du

    2016-01-01

    Lake evaporation is a critical component of the hydrological cycle. Quantifying lake evaporation in hyper-arid regions by measurement and estimation can both provide a reliable potential evaporation (ET0) reference and promote a deeper understanding of the regional hydrological process and its response to changing climate. We placed a floating E601 evaporation pan on East Juyan Lake, which is representative of arid regions' terminal lakes, to measure daily evaporation and conducted simulta...

  11. New measure of insulin sensitivity predicts cardiovascular disease better than HOMA estimated insulin resistance.

    Directory of Open Access Journals (Sweden)

    Kavita Venkataraman

    Full Text Available CONTEXT: Accurate assessment of insulin sensitivity may better identify individuals at increased risk of cardio-metabolic diseases. OBJECTIVES: To examine whether a combination of anthropometric, biochemical and imaging measures can better estimate the insulin sensitivity index (ISI) and provide improved prediction of cardio-metabolic risk, in comparison to HOMA-IR. DESIGN AND PARTICIPANTS: Healthy male volunteers (96 Chinese, 80 Malay, 77 Indian; 21 to 40 years; body mass index 18-30 kg/m2). Predicted ISI (ISI-cal) was generated using 45 randomly selected Chinese through stepwise multiple linear regression, and validated in the rest using non-parametric correlation (Kendall's tau, τ). In an independent longitudinal cohort, ISI-cal and HOMA-IR were compared for prediction of diabetes and cardiovascular disease (CVD), using ROC curves. SETTING: The study was conducted in a university academic medical centre. OUTCOME MEASURES: ISI measured by hyperinsulinemic euglycemic glucose clamp, along with anthropometric measurements, biochemical assessment and imaging; incident diabetes and CVD. RESULTS: A combination of fasting insulin, serum triglycerides and waist-to-hip ratio (WHR) provided the best estimate of clamp-derived ISI (adjusted R2 0.58 versus 0.32 for HOMA-IR). In an independent cohort, ROC areas under the curve were 0.77±0.02 for ISI-cal versus 0.76±0.02 for HOMA-IR (p>0.05) for incident diabetes, and 0.74±0.03 for ISI-cal versus 0.61±0.03 for HOMA-IR (p<0.001) for incident CVD. ISI-cal also had greater sensitivity than defined metabolic syndrome in predicting CVD, with a four-fold increase in the risk of CVD independent of metabolic syndrome. CONCLUSIONS: Triglycerides and WHR, combined with fasting insulin levels, provide a better estimate of the current insulin resistance state and improved identification of individuals with future risk of CVD, compared to HOMA-IR. This may be useful for estimating insulin sensitivity and cardio-metabolic risk in clinical and

  12. An angle-dependent estimation of CT x-ray spectrum from rotational transmission measurements

    International Nuclear Information System (INIS)

    Lin, Yuan; Samei, Ehsan; Ramirez-Giraldo, Juan Carlos; Gauthier, Daniel J.; Stierstorfer, Karl

    2014-01-01

    Purpose: Computed tomography (CT) performance as well as dose and image quality is directly affected by the x-ray spectrum. However, the current assessment approaches of the CT x-ray spectrum require costly measurement equipment and complicated operational procedures, and are often limited to the spectrum corresponding to the center of rotation. In order to address these limitations, the authors propose an angle-dependent estimation technique, where the incident spectra across a wide range of angular trajectories can be estimated accurately with only a single phantom and a single axial scan in the absence of the knowledge of the bowtie filter. Methods: The proposed technique uses a uniform cylindrical phantom, made of ultra-high-molecular-weight polyethylene and positioned in an off-centered geometry. The projection data acquired with an axial scan have a twofold purpose. First, they serve as a reflection of the transmission measurements across different angular trajectories. Second, they are used to reconstruct the cross sectional image of the phantom, which is then utilized to compute the intersection length of each transmission measurement. With each CT detector element recording a range of transmission measurements for a single angular trajectory, the spectrum is estimated for that trajectory. A data conditioning procedure is used to combine information from hundreds of collected transmission measurements to accelerate the estimation speed, to reduce noise, and to improve estimation stability. The proposed spectral estimation technique was validated experimentally using a clinical scanner (Somatom Definition Flash, Siemens Healthcare, Germany) with spectra provided by the manufacturer serving as the comparison standard. Results obtained with the proposed technique were compared against those obtained from a second conventional transmission measurement technique with two materials (i.e., Cu and Al). After validation, the proposed technique was applied to measure

  13. Bad data detection in two stage estimation using phasor measurements

    Science.gov (United States)

    Tarali, Aditya

    The ability of the Phasor Measurement Unit (PMU) to directly measure the system state has led to a steady increase in the use of PMUs over the past decade. However, in spite of their high accuracy and ability to measure states directly, PMUs cannot completely replace conventional measurement units due to high cost. Hence it is necessary for modern estimators to use both conventional and phasor measurements together. This thesis presents an alternative method to incorporate the new PMU measurements into the existing state estimator in a systematic manner, such that no major modification to the existing algorithm is necessary. It is also shown that if PMUs are placed appropriately, the phasor measurements can be used with this model to detect and identify bad data associated with critical measurements, which cannot be detected by the conventional state estimation algorithm. The developed model is tested on the IEEE 14, IEEE 30 and IEEE 118 bus systems under various conditions.

  14. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    Science.gov (United States)

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.

  15. Validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land in southern Italy

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.

    2012-01-01

    Soil erosion represents an important threat to the long-term sustainability of agriculture and forestry in many areas of the world, including southern Italy. Numerous models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution, based on the local topography, hydrometeorology, soil type and land management. However, there remains an important need for empirical measurements to provide a basis for validating and calibrating such models and prediction procedures as well as to support specific investigations and experiments. In this context, erosion plots provide useful information on gross rates of soil loss, but are unable to document the efficiency of the onward transfer of the eroded sediment within a field and towards the stream system, and thus net rates of soil loss from larger areas. The use of environmental radionuclides, particularly caesium-137 ( 137 Cs) and excess lead-210 ( 210 Pb ex ), as a means of estimating rates of soil erosion and deposition has attracted increasing attention in recent years and the approach has now been recognised as possessing several important advantages. In order to provide further confirmation of the validity of the estimates of longer-term erosion and soil redistribution rates provided by 137 Cs and 210 Pb ex measurements, there is a need for studies aimed explicitly at validating the results obtained. In this context, the authors directed attention to the potential offered by a set of small erosion plots located near Reggio Calabria in southern Italy, for validating estimates of soil loss provided by 137 Cs and 210 Pb ex measurements. A preliminary assessment suggested that, notwithstanding the limitations and constraints involved, a worthwhile investigation aimed at validating the use of 137 Cs and 210 Pb ex measurements to estimate rates of soil loss from cultivated land could be undertaken. The results demonstrate a close consistency between the measured rates of soil loss and

  16. An Adaptive Low-Cost INS/GNSS Tightly-Coupled Integration Architecture Based on Redundant Measurement Noise Covariance Estimation.

    Science.gov (United States)

    Li, Zheng; Zhang, Hai; Zhou, Qifan; Che, Huan

    2017-09-05

    The main objective of this study is to design an adaptive Inertial Navigation System/Global Navigation Satellite System (INS/GNSS) tightly-coupled integration system that can provide more reliable navigation solutions by making full use of an adaptive Kalman filter (AKF) and a satellite selection algorithm. To achieve this goal, we develop a novel redundant measurement noise covariance estimation (RMNCE) theorem, which adaptively estimates measurement noise properties by analyzing the difference sequences of system measurements. The proposed RMNCE approach is then applied to design both a modified weighted satellite selection algorithm and a type of adaptive unscented Kalman filter (UKF) to improve the performance of the tightly-coupled integration system. In addition, an adaptive measurement noise covariance expanding algorithm is developed to mitigate outliers when facing heavy multipath and other harsh situations. Both semi-physical simulation and field experiments were conducted to evaluate the performance of the proposed architecture, which was compared with state-of-the-art algorithms. The results validate that the RMNCE provides a significant improvement in measurement noise covariance estimation and that the proposed architecture can improve the accuracy and reliability of INS/GNSS tightly-coupled systems. The proposed architecture can effectively limit positioning errors under conditions of poor GNSS measurement quality and outperforms all the compared schemes.

  17. A procedure for the estimation over time of metabolic fluxes in scenarios where measurements are uncertain and/or insufficient

    Directory of Open Access Journals (Sweden)

    Picó Jesús

    2007-10-01

    Full Text Available Abstract Background An indirect approach is usually used to estimate the metabolic fluxes of an organism: couple the available measurements with known biological constraints (e.g. stoichiometry). Typically this estimation is done from a static point of view, so the fluxes so obtained are only valid while the environmental conditions and the cell state remain stable. However, estimating the evolution over time of the metabolic fluxes is valuable for investigating the dynamic behaviour of an organism and also for monitoring industrial processes. Although Metabolic Flux Analysis can be applied successively with this aim, this approach has two drawbacks: i) sometimes it cannot be used because there is a lack of measurable fluxes, and ii) the uncertainty of experimental measurements cannot be considered. Flux Balance Analysis could be used instead, but the assumption of optimal behaviour of the organism brings other difficulties. Results We propose a procedure to estimate the evolution of the metabolic fluxes that is structured as follows: 1) measure the concentrations of extracellular species and biomass, 2) convert these data to measured fluxes and 3) estimate the non-measured fluxes using the Flux Spectrum Approach, a variant of Metabolic Flux Analysis that overcomes the difficulties mentioned above without assuming optimal behaviour. We apply the procedure to a real problem taken from the literature: estimating the metabolic fluxes during a cultivation of CHO cells in batch mode. We show that it provides a reliable and rich estimation of the non-measured fluxes, thanks to considering measurement uncertainty and reversibility constraints. We also demonstrate that this procedure can estimate the non-measured fluxes even when there is a lack of measurable species. In addition, it offers a new method to deal with inconsistency. Conclusion This work introduces a procedure to estimate time-varying metabolic fluxes that copes with the insufficiency of

  18. KMRR thermal power measurement error estimation

    International Nuclear Information System (INIS)

    Rhee, B.W.; Sim, B.S.; Lim, I.C.; Oh, S.K.

    1990-01-01

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method and compared with the errors obtained by other methods, including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if commercial RTDs are used to measure the coolant temperatures of the secondary cooling system, and that the error can be reduced below the requirement if the commercial RTDs are replaced by precision RTDs. The possible range of the thermal power control operation has been identified to be from 100% to 20% of full power.
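    The record does not give the error model used, but the Monte Carlo idea can be illustrated with a simple heat-balance power estimate, P = m_dot · cp · ΔT, in which temperature-sensor and flow errors are sampled repeatedly and propagated into the power value. All numbers below (flow rate, temperature rise, RTD accuracies) are illustrative assumptions, not KMRR data:

      import numpy as np

      # Hedged sketch: sample RTD and flow-rate errors, propagate them through the
      # heat-balance power estimate, and read the relative uncertainty off the spread.
      rng = np.random.default_rng(1)
      n = 100_000
      m_dot = rng.normal(300.0, 3.0, n)          # kg/s, assumed 1 % flow uncertainty
      cp = 4.18e3                                # J/(kg K), water
      dT_true = 5.0                              # K, assumed temperature rise
      sigma_rtd = 0.3                            # K per sensor, commercial-grade assumption
      dT = dT_true + rng.normal(0.0, sigma_rtd, n) - rng.normal(0.0, sigma_rtd, n)
      power = m_dot * cp * dT
      print(f"relative power uncertainty ~ {100 * np.std(power) / np.mean(power):.1f} %")

    Under these assumed numbers the relative uncertainty comes out near 8-9 %, well above a 5 % target, while re-running with a precision-RTD sigma of about 0.1 K brings it close to 3 %; this mirrors the qualitative conclusion of the record.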

  19. Power system low frequency oscillation mode estimation using wide area measurement systems

    Directory of Open Access Journals (Sweden)

    Papia Ray

    2017-04-01

    Full Text Available Oscillations in power systems are triggered by a wide variety of events. The system damps most of the oscillations, but a few undamped oscillations may remain which may lead to system collapse. Therefore, inspection of low frequency oscillations is necessary in the context of recent power system operation and control. The ringdown portion of the signal provides rich information on the low frequency oscillatory modes and is taken for analysis. This paper provides a practical case study in which seven signal processing based techniques, i.e. Prony Analysis (PA), Fast Fourier Transform (FFT), S-Transform (ST), Wigner-Ville Distribution (WVD), Estimation of Signal Parameters by Rotational Invariance Technique (ESPRIT), Hilbert-Huang Transform (HHT) and Matrix Pencil Method (MPM), are presented for estimating the low frequency modes in a given ringdown signal. Preprocessing of the signal is done by detrending. The application of the signal processing techniques is illustrated using actual wide area measurement systems (WAMS) data collected from four different Phasor Measurement Units (PMUs), i.e. Dadri, Vindyachal, Kanpur and Moga, which are located near the recent disturbance event in the Northern Grid of India. Simulation results show that the seven signal processing techniques (FFT, PA, ST, WVD, ESPRIT, HHT and MPM) estimate two common oscillatory frequency modes (0.2, 0.5) from the raw signal. Thus, these seven techniques provide satisfactory performance in determining small frequency modes of the signal without losing its valuable property. Also a comparative study of the seven signal processing techniques has been carried out in order to find the best one. It was found that FFT and ESPRIT give exact frequency modes compared to the other techniques, so they are recommended for estimation of low frequency modes. Further investigations were also carried out to estimate low frequency oscillatory modes with another case study of Eastern Interconnect Phasor Project
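    Of the seven techniques listed, the FFT route is simple enough to sketch directly: detrend the ringdown record, window it, and read the dominant low-frequency peaks off the magnitude spectrum. The signal below is synthetic (two damped modes at 0.2 and 0.5 Hz plus a trend), standing in for PMU data; the sampling rate and mode parameters are assumptions, not the study's data:

      import numpy as np

      fs = 25.0                                    # Hz, assumed reporting rate
      t = np.arange(0, 40, 1 / fs)
      x = (1.0 * np.exp(-0.05 * t) * np.sin(2 * np.pi * 0.2 * t)
           + 0.6 * np.exp(-0.10 * t) * np.sin(2 * np.pi * 0.5 * t)
           + 50.0 + 0.02 * t)                      # offset plus slow trend

      x = x - np.polyval(np.polyfit(t, x, 1), t)   # detrending (preprocessing step)
      mag = np.abs(np.fft.rfft(x * np.hanning(x.size)))
      freqs = np.fft.rfftfreq(x.size, 1 / fs)

      # local spectral maxima between 0.05 and 2 Hz, strongest first
      peaks = [i for i in range(1, mag.size - 1)
               if mag[i] > mag[i - 1] and mag[i] > mag[i + 1] and 0.05 < freqs[i] < 2.0]
      peaks.sort(key=lambda i: mag[i], reverse=True)
      print("dominant modes (Hz):", [round(freqs[i], 2) for i in peaks[:2]])   # ~0.2, 0.5

    Parametric methods such as Prony or ESPRIT would additionally return damping ratios, which a plain FFT peak-pick does not.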

  20. Software project estimation the fundamentals for providing high quality information to decision makers

    CERN Document Server

    Abran, Alain

    2015-01-01

    Software projects are often late and over-budget and this leads to major problems for software customers. Clearly, there is a serious issue in estimating a realistic software project budget. Furthermore, generic estimation models cannot be trusted to provide credible estimates for projects as complex as software projects. This book presents a number of examples using data collected over the years from various organizations building software. It also presents an overview of the not-for-profit organization, which collects data on software projects, the International Software Benchmarking Stan

  1. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    Science.gov (United States)

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO2 industrial point source, located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventorial data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, markedly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
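    As a rough illustration of the mass balance method mentioned above, the emission rate can be approximated by integrating the plume concentration enhancement times the wind component normal to a downwind flight transect over the crosswind distance and an assumed plume depth. The sketch below is a simplified, single-transect version with made-up numbers; it is not the campaign's data or code:

      import numpy as np

      def mass_balance_flux(conc_ppm, background_ppm, u_normal_ms, dy_m, dz_m,
                            air_density_kg_m3=1.2, molar_mass_co2=44.01, molar_mass_air=28.97):
          # emission rate in kg/s from one horizontal transect, assuming a
          # well-mixed plume layer of depth dz_m
          enhancement = (conc_ppm - background_ppm) * 1e-6              # mole fraction
          mass_mixing = enhancement * molar_mass_co2 / molar_mass_air   # kg CO2 per kg air
          flux_density = mass_mixing * air_density_kg_m3 * u_normal_ms  # kg m^-2 s^-1
          return float(np.sum(flux_density) * dy_m * dz_m)

      # illustrative transect: enhancements in ppm along the flight track
      conc = np.array([400.0, 402.0, 408.0, 415.0, 409.0, 403.0, 400.5])
      print(mass_balance_flux(conc, background_ppm=400.0, u_normal_ms=5.0, dy_m=200.0, dz_m=300.0))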

  2. Load estimation from planar PIV measurement in vortex dominated flows

    Science.gov (United States)

    McClure, Jeffrey; Yarusevych, Serhiy

    2017-11-01

    Control volume-based loading estimates are employed on experimental and synthetic numerical planar Particle Image Velocimetry (PIV) data of a stationary cylinder and a cylinder undergoing one degree-of-freedom (1DOF) Vortex Induced Vibration (VIV). The results reveal the necessity of including out of plane terms, identified from a general formulation of the control volume momentum balance, when evaluating loads from planar measurements in three-dimensional flows. Reynolds stresses from out of plane fluctuations are shown to be significant for both instantaneous and mean force estimates when the control volume encompasses vortex dominated regions. For planar measurement, invoking a divergence-free assumption allows accurate estimation of half the identified terms. Towards evaluating the fidelity of PIV-based loading estimates for obtaining the forcing function unobtrusively in VIV experiments, the accuracy of the control volume-based loading methodology is evaluated using the numerical data with synthetically generated experimental PIV error, and a comparison is made between experimental PIV-based estimates and simultaneous force balance measurements.

  3. Estimates of economic burden of providing inpatient care in childhood rotavirus gastroenteritis from Malaysia.

    Science.gov (United States)

    Lee, Way Seah; Poo, Muhammad Izzuddin; Nagaraj, Shyamala

    2007-12-01

    To estimate the cost of an episode of inpatient care and the economic burden of hospitalisation for childhood rotavirus gastroenteritis (GE) in Malaysia. A 12-month prospective, hospital-based study on children less than 14 years of age with rotavirus GE, admitted to University of Malaya Medical Centre, Kuala Lumpur, was conducted in 2002. Data on human resource expenditure, costs of investigations, treatment and consumables were collected. Published estimates on rotavirus disease incidence in Malaysia were searched. The economic burden of hospital care for rotavirus GE in Malaysia was estimated by multiplying the cost of each episode of hospital admission for rotavirus GE by the national rotavirus incidence in Malaysia. In 2002, the per capita health expenditure by the Malaysian Government was US$71.47. Rotavirus was positive in 85 (22%) of the 393 patients with acute GE admitted during the study period. The median cost of providing inpatient care for an episode of rotavirus GE was US$211.91 (range US$68.50-880.60). The estimated average number of children hospitalised for rotavirus GE in Malaysia (1999-2000) was 8571 annually. The financial burden of providing inpatient care for rotavirus GE in Malaysian children was estimated to be US$1.8 million (range US$0.6 million-7.5 million) annually. The cost of providing inpatient care for childhood rotavirus GE in Malaysia was estimated to be US$1.8 million annually. The financial burden of rotavirus disease would be higher if the costs of outpatient visits and non-medical and societal costs were included.
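    The headline figure follows directly from the two numbers quoted above (a worked arithmetic check, not additional study data):

      # median inpatient cost per episode times estimated annual hospitalised cases
      cost_per_episode_usd = 211.91
      annual_cases = 8571
      print(f"US${cost_per_episode_usd * annual_cases / 1e6:.1f} million per year")  # ~US$1.8 million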

  4. Is it feasible to estimate radiosonde biases from interlaced measurements?

    Science.gov (United States)

    Kremser, Stefanie; Tradowsky, Jordis S.; Rust, Henning W.; Bodeker, Greg E.

    2018-05-01

    Upper-air measurements of essential climate variables (ECVs), such as temperature, are crucial for climate monitoring and climate change detection. Because of the internal variability of the climate system, many decades of measurements are typically required to robustly detect any trend in the climate data record. It is imperative for the records to be temporally homogeneous over many decades to confidently estimate any trend. Historically, records of upper-air measurements were primarily made for short-term weather forecasts and as such are seldom suitable for studying long-term climate change as they lack the required continuity and homogeneity. Recognizing this, the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) has been established to provide reference-quality measurements of climate variables, such as temperature, pressure, and humidity, together with well-characterized and traceable estimates of the measurement uncertainty. To ensure that GRUAN data products are suitable to detect climate change, a scientifically robust instrument replacement strategy must always be adopted whenever there is a change in instrumentation. By fully characterizing any systematic differences between the old and new measurement system a temporally homogeneous data series can be created. One strategy is to operate both the old and new instruments in tandem for some overlap period to characterize any inter-instrument biases. However, this strategy can be prohibitively expensive at measurement sites operated by national weather services or research institutes. An alternative strategy that has been proposed is to alternate between the old and new instruments, so-called interlacing, and then statistically derive the systematic biases between the two instruments. Here we investigate the feasibility of such an approach specifically for radiosondes, i.e. flying the old and new instruments on alternating days. Synthetic data sets are used to explore the

  5. Codon Deviation Coefficient: a novel measure for estimating codon usage bias and its statistical significance

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2012-03-01

    Full Text Available Abstract Background Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts the bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.
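    The exact CDC formula is not reproduced in the record, but the two ingredients it names -- a deviation of observed codon usage from expectations under position-specific background nucleotide composition, and a bootstrap test of significance -- can be sketched as follows. This is a simplified illustration with a made-up deviation statistic, not the published CDC implementation:

      import numpy as np
      from collections import Counter
      from itertools import product

      BASES = list("ACGT")
      CODONS = ["".join(c) for c in product(BASES, repeat=3)]

      def codon_deviation(codons):
          # distance between observed codon frequencies and those expected from the
          # position-specific background nucleotide composition
          n = len(codons)
          obs = Counter(codons)
          pos = [Counter(c[i] for c in codons) for i in range(3)]
          exp = {c: np.prod([pos[i][c[i]] / n for i in range(3)]) for c in CODONS}
          return sum((obs[c] / n - exp[c]) ** 2 for c in CODONS) ** 0.5, pos

      def bootstrap_significance(codons, n_boot=500, seed=0):
          # p-value: how often background-generated sequences deviate at least as much
          rng = np.random.default_rng(seed)
          n = len(codons)
          observed, pos = codon_deviation(codons)
          probs = [[pos[i][b] / n for b in BASES] for i in range(3)]
          null = []
          for _ in range(n_boot):
              sim = ["".join(rng.choice(BASES, p=probs[i]) for i in range(3)) for _ in range(n)]
              null.append(codon_deviation(sim)[0])
          return observed, float(np.mean([d >= observed for d in null]))

      coding = ["ATG", "GCT", "GCT", "GCA", "AAA", "AAG", "GCT", "GGT", "TAA"] * 20
      print(bootstrap_significance(coding))   # (deviation statistic, bootstrap p-value)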

  6. New measure of insulin sensitivity predicts cardiovascular disease better than HOMA estimated insulin resistance.

    Science.gov (United States)

    Venkataraman, Kavita; Khoo, Chin Meng; Leow, Melvin K S; Khoo, Eric Y H; Isaac, Anburaj V; Zagorodnov, Vitali; Sadananthan, Suresh A; Velan, Sendhil S; Chong, Yap Seng; Gluckman, Peter; Lee, Jeannette; Salim, Agus; Tai, E Shyong; Lee, Yung Seng

    2013-01-01

    Accurate assessment of insulin sensitivity may better identify individuals at increased risk of cardio-metabolic diseases. To examine whether a combination of anthropometric, biochemical and imaging measures can better estimate insulin sensitivity index (ISI) and provide improved prediction of cardio-metabolic risk, in comparison to HOMA-IR. Healthy male volunteers (96 Chinese, 80 Malay, 77 Indian), 21 to 40 years, body mass index 18-30 kg/m^2. Predicted ISI (ISI-cal) was generated using 45 randomly selected Chinese through stepwise multiple linear regression, and validated in the rest using non-parametric correlation (Kendall's tau τ). In an independent longitudinal cohort, ISI-cal and HOMA-IR were compared for prediction of diabetes and cardiovascular disease (CVD), using ROC curves. The study was conducted in a university academic medical centre. ISI measured by hyperinsulinemic euglycemic glucose clamp, along with anthropometric measurements, biochemical assessment and imaging; incident diabetes and CVD. A combination of fasting insulin, serum triglycerides and waist-to-hip ratio (WHR) provided the best estimate of clamp-derived ISI (adjusted R^2 0.58 versus 0.32 for HOMA-IR). In an independent cohort, ROC areas under the curve were 0.77±0.02 for ISI-cal versus 0.76±0.02 for HOMA-IR (p>0.05) for incident diabetes, and 0.74±0.03 for ISI-cal versus 0.61±0.03 for HOMA-IR (p<0.05) for incident CVD, indicating better prediction of CVD by ISI-cal than by HOMA-IR. This may be useful for estimating insulin sensitivity and cardio-metabolic risk in clinical and epidemiological settings.

  7. Epithelium percentage estimation facilitates epithelial quantitative protein measurement in tissue specimens.

    Science.gov (United States)

    Chen, Jing; Toghi Eshghi, Shadi; Bova, George Steven; Li, Qing Kay; Li, Xingde; Zhang, Hui

    2013-12-01

    The rapid advancement of high-throughput tools for quantitative measurement of proteins has demonstrated the potential for the identification of proteins associated with cancer. However, the quantitative results on cancer tissue specimens are usually confounded by tissue heterogeneity, e.g. regions with cancer usually have significantly higher epithelium content yet lower stromal content. It is therefore necessary to develop a tool to facilitate the interpretation of the results of protein measurements in tissue specimens. Epithelial cell adhesion molecule (EpCAM) and cathepsin L (CTSL) are two epithelial proteins whose expressions in normal and tumorous prostate tissues were confirmed by measuring staining intensity with immunohistochemical staining (IHC). The expressions of these proteins were measured by ELISA in protein extracts from OCT embedded frozen prostate tissues. To eliminate the influence of tissue heterogeneity on epithelial protein quantification measured by ELISA, a color-based segmentation method was developed in-house for estimation of epithelium content using H&E histology slides from the same prostate tissues and the estimated epithelium percentage was used to normalize the ELISA results. The epithelium contents of the same slides were also estimated by a pathologist and used to normalize the ELISA results. The computer-based results were compared with the pathologist's reading. We found that both EpCAM and CTSL levels, measured by the ELISA assays themselves, were greatly affected by epithelium content in the tissue specimens. Without adjusting for epithelium percentage, both EpCAM and CTSL levels appeared significantly higher in tumor tissues than in normal tissues with a p value less than 0.001. However, after normalization by the epithelium percentage, ELISA measurements of both EpCAM and CTSL were in agreement with IHC staining results, showing a significant increase only in EpCAM with no difference in CTSL expression in cancer tissues. These results

  8. Estimation of the measurement error of eccentrically installed orifice plates

    Energy Technology Data Exchange (ETDEWEB)

    Barton, Neil; Hodgkinson, Edwin; Reader-Harris, Michael

    2005-07-01

    The presentation discusses methods for simulation and estimation of flow measurement errors. The main conclusions are: Computational Fluid Dynamics (CFD) simulation methods and published test measurements have been used to estimate the error of a metering system over a period when its orifice plates were eccentric and when leaking O-rings allowed some gas to bypass the meter. It was found that plate eccentricity effects would result in errors of between -2% and -3% for individual meters. Validation against test data suggests that these estimates of error should be within 1% of the actual error, but it is unclear whether the simulations over-estimate or under-estimate the error. Simulations were also run to assess how leakage at the periphery affects the metering error. Various alternative leakage scenarios were modelled and it was found that the leakage rate has an effect on the error, but that the leakage distribution does not. Correction factors, based on the CFD results, were then used to predict the system's mis-measurement over a three-year period.

  9. Study on Posture Estimation Using Delayed Measurements for Mobile Robots

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    When associating data from various sensors to estimate the posture of mobile robots, a crucial problem to be solved is that there may be some delayed measurements. The Kalman filter is the most commonly used multi-sensor data fusion algorithm. In order to handle the problem of delayed measurements, this paper investigates a Kalman filter modified to account for the delays. Based on interpolated measurements, a fusion system is applied that estimates the posture of a mobile robot by fusing data from the encoder and a laser global positioning system using the extended Kalman filter algorithm. Finally, a posture estimation experiment with the mobile robot is presented, whose results verify the feasibility and efficiency of the algorithm.

  10. Information-geometric measures estimate neural interactions during oscillatory brain states

    Directory of Open Access Journals (Sweden)

    Yimin Nie

    2014-02-01

    Full Text Available The characterization of functional network structures among multiple neurons is essential to understanding neural information processing. Information geometry (IG), a theory developed for investigating a space of probability distributions, has recently been applied to spike-train analysis and has provided robust estimations of neural interactions. Although neural firing in the equilibrium state is often assumed in these studies, in reality, neural activity is non-stationary. The brain exhibits various oscillations depending on cognitive demands or when an animal is asleep. Therefore, the investigation of the IG measures during oscillatory network states is important for testing how the IG method can be applied to real neural data. Using model networks of binary neurons or more realistic spiking neurons, we studied how the single- and pairwise-IG measures were influenced by oscillatory neural activity. Two general oscillatory mechanisms, externally driven oscillations and internally induced oscillations, were considered. In both mechanisms, we found that the single-IG measure was linearly related to the magnitude of the external input, and that the pairwise-IG measure was linearly related to the sum of connection strengths between two neurons. We also observed that the pairwise-IG measure was not dependent on the oscillation frequency. These results are consistent with the previous findings that were obtained under the equilibrium conditions. Therefore, we demonstrate that the IG method provides useful insights into neural interactions under the oscillatory condition that can often be observed in the real brain.

  11. Application of a virtual coordinate measuring machine for measurement uncertainty estimation of aspherical lens parameters

    International Nuclear Information System (INIS)

    Küng, Alain; Meli, Felix; Nicolet, Anaïs; Thalmann, Rudolf

    2014-01-01

    Tactile ultra-precise coordinate measuring machines (CMMs) are very attractive for accurately measuring optical components with high slopes, such as aspheres. The METAS µ-CMM, which exhibits a single point measurement repeatability of a few nanometres, is routinely used for measurement services of microparts, including optical lenses. However, estimating the measurement uncertainty is very demanding. Because of the many combined influencing factors, an analytic determination of the uncertainty of parameters that are obtained by numerical fitting of the measured surface points is almost impossible. The application of numerical simulation (Monte Carlo methods) using a parametric fitting algorithm coupled with a virtual CMM based on a realistic model of the machine errors offers an ideal solution to this complex problem: to each measurement data point, a simulated measurement variation calculated from the numerical model of the METAS µ-CMM is added. Repeated several hundred times, these virtual measurements deliver the statistical data for calculating the probability density function, and thus the measurement uncertainty for each parameter. Additionally, the eventual cross-correlation between parameters can be analyzed. This method can be applied for the calibration and uncertainty estimation of any parameter of the equation representing a geometric element. In this article, we present the numerical simulation model of the METAS µ-CMM and the application of a Monte Carlo method for the uncertainty estimation of measured asphere parameters. (paper)
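    The record describes the general recipe: add simulated machine-error draws to each measured point, refit the geometric parameters, and repeat until the parameter distribution is resolved. A toy version of that loop, with a circle radius standing in for an asphere parameter and a deliberately simplified two-term error model (the real METAS µ-CMM error model is far richer), might look like this:

      import numpy as np

      rng = np.random.default_rng(2)
      theta = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
      true_r = 5.000                                        # mm, nominal radius
      points = np.c_[true_r * np.cos(theta), true_r * np.sin(theta)]

      def fit_radius(pts):
          # algebraic least-squares (Kasa) circle fit; only the radius is returned
          A = np.c_[2.0 * pts, np.ones(len(pts))]
          b = (pts ** 2).sum(axis=1)
          cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
          return np.sqrt(c + cx ** 2 + cy ** 2)

      radii = []
      for _ in range(2000):
          random_error = rng.normal(0.0, 0.00005, points.shape)             # 50 nm per point (assumed)
          common_offset = rng.normal(0.0, 0.00002) * np.ones_like(points)   # correlated error (assumed)
          radii.append(fit_radius(points + random_error + common_offset))
      radii = np.asarray(radii)
      print(f"radius = {radii.mean():.5f} mm, standard uncertainty = {radii.std(ddof=1) * 1e6:.0f} nm")

    Replacing the toy error model with a validated model of the machine's systematic and random errors, and the circle fit with the asphere equation fit, gives the Monte Carlo scheme described in the record.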

  12. Wind Speed Preview Measurement and Estimation for Feedforward Control of Wind Turbines

    Science.gov (United States)

    Simley, Eric J.

    Wind turbines typically rely on feedback controllers to maximize power capture in below-rated conditions and regulate rotor speed during above-rated operation. However, measurements of the approaching wind provided by Light Detection and Ranging (lidar) can be used as part of a preview-based, or feedforward, control system in order to improve rotor speed regulation and reduce structural loads. But the effectiveness of preview-based control depends on how accurately lidar can measure the wind that will interact with the turbine. In this thesis, lidar measurement error is determined using a statistical frequency-domain wind field model including wind evolution, or the change in turbulent wind speeds between the time they are measured and when they reach the turbine. Parameters of the National Renewable Energy Laboratory (NREL) 5-MW reference turbine model are used to determine measurement error for a hub-mounted circularly-scanning lidar scenario, based on commercially-available technology, designed to estimate rotor effective uniform and shear wind speed components. By combining the wind field model, lidar model, and turbine parameters, the optimal lidar scan radius and preview distance that yield the minimum mean square measurement error, as well as the resulting minimum achievable error, are found for a variety of wind conditions. With optimized scan scenarios, it is found that relatively low measurement error can be achieved, but the attainable measurement error largely depends on the wind conditions. In addition, the impact of the induction zone, the region upstream of the turbine where the approaching wind speeds are reduced, as well as turbine yaw error on measurement quality is analyzed. In order to minimize the mean square measurement error, an optimal measurement prefilter is employed, which depends on statistics of the correlation between the preview measurements and the wind that interacts with the turbine. However, because the wind speeds encountered by

  13. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    Science.gov (United States)

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
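    For case (ii) above, one plausible reading is that the relative deviations of the laboratory's proficiency-test results from the participant means are pooled into an RMS value and then expanded with a coverage factor. The sketch below uses that interpretation with invented numbers, so both the formula details and the data are assumptions rather than the article's worked example:

      import numpy as np

      lab_results       = np.array([0.081, 0.152, 0.249, 0.118, 0.097])   # g/dL, illustrative
      participant_means = np.array([0.080, 0.150, 0.245, 0.120, 0.095])

      rel_diff = (lab_results - participant_means) / participant_means
      u_rel = np.sqrt(np.mean(rel_diff ** 2))       # RMS relative deviation
      U_rel = 2 * u_rel                             # expanded uncertainty, k = 2 (~95 %)
      print(f"u = {100 * u_rel:.1f} %, U(k=2) = {100 * U_rel:.1f} %")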

  14. A Study on Parametric Wave Estimation Based on Measured Ship Motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Iseki, Toshio

    2011-01-01

    The paper studies parametric wave estimation based on the ‘wave buoy analogy’, and data and results obtained from the training ship Shioji-maru are compared with estimates of the sea states obtained from other measurements and observations. Furthermore, the estimating characteristics of the parametric model are discussed by considering the results of a similar estimation concept based on Bayesian modelling. The purpose of the latter comparison is not to favour the one estimation approach to the other but rather to highlight some of the advantages and disadvantages of the two approaches.

  15. Oscillation estimates relative to p-homogeneous forms and Kato measures data

    Directory of Open Access Journals (Sweden)

    Marco Biroli

    2006-11-01

    Full Text Available We state a pointwise estimate for the positive subsolutions associated with a p-homogeneous form and nonnegative Radon measures data. As a by-product, we establish an oscillation estimate for the solutions relative to Kato measures data.

  16. Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2013-01-01

    In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators

  17. Codon Deviation Coefficient: A novel measure for estimating codon usage bias and its statistical significance

    KAUST Repository

    Zhang, Zhang

    2012-03-22

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis.Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts the bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance.Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. 2012 Zhang et al; licensee BioMed Central Ltd.

  18. BIM – New rules of measurement ontology for construction cost estimation

    Directory of Open Access Journals (Sweden)

    F.H. Abanda

    2017-04-01

    Full Text Available For generations, the process of cost estimation has been manual, time-consuming and error-prone. Emerging Building Information Modelling (BIM) can exploit standard measurement methods to automate the cost estimation process and reduce inaccuracies. Structuring standard measurement methods in an ontological and machine-readable format for BIM software can greatly facilitate the process of reducing inaccuracies in cost estimation. This study explores the development of an ontology based on the New Rules of Measurement (NRM) for cost estimation during the tendering stages. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. To ensure the ontology is fit for purpose, cost estimation experts are employed to check the semantics, description logic-based reasoners are used to syntactically check the ontology, and a leading 4D BIM modelling software package is used on a case study building to test and validate the proposed ontology.

  19. The estimation of the measurement results with using statistical methods

    International Nuclear Information System (INIS)

    Velychko, O. (State Enterprise Ukrmetrteststandard, 4, Metrologichna Str., 03680, Kyiv, Ukraine); Gordiyenko, T. (State Scientific Institution UkrNDIspirtbioprod, 3, Babushkina Lane, 03190, Kyiv, Ukraine)

    2015-01-01

    A number of international standards and guides describe various statistical methods that apply to the management, control and improvement of processes, with the purpose of analysing technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is presented. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods to the estimation of measurement results are constructed.

  20. The estimation of the measurement results with using statistical methods

    Science.gov (United States)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

    A number of international standards and guides describe various statistical methods that apply to the management, control and improvement of processes, with the purpose of analysing technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is presented. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods to the estimation of measurement results are constructed.

  1. Headphone-To-Ear Transfer Function Estimation Using Measured Acoustic Parameters

    Directory of Open Access Journals (Sweden)

    Jinlin Liu

    2018-06-01

    Full Text Available This paper proposes to use an optimal five-microphone array method to measure the headphone acoustic reflectance and equivalent sound sources needed in the estimation of headphone-to-ear transfer functions (HpTFs). The performance of this method is theoretically analyzed and experimentally investigated. With the measured acoustic parameters, HpTFs for different headphones and ear canal area functions are estimated based on a computational acoustic model. The estimation results show that HpTFs vary considerably with headphones and ear canals, which suggests that individualized compensations for HpTFs are necessary for headphones to reproduce desired sounds for different listeners.

  2. Measuring physical inactivity: do current measures provide an accurate view of "sedentary" video game time?

    Science.gov (United States)

    Fullerton, Simon; Taylor, Anne W; Dal Grande, Eleonora; Berry, Narelle

    2014-01-01

    Measures of screen time are often used to assess sedentary behaviour. Participation in activity-based video games (exergames) can contribute to estimates of screen time, as current practices of measuring it do not consider the growing evidence that playing exergames can provide light to moderate levels of physical activity. This study aimed to determine what proportion of time spent playing video games was actually spent playing exergames. Data were collected via a cross-sectional telephone survey in South Australia. Participants aged 18 years and above (n = 2026) were asked about their video game habits, as well as demographic and socioeconomic factors. In cases where children were in the household, the video game habits of a randomly selected child were also questioned. Overall, 31.3% of adults and 79.9% of children spend at least some time playing video games. Of these, 24.1% of adults and 42.1% of children play exergames, with these types of games accounting for a third of all time that adults spend playing video games and nearly 20% of children's video game time. A substantial proportion of time that would usually be classified as "sedentary" may actually be spent participating in light to moderate physical activity.

  3. Psychological impact of providing women with personalised 10-year breast cancer risk estimates.

    Science.gov (United States)

    French, David P; Southworth, Jake; Howell, Anthony; Harvie, Michelle; Stavrinos, Paula; Watterson, Donna; Sampson, Sarah; Evans, D Gareth; Donnelly, Louise S

    2018-05-08

    The Predicting Risk of Cancer at Screening (PROCAS) study estimated 10-year breast cancer risk for 53,596 women attending NHS Breast Screening Programme. The present study, nested within the PROCAS study, aimed to assess the psychological impact of receiving breast cancer risk estimates, based on: (a) the Tyrer-Cuzick (T-C) algorithm including breast density or (b) T-C including breast density plus single-nucleotide polymorphisms (SNPs), versus (c) comparison women awaiting results. A sample of 2138 women from the PROCAS study was stratified by testing groups: T-C only, T-C(+SNPs) and comparison women; and by 10-year risk estimates received: 'moderate' (5-7.99%), 'average' (2-4.99%) or 'below average' (<1.99%) risk. Postal questionnaires were returned by 765 (36%) women. Overall state anxiety and cancer worry were low, and similar for women in T-C only and T-C(+SNPs) groups. Women in both T-C only and T-C(+SNPs) groups showed lower-state anxiety but slightly higher cancer worry than comparison women awaiting results. Risk information had no consistent effects on intentions to change behaviour. Most women were satisfied with information provided. There was considerable variation in understanding. No major harms of providing women with 10-year breast cancer risk estimates were detected. Research to establish the feasibility of risk-stratified breast screening is warranted.

  4. Estimate of rain evaporation rates from dual-wavelength lidar measurements: comparison against a model analytical solution

    Science.gov (United States)

    Lolli, Simone; Di Girolamo, Paolo; Demoz, Belay; Li, Xiaowen; Welton, Ellsworth J.

    2018-04-01

    Rain evaporation significantly contributes to moisture and heat cloud budgets. In this paper, we illustrate an approach to estimate the median volume raindrop diameter and the rain evaporation rate profiles from dual-wavelength lidar measurements. These observational results are compared with those provided by a model analytical solution. We made use of measurements from the multi-wavelength Raman lidar BASIL.

  5. The estimation of uncertainty of radioactivity measurement on gamma counters in radiopharmacy

    International Nuclear Information System (INIS)

    Jovanovic, M.S.; Orlic, M.; Vranjes, S.; Stamenkovic, Lj. (E-mail address of corresponding author: nikijov@vin.bg.ac.yu)

    2005-01-01

    In this paper the estimation of the uncertainty of radioactivity measurements on a gamma counter in the Laboratory for Radioisotopes is presented. The uncertainty components that are important for these measurements are identified and taken into account while estimating the uncertainty of measurement. (author)

  6. Smile line assessment comparing quantitative measurement and visual estimation

    NARCIS (Netherlands)

    Geld, P. Van der; Oosterveld, P.; Schols, J.; Kuijpers-Jagtman, A.M.

    2011-01-01

    INTRODUCTION: Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation

  7. Estimation of waves and ship responses using onboard measurements

    DEFF Research Database (Denmark)

    Montazeri, Najmeh

    This thesis focuses on estimation of waves and ship responses using ship-board measurements. This is useful for development of operational safety and performance efficiency in connection with the broader concept of onboard decision support systems. Estimation of sea state is studied using a set of measured ship responses, a parametric description of directional wave spectra (a generalised JONSWAP model) and the transfer functions of the ship responses. The difference between the spectral moments of the measured ship responses and the corresponding theoretically calculated moments formulates a cost function that is minimised to estimate the wave information. The model is tested on simulated data based on known unimodal and bimodal wave scenarios. The wave parameters in the output are then compared with the true wave parameters. In addition to the numerical experiments, two sets of full-scale measurements from container ships are analysed. Herein...

  8. Validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land in southern Italy.

    Science.gov (United States)

    Porto, Paolo; Walling, Des E

    2012-04-01

    Soil erosion represents an important threat to the long-term sustainability of agriculture and forestry in many areas of the world, including southern Italy. Numerous models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution, based on the local topography, hydrometeorology, soil type and land management. However, there remains an important need for empirical measurements to provide a basis for validating and calibrating such models and prediction procedures as well as to support specific investigations and experiments. In this context, erosion plots provide useful information on gross rates of soil loss, but are unable to document the efficiency of the onward transfer of the eroded sediment within a field and towards the stream system, and thus net rates of soil loss from larger areas. The use of environmental radionuclides, particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)), as a means of estimating rates of soil erosion and deposition has attracted increasing attention in recent years and the approach has now been recognised as possessing several important advantages. In order to provide further confirmation of the validity of the estimates of longer-term erosion and soil redistribution rates provided by (137)Cs and (210)Pb(ex) measurements, there is a need for studies aimed explicitly at validating the results obtained. In this context, the authors directed attention to the potential offered by a set of small erosion plots located near Reggio Calabria in southern Italy, for validating estimates of soil loss provided by (137)Cs and (210)Pb(ex) measurements. A preliminary assessment suggested that, notwithstanding the limitations and constraints involved, a worthwhile investigation aimed at validating the use of (137)Cs and (210)Pb(ex) measurements to estimate rates of soil loss from cultivated land could be undertaken. The results demonstrate a close consistency between the measured rates of soil

  9. Estimation of uncertainty in tracer gas measurement of air change rates.

    Science.gov (United States)

    Iizuka, Atsushi; Okuizumi, Yumiko; Yanagisawa, Yukio

    2010-12-01

    Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, this is made more complex by the fact that many buildings are not a single fully mixed zone. This means many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of the air change rate in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of air change rate can be avoided. The proposed estimation method will be useful in practical ventilation measurements.

  10. Uncertainty estimation of shape and roughness measurement

    NARCIS (Netherlands)

    Morel, M.A.A.

    2006-01-01

    One of the most common techniques to measure a surface or form is mechanical probing. Although used since the early 30s of the 20th century, a method to calculate a task specific uncertainty budget was not yet devised. Guidelines and statistical estimates are common in certain cases but an

  11. Impacts of Different Types of Measurements on Estimating Unsaturatedflow Parameters

    Science.gov (United States)

    Shi, L.

    2015-12-01

    This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of the EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by an inaccurate initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information for inferring the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
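    A stripped-down illustration of the EnKF parameter update used in such joint assimilation schemes is given below: an ensemble of parameter guesses is corrected with one noisy observation through an ensemble-estimated Kalman gain. The linear observation operator h() stands in for a real unsaturated-flow (e.g. Richards-equation) simulator, and all numbers are invented for the example:

      import numpy as np

      rng = np.random.default_rng(3)

      def h(theta_s):
          return 0.8 * theta_s          # placeholder observation operator

      true_param = 0.40                 # e.g. saturated water content
      obs_error = 0.01
      y_obs = h(true_param) + rng.normal(0.0, obs_error)

      ens = rng.normal(0.30, 0.05, 200)                          # prior parameter ensemble
      y_ens = h(ens) + rng.normal(0.0, obs_error, ens.size)      # perturbed predicted observations

      # Kalman gain from ensemble (cross-)covariances, then the standard EnKF update
      k_gain = np.cov(ens, y_ens)[0, 1] / np.var(y_ens, ddof=1)
      ens_post = ens + k_gain * (y_obs - y_ens)
      print(f"prior mean {ens.mean():.3f} -> posterior mean {ens_post.mean():.3f}")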

  12. Estimates of Leaf Relative Water Content from Optical Polarization Measurements

    Science.gov (United States)

    Dahlgren, R. P.; Vanderbilt, V. C.; Daughtry, C. S. T.

    2017-12-01

    Remotely sensing the water status of plant canopies remains a long term goal of remote sensing research. Existing approaches to remotely sensing canopy water status, such as the Crop Water Stress Index (CWSI) and the Equivalent Water Thickness (EWT), have limitations. The CWSI, based upon remotely sensing canopy radiant temperature in the thermal infrared spectral region, does not work well in humid regions, requires estimates of the vapor pressure deficit near the canopy during the remote sensing over-flight and, once stomata close, provides little information regarding the canopy water status. The EWT is based upon the physics of water-light interaction in the 900-2000nm spectral region, not plant physiology. Our goal, development of a remote sensing technique for estimating plant water status based upon measurements in the VIS/NIR spectral region, would potentially provide remote sensing access to plant dehydration physiology - to the cellular photochemistry and structural changes associated with water deficits in leaves. In this research, we used optical, crossed polarization filters to measure the VIS/NIR light reflected from the leaf interior, R, as well as the leaf transmittance, T, for 78 corn (Zea mays) and soybean (Glycine max) leaves having relative water contents (RWC) between 0.60 and 0.98. Our results show that as RWC decreases R increases while T decreases. Our results tie R and T changes in the VIS/NIR to leaf physiological changes - linking the light scattered out of the drying leaf interior to its relative water content and to changes in leaf cellular structure and pigments. Our results suggest remotely sensing the physiological water status of a single leaf - and perhaps of a plant canopy - might be possible in the future.

  13. Estimation of incidences of infectious diseases based on antibody measurements

    DEFF Research Database (Denmark)

    Simonsen, J; Mølbak, K; Falkenhorst, G

    2009-01-01

    bacterial infections. This study presents a Bayesian approach for obtaining incidence estimates by use of measurements of serum antibodies against Salmonella from a cross-sectional study. By comparing these measurements with antibody measurements from a follow-up study of infected individuals...

  14. Simultaneous spacecraft orbit estimation and control based on GPS measurements via extended Kalman filter

    Directory of Open Access Journals (Sweden)

    Tamer Mekky Ahmed Habib

    2013-06-01

    Full Text Available The primary aim of this work is to provide simultaneous spacecraft orbit estimation and control based on global positioning system (GPS) measurements, suitable for application to the upcoming Egyptian remote sensing satellites. Disturbances resulting from the Earth's oblateness up to the fourth order (i.e., J4) are considered. In addition, aerodynamic drag and random disturbance effects are taken into consideration.

  15. Interlaboratory analytical performance studies; a way to estimate measurement uncertainty

    Directory of Open Access Journals (Sweden)

    Elżbieta Łysiak-Pastuszak

    2004-09-01

    Full Text Available Comparability of data collected within collaborative programmes became the key challenge of analytical chemistry in the 1990s, including monitoring of the marine environment. To obtain relevant and reliable data, the analytical process has to proceed under a well-established Quality Assurance (QA) system with external analytical proficiency tests as an inherent component. A programme called Quality Assurance in Marine Monitoring in Europe (QUASIMEME) was established in 1993 and evolved over the years as the major provider of QA proficiency tests for nutrients, trace metals and chlorinated organic compounds in marine environment studies. The article presents an evaluation of results obtained in QUASIMEME Laboratory Performance Studies by the monitoring laboratory of the Institute of Meteorology and Water Management (Gdynia, Poland) in exercises on nutrient determination in seawater. The measurement uncertainty estimated from routine internal quality control measurements and from results of analytical performance exercises is also presented in the paper.

  16. Principal Curvature Measures Estimation and Application to 3D Face Recognition

    KAUST Repository

    Tang, Yinhang

    2017-04-06

    This paper presents an effective 3D face keypoint detection, description and matching framework based on three principal curvature measures. These measures give a unified definition of principal curvatures for both smooth and discrete surfaces. They can be reasonably computed based on the normal cycle theory and the geometric measure theory. The strong theoretical basis of these measures provides us a solid discrete estimation method on real 3D face scans represented as triangle meshes. Based on these estimated measures, the proposed method can automatically detect a set of sparse and discriminating 3D facial feature points. The local facial shape around each 3D feature point is comprehensively described by histograms of these principal curvature measures. To guarantee the pose invariance of these descriptors, three principal curvature vectors of these principal curvature measures are employed to assign the canonical directions. Similarity comparison between faces is accomplished by matching all these curvature-based local shape descriptors using the sparse representation-based reconstruction method. The proposed method was evaluated on three public databases, i.e. FRGC v2.0, Bosphorus, and Gavab. Experimental results demonstrated that the three principal curvature measures contain strong complementarity for 3D facial shape description, and their fusion can largely improve the recognition performance. Our approach achieves rank-one recognition rates of 99.6, 95.7, and 97.9% on the neutral subset, expression subset, and the whole FRGC v2.0 databases, respectively. This indicates that our method is robust to moderate facial expression variations. Moreover, it also achieves very competitive performance on the pose subset (over 98.6% except Yaw 90°) and the occlusion subset (98.4%) of the Bosphorus database. Even in the case of extreme pose variations like profiles, it also significantly outperforms the state-of-the-art approaches with a recognition rate of 57.1%. The

  17. Different top-down approaches to estimate measurement uncertainty of whole blood tacrolimus mass concentration values.

    Science.gov (United States)

    Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca

    2018-05-08

    Values of mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective. So, clinical laboratories must provide results as accurately as possible. Measurement uncertainty can allow ensuring reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with the intermediate imprecision (using long-term internal quality control data) and the bias (utilizing a certified reference material). Next, we combined them with the uncertainties related to the calibrator-assigned values to obtain a combined uncertainty and, finally, calculated the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way to the single laboratory validation approach, but considering data from internal and external quality control schemes to estimate the uncertainty related to the bias. The estimated expanded uncertainties for the single laboratory validation approach and for proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar. This fact would confirm that either of the two approaches could be used to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
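    The single-laboratory-validation combination described above follows the usual top-down pattern: combine the relative uncertainty contributions in quadrature and expand with a coverage factor (k = 2 for roughly 95 % coverage). The component values below are illustrative, not the study's figures:

      import numpy as np

      u_imprecision = 0.048   # relative SD from long-term internal QC (assumed)
      u_bias        = 0.025   # from certified reference material comparison (assumed)
      u_calibrator  = 0.020   # stated for the calibrator-assigned values (assumed)

      u_combined = np.sqrt(u_imprecision**2 + u_bias**2 + u_calibrator**2)
      U_expanded = 2 * u_combined
      print(f"combined u = {100*u_combined:.1f} %, expanded U (k=2) = {100*U_expanded:.1f} %")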

  18. Measuring Physical Inactivity: Do Current Measures Provide an Accurate View of “Sedentary” Video Game Time?

    Directory of Open Access Journals (Sweden)

    Simon Fullerton

    2014-01-01

    Full Text Available Background. Measures of screen time are often used to assess sedentary behaviour. Participation in activity-based video games (exergames) can contribute to estimates of screen time, as current practices of measuring it do not consider the growing evidence that playing exergames can provide light to moderate levels of physical activity. This study aimed to determine what proportion of time spent playing video games was actually spent playing exergames. Methods. Data were collected via a cross-sectional telephone survey in South Australia. Participants aged 18 years and above (n=2026) were asked about their video game habits, as well as demographic and socioeconomic factors. In cases where children were in the household, the video game habits of a randomly selected child were also questioned. Results. Overall, 31.3% of adults and 79.9% of children spend at least some time playing video games. Of these, 24.1% of adults and 42.1% of children play exergames, with these types of games accounting for a third of all time that adults spend playing video games and nearly 20% of children’s video game time. Conclusions. A substantial proportion of time that would usually be classified as “sedentary” may actually be spent participating in light to moderate physical activity.

  19. Resting Energy Expenditure in Anorexia Nervosa: Measured versus Estimated

    Directory of Open Access Journals (Sweden)

    Marwan El Ghoch

    2012-01-01

    Full Text Available Introduction. The aim of this study was to compare the resting energy expenditure (REE) measured by the Douglas bag method with the REE estimated with the FitMate method, the Harris-Benedict equation, and the Müller et al. equation for individuals with BMI < 18.5 kg/m2, in a group of severely underweight patients with anorexia nervosa (AN). Methods. 15 subjects with AN participated in the study. The Douglas bag method and the FitMate method were used to measure REE, and dual energy X-ray absorptiometry was used to assess body composition after one day of refeeding. Results. The FitMate method and the Müller et al. equation gave an accurate REE estimation, while the Harris-Benedict equation overestimated the REE when compared with the Douglas bag method. Conclusion. The data support the use of the FitMate method and the Müller et al. equation, but not the Harris-Benedict equation, to estimate REE in AN patients after short-term refeeding.
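    For reference, the commonly cited (original) Harris-Benedict equations referred to above can be written as a short function; weight is in kg, height in cm, age in years, and the output in kcal/day. The example values are illustrative, not patient data from the study:

      def harris_benedict(weight_kg, height_cm, age_y, female=True):
          # original Harris-Benedict coefficients; as the record notes, this predictive
          # equation tends to overestimate REE in severely underweight patients
          if female:
              return 655.0955 + 9.5634 * weight_kg + 1.8496 * height_cm - 4.6756 * age_y
          return 66.4730 + 13.7516 * weight_kg + 5.0033 * height_cm - 6.7550 * age_y

      # e.g. a 40 kg, 165 cm, 25-year-old woman (illustrative values)
      print(f"{harris_benedict(40, 165, 25):.0f} kcal/day")   # ≈ 1226 kcal/day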

  20. Estimating Turbulence Statistics and Parameters from Lidar Measurements. Remote Sensing Summer School

    DEFF Research Database (Denmark)

    Sathe, Ameya

    This report is prepared as a written contribution to the Remote Sensing Summer School organized by the Department of Wind Energy, Technical University of Denmark. It provides an overview of the state of the art with regard to estimating turbulence statistics from lidar measurements...... configuration. The so-called Velocity Azimuth Display (VAD) and Doppler Beam Swinging (DBS) methods of post-processing the lidar data are investigated in greater detail, partly due to their wide use in commercial lidars. It is demonstrated that the VAD or DBS techniques result in introducing significant...

  1. Can genetic estimators provide robust estimates of the effective number of breeders in small populations?

    Directory of Open Access Journals (Sweden)

    Marion Hoehn

    Full Text Available The effective population size (Ne) is proportional to the loss of genetic diversity and the rate of inbreeding, and its accurate estimation is crucial for the monitoring of small populations. Here, we integrate temporal studies of the gecko Oedura reticulata to compare genetic and demographic estimators of Ne. Because geckos have overlapping generations, our goal was to demographically estimate NbI, the inbreeding effective number of breeders, and to calculate the NbI/Na ratio (Na = number of adults) for four populations. Demographically estimated NbI ranged from 1 to 65 individuals. The mean reduction in the effective number of breeders relative to census size (NbI/Na) was 0.1 to 1.1. We identified the variance in reproductive success as the most important variable contributing to reduction of this ratio. We used four methods to estimate the genetic-based inbreeding effective number of breeders, NbI(gen), and the variance effective population size, NeV(gen), from the genotype data. Two of these methods, a temporal moment-based approach (MBT) and a likelihood-based approach (TM3), require at least two samples in time, while the other two are single-sample estimators: the linkage disequilibrium method with bias correction (LDNe) and the program ONeSAMP. The genetic-based estimates were fairly similar across methods and also similar to the demographic estimates, excluding those estimates in which the upper confidence interval boundaries were uninformative. For example, LDNe and ONeSAMP estimates ranged from 14-55 and 24-48 individuals, respectively. However, temporal methods suffered from a large variation in confidence intervals and concerns about the prior information. We conclude that the single-sample estimators are an acceptable short-cut to estimate NbI for species such as geckos and will be of great importance for the monitoring of species in fragmented landscapes.

  2. Measurement and estimation of dew point for SNG. [Comparison of calculated and measured values

    Energy Technology Data Exchange (ETDEWEB)

    Furuyama, Y.

    1974-08-01

    Toho Gas measured and estimated SNG dew points in high-pressure deliveries by calculating theoretical values from high-pressure gas-liquid equilibrium theory, using the pressure-extrapolation method to reach K = 1 and the BWR method to estimate fugacity, and then verifying these values experimentally. The experimental values were measured at 161.7 to 367.5 psi using the conventional static and circulation methods, in addition to a newly developed method consisting of circulating a gas mixture of known composition, partially freezing it, and monitoring the dew point by observing the droplets on a mirror cooled by blowing liquid nitrogen. Good agreement was found between the calculated and the experimental values.

  3. Demonstrating Heisenberg-limited unambiguous phase estimation without adaptive measurements

    International Nuclear Information System (INIS)

    Higgins, B L; Wiseman, H M; Pryde, G J; Berry, D W; Bartlett, S D; Mitchell, M W

    2009-01-01

    We derive, and experimentally demonstrate, an interferometric scheme for unambiguous phase estimation with precision scaling at the Heisenberg limit that does not require adaptive measurements. That is, with no prior knowledge of the phase, we can obtain an estimate of the phase with a standard deviation that is only a small constant factor larger than the minimum physically allowed value. Our scheme resolves the phase ambiguity that exists when multiple passes through a phase shift, or NOON states, are used to obtain improved phase resolution. Like a recently introduced adaptive technique (Higgins et al 2007 Nature 450 393), our experiment uses multiple applications of the phase shift on single photons. By not requiring adaptive measurements, but rather using a predetermined measurement sequence, the present scheme is both conceptually simpler and significantly easier to implement. Additionally, we demonstrate a simplified adaptive scheme that also surpasses the standard quantum limit for single passes.
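
    For context, the two scaling regimes contrasted above can be written schematically: with N total applications of the phase shift (or N photon passes), the phase uncertainty obeys

        \Delta\phi_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}, \qquad \Delta\phi_{\mathrm{HL}} \sim \frac{1}{N},

    so a Heisenberg-limited scheme achieves a standard deviation only a constant factor above the 1/N bound, as reported above.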

  4. Beak measurements of octopus ( Octopus variabilis) in Jiaozhou Bay and their use in size and biomass estimation

    Science.gov (United States)

    Xue, Ying; Ren, Yiping; Meng, Wenrong; Li, Long; Mao, Xia; Han, Dongyan; Ma, Qiuyun

    2013-09-01

    Cephalopods play key roles in global marine ecosystems as both predators and prey. Regression-based estimation of the original size and weight of cephalopods from beak measurements is a powerful tool for investigating the feeding ecology of predators at higher trophic levels. In this study, regression relationships between beak measurements and body length and weight were determined for an octopus species (Octopus variabilis), an important endemic cephalopod species in the northwest Pacific Ocean. A total of 193 individuals (63 males and 130 females) were collected at a monthly interval from Jiaozhou Bay, China. Regression relationships between six beak measurements (upper hood length, UHL; upper crest length, UCL; lower hood length, LHL; lower crest length, LCL; and upper and lower beak weights) and mantle length (ML), total length (TL) and body weight (W) were determined. Results showed that the relationships between beak size and TL or ML were linear, while those between beak size and W fitted a power function model. LHL and UCL were the most useful measurements for estimating the size and biomass of O. variabilis. The relationships between beak measurements and body length (either ML or TL) were not significantly different between the two sexes, while those between several beak measurements (UHL, LHL and LBW) and body weight (W) differed between sexes. Since male individuals of this species have a slightly greater body weight distribution than female individuals, body weight was not an appropriate measurement for estimating size and biomass, especially when the sex of individuals in the stomachs of predators was unknown. These relationships provide essential information for future use in size and biomass estimation of O. variabilis, as well as the estimation of predator/prey size ratios in the diets of top predators.
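
    A minimal sketch of the two regression forms described above, fitted to synthetic data rather than the Jiaozhou Bay measurements (variable names follow the abbreviations in the abstract):

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic example data: lower hood length LHL (mm), mantle length ML (mm), body weight W (g)
        lhl = rng.uniform(3.0, 9.0, 50)
        ml = 12.0 * lhl + 5.0 + rng.normal(0.0, 3.0, 50)
        w = 2.5 * lhl**2.8 * np.exp(rng.normal(0.0, 0.1, 50))

        # Linear model for length: ML = a*LHL + b
        a, b = np.polyfit(lhl, ml, 1)

        # Power model for weight: W = c*LHL**d, fitted as a straight line in log-log space
        d, log_c = np.polyfit(np.log(lhl), np.log(w), 1)
        c = np.exp(log_c)

        print(f"ML = {a:.2f}*LHL + {b:.2f}")
        print(f"W  = {c:.2f}*LHL**{d:.2f}")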

  5. Methane Emission Estimates from Landfills Obtained with Dynamic Plume Measurements

    International Nuclear Information System (INIS)

    Hensen, A.; Scharff, H.

    2001-01-01

    Methane emissions from 3 different landfills in the Netherlands were estimated using a mobile Tuneable Diode Laser system (TDL). The methane concentration in the cross section of the plume is measured downwind of the source on a transect perpendicular to the wind direction. A Gaussian plume model was used to simulate the concentration levels at the transect, and the emission from the source is calculated from the measured and modelled concentration levels. Calibration of the plume dispersion model is done using a tracer (N2O) that is released from the landfill and measured simultaneously with the TDL system. The emission estimates ranged from 3.6 to 16 m3 ha-1 hr-1 for the different sites. The emission levels were compared to emission estimates based on landfill gas production models. This comparison suggests oxidation rates that are up to 50% in spring and negligible in November. At one of the three sites, measurements were performed in campaigns in 3 consecutive years. Comparison of the emission levels in the first and second year showed a reduction of the methane emission of about 50% due to the implementation of a gas extraction system. From the second to the third year, emissions increased by a factor of 4 due to new landfilling. Furthermore, measurements were performed in winter when oxidation efficiency was reduced. This paper describes the measurement technique used and discusses the results of the experimental sessions that were performed.
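
    The calibration idea can be illustrated with the closely related tracer ratio method (a simplified sketch with hypothetical numbers, not the exact plume-model calibration used in these campaigns): if the tracer is released at a known rate and both plumes are sampled along the same transect, the source strength follows from the ratio of plume-integrated excess mixing ratios.

        import numpy as np

        # Hypothetical transect of excess mixing ratios (ppb above background),
        # sampled at equal spacing perpendicular to the wind direction
        x = np.linspace(0.0, 400.0, 81)
        ch4_excess = 120.0 * np.exp(-((x - 200.0) / 60.0) ** 2)
        n2o_excess = 15.0 * np.exp(-((x - 200.0) / 60.0) ** 2)

        q_n2o = 0.5                     # known tracer release rate (kg/hr)
        M_CH4, M_N2O = 16.04, 44.01     # molar masses (g/mol)

        # On a common, equally spaced transect the spacing cancels in the ratio
        # of plume-integrated excess mixing ratios
        ratio = ch4_excess.sum() / n2o_excess.sum()
        q_ch4 = q_n2o * ratio * (M_CH4 / M_N2O)
        print(f"estimated CH4 emission = {q_ch4:.2f} kg/hr")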

  6. Accuracy of Standing-Tree Volume Estimates Based on McClure Mirror Caliper Measurements

    Science.gov (United States)

    Noel D. Cost

    1971-01-01

    The accuracy of standing-tree volume estimates, calculated from diameter measurements taken by a mirror caliper and with sectional aluminum poles for height control, was compared with volume estimates calculated from felled-tree measurements. Twenty-five trees which varied in species, size, and form were used in the test. The results showed that two estimates of total...

  7. Measurement and Estimation of Riverbed Scour in a Mountain River

    Science.gov (United States)

    Song, L. A.; Chan, H. C.; Chen, B. A.

    2016-12-01

    Mountain rivers in Taiwan are steep, with rapid flows. After a structure is installed in a mountain river, scour usually occurs around the structure because of the high energy gradient. Excessive scouring has been reported as one of the main causes of failure of river structures. Flood-related scouring damage can be reduced if the riverbed variation can be properly evaluated from the flow conditions. This study measured riverbed scour by using an improved "float-out device". Scouring and hydrodynamic data were simultaneously collected in the Mei River, Nantou County, located in central Taiwan. Semi-empirical models proposed by previous researchers were used to estimate the scour depths from the measured flow characteristics, and the differences between the measured and estimated scour depths were discussed. Attempts were then made to improve the estimates by developing a semi-empirical model that predicts riverbed scour from the local field data. The aim is to set up a warning system for river structure safety based on the flow conditions. Keywords: scour, model, float-out device

  8. Measuring Ucrit and endurance: equipment choice influences estimates of fish swimming performance.

    Science.gov (United States)

    Kern, P; Cramp, R L; Gordos, M A; Watson, J R; Franklin, C E

    2018-01-01

    This study compared the critical swimming speed (Ucrit) and endurance performance of three Australian freshwater fish species in different swim-test apparatus. Estimates of Ucrit measured in a large recirculating flume were greater for all species compared with estimates from a smaller model of the same recirculating flume. Large differences were also observed for estimates of endurance swimming performance between these recirculating flumes and a free-surface swim tunnel. Differences in estimates of performance may be attributable to variation in flow conditions within different types of swim chambers. Variation in estimates of swimming performance between different types of flumes complicates the application of laboratory-based measures to the design of fish passage infrastructure. © 2017 The Fisheries Society of the British Isles.

  9. Estimation of stature using lower limb measurements in Sudanese Arabs.

    Science.gov (United States)

    Ahmed, Altayeb Abdalla

    2013-07-01

    The estimation of stature from body parts is one of the most vital parts of personal identification in medico-legal autopsies, especially when mutilated and amputated limbs or body parts are found. The aim of this study was to assess the reliability and accuracy of using lower limb measurements for stature estimations. The stature, tibial length, bimalleolar breadth, foot length and foot breadth of 160 right-handed Sudanese Arab subjects, 80 men and 80 women (25-30 years old), were measured. The reliability of measurement acquisition was tested prior to the primary data collection. The data were analysed using basic univariate analysis and linear and multiple regression analyses. The results showed acceptable standards of measurement errors and reliability. Sex differences were significant for all of the measurements. There was a positive correlation coefficient between lower-limb dimensions and stature (P-value Arabs. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  10. GUM approach to uncertainty estimations for online 220Rn concentration measurements using Lucas scintillation cell

    International Nuclear Information System (INIS)

    Sathyabama, N.

    2014-01-01

    It is now widely recognized that, when all of the known or suspected components of error have been evaluated and corrected, there still remains an uncertainty, that is, a doubt about how well the result of the measurement represents the value of the quantity being measured. Evaluation of measurement data - Guide to the expression of Uncertainty in Measurement (GUM) is a guidance document whose purpose is to promote full information on how uncertainty statements are arrived at and to provide a basis for the international comparison of measurement results. In this paper, uncertainty estimations following GUM guidelines have been made for the measured values of online thoron concentrations using a Lucas scintillation cell, to show that the correction for disequilibrium between 220 Rn and 216 Po is significant in online 220 Rn measurements.
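
    The combination rule at the heart of the GUM is the law of propagation of uncertainty: for a measurand y = f(x_1, ..., x_N) with uncorrelated input quantities,

        u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^2 u^2(x_i), \qquad U = k\,u_c(y),

    where u(x_i) are the standard uncertainties of the inputs and the expanded uncertainty U is usually quoted with a coverage factor k = 2.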

  11. Stability Analysis for Li-Ion Battery Model Parameters and State of Charge Estimation by Measurement Uncertainty Consideration

    Directory of Open Access Journals (Sweden)

    Shifei Yuan

    2015-07-01

    Full Text Available Accurate estimation of model parameters and state of charge (SoC) is crucial for the lithium-ion battery management system (BMS). In this paper, the stability of the model parameters and SoC estimation under measurement uncertainty is evaluated for three different factors: (i) sampling periods of 1/0.5/0.1 s; (ii) current sensor precisions of ±5/±50/±500 mA; and (iii) voltage sensor precisions of ±1/±2.5/±5 mV. Firstly, a numerical model stability analysis and a parametric sensitivity analysis for the battery model parameters are conducted under sampling frequencies of 1-50 Hz, and a perturbation analysis of the effect of current/voltage measurement uncertainty on model parameter variation is performed theoretically. Secondly, the impact of the three factors on the model parameters and SoC estimation is evaluated with the federal urban driving sequence (FUDS) profile. The bias correction recursive least squares (CRLS) and adaptive extended Kalman filter (AEKF) algorithms were adopted to estimate the model parameters and SoC jointly. Finally, the simulation results were compared and some insightful findings were concluded. For the given battery model and parameter estimation algorithm, the sampling period and the current/voltage sampling accuracy had a non-negligible effect on the estimation results of the model parameters. This research reveals the influence of measurement uncertainty on model parameter estimation and provides guidelines for selecting a reasonable sampling period and current/voltage sensor precisions in engineering applications.
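
    To make the estimation loop concrete, here is a generic recursive least squares sketch (not the bias-correction CRLS or AEKF of the paper): a stand-in first-order model y[k] = a*y[k-1] + b*u[k] takes the place of the battery's equivalent-circuit dynamics, and the parameters are refreshed at every sampling instant.

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.999):
            """One recursive least squares step with forgetting factor lam."""
            phi = phi.reshape(-1, 1)
            K = P @ phi / (lam + phi.T @ P @ phi)              # gain vector
            theta = theta + (K * (y - (phi.T @ theta).item())).ravel()
            P = (P - K @ phi.T @ P) / lam                      # covariance update
            return theta, P

        # Simulated data from the stand-in model (hypothetical values)
        rng = np.random.default_rng(1)
        a_true, b_true = 0.95, 0.4
        u = rng.normal(size=500)
        y = np.zeros(500)
        for k in range(1, 500):
            y[k] = a_true * y[k - 1] + b_true * u[k] + rng.normal(scale=0.01)

        theta, P = np.zeros(2), np.eye(2) * 1000.0
        for k in range(1, 500):
            theta, P = rls_update(theta, P, np.array([y[k - 1], u[k]]), y[k])
        print("estimated [a, b]:", np.round(theta, 3))

    Increasing the simulated measurement noise or lengthening the sampling period in such a sketch reproduces, qualitatively, the sensitivity of the parameter estimates discussed above.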

  12. Using ‘snapshot’ measurements of CH4 fluxes from an ombrotrophic peatland to estimate annual budgets: interpolation versus modelling

    Directory of Open Access Journals (Sweden)

    S.M. Green

    2017-03-01

    Full Text Available Flux-chamber measurements of greenhouse gas exchanges between the soil and the atmosphere represent a snapshot of the conditions on a particular site and need to be combined or used in some way to provide integrated fluxes for the longer time periods that are often of interest. In contrast to carbon dioxide (CO2), most studies that have estimated the time-integrated flux of CH4 on ombrotrophic peatlands have not used models. Typically, linear interpolation is used to estimate CH4 fluxes during the time periods between flux-chamber measurements. CH4 fluxes generally show a rise followed by a fall through the growing season that may be captured reasonably well by interpolation, provided there are sufficiently frequent measurements. However, day-to-day and week-to-week variability is also often evident in CH4 flux data, and will not necessarily be properly represented by interpolation. Using flux chamber data from a UK blanket peatland, we compared annualised CH4 fluxes estimated by interpolation with those estimated using linear models and found that the former tended to be higher than the latter. We consider the implications of these results for the calculation of the radiative forcing effect of ombrotrophic peatlands.
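
    A sketch of the interpolation step described above, using hypothetical chamber data: the snapshot fluxes are linearly interpolated to daily values and summed to an annual budget.

        import numpy as np

        # Hypothetical campaign: day of year and measured CH4 flux (mg CH4 m-2 d-1)
        days = np.array([15, 60, 105, 150, 195, 240, 285, 330])
        flux = np.array([2.0, 3.5, 8.0, 15.0, 22.0, 14.0, 6.0, 2.5])

        # Linear interpolation to every day of the year; np.interp holds the
        # first and last measured values constant outside the sampled period
        all_days = np.arange(1, 366)
        daily = np.interp(all_days, days, flux)

        annual = daily.sum() / 1000.0   # g CH4 m-2 yr-1
        print(f"annual flux = {annual:.1f} g CH4 m-2 yr-1")

    Replacing the interpolation with a fitted model, as the study does, will generally give a different annual estimate; in the study's data the model-based estimates tended to be lower.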

  13. Estimation of uncertainty bounds for individual particle image velocimetry measurements from cross-correlation peak ratio

    International Nuclear Information System (INIS)

    Charonko, John J; Vlachos, Pavlos P

    2013-01-01

    Numerous studies have established firmly that particle image velocimetry (PIV) is a robust method for non-invasive, quantitative measurements of fluid velocity, and that when carefully conducted, typical measurements can accurately detect displacements in digital images with a resolution well below a single pixel (in some cases well below a hundredth of a pixel). However, to date, these estimates have only been able to provide guidance on the expected error for an average measurement under specific image quality and flow conditions. This paper demonstrates a new method for estimating the uncertainty bounds to within a given confidence interval for a specific, individual measurement. Here, cross-correlation peak ratio, the ratio of primary to secondary peak height, is shown to correlate strongly with the range of observed error values for a given measurement, regardless of flow condition or image quality. This relationship is significantly stronger for phase-only generalized cross-correlation PIV processing, while the standard correlation approach showed weaker performance. Using an analytical model of the relationship derived from synthetic data sets, the uncertainty bounds at a 95% confidence interval are then computed for several artificial and experimental flow fields, and the resulting errors are shown to match closely to the predicted uncertainties. While this method stops short of being able to predict the true error for a given measurement, knowledge of the uncertainty level for a PIV experiment should provide great benefits when applying the results of PIV analysis to engineering design studies and computational fluid dynamics validation efforts. Moreover, this approach is exceptionally simple to implement and requires negligible additional computational cost. (paper)

  14. Triangular and Trapezoidal Fuzzy State Estimation with Uncertainty on Measurements

    Directory of Open Access Journals (Sweden)

    Mohammad Sadeghi Sarcheshmah

    2012-01-01

    Full Text Available In this paper, a new method for uncertainty analysis in fuzzy state estimation is proposed. The uncertainty is expressed in the measurements. Uncertainties in measurements are modelled with different fuzzy membership functions (triangular and trapezoidal). To find the fuzzy distribution of any state variable, the problem is formulated as a constrained linear programming (LP) optimization. The viability of the proposed method is verified by comparing its results with those obtained from the weighted least squares (WLS) and fuzzy state estimation (FSE) approaches in a 6-bus system and in the IEEE 14-bus and 30-bus systems.

  15. Do centimetres matter? Self-reported versus estimated height measurements in parents.

    Science.gov (United States)

    Gozzi, T; Flück, Ce; L'allemand, D; Dattani, M T; Hindmarsh, P C; Mullis, P E

    2010-04-01

    A striking discrepancy between reported and measured parental height is often observed. The aims of this study were: (a) to assess whether there is a significant difference between the reported and measured parental height; (b) to focus on the reported and, thereafter, measured height of the partner; (c) to analyse its impact on the calculated target height range. A total of 1542 individual parents were enrolled. The parents were subdivided into three groups: normal (3rd-97th centile), short (below the 3rd centile) and tall (above the 97th centile) stature. Overall, compared with men, women were far better at estimating their own height. Women of normal stature underestimated the short partner and overestimated the tall partner, whereas male partners of normal stature overestimated both their short and their tall partners. Women of tall stature estimated the heights of their short partners correctly, whereas the heights of normal-statured men were underestimated. On the other hand, tall men overestimated the heights of their female partners of normal and short stature. Furthermore, women of short stature estimated the partners of normal stature adequately, and the heights of their tall partners were overestimated. Interestingly, short men significantly underestimated their normal-statured, but overestimated their tall, female partners. Only measured heights should be used to perform accurate evaluations of height, particularly when diagnostic tests or treatment interventions are contemplated. For clinical trials, we suggest that only quality measured parental heights are acceptable, as the errors incurred in estimates may enhance or conceal true treatment effects.

  16. A New Heteroskedastic Consistent Covariance Matrix Estimator using Deviance Measure

    Directory of Open Access Journals (Sweden)

    Nuzhat Aftab

    2016-06-01

    Full Text Available In this article we propose a new heteroskedasticity-consistent covariance matrix estimator, HC6, based on a deviance measure. We study the finite-sample behavior of the new estimator and compare it with other estimators of this kind (HC1, HC3 and HC4m), which are used in the presence of leverage observations. A simulation study is conducted to examine the effect of various levels of heteroskedasticity on the size and power of the quasi-t test with HC estimators. Results show that the test statistic based on the new estimator has a better asymptotic approximation and less size distortion than the other estimators for small sample sizes when a high level of heteroskedasticity is present in the data.
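
    The paper's HC6 estimator is not part of standard libraries, but the established HC estimators it is compared against can be obtained directly, for example with statsmodels; a small sketch with simulated heteroskedastic data:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n = 60
        x = rng.uniform(0.0, 10.0, n)
        # Error variance grows with x: a simple form of heteroskedasticity
        y = 1.0 + 0.5 * x + rng.normal(scale=0.2 * x)

        X = sm.add_constant(x)
        for cov in ("nonrobust", "HC0", "HC1", "HC3"):
            res = sm.OLS(y, X).fit(cov_type=cov)
            print(cov, np.round(res.bse, 4))   # standard errors under each covariance estimator

    Comparing the reported standard errors across covariance types mirrors the kind of size and power comparison carried out in the article, although the HC4m and HC6 variants themselves would have to be coded by hand.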

  17. A super-resolution approach for uncertainty estimation of PIV measurements

    NARCIS (Netherlands)

    Sciacchitano, A.; Wieneke, B.; Scarano, F.

    2012-01-01

    A super-resolution approach is proposed for the a posteriori uncertainty estimation of PIV measurements. The measured velocity field is employed to determine the displacement of individual particle images. A disparity set is built from the residual distance between paired particle images of

  18. Comparison of measured and estimated maximum skin doses during CT fluoroscopy lung biopsies

    Energy Technology Data Exchange (ETDEWEB)

    Zanca, F., E-mail: Federica.Zanca@med.kuleuven.be [Department of Radiology, Leuven University Center of Medical Physics in Radiology, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium and Imaging and Pathology Department, UZ Leuven, Herestraat 49, Box 7003 3000 Leuven (Belgium); Jacobs, A. [Department of Radiology, Leuven University Center of Medical Physics in Radiology, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium); Crijns, W. [Department of Radiotherapy, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium); De Wever, W. [Imaging and Pathology Department, UZ Leuven, Herestraat 49, Box 7003 3000 Leuven, Belgium and Department of Radiology, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium)

    2014-07-15

    Purpose: To measure patient-specific maximum skin dose (MSD) associated with CT fluoroscopy (CTF) lung biopsies and to compare measured MSD with the MSD estimated from phantom measurements, as well as with the CTDIvol of patient examinations. Methods: Data from 50 patients with lung lesions who underwent a CT fluoroscopy-guided biopsy were collected. The CT protocol consisted of a low-kilovoltage (80 kV) protocol used in combination with an algorithm for dose reduction to the radiology staff during the interventional procedure, HandCare (HC). MSD was assessed during each intervention using EBT2 gafchromic films positioned on patient skin. Lesion size, position, total fluoroscopy time, and patient-effective diameter were registered for each patient. Dose rates were also estimated at the surface of a normal-size anthropomorphic thorax phantom using a 10 cm pencil ionization chamber placed at every 30°, for a full rotation, with and without HC. Measured MSD was compared with MSD values estimated from the phantom measurements and with the cumulative CTDIvol of the procedure. Results: The median measured MSD was 141 mGy (range 38–410 mGy) while the median cumulative CTDIvol was 72 mGy (range 24–262 mGy). The ratio between the MSD estimated from phantom measurements and the measured MSD was 0.87 (range 0.12–4.1) on average. In 72% of cases the estimated MSD underestimated the measured MSD, while in 28% of the cases it overestimated it. The same trend was observed for the ratio of cumulative CTDIvol and measured MSD. No trend was observed as a function of patient size. Conclusions: On average, estimated MSD from dose rate measurements on phantom as well as from CTDIvol of patient examinations underestimates the measured value of MSD. This can be attributed to deviations of the patient's body habitus from the standard phantom size and to patient positioning in the gantry during the procedure.

  19. Comparison of measured and estimated maximum skin doses during CT fluoroscopy lung biopsies

    International Nuclear Information System (INIS)

    Zanca, F.; Jacobs, A.; Crijns, W.; De Wever, W.

    2014-01-01

    Purpose: To measure patient-specific maximum skin dose (MSD) associated with CT fluoroscopy (CTF) lung biopsies and to compare measured MSD with the MSD estimated from phantom measurements, as well as with the CTDIvol of patient examinations. Methods: Data from 50 patients with lung lesions who underwent a CT fluoroscopy-guided biopsy were collected. The CT protocol consisted of a low-kilovoltage (80 kV) protocol used in combination with an algorithm for dose reduction to the radiology staff during the interventional procedure, HandCare (HC). MSD was assessed during each intervention using EBT2 gafchromic films positioned on patient skin. Lesion size, position, total fluoroscopy time, and patient-effective diameter were registered for each patient. Dose rates were also estimated at the surface of a normal-size anthropomorphic thorax phantom using a 10 cm pencil ionization chamber placed at every 30°, for a full rotation, with and without HC. Measured MSD was compared with MSD values estimated from the phantom measurements and with the cumulative CTDIvol of the procedure. Results: The median measured MSD was 141 mGy (range 38–410 mGy) while the median cumulative CTDIvol was 72 mGy (range 24–262 mGy). The ratio between the MSD estimated from phantom measurements and the measured MSD was 0.87 (range 0.12–4.1) on average. In 72% of cases the estimated MSD underestimated the measured MSD, while in 28% of the cases it overestimated it. The same trend was observed for the ratio of cumulative CTDIvol and measured MSD. No trend was observed as a function of patient size. Conclusions: On average, estimated MSD from dose rate measurements on phantom as well as from CTDIvol of patient examinations underestimates the measured value of MSD. This can be attributed to deviations of the patient's body habitus from the standard phantom size and to patient positioning in the gantry during the procedure

  20. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

    We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators......

  1. A multitower measurement network estimate of California's methane emissions

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Seongeun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Hsu, Ying-Kuang [California Air Resources Board, Sacramento, CA (United States); Andrews, Arlyn E. [National Oceanic and Atmospheric Administration (NOAA), Boulder, CO (United States). Earth System Research Lab.; Bianco, Laura [National Oceanic and Atmospheric Administration (NOAA), Boulder, CO (United States). Earth System Research Lab.; Univ. of Colorado, Boulder, CO (United States). Cooperative Inst. for Research in Environmental Sciences; Vaca, Patrick [California Air Resources Board, Sacramento, CA (United States); Wilczak, James M. [National Oceanic and Atmospheric Administration (NOAA), Boulder, CO (United States). Earth System Research Lab.; Fischer, Marc L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; California State Univ. (CalState East Bay), Hayward, CA (United States). Dept. of Anthropology, Geography and Environmental Studies

    2013-09-20

    In this paper, we present an analysis of methane (CH4) emissions using atmospheric observations from five sites in California's Central Valley across different seasons (September 2010 to June 2011). CH4 emissions for spatial regions and source sectors are estimated by comparing measured CH4 mixing ratios with transport model (Weather Research and Forecasting and Stochastic Time-Inverted Lagrangian Transport) predictions based on two 0.1° CH4 (seasonally varying “California-specific” (California Greenhouse Gas Emission Measurements, CALGEM) and a static global (Emission Database for Global Atmospheric Research, release version 42, EDGAR42)) prior emission models. Region-specific Bayesian analyses indicate that for California's Central Valley, the CALGEM- and EDGAR42-based inversions provide consistent annual total CH4 emissions (32.87 ± 2.09 versus 31.60 ± 2.17 Tg CO2eq yr-1; 68% confidence interval (CI), assuming uncorrelated errors between regions). Summing across all regions of California, optimized CH4 emissions are only marginally consistent between CALGEM- and EDGAR42-based inversions (48.35 ± 6.47 versus 64.97 ± 11.85 Tg CO2eq), because emissions from coastal urban regions (where landfill and natural gas emissions are much higher in EDGAR than CALGEM) are not strongly constrained by the measurements. Combining our results with those from a recent study of the South Coast Air Basin narrows the range of estimates to 43–57 Tg CO2eq yr-1 (1.3–1.8 times higher than the current state inventory). Finally, these results suggest that the combination of rural and urban measurements will be necessary to verify future changes in California's total CH4 emissions.

  2. Store turnover as a predictor of food and beverage provider turnover and associated dietary intake estimates in very remote Indigenous communities.

    Science.gov (United States)

    Wycherley, Thomas; Ferguson, Megan; O'Dea, Kerin; McMahon, Emma; Liberato, Selma; Brimblecombe, Julie

    2016-12-01

    To determine how very-remote Indigenous community (RIC) food and beverage (F&B) turnover quantities and associated dietary intake estimates derived from stores only compare with values derived from all community F&B providers. F&B turnover quantity and associated dietary intake estimates (energy, micro/macronutrients and major contributing food types) were derived from 12 months of transaction data from all F&B providers in three RICs (NT, Australia). F&B turnover quantities and dietary intake estimates from stores only (plus only the primary store in multiple-store communities) were expressed as a proportion of complete F&B provider turnover values. Food types and macronutrient distribution (%E) estimates were quantitatively compared. Combined stores F&B turnover accounted for the majority of F&B quantity (98.1%) and absolute dietary intake estimates (energy [97.8%], macronutrients [≥96.7%] and micronutrients [≥83.8%]). Macronutrient distribution estimates from combined stores and from only the primary store closely aligned with complete provider estimates (≤0.9% absolute). Food types were similar using combined stores, primary store or complete provider turnover. Evaluating combined stores F&B turnover represents an efficient method to estimate total F&B turnover quantity and associated dietary intake in RICs. In multiple-store communities, evaluating only primary store F&B turnover provides an efficient estimate of macronutrient distribution and major food types. © 2016 Public Health Association of Australia.

  3. Estimating the development assistance for health provided to faith-based organizations, 1990-2013.

    Science.gov (United States)

    Haakenstad, Annie; Johnson, Elizabeth; Graves, Casey; Olivier, Jill; Duff, Jean; Dieleman, Joseph L

    2015-01-01

    Faith-based organizations (FBOs) have been active in the health sector for decades. Recently, the role of FBOs in global health has been of increased interest. However, little is known about the magnitude and trends in development assistance for health (DAH) channeled through these organizations. Data were collected from the 21 most recent editions of the Report of Voluntary Agencies. These reports provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three organizations. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of total DAH provided by private voluntary organizations reported in the VolAg. In 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund's contributions to NGOs. In 2011, the Gates Foundation's contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health.

  4. Estimating the development assistance for health provided to faith-based organizations, 1990-2013.

    Directory of Open Access Journals (Sweden)

    Annie Haakenstad

    Full Text Available Faith-based organizations (FBOs) have been active in the health sector for decades. Recently, the role of FBOs in global health has been of increased interest. However, little is known about the magnitude and trends in development assistance for health (DAH) channeled through these organizations. Data were collected from the 21 most recent editions of the Report of Voluntary Agencies. These reports provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three organizations. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of total DAH provided by private voluntary organizations reported in the VolAg. In 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund's contributions to NGOs. In 2011, the Gates Foundation's contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health.

  5. Estimation of atomic interaction parameters by quantum measurements

    DEFF Research Database (Denmark)

    Kiilerich, Alexander Holm; Mølmer, Klaus

    Quantum systems, ranging from atomic systems to field modes and mechanical devices, are useful precision probes for a variety of physical properties and phenomena. Measurements by which we extract information about the evolution of single quantum systems yield random results and cause a back action...... strategies, we address the Fisher information and the Cramér-Rao sensitivity bound. We investigate monitoring by photon counting, homodyne detection and frequent projective measurements respectively, and exemplify by Rabi frequency estimation in a driven two-level system.
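
    The sensitivity bound referred to is the Cramér-Rao inequality: for any unbiased estimator \hat{\theta} built from the measurement record,

        \mathrm{Var}(\hat{\theta}) \ge \frac{1}{F(\theta)},

    where F(\theta) is the (classical or quantum) Fisher information associated with the chosen monitoring scheme; comparing F(\theta) across photon counting, homodyne detection and projective measurements is what ranks the strategies.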

  6. Uncertainty estimation and multi sensor fusion for kinematic laser tracker measurements

    Science.gov (United States)

    Ulrich, Thomas

    2013-08-01

    Laser trackers are widely used to measure kinematic tasks such as tracking robot movements. Common methods to evaluate the uncertainty in the kinematic measurement include approximations specified by the manufacturers, various analytical adjustment methods and the Kalman filter. In this paper a new, real-time technique is proposed, which estimates the 4D-path (3D-position + time) uncertainty of an arbitrary path in space. Here a hybrid system estimator is applied in conjunction with the kinematic measurement model. This method can be applied to processes, which include various types of kinematic behaviour, constant velocity, variable acceleration or variable turn rates. The new approach is compared with the Kalman filter and a manufacturer's approximations. The comparison was made using data obtained by tracking an industrial robot's tool centre point with a Leica laser tracker AT901 and a Leica laser tracker LTD500. It shows that the new approach is more appropriate to analysing kinematic processes than the Kalman filter, as it reduces overshoots and decreases the estimated variance. In comparison with the manufacturer's approximations, the new approach takes account of kinematic behaviour with an improved description of the real measurement process and a reduction in estimated variance. This approach is therefore well suited to the analysis of kinematic processes with unknown changes in kinematic behaviour as well as the fusion among laser trackers.

  7. Discharge estimation combining flow routing and occasional measurements of velocity

    Directory of Open Access Journals (Sweden)

    G. Corato

    2011-09-01

    Full Text Available A new procedure is proposed for estimating river discharge hydrographs during flood events, using only water level data at a single gauged site, as well as 1-D shallow water modelling and occasional maximum surface flow velocity measurements. A one-dimensional diffusive hydraulic model is used for routing the recorded stage hydrograph in the channel reach, considering a zero-diffusion downstream boundary condition. Based on synthetic tests concerning a broad prismatic channel, the "suitable" reach length is chosen in order to minimize the effect of the approximated downstream boundary condition on the estimation of the upstream discharge hydrograph. The Manning's roughness coefficient is calibrated by using occasional instantaneous surface velocity measurements during the rising limb of the flood, which are used to estimate instantaneous discharges by adopting, in the flow area, a two-dimensional velocity distribution model. Several historical events recorded at three gauged sites along the upper Tiber River, where reliable rating curves are available, have been used for the validation. The outcomes of the analysis can be summarized as follows: (1) the criterion adopted for selecting the "suitable" channel length based on synthetic test studies has proved to be reliable for field applications at three gauged sites. Indeed, for each event a downstream reach length of not more than 500 m is found to be sufficient for good performance of the hydraulic model, thereby enabling a drastic reduction of river cross-section data; (2) the procedure for Manning's roughness coefficient calibration allowed high accuracy in discharge estimation using only the observed water levels and occasional measurements of maximum surface flow velocity during the rising limb of the flood. Indeed, errors in the peak discharge magnitude, for the optimal calibration, were found not to exceed 5% for all events observed in the three investigated gauged sections, while the
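
    The role of the calibrated roughness coefficient can be seen from Manning's equation, which links discharge to the flow area A, hydraulic radius R and energy slope S:

        Q = \frac{1}{n} A R^{2/3} S^{1/2},

    so a relative error in the calibrated n propagates one-to-one (with opposite sign) into the estimated discharge, which is why the occasional surface velocity measurements are so valuable for constraining it.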

  8. Viscosity estimation utilizing flow velocity field measurements in a rotating magnetized plasma

    International Nuclear Information System (INIS)

    Yoshimura, Shinji; Tanaka, Masayoshi Y.

    2008-01-01

    The importance of viscosity in determining plasma flow structures has been widely recognized. In laboratory plasmas, however, viscosity measurements have seldom been performed so far. In this paper we present and discuss a method for estimating effective plasma kinematic viscosity from flow velocity field measurements. Imposing steady and axisymmetric conditions, we derive the expression for the radial flow velocity from the azimuthal component of the ion fluid equation. The expression contains the kinematic viscosity, the vorticity of the azimuthal rotation and its derivative, the collision frequency, the azimuthal flow velocity and the ion cyclotron frequency; therefore, all quantities except the viscosity are known provided that the flow field can be measured. We applied this method to a rotating magnetized argon plasma produced by the Hyper-I device. The flow velocity field measurements were carried out using a directional Langmuir probe installed in a tilting motor drive unit. The inward radial ion flow, which is not driven in collisionless inviscid plasmas, was clearly observed. As a result, we found an anomalous viscosity whose value is two orders of magnitude larger than the classical one. (author)

  9. Indirect Estimation of Selected Measures of Fertility and Marital ...

    African Journals Online (AJOL)

    DLHS6

    2018-01-09

    Jan 9, 2018 ... marital status distribution data of India especially of the 2011 census in deriving indirectly the fertility measures .... 2011 Census, Economic and Political weekly, EPW Vol. ... Indirect Estimates of Total Fertility Rate Using Child.

  10. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Science.gov (United States)

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework. PMID:28522983

  11. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation.

    Science.gov (United States)

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.

  12. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Directory of Open Access Journals (Sweden)

    Ji Chul Kim

    2017-05-01

    Full Text Available Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.

  13. Greenhouse gases regional fluxes estimated from atmospheric measurements

    International Nuclear Information System (INIS)

    Messager, C.

    2007-07-01

    I built a new system to measure continuously CO2 (or CO), CH4, N2O and SF6 mixing ratios. It is based on a commercial gas chromatograph (Agilent 6890N) which has been modified to reach better precision. Reproducibility computed with a target gas on a 24-hour time step gives: 0.06 ppm for CO2, 1.4 ppb for CO, 0.7 ppb for CH4, 0.2 ppb for N2O and 0.05 ppt for SF6. The instrument runs fully automated; an air sample analysis takes about 5 minutes. In July 2006, I installed instrumentation on a tall telecommunications tower (200 m) near the Orleans forest in Trainou to continuously monitor greenhouse gases (CO2, CH4, N2O, SF6), atmospheric tracers (CO, Radon-222) and meteorological parameters. Intake lines were installed at 3 levels (50, 100 and 180 m), allowing air masses to be sampled along the vertical. Continuous measurement started in January 2007. I used Mace Head (Ireland) and Gif-sur-Yvette continuous measurements to estimate major greenhouse gas emission fluxes at the regional scale. To make the link between atmospheric measurements and surface fluxes, we need to quantify the dilution due to atmospheric transport. I used Radon-222 as a tracer (radon tracer method) and planetary boundary layer height estimates from the ECMWF model (boundary layer budget method) to parameterize atmospheric transport. In both cases I compared the results to available emission inventories. (author)

  14. Influence of measurement errors and estimated parameters on combustion diagnosis

    International Nuclear Information System (INIS)

    Payri, F.; Molina, S.; Martin, J.; Armas, O.

    2006-01-01

    Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a combustion diagnosis model for direct injection diesel engines has been studied. This procedure allowed the relative importance of these parameters to be established, and limits to be set on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors.

  15. Estimation of thermal transmittance based on temperature measurements with the application of perturbation numbers

    Science.gov (United States)

    Nowoświat, Artur; Skrzypczyk, Jerzy; Krause, Paweł; Steidl, Tomasz; Winkler-Skalna, Agnieszka

    2018-05-01

    Fast estimation of thermal transmittance from temperature measurements is uncertain, and the results can be burdened with a large error. Nevertheless, such attempts are worthwhile, because a precise measurement by means of heat flux sensors is not always possible in field conditions (for example, because of resentment of residents during measurements carried out inside their living quarters), and calculation methods do not allow for the nonlinearity of thermal insulation, heat bridges or other fragments of the building envelope with diversified thermal conductivity. The present paper offers the estimation of thermal transmittance and internal surface resistance with the use of temperature measurements (in particular thermovision). The proposed method has been verified through tests carried out on a laboratory test stand built in the open air and subjected to real meteorological conditions. Based on this estimation, the authors present correction coefficients that affect the estimation accuracy. Furthermore, in the final part of the paper, various types of disturbance are allowed for using perturbation numbers, and the authors' "credibility area of thermal transmittance estimation" is determined.
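
    A common form of the temperature-based estimate discussed above (a simplified steady-state sketch, not necessarily the exact formulation used by the authors) infers the thermal transmittance from the indoor air temperature T_i, the indoor surface temperature T_si and the outdoor air temperature T_e:

        U \approx \frac{T_i - T_{si}}{R_{si}\,(T_i - T_e)},

    where R_si is the internal surface resistance (often taken as roughly 0.13 m2K/W for walls). Uncertainty in R_si and in the three temperature readings feeds directly into the U-value, which is why the perturbation analysis described above is needed.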

  16. Associations between self-estimated and measured physical fitness among 40-year-old men and women.

    Science.gov (United States)

    Mikkelsson, L; Kaprio, J; Kautiainen, H; Kujala, U M; Nupponen, H

    2005-10-01

    The aim was to evaluate whether 40-year-old men and women are able to estimate their level of fitness compared with actual measured physical fitness. Twenty-nine men and 35 women first completed a questionnaire at home and then their physical fitness was measured in the laboratory. The index of self-estimated physical fitness was calculated by summing up the scores of self-estimated endurance, strength, speed and flexibility. The index of self-estimated endurance was calculated by summing up the scores of self-estimated endurance and those of the self-estimated distance they could run, cycle, ski and walk. The index of measured physical fitness was calculated by summing up the z-scores of a submaximal bicycle ergometer test, ergojump tests (counter-movement jump and jumping in 15 s), a 30-s sit-up test, hand-grip tests and a sit-and-reach test. The correlation (Spearman) between the indices of self-estimated and measured physical fitness was 0.54 for both sexes, and that between self-estimated endurance and measured endurance was 0.53 for both sexes. Maximal oxygen uptake estimated from the submaximal ergometer test was higher among those with a longer self-estimated distance of running, cycling, skiing and walking (P for linear trend ski or walk. However, in some individuals self-estimation of fitness is not in agreement with the results of fitness tests.

  17. Refining estimates of public health spending as measured in national health expenditure accounts: the Canadian experience.

    Science.gov (United States)

    Ballinger, Geoff

    2007-01-01

    The recent focus on public health stemming from, among other things, severe acute respiratory syndrome and avian flu has created an imperative to refine health-spending estimates in the Canadian Health Accounts. This article presents the Canadian experience in attempting to address the challenges associated with developing the taxonomies needed for systematically capturing, measuring, and analyzing the national investment in the Canadian public health system. The first phase of this process, a 2-year project to estimate public health spending based on a more classic definition by removing the administration component of the previously combined public health and administration category, was completed in 2005. Comparing the refined public health estimate with recent data from the Organization for Economic Cooperation and Development still positions Canada as having the highest share of total health expenditure devoted to public health among reporting countries. The article also provides an analysis of the comparability of public health estimates across jurisdictions within Canada, as well as a discussion of the recommendations for ongoing improvement of public health spending estimates. The Canadian Institute for Health Information is an independent, not-for-profit organization that provides Canadians with essential statistics and analysis on the performance of the Canadian health system, the delivery of healthcare, and the health status of Canadians. The Canadian Institute for Health Information administers more than 20 databases and registries, including Canada's Health Accounts, which has historically tracked 40 categories of health spending by 5 sources of finance for 13 provincial and territorial jurisdictions. Until 2005, expenditure on public health services in the Canadian Health Accounts included measures to prevent the spread of communicable disease, food and drug safety, health inspections, health promotion, community mental health programs, public

  18. Height and Weight Estimation From Anthropometric Measurements Using Machine Learning Regressions.

    Science.gov (United States)

    Rativa, Diego; Fernandes, Bruno J T; Roque, Alexandre

    2018-01-01

    Height and weight are measurements used to track nutritional diseases, energy expenditure, clinical conditions, drug dosages, and infusion rates. Many patients are not ambulant or are unable to communicate, and these factors may prevent accurate measurement; in those cases, height and weight can be estimated approximately by anthropometric means. Different groups have proposed different linear or non-linear equations whose coefficients are obtained by using single or multiple linear regressions. In this paper, we present a complete study of the application of different learning models to estimate height and weight from anthropometric measurements: support vector regression, Gaussian process, and artificial neural networks. The predicted values are significantly more accurate than those obtained with conventional linear regressions. In all the cases, the predictions are insensitive to ethnicity and to gender if more than two anthropometric parameters are analyzed. The learning model analysis creates new opportunities for anthropometric applications in industry, textile technology, security, and health care.
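
    The record above compares learning models with linear baselines. As a rough, hypothetical illustration of that kind of comparison (not the models or data from the study), the following Python sketch fits a linear regression and a support vector regression to synthetic anthropometric predictors; the feature names, coefficients and noise levels are invented.

    # Hypothetical sketch: linear regression vs. support vector regression for
    # estimating height from anthropometric measurements (synthetic data only).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    n = 500
    # Assumed anthropometric predictors (cm): arm span, knee height, ulna length
    arm_span = rng.normal(170.0, 10.0, n)
    knee_height = rng.normal(50.0, 4.0, n)
    ulna_length = rng.normal(26.0, 2.0, n)
    # Synthetic "true" height with a mildly non-linear term plus noise
    height = 0.75 * arm_span + 0.9 * knee_height + 0.02 * ulna_length**2 + rng.normal(0.0, 2.0, n)

    X = np.column_stack([arm_span, knee_height, ulna_length])
    X_tr, X_te, y_tr, y_te = train_test_split(X, height, random_state=0)

    models = {
        "linear regression": LinearRegression(),
        "support vector regression": make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=0.5)),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        mae = mean_absolute_error(y_te, model.predict(X_te))
        print(f"{name}: MAE = {mae:.2f} cm")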

  19. Inclusion of Topological Measurements into Analytic Estimates of Effective Permeability in Fractured Media

    Science.gov (United States)

    Sævik, P. N.; Nixon, C. W.

    2017-11-01

    We demonstrate how topology-based measures of connectivity can be used to improve analytical estimates of effective permeability in 2-D fracture networks, which is one of the key parameters necessary for fluid flow simulations at the reservoir scale. Existing methods in this field usually compute fracture connectivity using the average fracture length. This approach is valid for ideally shaped, randomly distributed fractures, but is not immediately applicable to natural fracture networks. In particular, natural networks tend to be more connected than randomly positioned fractures of comparable lengths, since natural fractures often terminate in each other. The proposed topological connectivity measure is based on the number of intersections and fracture terminations per sampling area, which for statistically stationary networks can be obtained directly from limited outcrop exposures. To evaluate the method, numerical permeability upscaling was performed on a large number of synthetic and natural fracture networks, with varying topology and geometry. The proposed method was seen to provide much more reliable permeability estimates than the length-based approach, across a wide range of fracture patterns. We summarize our results in a single, explicit formula for the effective permeability.
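
    As a toy illustration of topology-based connectivity measures of the kind described (counts of fracture intersections and terminations per sampling area), the following Python sketch tallies node types from an invented outcrop sample; the node counts, sampling area and the branch-based summary statistic are illustrative assumptions, not the formula from the paper.

    # Hypothetical sketch: simple topological summary of a mapped fracture network.
    # Nodes are classified as isolated tips (I), abutments (Y) and crossings (X);
    # the counts and sampling area below are made-up example values.
    from collections import Counter

    node_types = ["I", "Y", "X", "Y", "I", "X", "Y", "I", "Y", "X", "Y", "I"]
    sampling_area_m2 = 25.0  # assumed outcrop window

    counts = Counter(node_types)
    n_i, n_y, n_x = counts["I"], counts["Y"], counts["X"]

    # Per-area densities of intersections and terminations (the quantities the
    # abstract says can be read directly from outcrop exposures)
    intersection_density = (n_y + n_x) / sampling_area_m2
    termination_density = n_i / sampling_area_m2

    # A commonly used topological summary: average connections per branch,
    # with the branch count taken as (N_I + 3*N_Y + 4*N_X) / 2
    n_branches = (n_i + 3 * n_y + 4 * n_x) / 2
    connections_per_branch = (3 * n_y + 4 * n_x) / n_branches

    print(f"intersections per m^2: {intersection_density:.2f}")
    print(f"terminations per m^2:  {termination_density:.2f}")
    print(f"connections per branch: {connections_per_branch:.2f}")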

  20. Simultaneous state-parameter estimation supports the evaluation of data assimilation performance and measurement design for soil-water-atmosphere-plant system

    Science.gov (United States)

    Hu, Shun; Shi, Liangsheng; Zha, Yuanyuan; Williams, Mathew; Lin, Lin

    2017-12-01

    Improvements to agricultural water and crop management require detailed information on crop and soil states and their evolution. Data assimilation provides an attractive way of obtaining this information by integrating measurements with a model in a sequential manner. However, data assimilation for the soil-water-atmosphere-plant (SWAP) system still lacks comprehensive exploration because of the large number of variables and parameters in the system. In this study, simultaneous state-parameter estimation using the ensemble Kalman filter (EnKF) was employed to evaluate data assimilation performance and provide advice on measurement design for the SWAP system. The results demonstrated that a proper selection of the state vector is critical to effective data assimilation. In particular, updating the development stage was able to avoid the negative effect of a "phenological shift", which was caused by the contrasting phenological stages in different ensemble members. The simultaneous state-parameter estimation (SSPE) assimilation strategy outperformed the updating-state-only (USO) assimilation strategy because of its ability to alleviate the inconsistency between model variables and parameters. However, the performance of the SSPE assimilation strategy could deteriorate with an increasing number of uncertain parameters as a result of soil stratification and limited knowledge of crop parameters. In addition to the most easily available surface soil moisture (SSM) and leaf area index (LAI) measurements, deep soil moisture, grain yield or other auxiliary data were required to provide sufficient constraints on parameter estimation and to assure data assimilation performance. This study provides insight into the response of soil moisture and grain yield to data assimilation in the SWAP system and is helpful for soil moisture movement and crop growth modeling and for measurement design in practice.
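
    A minimal, hypothetical sketch of simultaneous state-parameter estimation with an ensemble Kalman filter is given below; the toy water-balance model, noise levels and the single uncertain loss parameter are assumptions for illustration and are unrelated to the SWAP configuration used in the study.

    # Minimal sketch: ensemble Kalman filter updating an augmented state [x, k],
    # where x is a soil-moisture-like storage and k an uncertain loss parameter.
    import numpy as np

    rng = np.random.default_rng(1)
    n_ens, n_steps = 50, 40
    k_true, x_true = 0.15, 0.5
    obs_err = 0.02

    # Augmented ensemble: column 0 = state x, column 1 = parameter k
    ens = np.column_stack([rng.normal(0.4, 0.1, n_ens),
                           rng.normal(0.25, 0.1, n_ens)])

    def forecast(x, k, forcing):
        # Toy water balance: storage gains forcing and loses a fraction k
        return x + forcing - k * x

    for t in range(n_steps):
        forcing = 0.05 * (1 + np.sin(0.3 * t))
        x_true = forecast(x_true, k_true, forcing)
        y_obs = x_true + rng.normal(0, obs_err)

        # Forecast step (the parameter k is simply persisted)
        ens[:, 0] = forecast(ens[:, 0], ens[:, 1], forcing) + rng.normal(0, 0.005, n_ens)

        # Analysis step: Kalman gain from ensemble covariances with the predicted obs
        H = np.array([1.0, 0.0])            # only x is observed
        y_ens = ens @ H
        P_xy = np.cov(ens.T, y_ens)[:2, 2]  # cov of [x, k] with predicted obs
        p_yy = y_ens.var(ddof=1) + obs_err**2
        K = P_xy / p_yy
        perturbed = y_obs + rng.normal(0, obs_err, n_ens)
        ens += np.outer(perturbed - y_ens, K)

    print(f"true k = {k_true:.3f}, estimated k = {ens[:, 1].mean():.3f}")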

  1. The international food unit: a new measurement aid that can improve portion size estimation.

    Science.gov (United States)

    Bucher, T; Weltert, M; Rollo, M E; Smith, S P; Jia, W; Collins, C E; Sun, M

    2017-09-12

    Portion size education tools, aids and interventions can be effective in helping prevent weight gain. However, consumers have difficulty estimating food portion sizes and are confused by inconsistencies in the measurement units and terminologies currently used. Visual cues are an important mediator of portion size estimation, but standardized measurement units are required. In the current study, we present a new food volume estimation tool and test the ability of young adults to accurately quantify food volumes. The International Food Unit™ (IFU™) is a 4x4x4 cm cube (64 cm³), subdivided into eight 2 cm sub-cubes for estimating smaller food volumes. Compared with currently used measures such as cups and spoons, the IFU™ standardizes estimation of food volumes with metric measures. The IFU™ design is based on binary dimensional increments, and the cubic shape facilitates portion size education and training, memory and recall, and computer processing, which is binary in nature. The performance of the IFU™ was tested in a randomized between-subject experiment (n = 128 adults, 66 men) in which participants estimated volumes of 17 foods using four methods: the IFU™ cube, a deformable modelling clay cube, a household measuring cup or no aid (weight estimation). Estimation errors were compared between groups using Kruskall-Wallis tests and post-hoc comparisons. Estimation errors differed significantly between groups (H(3) = 28.48, p …). … studies should investigate whether the IFU™ can facilitate portion size training and whether portion size education using the IFU™ is effective and sustainable without the aid. A 3-dimensional IFU™ could serve as a reference object for estimating food volume.

  2. Estimation of Uncertainty in Tracer Gas Measurement of Air Change Rates

    Directory of Open Access Journals (Sweden)

    Atsushi Iizuka

    2010-12-01

    Full Text Available Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, this is made more complex by the fact that many buildings are not a single, fully mixed zone. This means that many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of the air change rate in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of
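
    For the single fully mixed zone that the passive doser/sampler targets, the steady-state constant-injection balance gives the air change rate from the dosing rate, the mean tracer concentration and the zone volume; the numbers in the sketch below are invented examples, not values from the study.

    # Sketch of the constant-injection tracer gas balance for one fully mixed zone:
    # at steady state the injection rate q equals the ventilation flow Q times the
    # concentration C, so Q = q / C and the air change rate is Q / V. Example values
    # below are assumed.
    q = 0.5e-6        # tracer injection rate, m^3/h (0.5 mL/h)
    c_mean = 2.0e-9   # measured mean tracer concentration, m^3 tracer per m^3 air
    volume = 250.0    # zone volume, m^3

    Q = q / c_mean                 # ventilation flow, m^3/h
    ach = Q / volume               # air changes per hour
    print(f"ventilation flow: {Q:.0f} m^3/h, air change rate: {ach:.2f} 1/h")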

  3. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification, particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
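
    As a generic illustration of the instrumental-variable idea invoked above (not the authors' longitudinal estimator), the sketch below contrasts ordinary least squares with a simple instrumental-variable estimate when the exposure is measured with error, using a second mismeasured replicate as the instrument; all data-generating values are invented.

    # Sketch: OLS is attenuated when the exposure is observed with error, while an
    # instrumental-variable estimate using an independently mismeasured replicate
    # recovers the true slope. All values are invented.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 20000
    beta_true = 0.5
    x_latent = rng.normal(0, 1, n)               # true (latent) exposure
    y = beta_true * x_latent + rng.normal(0, 1, n)
    x_obs = x_latent + rng.normal(0, 0.8, n)      # mismeasured exposure
    z = x_latent + rng.normal(0, 0.8, n)          # replicate used as instrument

    beta_ols = np.cov(x_obs, y)[0, 1] / np.cov(x_obs, y)[0, 0]
    beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x_obs)[0, 1]
    print(f"OLS slope: {beta_ols:.2f} (attenuated), IV slope: {beta_iv:.2f}")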

  4. Ultra-small time-delay estimation via a weak measurement technique with post-selection

    International Nuclear Information System (INIS)

    Fang, Chen; Huang, Jing-Zheng; Yu, Yang; Li, Qinzheng; Zeng, Guihua

    2016-01-01

    Weak measurement is a novel technique for parameter estimation with higher precision. In this paper we develop a general theory for the parameter estimation based on a weak measurement technique with arbitrary post-selection. The weak-value amplification model and the joint weak measurement model are two special cases in our theory. Applying the developed theory, time-delay estimation is investigated in both theory and experiments. The experimental results show that when the time delay is ultra-small, the joint weak measurement scheme outperforms the weak-value amplification scheme, and is robust against not only misalignment errors but also the wavelength dependence of the optical components. These results are consistent with theoretical predictions that have not been previously verified by any experiment. (paper)
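
    For context, the standard weak-value expression that underlies weak-value amplification (a textbook relation, not a formula quoted from this record) is, for a pre-selected state \psi_i, a post-selected state \psi_f and an observable \hat{A}:

    A_w = \frac{\langle \psi_f | \hat{A} | \psi_i \rangle}{\langle \psi_f | \psi_i \rangle}

    When the post-selected state is nearly orthogonal to the pre-selected one, |A_w| can greatly exceed the eigenvalue range of \hat{A}, which is the amplification exploited for estimating ultra-small time delays.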

  5. Use of instantaneous streamflow measurements to improve regression estimates of index flow for the summer month of lowest streamflow in Michigan

    Science.gov (United States)

    Holtschlag, David J.

    2011-01-01

    measurement. This process was repeated to develop a set of DiscQ50 estimates for all simulated instantaneous measurements, a weighted DiscQ50 estimate was formed from this set. Results indicated that the expected value of this weighted estimate was more precise than the IndxQ50 estimate for all measurement intensities evaluated. The integrated index-flow estimator, IntgQ50, was formed by computing a weighted average of the index estimate IndxQ50 and the DiscQ50 estimate. Results indicated that the IntgQ50 estimator was more precise than the DiscQ50 estimator at low measurement intensities of one to two measurements. At greater measurement intensities, the precision of the IntgQ50 estimator converges to the DiscQ50 estimator. Neither the DiscQ50 nor the IntgQ50 estimators provided site-specific estimates. In particular, although expected values of DiscQ50 and IntgQ50 estimates converge with increasing measurement intensity, they do not necessarily converge to the site-specific value of Q50. The site estimator of flow, SiteQ50, was developed to facilitate this convergence at higher measurement intensities. This is accomplished by use of the median of simulated instantaneous flow values for each measurement intensity level. A weighted estimate of the median and information associated with the IntgQ50 estimate was used to form the SiteQ50 estimate. Initial simulations indicate that the SiteQ50 estimator generally has greater precision than the IntgQ50 estimator at measurement intensities greater than 3, however, additional analysis is needed to identify streamflow conditions under which instantaneous measurements will produce estimates that generally converge to the index flows. A preliminary augmented index regression equation was developed, which contains the index regression estimate and two additional variables associated with base-flow recession characteristics. When these recession variables were estimated as the medians of recession parameters compute

  6. Estimating body weight and body composition of chickens by using noninvasive measurements.

    Science.gov (United States)

    Latshaw, J D; Bishop, B L

    2001-07-01

    The major objective of this research was to develop equations to estimate BW and body composition using measurements taken with inexpensive instruments. We used five groups of chickens that were created with different genetic stocks and feeding programs. Four of the five groups were from broiler genetic stock, and one was from sex-linked heavy layers. The goal was to sample six males from each group when the group weight was 1.20, 1.75, and 2.30 kg. Each male was weighed and measured for back length, pelvis width, circumference, breast width, keel length, and abdominal skinfold thickness. A cloth tape measure, calipers, and skinfold calipers were used for measurement. Chickens were scanned for total body electrical conductivity (TOBEC) before being euthanized and frozen. Six females were selected at weights similar to those for males and were measured in the same way. Each whole chicken was ground, and a portion of ground material of each was used to measure water, fat, ash, and energy content. Multiple linear regression was used to estimate BW from body measurements. The best single measurement was pelvis width, with an R2 = 0.67. Inclusion of three body measurements in an equation resulted in R2 = 0.78 and the following equation: BW (g) = -930.0 + 68.5 (breast, cm) + 48.5 (circumference, cm) + 62.8 (pelvis, cm). The best single measurement to estimate body fat was abdominal skinfold thickness, expressed as a natural logarithm. Inclusion of weight and skinfold thickness resulted in R2 = 0.63 for body fat according to the following equation: fat (%) = 24.83 + 6.75 (skinfold, ln cm) - 3.87 (wt, kg). Inclusion of the result of TOBEC and the effect of sex improved the R2 to 0.78 for body fat. Regression analysis was used to develop additional equations, based on fat, to estimate water and energy contents of the body. The body water content (%) = 72.1 - 0.60 (body fat, %), and body energy (kcal/g) = 1.097 + 0.080 (body fat, %). The results of the present study
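
    The prediction equations quoted in the abstract can be applied directly; the Python sketch below simply codes them, with the input measurements being invented example values rather than data from the study.

    # The prediction equations reported in the abstract, applied to example inputs.
    import math

    def body_weight_g(breast_cm, circumference_cm, pelvis_cm):
        # BW (g) = -930.0 + 68.5*(breast, cm) + 48.5*(circumference, cm) + 62.8*(pelvis, cm)
        return -930.0 + 68.5 * breast_cm + 48.5 * circumference_cm + 62.8 * pelvis_cm

    def body_fat_pct(skinfold_cm, weight_kg):
        # fat (%) = 24.83 + 6.75*(skinfold, ln cm) - 3.87*(wt, kg)
        return 24.83 + 6.75 * math.log(skinfold_cm) - 3.87 * weight_kg

    def body_water_pct(fat_pct):
        # water (%) = 72.1 - 0.60*(body fat, %)
        return 72.1 - 0.60 * fat_pct

    def body_energy_kcal_per_g(fat_pct):
        # energy (kcal/g) = 1.097 + 0.080*(body fat, %)
        return 1.097 + 0.080 * fat_pct

    # Hypothetical measurements for one bird
    bw = body_weight_g(breast_cm=8.0, circumference_cm=28.0, pelvis_cm=10.0)
    fat = body_fat_pct(skinfold_cm=0.4, weight_kg=bw / 1000.0)
    print(f"predicted BW: {bw:.0f} g, fat: {fat:.1f}%, "
          f"water: {body_water_pct(fat):.1f}%, energy: {body_energy_kcal_per_g(fat):.2f} kcal/g")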

  7. Measurement error in income and schooling, and the bias of linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data...... with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...

  8. Implementation of a metrology programme to provide traceability for radionuclides activity measurements in the CNEN Radiopharmaceuticals Producers Centers

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Erica A.L. de; Braghirolli, Ana M.S.; Tauhata, Luiz; Gomes, Regio S.; Silva, Carlos J., E-mail: erica@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Delgado, Jose U.; Oliveira, Antonio E.; Iwahara, Akira, E-mail: ealima@ird.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    The commercialization and use of radiopharmaceuticals in Brazil are regulated by the Agencia Nacional de Vigilancia Sanitaria (ANVISA), which requires Good Manufacturing Practices (GMP) certification for Radiopharmaceutical Producer Centers. A Quality Assurance Program should implement the GMP standards to ensure that radiopharmaceuticals meet the quality requirements that demonstrate their efficacy. Several aspects should be controlled within Quality Assurance Programs, one of which is the traceability of the radionuclide activity measurements in radiopharmaceutical doses. The quality assurance of activity measurements is fundamental to maintaining both the effectiveness of nuclear medicine procedures and the safety of patients and occupationally exposed individuals. The radiation dose received by patients during nuclear medicine procedures is estimated from the quantity of radiopharmaceutical administered. Therefore it is very important that both the activity measurements performed in radiopharmaceutical producer centers (RPC) and the measurements performed in nuclear medicine services are traceable to national standards. This paper presents an implementation program to provide traceability for radionuclide activity measurements performed with the dose calibrators (well-type ionization chambers) used in Radiopharmaceutical Producer Centers located in different states of Brazil. The proposed program is based on the principles of the GMP and ISO 17025 standards. Depending on dose calibrator performance, the RPCs will be able to provide consistent, safe and effective radioactivity measurements to the nuclear medicine services. (author)

  9. Fixed-flexion radiography of the knee provides reproducible joint space width measurements in osteoarthritis

    International Nuclear Information System (INIS)

    Kothari, Manish; Sieffert, Martine; Block, Jon E.; Peterfy, Charles G.; Guermazi, Ali; Ingersleben, Gabriele von; Miaux, Yves; Stevens, Randall

    2004-01-01

    The validity of a non-fluoroscopic fixed-flexion radiographic acquisition and analysis protocol for measurement of joint space width (JSW) in knee osteoarthritis is determined. A cross-sectional study of 165 patients with documented knee osteoarthritis participating in a multicenter, prospective study of chondroprotective agents was performed. All patients had posteroanterior, weight-bearing, fixed-flexion radiography with 10° caudal beam angulation. A specially designed frame (SynaFlexer) was used to standardize the positioning. Minimum medial and lateral JSW were measured manually and twice by an automated analysis system to determine inter-technique and intra-reader concordance and reliability. A random subsample of 30 patients had repeat knee radiographs 2 weeks apart to estimate short-term reproducibility using automated analysis. Concordance between manual and automated medial JSW measurements was high (ICC=0.90); lateral compartment measurements showed somewhat less concordance (ICC=0.72). There was excellent concordance between repeated automated JSW measurements performed 6 months apart for the medial (ICC=0.94) and lateral (ICC=0.86) compartments. Short-term reproducibility for the subsample of 30 cases with repeat acquisitions demonstrated an average SD of 0.14 mm for medial JSW (CV=4.3%) and 0.23 mm for lateral JSW (CV=4.0%). Fixed-flexion radiography of the knee using a positioning device provides consistent, reliable and reproducible measurement of minimum JSW in knee osteoarthritis without the need for concurrent fluoroscopic guidance. (orig.)

  10. Estimating the Cost of Providing Foundational Public Health Services.

    Science.gov (United States)

    Mamaril, Cezar Brian C; Mays, Glen P; Branham, Douglas Keith; Bekemeier, Betty; Marlowe, Justin; Timsina, Lava

    2017-12-28

    To estimate the cost of resources required to implement a set of Foundational Public Health Services (FPHS) as recommended by the Institute of Medicine. A stochastic simulation model was used to generate probability distributions of input and output costs across 11 FPHS domains. We used an implementation attainment scale to estimate costs of fully implementing FPHS. We use data collected from a diverse cohort of 19 public health agencies located in three states that implemented the FPHS cost estimation methodology in their agencies during 2014-2015. The average agency incurred costs of $48 per capita implementing FPHS at their current attainment levels with a coefficient of variation (CV) of 16 percent. Achieving full FPHS implementation would require $82 per capita (CV=19 percent), indicating an estimated resource gap of $34 per capita. Substantial variation in costs exists across communities in resources currently devoted to implementing FPHS, with even larger variation in resources needed for full attainment. Reducing geographic inequities in FPHS may require novel financing mechanisms and delivery models that allow health agencies to have robust roles within the health system and realize a minimum package of public health services for the nation. © Health Research and Educational Trust.

  11. Reproducibility of estimation of blood flow in the human masseter muscle from measurements of 133Xe clearance

    International Nuclear Information System (INIS)

    Monteiro, A.A.; Kopp, S.

    1989-01-01

    The reproducibility of estimations of the masseter intramuscular blood flow (IMBF) was assessed bilaterally within and between clinical sessions. The 133Xe clearance in nine normal individuals was measured before, during, and immediately after endurance of isometric contraction at an attempted level of 50% of maximum voluntary clenching contraction. An overall low reproducibility of the estimations was found. This result was probably caused by uncertainties about the exact site of intramuscular 133Xe deposition, errors in assessment of the plots of clearance, and variabilities in the relative contraction levels sustained, especially in the overall muscle effort. In agreement with previous reports concerning other skeletal muscles, the 133Xe clearance method provided inconsistent estimates of absolute values of IMBF also in this clinical setting. Although there was a high intra-individual variation in the relative level of isometric contraction sustained, the endurance test induced distinct changes in IMBF, among which the estimate of post-endurance hyperemia was the most consistent for each individual. Therefore, measurements of 133Xe clearance seem to be useful to detect intra-individual changes in masseter IMBF resulting from isometric work. 21 refs

  12. Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements

    Directory of Open Access Journals (Sweden)

    Simón Ruiz

    2002-12-01

    Full Text Available Traditionally, the horizontal orientation of a ship (heading) has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP). The three-dimensional positioning system (GPS 3DF) provides an independent heading measurement with accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994), which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology on mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in measured cross-track ADCP velocity of 24 cm s-1.
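
    A minimal sketch of the correction concept described (an interpolated gyrocompass error obtained by comparing gyro and GPS 3DF headings, then used to rotate ADCP velocities) is given below; the sample times, headings, velocities and the linear interpolation are assumptions for illustration, and the sign convention depends on the processing chain.

    # Sketch: estimate a time-varying gyrocompass heading error from GPS 3DF headings
    # and use it to correct ADCP horizontal velocities. All numbers are illustrative.
    import numpy as np

    # Times (s) at which both gyro and GPS headings are available (degrees)
    t_ref = np.array([0.0, 600.0, 1200.0, 1800.0])
    gyro_heading = np.array([45.0, 90.0, 135.0, 180.0])
    gps_heading = np.array([43.2, 88.5, 133.0, 177.6])
    heading_error = gyro_heading - gps_heading   # positive = gyro reads too high

    # Interpolate the error to the ADCP ensemble times
    t_adcp = np.array([300.0, 900.0, 1500.0])
    err_interp = np.deg2rad(np.interp(t_adcp, t_ref, heading_error))

    # ADCP velocities (m/s) in earth coordinates, referenced with the gyro heading
    u = np.array([0.10, 0.05, -0.02])   # east
    v = np.array([0.30, 0.25, 0.20])    # north

    # Rotate the velocity vector by the interpolated error to remove the
    # heading-induced cross-track bias (sign depends on the adopted convention)
    u_corr = u * np.cos(err_interp) - v * np.sin(err_interp)
    v_corr = u * np.sin(err_interp) + v * np.cos(err_interp)
    print(np.round(u_corr, 3), np.round(v_corr, 3))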

  13. The concept of estimation of elevator shaft control measurement results in the local 3D coordinate system

    Directory of Open Access Journals (Sweden)

    Filipiak-Kowszyk Daria

    2018-01-01

    Full Text Available Geodetic control measurements play an important part because they provide information about the current state of repair of a structure, which has a direct impact on the assessment of its operational safety. In this paper the authors focus on control measurements of an elevator shaft. The article discusses the problem of determining the deviation of elevator shaft walls from the vertical plane in a local 3D coordinate system. It presents a concept for the estimation of measurement results based on the parametric method with conditions on the parameters. Simulated measurement results were used to verify the concept presented in the paper.

  14. Coupling impedance of an in-vacuum undulator: Measurement, simulation, and analytical estimation

    Science.gov (United States)

    Smaluk, Victor; Fielder, Richard; Blednykh, Alexei; Rehm, Guenther; Bartolini, Riccardo

    2014-07-01

    One of the important issues of the in-vacuum undulator design is the coupling impedance of the vacuum chamber, which includes tapered transitions with variable gap size. To get complete and reliable information on the impedance, analytical estimate, numerical simulations and beam-based measurements have been performed at Diamond Light Source, a forthcoming upgrade of which includes introducing additional insertion device (ID) straights. The impedance of an already existing ID vessel geometrically similar to the new one has been measured using the orbit bump method. The measurement results in comparison with analytical estimations and numerical simulations are discussed in this paper.

  15. Estimation of uncertainty of measurements of 3D mechanisms after kinematic calibration

    International Nuclear Information System (INIS)

    Takamasu, K; Sato, O; Shimojima, K; Takahashi, S; Furutani, R

    2005-01-01

    Calibration methods for 3D mechanisms are necessary if the mechanisms are to be used as coordinate measuring machines. The calibration of a coordinate measuring machine using artifacts, the artifact calibration method, is proposed with the traceability of the mechanism taken into account. The geometric parameters describing the forward kinematics of the mechanism comprise kinematic parameters and form-deviation parameters. In this article, methods for estimating the uncertainties of measurements made with the calibrated coordinate measuring machine are formulated. Firstly, the calculation that extracts the values of the kinematic parameters using the least squares method is formulated. Secondly, the uncertainty of the measuring machine is estimated using the error propagation method

  16. Estimations of On-site Directional Wave Spectra from Measured Ship Responses

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2006-01-01

    include an equivalence of energy in the governing equations and, as regards the parametric concept, a frequency-dependent spreading of the waves is introduced. The paper includes an extensive analysis of full-scale measurements for which the directional wave spectra are estimated by the two ship response......In general, two main concepts can be applied to estimate the on-site directional wave spectrum on the basis of ship response measurements: 1) a parametric method which assumes the wave spectrum to be composed of parameterised wave spectra, or 2) a non-parametric method where the directional wave...

  17. Validity of parent-reported weight and height of preschool children measured at home or estimated without home measurement: a validation study

    Directory of Open Access Journals (Sweden)

    Cox Bianca

    2011-07-01

    Full Text Available Abstract Background Parental reports are often used in large-scale surveys to assess children's body mass index (BMI). Therefore, it is important to know to what extent these parental reports are valid and whether it makes a difference if the parents measured their children's weight and height at home or whether they simply estimated these values. The aim of this study is to compare the validity of parent-reported height, weight and BMI values of preschool children (3-7 y-old), when measured at home or estimated by parents without actual measurement. Methods The subjects were 297 Belgian preschool children (52.9% male). Participation rate was 73%. A questionnaire including questions about height and weight of the children was completed by the parents. Nurses measured height and weight following standardised procedures. International age- and sex-specific BMI cut-off values were employed to determine categories of weight status and obesity. Results On the group level, no important differences in accuracy of reported height, weight and BMI were identified between parent-measured or estimated values. However, for all 3 parameters, the correlations between parental reports and nurse measurements were higher in the group of children whose body dimensions were measured by the parents. Sensitivity for underweight and overweight/obesity were respectively 73% and 47% when parents measured their child's height and weight, and 55% and 47% when parents estimated values without measurement. Specificity for underweight and overweight/obesity were respectively 82% and 97% when parents measured the children, and 75% and 93% with parent estimations. Conclusions Diagnostic measures were more accurate when parents measured their child's weight and height at home than when those dimensions were based on parental judgements. When parent-reported data on an individual level is used, the accuracy could be improved by encouraging the parents to measure weight and height
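
    The diagnostic measures reported (sensitivity and specificity of parent-classified weight status against nurse measurements) come from a simple cross-tabulation; the sketch below shows the calculation with invented counts, not the study's data.

    # Sketch: sensitivity and specificity of parent-reported overweight classification
    # against measured classification, using an invented 2x2 table.
    true_positive = 30    # overweight by measurement and by parental report
    false_negative = 10   # overweight by measurement, missed by parental report
    true_negative = 150   # non-overweight by both
    false_positive = 20   # non-overweight by measurement, reported as overweight

    sensitivity = true_positive / (true_positive + false_negative)
    specificity = true_negative / (true_negative + false_positive)
    print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")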

  18. Assessment of a Technique for Estimating Total Column Water Vapor Using Measurements of the Infrared Sky Temperature

    Science.gov (United States)

    Merceret, Francis J.; Huddleston, Lisa L.

    2014-01-01

    A method for estimating the integrated precipitable water (IPW) content of the atmosphere using measurements of indicated infrared zenith sky temperature was validated over east-central Florida. The method uses inexpensive, commercial off the shelf, hand-held infrared thermometers (IRT). Two such IRTs were obtained from a commercial vendor, calibrated against several laboratory reference sources at KSC, and used to make IR zenith sky temperature measurements in the vicinity of KSC and Cape Canaveral Air Force Station (CCAFS). The calibration and comparison data showed that these inexpensive IRTs provided reliable, stable IR temperature measurements that were well correlated with the NOAA IPW observations.

  19. A comparative study of satellite estimation for solar insolation in Albania with ground measurements

    International Nuclear Information System (INIS)

    Mitrushi, Driada; Berberi, Pëllumb; Muda, Valbona; Buzra, Urim; Bërdufi, Irma; Topçiu, Daniela

    2016-01-01

    The main objective of this study is to compare data provided by the NASA database with available ground data for regions covered by the national meteorological network. NASA estimates that its measurements of average daily solar radiation have a root-mean-square deviation (RMSD) error of 35 W/m² (roughly 20% inaccuracy). Unfortunately, valid data from meteorological stations for the regions of interest are quite rare in Albania. In these cases, use of the NASA Solar Radiation Database would be a satisfactory solution for different case studies. A statistical method is used to determine the most probable margins between the two sources of data. Comparison of mean insolation data provided by NASA with mean insolation ground data provided by meteorological stations shows that the ground data for mean insolation are, in all cases, underestimated compared with the data provided by the NASA database. The conversion factor is 1.149.
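
    As a generic illustration of this kind of comparison (not the statistical method used in the study), the sketch below computes the bias, root-mean-square deviation and a simple multiplicative conversion factor between two short, invented series of satellite and ground insolation values.

    # Sketch: compare satellite-derived and ground-measured daily insolation series
    # (invented values) via bias, RMSD and a simple multiplicative conversion factor.
    import numpy as np

    ground = np.array([4.1, 5.3, 6.0, 5.5, 3.9, 4.8, 6.2])      # kWh/m^2/day
    satellite = np.array([4.5, 5.8, 6.6, 6.1, 4.3, 5.3, 6.8])   # kWh/m^2/day

    bias = np.mean(satellite - ground)
    rmsd = np.sqrt(np.mean((satellite - ground) ** 2))
    conversion = np.mean(satellite) / np.mean(ground)            # satellite/ground ratio
    print(f"bias = {bias:.2f}, RMSD = {rmsd:.2f}, conversion factor = {conversion:.3f}")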

  20. Stature estimation using the knee height measurement amongst Brazilian elderly

    OpenAIRE

    Siqueira Fogal, Aline; Franceschini, Sylvia do Carmo Castro; Eloiza Priore, Silvia; Cotta, Rosângela Minardi M.; Queiroz Ribeiro, Andreia

    2015-01-01

    Introduction: Stature is an important variable in several indices of nutritional status that are applicable to elderly persons. However, stature is difficult or impossible to measure in the elderly because they are often unable to maintain a standing position. An alternative is to estimate height from measurements of knee height. Aims: This study aimed to evaluate the accuracy of the formula proposed by Chumlea et al. (1985), based on the knee height of a Caucasian population, to estimat...

  1. Estimating the Development Assistance for Health Provided to Faith-Based Organizations, 1990–2013

    Science.gov (United States)

    Haakenstad, Annie; Johnson, Elizabeth; Graves, Casey; Olivier, Jill; Duff, Jean; Dieleman, Joseph L.

    2015-01-01

    Background Faith-based organizations (FBOs) have been active in the health sector for decades. Recently, the role of FBOs in global health has been of increased interest. However, little is known about the magnitude and trends in development assistance for health (DAH) channeled through these organizations. Material and Methods Data were collected from the 21 most recent editions of the Report of Voluntary Agencies. These reports provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three organizations. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. Results In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of total DAH provided by private voluntary organizations reported in the VolAg. In 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund’s contributions to NGOs. In 2011, the Gates Foundation’s contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Conclusion Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health. PMID:26042731

  2. Measuring HIV-related stigma among healthcare providers: a systematic review.

    Science.gov (United States)

    Alexandra Marshall, S; Brewington, Krista M; Kathryn Allison, M; Haynes, Tiffany F; Zaller, Nickolas D

    2017-11-01

    In the United States, HIV-related stigma in the healthcare setting is known to affect the utilization of prevention and treatment services. Multiple HIV/AIDS stigma scales have been developed to assess the attitudes and behaviors of the general population in the U.S. towards people living with HIV/AIDS, but fewer scales have been developed to assess HIV-related stigma among healthcare providers. This systematic review aimed to identify and evaluate the measurement tools used to assess HIV stigma among healthcare providers in the U.S. The five studies selected quantitatively assessed the perceived HIV stigma among healthcare providers from the patient or provider perspective, included HIV stigma as a primary outcome, and were conducted in the U.S. These five studies used adapted forms of four HIV stigma scales. No standardized measure was identified. Assessment of HIV stigma among providers is valuable to better understand how this phenomenon may impact health outcomes and to inform interventions aiming to improve healthcare delivery and utilization.

  3. REKF and RUKF for pico satellite attitude estimation in the presence of measurement faults

    Institute of Scientific and Technical Information of China (English)

    Halil Ersin Söken; Chingiz Hajiyev

    2014-01-01

    When a pico satellite is under normal operational conditions, whether it is extended or unscented, a conventional Kalman filter gives sufficiently good estimation results. However, if the measurements are not reliable because of any kind of malfunctions in the estimation system, the Kalman filter gives inaccurate results and diverges over time. This study compares two different robust Kalman filtering algorithms, the robust extended Kalman filter (REKF) and the robust unscented Kalman filter (RUKF), for the case of measurement malfunctions. In both filters, by the use of defined variables named as the measurement noise scale factor, the faulty measurements are taken into consideration with a small weight, and the estimations are corrected without affecting the characteristic of the accurate ones. The proposed robust Kalman filters are applied to the attitude estimation process of a pico satellite, and the results are compared.
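
    A minimal sketch of the measurement-noise scale-factor idea described above (inflating the measurement noise when the innovation is inconsistent with its predicted variance, so faulty measurements receive a small weight) is given below for a scalar linear system; the model, scale rule and simulated fault are illustrative assumptions, not the REKF/RUKF formulations of the paper.

    # Sketch: Kalman filter for a scalar state with an adaptive measurement noise
    # scale factor. When the innovation is too large relative to its predicted
    # variance, R is inflated so the faulty measurement barely affects the estimate.
    import numpy as np

    rng = np.random.default_rng(2)
    phi, q, r = 1.0, 0.01, 0.04        # state transition, process and meas. noise
    x_true, x_est, p_est = 0.0, 0.0, 1.0

    for k in range(60):
        x_true = phi * x_true + rng.normal(0, np.sqrt(q))
        z = x_true + rng.normal(0, np.sqrt(r))
        if 30 <= k < 35:               # simulated sensor malfunction
            z += 5.0

        # Prediction
        x_pred = phi * x_est
        p_pred = phi * p_est * phi + q

        # Innovation consistency check -> measurement noise scale factor >= 1
        innov = z - x_pred
        s_pred = p_pred + r
        scale = max(1.0, innov**2 / s_pred)

        # Update with the scaled measurement noise
        k_gain = p_pred / (p_pred + scale * r)
        x_est = x_pred + k_gain * innov
        p_est = (1.0 - k_gain) * p_pred

    print(f"final true state {x_true:.3f}, estimate {x_est:.3f}")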

  4. Measurements for kinetic parameters estimation in the RA-0 research reactor

    International Nuclear Information System (INIS)

    Gomez, A; Bellino, P A

    2012-01-01

    In the present work, measurements based on the neutron noise technique and the inverse kinetic method were performed to estimate the different kinetic parameters of the reactor in its critical state. By means of the neutron noise technique, we obtained the current calibration factor of the ionization chamber M6 belonging to the power range channels of the reactor instrumentation. The maximum allowed current, compatible with the maximum power authorized by the operating license, was also obtained. Using the neutron noise technique, the reduced mean reproduction time (Λ*) was estimated. This parameter plays a fundamental role in the deterministic analysis of criticality accidents. Comparison with previous values justified performing new measurements to study systematic trends in the value of Λ*. Using the inverse kinetics method, the reactivity worth of the control rods was estimated, confirming the existence of spatial effects and trends previously observed (author)

  5. Detecting Topological Errors with Pre-Estimation Filtering of Bad Data in Wide-Area Measurements

    DEFF Research Database (Denmark)

    Møller, Jakob Glarbo; Sørensen, Mads; Jóhannsson, Hjörtur

    2017-01-01

    It is expected that bad data and missing topology information will become an issue of growing concern when power system state estimators are to exploit the high measurement reporting rates from phasor measurement units. This paper suggests designing state estimators with enhanced resilience against...

  6. Coupling impedance of an in-vacuum undulator: Measurement, simulation, and analytical estimation

    Directory of Open Access Journals (Sweden)

    Victor Smaluk

    2014-07-01

    Full Text Available One of the important issues of the in-vacuum undulator design is the coupling impedance of the vacuum chamber, which includes tapered transitions with variable gap size. To get complete and reliable information on the impedance, analytical estimate, numerical simulations and beam-based measurements have been performed at Diamond Light Source, a forthcoming upgrade of which includes introducing additional insertion device (ID straights. The impedance of an already existing ID vessel geometrically similar to the new one has been measured using the orbit bump method. The measurement results in comparison with analytical estimations and numerical simulations are discussed in this paper.

  7. Estimating product-to-product variations in metal forming using force measurements

    NARCIS (Netherlands)

    Havinga, Gosse Tjipke; Van Den Boogaard, Ton

    2017-01-01

    The limits of production accuracy of metal forming processes can be stretched by the development of control systems for compensation of product-to-product variations. Such systems require the use of measurements from each semi-finished product. These measurements must be used to estimate the final

  8. Velocity-Aided Attitude Estimation for Helicopter Aircraft Using Microelectromechanical System Inertial-Measurement Units

    Directory of Open Access Journals (Sweden)

    Sang Cheol Lee

    2016-12-01

    Full Text Available This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial-measurement unit. In general, high-performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low-cost accelerometer and a gyro, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that occurs primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than gravitational ones can be considered as disturbances. Therefore, errors increase in accordance with the flight dynamics. The proposed algorithm is designed to use velocity as an aid for high accuracy at low cost. It effectively eliminates the disturbances in the accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data. The algorithm performance was confirmed through a comparison with an attitude estimate obtained from an attitude heading reference system based on a high-accuracy optic gyro, which was employed as core attitude equipment in the helicopter.
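
    A minimal sketch of the velocity-aiding idea (removing the turn-induced part of the specific force using airspeed and body rates before computing roll and pitch from the gravity direction) is given below; the axis convention, sample values and the simple coordinated-turn scenario are assumptions for illustration, not the paper's algorithm.

    # Sketch: subtract the turn-induced (omega x v) part of the specific force using
    # airspeed and body rates, then recover roll and pitch from the gravity direction.
    # Body axes: x forward, y right, z down. Sample values mimic a coordinated
    # 30-degree banked turn at 55 m/s and are assumptions for illustration.
    import numpy as np

    accel_meas = np.array([0.0, 0.0, -11.33])   # m/s^2, IMU specific force
    gyro = np.array([0.0, 0.052, 0.089])        # rad/s, body rates (p, q, r)
    vel_body = np.array([55.0, 0.0, 0.0])       # m/s, airspeed resolved in body axes

    def roll_pitch(f):
        roll = np.arctan2(-f[1], -f[2])
        pitch = np.arctan2(f[0], np.hypot(f[1], f[2]))
        return np.degrees(roll), np.degrees(pitch)

    # Without compensation the coordinated turn hides the bank angle entirely
    r0, p0 = roll_pitch(accel_meas)
    print(f"uncompensated roll/pitch: {r0:.1f}, {p0:.1f} deg")

    # Subtracting omega x v leaves (approximately) the gravity-only specific force
    accel_grav = accel_meas - np.cross(gyro, vel_body)
    r1, p1 = roll_pitch(accel_grav)
    print(f"velocity-aided roll/pitch: {r1:.1f}, {p1:.1f} deg")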

  9. Velocity-Aided Attitude Estimation for Helicopter Aircraft Using Microelectromechanical System Inertial-Measurement Units

    Science.gov (United States)

    Lee, Sang Cheol; Hong, Sung Kyung

    2016-01-01

    This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial-measurement unit. In general, high-performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low-cost accelerometer and a gyro, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that occurs primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than gravitational ones can be considered as disturbances. Therefore, errors increase in accordance with the flight dynamics. The proposed algorithm is designed to use velocity as an aid for high accuracy at low cost. It effectively eliminates the disturbances in the accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data. The algorithm performance was confirmed through a comparison with an attitude estimate obtained from an attitude heading reference system based on a high-accuracy optic gyro, which was employed as core attitude equipment in the helicopter. PMID:27973429

  10. Using NDACC column measurements of carbonyl sulfide to estimate its sources and sinks

    Science.gov (United States)

    Wang, Yuting; Marshall, Julia; Palm, Mathias; Deutscher, Nicholas; Roedenbeck, Christian; Warneke, Thorsten; Notholt, Justus; Baker, Ian; Berry, Joe; Suntharalingam, Parvadha; Jones, Nicholas; Mahieu, Emmanuel; Lejeune, Bernard; Hannigan, James; Conway, Stephanie; Strong, Kimberly; Campbell, Elliott; Wolf, Adam; Kremser, Stefanie

    2016-04-01

    Carbonyl sulfide (OCS) is taken up by plants during photosynthesis through a similar pathway as carbon dioxide (CO2), but is not emitted by respiration, and thus holds great promise as an additional constraint on the carbon cycle. It might act as a sort of tracer of photosynthesis, a way to separate gross primary productivity (GPP) from the net ecosystem exchange (NEE) that is typically derived from flux modeling. However, the estimates of OCS sources and sinks still have significant uncertainties, which make it difficult to use OCS as a photosynthetic tracer, and the existing long-term surface-based measurements are sparse. The NDACC-IRWG measures the absorption of OCS in the atmosphere, and provides a potential long-term database of OCS total/partial columns, which can be used to evaluate OCS fluxes. We have retrieved OCS columns from several NDACC sites around the globe, and compared them to model simulations with OCS land fluxes based on the simple biosphere model (SiB). The disagreement between the measurements and the forward simulations indicates that (1) the OCS land fluxes from SiB are too low in the northern boreal region; (2) the ocean fluxes need to be optimized. A statistical linear flux model describing OCS is developed in the TM3 inversion system, and is used to estimate the OCS fluxes. We performed flux inversions using only NOAA OCS surface measurements as an observational constraint and with both surface and NDACC OCS column measurements, and assessed the differences. The posterior uncertainties of the inverted OCS fluxes decreased with the inclusion of NDACC data compared with those using surface data only, and could be further reduced if more NDACC sites were included.

  11. Arm-associated measurements as estimates of true height in black ...

    African Journals Online (AJOL)

    arm-associated measurements to true height included that of the World Health ... Conclusion: Findings indicate the need for gender and race-specific height estimation ...

  12. Relationship between parental estimate and an objective measure of child television watching

    OpenAIRE

    Roemmich James N; Fuerch Janene H; Winiewicz Dana D; Robinson Jodie L; Epstein Leonard H

    2006-01-01

    Abstract Many young children have televisions in their bedrooms, which may influence the relationship between parental estimate and objective measures of child television usage/week. Parental estimates of child television time of eighty 4–7 year old children (6.0 ± 1.2 years) at the 75th BMI percentile or greater (90.8 ± 6.8 BMI percentile) were compared to an objective measure of television time obtained from TV Allowance™ devices attached to every television in the home over a three week pe...

  13. Ocean subsurface particulate backscatter estimation from CALIPSO spaceborne lidar measurements

    Science.gov (United States)

    Chen, Peng; Pan, Delu; Wang, Tianyu; Mao, Zhihua

    2017-10-01

    A method for ocean subsurface particulate backscatter estimation from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite was demonstrated. The effects of the CALIOP receiver's transient response on the attenuated backscatter profile were first removed. The two-way transmittance of the overlying atmosphere was then estimated as the ratio of the measured ocean surface attenuated backscatter to the theoretical value computed from wind-driven wave slope variance. Finally, particulate backscatter was estimated from the depolarization ratio as the ratio of the column-integrated cross-polarized and co-polarized channels. Statistical results show that the particulate backscatter derived by this method from CALIOP data agrees reasonably well with chlorophyll-a concentration from MODIS data. This indicates the potential of spaceborne lidar to estimate global primary productivity and particulate carbon stock.

  14. Quantitative estimation of defects from measurement obtained by remote field eddy current inspection

    International Nuclear Information System (INIS)

    Davoust, M.E.; Fleury, G.

    1999-01-01

    The remote field eddy current technique is used for dimensioning grooves that may occur in ferromagnetic pipes. This paper proposes a method to estimate the depth and the length of corrosion grooves from measurement of a pick-up coil signal phase at different positions close to the defect. Groove dimensioning requires knowledge of the physical relation between measurements and defect dimensions. So, finite element calculations are performed to obtain a parametric algebraic function of the physical phenomena. By means of this model and a previously defined general approach, an estimate of groove size may be given. In this approach, algebraic function parameters and groove dimensions are linked through a polynomial function. In order to validate this estimation procedure, a statistical study has been performed. The approach is proved to be suitable for real measurements. (authors)

  15. Compensating for evanescent modes and estimating characteristic impedance in waveguide acoustic impedance measurements

    DEFF Research Database (Denmark)

    Nørgaard, Kren Rahbek; Fernandez Grande, Efren

    2017-01-01

    The ear-canal acoustic impedance and reflectance are useful for assessing conductive hearing disorders and calibrating stimulus levels in situ. However, such probe-based measurements are affected by errors due to the presence of evanescent modes and incorrect estimates or assumptions regarding...... characteristic impedance. This paper proposes a method to compensate for evanescent modes in measurements of acoustic impedance, reflectance, and sound pressure in waveguides, as well as estimating the characteristic impedance immediately in front of the probe. This is achieved by adjusting the characteristic...... impedance and subtracting an acoustic inertance from the measured impedance such that the non-causality in the reflectance is minimized in the frequency domain using the Hilbert transform. The method is thus capable of estimating plane-wave quantities of the sought-for parameters by supplying only...

  16. State of charge estimation for lithium-ion pouch batteries based on stress measurement

    International Nuclear Information System (INIS)

    Dai, Haifeng; Yu, Chenchen; Wei, Xuezhe; Sun, Zechang

    2017-01-01

    State of charge (SOC) estimation is one of the important tasks of a battery management system (BMS). In contrast to other studies, a novel method of SOC estimation for pouch lithium-ion battery cells based on stress measurement is proposed. With a comprehensive experimental study, we find that the stress of the battery during charge/discharge is composed of the static stress and the dynamic stress. The static stress, which is the stress measured in the equilibrium state, corresponds to SOC; this phenomenon facilitates the design of our stress-based SOC estimation. The dynamic stress, on the other hand, is influenced by multiple factors including charge accumulation or depletion, current and historical operation, so a multiple regression model of the dynamic stress is established. Based on the relationship between static stress and SOC, as well as the dynamic stress modeling, the SOC estimation method is established. Experimental results show that the stress-based method performs well with good accuracy, and this method offers a novel perspective for SOC estimation. - Highlights: • A State of Charge estimator based on stress measurement is proposed. • The stress during charge and discharge is investigated with comprehensive experiments. • Effects of SOC, current, and operation history on battery stress are well studied. • A multiple regression model of the dynamic stress is established.
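
    A minimal sketch of the two-part idea described (subtracting a regressed dynamic-stress component from the measured stress, then inverting a static stress-SOC calibration curve) is given below; the calibration points, regression coefficients and inputs are invented example values.

    # Sketch: estimate SOC from a measured pouch-cell stress by (1) subtracting a
    # regressed dynamic-stress component and (2) inverting a static stress-SOC curve.
    # The calibration points, regression coefficients and inputs are invented.
    import numpy as np

    # Static calibration: equilibrium stress (kPa) measured at known SOC values
    soc_cal = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
    static_stress_cal = np.array([10.0, 14.0, 19.0, 26.0, 35.0, 47.0])

    def dynamic_stress(current_a, ah_throughput):
        # Invented multiple-regression surrogate for the dynamic stress component
        return 0.8 * current_a + 0.5 * ah_throughput

    def estimate_soc(stress_meas_kpa, current_a, ah_throughput):
        static_part = stress_meas_kpa - dynamic_stress(current_a, ah_throughput)
        # Invert the monotonic static stress-SOC curve by interpolation
        return np.interp(static_part, static_stress_cal, soc_cal)

    print(f"SOC estimate: {estimate_soc(30.0, 5.0, 1.2):.2f}")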

  17. A systematic review of the extent and measurement of healthcare provider racism.

    Science.gov (United States)

    Paradies, Yin; Truong, Mandy; Priest, Naomi

    2014-02-01

    Although considered a key driver of racial disparities in healthcare, relatively little is known about the extent of interpersonal racism perpetrated by healthcare providers, nor is there a good understanding of how best to measure such racism. This paper reviews worldwide evidence (from 1995 onwards) for racism among healthcare providers; as well as comparing existing measurement approaches to emerging best practice, it focuses on the assessment of interpersonal racism, rather than internalized or systemic/institutional racism. The following databases and electronic journal collections were searched for articles published between 1995 and 2012: Medline, CINAHL, PsycInfo, Sociological Abstracts. Included studies were published empirical studies of any design measuring and/or reporting on healthcare provider racism in the English language. Data on study design and objectives; method of measurement, constructs measured, type of tool; study population and healthcare setting; country and language of study; and study outcomes were extracted from each study. The 37 studies included in this review were almost solely conducted in the U.S. and with physicians. Statistically significant evidence of racist beliefs, emotions or practices among healthcare providers in relation to minority groups was evident in 26 of these studies. Although a number of measurement approaches were utilized, a limited range of constructs was assessed. Despite burgeoning interest in racism as a contributor to racial disparities in healthcare, we still know little about the extent of healthcare provider racism or how best to measure it. Studies using more sophisticated approaches to assess healthcare provider racism are required to inform interventions aimed at reducing racial disparities in health.

  18. Peak Measurement for Vancomycin AUC Estimation in Obese Adults Improves Precision and Lowers Bias.

    Science.gov (United States)

    Pai, Manjunath P; Hong, Joseph; Krop, Lynne

    2017-04-01

    Vancomycin area under the curve (AUC) estimates may be skewed in obese adults due to weight-dependent pharmacokinetic parameters. We demonstrate that peak and trough measurements reduce bias and improve the precision of vancomycin AUC estimates in obese adults (n = 75) and validate this in an independent cohort (n = 31). The precision and mean percent bias of Bayesian vancomycin AUC estimates are comparable between covariate-dependent (R2 = 0.774, 3.55%) and covariate-independent (R2 = 0.804, 3.28%) models when peaks and troughs are measured but not when measurements are restricted to troughs only (R2 = 0.557, 15.5%). Copyright © 2017 American Society for Microbiology.
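
    As background on why a peak measurement adds information, the sketch below computes a simple non-Bayesian AUC over one dosing interval from a steady-state peak and trough under a one-compartment, log-linear assumption; the dosing interval, infusion time and concentrations are invented, and this is not the Bayesian estimator evaluated in the study.

    # Sketch: AUC over a dosing interval from a steady-state peak and trough,
    # assuming first-order (log-linear) decline between them. Values are invented.
    import math

    tau = 12.0          # h, dosing interval
    t_inf = 1.0         # h, infusion duration
    c_peak = 32.0       # mg/L, drawn at the end of the infusion
    c_trough = 9.0      # mg/L, drawn just before the next dose

    # Elimination rate from the two post-infusion levels (log-linear decline)
    ke = math.log(c_peak / c_trough) / (tau - t_inf)

    # Linear trapezoid during the infusion (rising from the previous trough),
    # log trapezoid for the post-infusion decline
    auc_inf = 0.5 * (c_trough + c_peak) * t_inf
    auc_decline = (c_peak - c_trough) / ke
    auc_tau = auc_inf + auc_decline
    print(f"ke = {ke:.3f} 1/h, AUC over 24 h ≈ {2 * auc_tau:.0f} mg·h/L")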

  19. Measuring Critical Care Providers' Attitudes About Controlled Donation After Circulatory Death.

    Science.gov (United States)

    Rodrigue, James R; Luskin, Richard; Nelson, Helen; Glazier, Alexandra; Henderson, Galen V; Delmonico, Francis L

    2018-06-01

    Unfavorable attitudes and insufficient knowledge about donation after cardiac death among critical care providers can have important consequences for the appropriate identification of potential donors, consistent implementation of donation after cardiac death policies, and relative strength of support for this type of donation. The lack of reliable and valid assessment measures has hampered research to capture providers' attitudes. Design and Research Aims: Using stakeholder engagement and an iterative process, we developed a questionnaire to measure attitudes toward donation after cardiac death in critical care providers (n = 112) and examined its psychometric properties. Exploratory factor analysis, internal consistency, and validity analyses were conducted to examine the measure. A 34-item questionnaire consisting of 4 factors (Personal Comfort, Process Satisfaction, Family Comfort, and System Trust) provided the most parsimonious fit. Internal consistency was acceptable for each of the subscales and the total questionnaire (Cronbach α > .70). A strong association between more favorable attitudes overall and knowledge (r = .43, P …) … donation after cardiac death (P …) … donation after cardiac death.

  20. EMS Provider Assessment of Vehicle Damage Compared to a Professional Crash Reconstructionist

    Science.gov (United States)

    Lerner, E. Brooke; Cushman, Jeremy T.; Blatt, Alan; Lawrence, Richard; Shah, Manish N.; Swor, Robert; Brasel, Karen; Jurkovich, Gregory J.

    2011-01-01

    Objective To determine the accuracy of EMS provider assessments of motor vehicle damage, when compared to measurements made by a professional crash reconstructionist. Methods EMS providers caring for adult patients injured during a motor vehicle crash and transported to the regional trauma center in a midsized community were interviewed upon ED arrival. The interview collected provider estimates of crash mechanism of injury. For crashes that met a preset severity threshold, the vehicle’s owner was asked to consent to having a crash reconstructionist assess their vehicle. The assessment included measuring intrusion and external auto deformity. Vehicle damage was used to calculate change in velocity. Paired t-test and correlation were used to compare EMS estimates and investigator derived values. Results 91 vehicles were enrolled; of these 58 were inspected and 33 were excluded because the vehicle was not accessible. 6 vehicles had multiple patients. Therefore, a total of 68 EMS estimates were compared to the inspection findings. Patients were 46% male, 28% admitted to hospital, and 1% died. Mean EMS estimated deformity was 18” and mean measured was 14”. Mean EMS estimated intrusion was 5” and mean measured was 4”. EMS providers and the reconstructionist had 67% agreement for determination of external auto deformity (kappa 0.26), and 88% agreement for determination of intrusion (kappa 0.27) when the 1999 Field Triage Decision Scheme Criteria were applied. Mean EMS estimated speed prior to the crash was 48 mph ± 13 and mean reconstructionist estimated change in velocity was 18 mph ± 12 (correlation -0.45). EMS determined that 19 vehicles had rolled over while the investigator identified 18 (kappa 0.96). In 55 cases EMS and the investigator agreed on seatbelt use, for the remaining 13 cases there was disagreement (5) or the investigator was unable to make a determination (8) (kappa 0.40). Conclusions This study found that EMS providers are good at estimating

  1. Using linear time-invariant system theory to estimate kinetic parameters directly from projection measurements

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1995-01-01

    It is common practice to estimate kinetic parameters from dynamically acquired tomographic data by first reconstructing a dynamic sequence of three-dimensional reconstructions and then fitting the parameters to time activity curves generated from the time-varying reconstructed images. However, in SPECT, the pharmaceutical distribution can change during the acquisition of a complete tomographic data set, which can bias the estimated kinetic parameters. It is hypothesized that more accurate estimates of the kinetic parameters can be obtained by fitting to the projection measurements instead of the reconstructed time sequence. Estimation from projections requires knowledge of the relationship between the tissue regions of interest or voxels with particular kinetic parameters and the projection measurements, which results in a complicated nonlinear estimation problem with a series of exponential factors with multiplicative coefficients. A technique is presented in this paper where the exponential decay parameters are estimated separately using linear time-invariant system theory. Once the exponential factors are known, the coefficients of the exponentials can be estimated using linear estimation techniques. Computer simulations demonstrate that estimation of the kinetic parameters directly from the projections is more accurate than the estimation from the reconstructed images.
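
    The two-step idea described above (estimate the exponential factors first through a linear, time-invariant recurrence, then the multiplicative coefficients by linear least squares) can be sketched for a uniformly sampled time-activity curve as follows. This is an illustrative Prony-style sketch under a noise-free, uniform-sampling assumption, not the authors' SPECT implementation; the function name and model order are ours.

      import numpy as np

      def fit_sum_of_exponentials(y, dt, p):
          """Fit y(t) ~ sum_k c_k * exp(lambda_k * t), sampled uniformly with spacing dt."""
          y = np.asarray(y, dtype=float)
          n = len(y)
          # Step 1 (linear): the signal obeys y[k] = -(a1*y[k-1] + ... + ap*y[k-p]),
          # so the recurrence coefficients follow from ordinary least squares.
          A = np.column_stack([y[p - m - 1:n - m - 1] for m in range(p)])
          a = np.linalg.lstsq(A, -y[p:], rcond=None)[0]
          z = np.roots(np.concatenate(([1.0], a)))       # exponential factors per sample
          lam = np.log(z.astype(complex)) / dt           # continuous-time exponents
          # Step 2 (linear): amplitudes from a least-squares fit to the known exponentials.
          V = np.exp(np.outer(dt * np.arange(n), lam))
          c = np.linalg.lstsq(V, y.astype(complex), rcond=None)[0]
          return lam, c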

  2. Measurement of the momentum transferred between contacting bodies during the LISA test-mass release phase—uncertainty estimation

    International Nuclear Information System (INIS)

    De Cecco, M; Bortoluzzi, D; Da Lio, M; Baglivo, L; Benedetti, M

    2009-01-01

    The requirements for the Laser Interferometer Space Antenna (LISA) test-mass (TM) release phase are analysed in view of the building up of a testing facility aimed at on-Earth qualification of the release mechanism. Accordingly, the release of the TM to free-fall must provide a linear momentum transferred to the TM not exceeding 10 −5 kg m s −1 . In order to test this requirement, a double pendulum system has been developed. The mock-ups of the TM and the release-dedicated plunger are brought into contact and then the latter is quickly retracted. During and after release, the TM motion is measured by a laser interferometer. The transferred momentum is estimated from the free oscillations following the plunger retraction by means of a Wiener–Kolmogorov optimal filter. This work is aimed at modelling the measurement chain, taking into account procedure, instruments, mechanisms and data elaboration in order to estimate the uncertainty associated with the transferred momentum measurement by means of Monte Carlo simulation
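
    A minimal sketch of the Monte Carlo step, assuming the transferred momentum simply sets the initial velocity of a small-amplitude pendulum mode so that p = m * A * omega (mass times residual oscillation amplitude times angular frequency); all numerical values and their uncertainties are hypothetical placeholders, not the facility's parameters.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 100_000                                   # Monte Carlo draws

      m     = rng.normal(1.96, 0.01, N)             # mock-up test-mass mass [kg]
      A     = rng.normal(5.0e-6, 0.2e-6, N)         # residual oscillation amplitude [m]
      omega = rng.normal(2.0, 0.01, N)              # pendulum angular frequency [rad/s]

      p = m * A * omega                             # transferred momentum [kg m/s]
      print(f"p = {p.mean():.2e} +/- {p.std(ddof=1):.2e} kg m/s")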

  3. Comparison of two different methods for the uncertainty estimation of circle diameter measurements using an optical coordinate measuring machine

    DEFF Research Database (Denmark)

    Morace, Renata Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2005-01-01

    This paper deals with the uncertainty estimation of measurements performed on optical coordinate measuring machines (CMMs). Two different methods were used to assess the uncertainty of circle diameter measurements using an optical CMM: the sensitivity analysis developing an uncertainty budget...

  4. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

    Full Text Available Abstract Background In epidemiological studies, it is often not possible to measure accurately exposures of participants even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within study sample are assigned commonly to the sample mean of exposure measurements from their group in evaluating the effect of exposure on the response. Therefore, exposure is estimated on an ecological level while health outcomes are ascertained for each subject. Such study design leads to negligible bias in risk estimates when group means are estimated from ‘large’ samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors and complete response data collected with ascertainment. Methods In workplaces groups/jobs are naturally ordered and this could be incorporated in estimation procedure by constrained estimation methods together with the expectation and maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, the constrained GBS (CGBS), and the constrained expectation and maximization (CEM). We illustrated the methods in the analysis of decline in lung function due to exposures to carbon black. Results Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be best among them when within each exposure group at least a ‘moderate’ number of individuals have their
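
    A toy sketch of the group-based strategy (GBS) described above: every subject is assigned the mean of the observed exposures in their group, and the outcome is then regressed on that group-level exposure. The data are invented for illustration, and no measurement-error correction or constrained EM step is shown.

      import numpy as np

      groups   = np.array([0, 0, 0, 1, 1, 2, 2, 2])                        # job/group labels
      exposure = np.array([1.2, np.nan, 0.9, 2.1, 2.4, 3.3, np.nan, 3.0])  # with missing values
      outcome  = np.array([5.1, 4.8, 5.0, 4.2, 4.0, 3.1, 3.4, 3.2])        # individual responses

      # GBS: assign each subject the mean observed exposure of his or her group
      group_means = {g: np.nanmean(exposure[groups == g]) for g in np.unique(groups)}
      x_gbs = np.array([group_means[g] for g in groups])

      # simple linear regression of outcome on the group-assigned exposure
      slope, intercept = np.polyfit(x_gbs, outcome, 1)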

  5. Stroke Volume estimation using aortic pressure measurements and aortic cross sectional area: Proof of concept.

    Science.gov (United States)

    Kamoi, S; Pretty, C G; Chiew, Y S; Pironet, A; Davidson, S; Desaive, T; Shaw, G M; Chase, J G

    2015-08-01

    Accurate Stroke Volume (SV) monitoring is essential for patients with cardiovascular dysfunction. However, direct SV measurements are not clinically feasible due to the highly invasive nature of measurement devices. Current devices for indirect monitoring of SV are shown to be inaccurate during sudden hemodynamic changes. This paper presents a novel SV estimation method based on readily available aortic pressure measurements and aortic cross sectional area, demonstrated using data from a porcine experiment where medical interventions such as fluid replacement, dobutamine infusions, and recruitment maneuvers induced SV changes in a pig with circulatory shock. Measurements of left ventricular volume, proximal aortic pressure, and descending aortic pressure waveforms were made simultaneously during the experiment. From measured data, proximal aortic pressure was separated into reservoir and excess pressures. Beat-to-beat aortic characteristic impedance values were calculated using both aortic pressure measurements and an estimate of the aortic cross sectional area. SV was estimated using the calculated aortic characteristic impedance and the excess component of the proximal aortic pressure. The median difference between directly measured SV and estimated SV was -1.4 ml with 95% limit of agreement +/- 6.6 ml. This method demonstrates that SV can be accurately captured beat-to-beat during sudden changes in hemodynamic state. This novel SV estimation could enable improved cardiac and circulatory treatment in the critical care environment by titrating treatment to the effect on SV.
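
    A minimal sketch of the final estimation step, assuming (as in reservoir-excess pressure analysis) that aortic inflow is approximately the excess pressure divided by the characteristic impedance, so that stroke volume is the time integral of that flow over the beat; the waveform, impedance value and units below are hypothetical.

      import numpy as np

      def stroke_volume(p_excess, zc, dt):
          """SV from one beat: q(t) ~ P_excess(t)/Zc, SV = integral of q over the beat."""
          q = np.asarray(p_excess) / zc          # flow in mL/s if Zc is in mmHg*s/mL
          return float(np.sum(q) * dt)           # simple rectangle-rule integral, mL

      # hypothetical half-sine excess-pressure waveform over a 0.3 s ejection
      t = np.arange(0.0, 0.3, 0.005)
      sv = stroke_volume(25.0 * np.sin(np.pi * t / 0.3), zc=0.06, dt=0.005)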

  6. An approach for estimating measurement uncertainty in medical laboratories using data from long-term quality control and external quality assessment schemes.

    Science.gov (United States)

    Padoan, Andrea; Antonelli, Giorgia; Aita, Ada; Sciacovelli, Laura; Plebani, Mario

    2017-10-26

    The present study was prompted by the ISO 15189 requirements that medical laboratories should estimate measurement uncertainty (MU). The method used to estimate MU included: a) the identification of quantitative tests, b) the classification of tests in relation to their clinical purpose, and c) the identification of criteria to estimate the different MU components. Imprecision was estimated using long-term internal quality control (IQC) results of the year 2016, while external quality assessment schemes (EQAs) results obtained in the period 2015-2016 were used to estimate bias and bias uncertainty. A total of 263 measurement procedures (MPs) were analyzed. On the basis of test purpose, in 51 MPs imprecision only was used to estimate MU; in the remaining MPs, the bias component was not estimable for 22 MPs because EQAs results did not provide reliable statistics. For a total of 28 MPs, two or more MU values were calculated on the basis of analyte concentration levels. Overall, results showed that uncertainty of bias is a minor factor contributing to MU, the bias component being the most relevant contributor to all the studied sample matrices. The model chosen for MU estimation allowed us to derive a standardized approach for bias calculation, with respect to the fitness-for-purpose of test results. Measurement uncertainty estimation could readily be implemented in medical laboratories as a useful tool in monitoring the analytical quality of test results since they are calculated using a combination of both the long-term imprecision IQC results and bias, on the basis of EQAs results.
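
    A generic top-down combination of the two components described above (long-term IQC imprecision plus an EQA-derived bias term) might look as follows; this is a GUM-style sketch with hypothetical numbers, not necessarily the exact model adopted in the study.

      import math

      def measurement_uncertainty(cv_iqc_pct, biases_pct, u_ref_pct=0.0, k=2.0):
          """Combined and expanded MU (in %) from IQC imprecision and EQA bias results."""
          u_imp = cv_iqc_pct                                       # imprecision component
          rms_bias = math.sqrt(sum(b * b for b in biases_pct) / len(biases_pct))
          u_bias = math.sqrt(rms_bias ** 2 + u_ref_pct ** 2)       # bias and its uncertainty
          u_c = math.sqrt(u_imp ** 2 + u_bias ** 2)                # combined standard MU
          return u_c, k * u_c                                      # (u_c, expanded U)

      u_c, U = measurement_uncertainty(cv_iqc_pct=2.1, biases_pct=[1.0, -0.6, 1.4], u_ref_pct=0.5)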

  7. A Model of Gravity Vector Measurement Noise for Estimating Accelerometer Bias in Gravity Disturbance Compensation.

    Science.gov (United States)

    Tie, Junbo; Cao, Juliang; Chang, Lubing; Cai, Shaokun; Wu, Meiping; Lian, Junxiang

    2018-03-16

    Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method.
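
    The separation of a constant accelerometer bias from gravity vector measurements by least squares can be sketched as below, assuming a reference gravity vector is available from a global model such as EGM2008; the numerical values are invented and the measurement-noise model of the paper is not reproduced.

      import numpy as np

      # hypothetical gravity-vector measurements and model reference values (m/s^2)
      g_meas = np.array([[0.012, -0.008, 9.812],
                         [0.010, -0.007, 9.811],
                         [0.011, -0.009, 9.813]])
      g_ref  = np.array([[0.000,  0.000, 9.806]] * 3)

      # constant per-axis bias as the least-squares offset between measurement and model
      residual = g_meas - g_ref
      design = np.ones((len(residual), 1))
      bias = np.linalg.lstsq(design, residual, rcond=None)[0].ravel()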

  8. A Model of Gravity Vector Measurement Noise for Estimating Accelerometer Bias in Gravity Disturbance Compensation

    Science.gov (United States)

    Cao, Juliang; Cai, Shaokun; Wu, Meiping; Lian, Junxiang

    2018-01-01

    Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method. PMID:29547552

  9. Standard error of measurement of five health utility indexes across the range of health for use in estimating reliability and responsiveness

    Science.gov (United States)

    Palta, Mari; Chen, Han-Yang; Kaplan, Robert M.; Feeny, David; Cherepanov, Dasha; Fryback, Dennis

    2011-01-01

    Background Standard errors of measurement (SEMs) of health related quality of life (HRQoL) indexes are not well characterized. SEM is needed to estimate responsiveness statistics and provides guidance on using indexes on the individual and group level. SEM is also a component of reliability. Purpose To estimate SEM of five HRQoL indexes. Design The National Health Measurement Study (NHMS) was a population based telephone survey. The Clinical Outcomes and Measurement of Health Study (COMHS) provided repeated measures 1 and 6 months post cataract surgery. Subjects 3844 randomly selected adults from the non-institutionalized population 35 to 89 years old in the contiguous United States and 265 cataract patients. Measurements The SF-6D (scored from the SF-36v2™), QWB-SA, EQ-5D, HUI2 and HUI3 were included. An item-response theory (IRT) approach captured joint variation in indexes into a composite construct of health (theta). We estimated: (1) the test-retest standard deviation (SEM-TR) from COMHS, (2) the structural standard deviation (SEM-S) around the composite construct from NHMS and (3) corresponding reliability coefficients. Results SEM-TR was 0.068 (SF-6D), 0.087 (QWB-SA), 0.093 (EQ-5D), 0.100 (HUI2) and 0.134 (HUI3), while SEM-S was 0.071, 0.094, 0.084, 0.074 and 0.117, respectively. These translate into reliability coefficients for SF-6D: 0.66 (COMHS) and 0.71 (NHMS), for QWB: 0.59 and 0.64, for EQ-5D: 0.61 and 0.70, for HUI2: 0.64 and 0.80, and for HUI3: 0.75 and 0.77, respectively. The SEM varied considerably across levels of health, especially for HUI2, HUI3 and EQ-5D, and was strongly influenced by ceiling effects. Limitations Repeated measures were five months apart and the estimated theta contains measurement error. Conclusions The two types of SEM are similar and substantial for all the indexes, and vary across the range of health. PMID:20935280
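
    For readers unfamiliar with the link between the two quantities reported above, classical test theory relates the standard error of measurement to the reliability coefficient and the observed-score standard deviation; the relation below is the textbook identity presumably underlying the reported coefficients, not a formula quoted from the paper.

      \mathrm{SEM} = \sigma \sqrt{1 - r}
      \qquad\Longleftrightarrow\qquad
      r = 1 - \left(\frac{\mathrm{SEM}}{\sigma}\right)^{2}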

  10. Comparison of Broselow tape measurements versus mother estimations of pediatric weights

    Directory of Open Access Journals (Sweden)

    Sherafat Akaberian

    2013-06-01

    Full Text Available Background: Pediatric resuscitation is challenging for the therapeutic team, and most physicians have limited experience in dealing with this situation. Appropriate drug dosing depends on the child's body weight, which is usually not feasible to measure directly, so a fast, convenient and reliable method for body weight estimation in children is needed. The aim of this study was to assess the accuracy of the Broselow tape in children of Bushehr city. Material and Methods: This cross-sectional study was conducted in the emergency department of Aliasghar hospital on 334 children aged between 1 month and 14 years; children with chronic disease or acutely ill children were excluded from the study. Estimated weight was obtained with the Broselow tape and actual weight was measured with a digital scale, and the estimated and actual weights were compared. The results were analyzed with SPSS software ver. 18 using the t-test and chi-square test. Results: 43.2% of the subjects were female and the mean age was 43 months. 72.5% of tape body weights were within ±10% of actual body weights, and 78.9% were within ±15% of actual body weights. There was no significant difference between boys and girls. Conclusion: The Broselow tape was an easy, fast and accurate means of body weight estimation in emergency situations. It is more accurate than weight estimation by parents or the therapeutic team, and so assists the emergency department team in calculating medication dosages and selecting equipment sizes.

  11. Estimating retained gas volumes in the Hanford tanks using waste level measurements

    International Nuclear Information System (INIS)

    Whitney, P.D.; Chen, G.; Gauglitz, P.A.; Meyer, P.A.; Miller, N.E.

    1997-09-01

    The Hanford site is home to 177 large, underground nuclear waste storage tanks. Safety and environmental concerns surround these tanks and their contents. One such concern is the propensity for the waste in these tanks to generate and trap flammable gases. This report focuses on understanding and improving the quality of retained gas volume estimates derived from tank waste level measurements. While direct measurements of gas volume are available for a small number of the Hanford tanks, the increasingly wide availability of tank waste level measurements provides an opportunity for less expensive (than direct gas volume measurement) assessment of gas hazard for the Hanford tanks. Retained gas in the tank waste is inferred from level measurements -- either long-term increase in the tank waste level, or fluctuations in tank waste level with atmospheric pressure changes. This report concentrates on the latter phenomenon. As atmospheric pressure increases, the pressure on the gas in the tank waste increases, resulting in a level decrease (as long as the tank waste is "soft" enough). Tanks with waste levels exhibiting fluctuations inversely correlated with atmospheric pressure fluctuations were catalogued in an earlier study. Additionally, models incorporating ideal-gas law behavior and waste material properties have been proposed. These models explicitly relate the retained gas volume in the tank with the magnitude of the waste level fluctuations, dL/dP. This report describes how these models compare with the tank waste level measurements.
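
    The barometric-pressure relationship mentioned above can be sketched as follows: under the ideal gas law, d(PV) = 0 for the trapped gas, and with dV = A dL the retained volume follows from the fitted slope dL/dP. The level and pressure series, tank area and effective gas pressure below are all assumed, illustrative values.

      import numpy as np

      pressure = np.array([100.8, 101.3, 100.2, 99.6, 101.0, 100.4])        # kPa
      level    = np.array([9.1240, 9.1237, 9.1244, 9.1247, 9.1238, 9.1242]) # m

      # slope dL/dP from a linear fit (negative for a compressible, "soft" waste)
      dLdP = np.polyfit(pressure, level, 1)[0]      # m per kPa

      A_tank = 410.0      # tank cross-sectional area, m^2 (assumed value)
      P_gas  = 115.0      # effective pressure on the trapped gas, kPa (assumed value)
      V_gas  = -P_gas * A_tank * dLdP               # retained gas volume, m^3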

  12. Drag and Lift Estimation from 3-D Velocity Field Data Measured by Multi-Plane Stereo PIV

    OpenAIRE

    加藤, 裕之; 松島, 紀佐; 上野, 真; 小池, 俊輔; 渡辺, 重哉; Kato, Hiroyuki; Matsushima, Kisa; Ueno, Makoto; Koike, Shunsuke; Watanabe, Shigeya

    2013-01-01

    For airplane design, it is crucial to have tools that can accurately predict airplane drag and lift. Drag and lift are usually predicted from force measurements made with a wind tunnel balance. Unfortunately, balance data do not provide information on the contribution of individual airplane components to drag and lift, which is needed for more precise and competitive airplane design. To obtain such information, a wake integration method for drag and lift estimation was developed for use in wake survey data analysis. Wake s...

  13. The provider perception inventory: psychometrics of a scale designed to measure provider stigma about HIV, substance abuse, and MSM behavior.

    Science.gov (United States)

    Windsor, Liliane C; Benoit, Ellen; Ream, Geoffrey L; Forenza, Brad

    2013-01-01

    Nongay identified men who have sex with men and women (NGI MSMW) and who use alcohol and other drugs are a vulnerable, understudied, and undertreated population. Little is known about the stigma faced by this population or about the way that health service providers view and serve these stigmatized clients. The provider perception inventory (PPI) is a 39-item scale that measures health services providers' stigma about HIV/AIDS, substance use, and MSM behavior. The PPI is unique in that it was developed to include service provider stigma targeted at NGI MSMW individuals. PPI was developed through a mixed methods approach. Items were developed based on existing measures and findings from focus groups with 18 HIV and substance abuse treatment providers. Exploratory factor analysis using data from 212 health service providers yielded a two dimensional scale: (1) individual attitudes (19 items) and (2) agency environment (11 items). Structural equation modeling analysis supported the scale's predictive validity (N=190 sufficiently complete cases). Overall findings indicate initial support for the psychometrics of the PPI as a measure of service provider stigma pertaining to the intersection of HIV/AIDS, substance use, and MSM behavior. Limitations and implications to future research are discussed.

  14. Estimation of excitation forces for wave energy converters control using pressure measurements

    Science.gov (United States)

    Abdelkhalik, O.; Zou, S.; Robinett, R.; Bacelli, G.; Wilson, D.

    2017-08-01

    Most control algorithms of wave energy converters require prediction of wave elevation or excitation force for a short future horizon, to compute the control in an optimal sense. This paper presents an approach that requires the estimation of the excitation force and its derivatives at present time with no need for prediction. An extended Kalman filter is implemented to estimate the excitation force. The measurements in this approach are selected to be the pressures at discrete points on the buoy surface, in addition to the buoy heave position. The pressures on the buoy surface are more directly related to the excitation force on the buoy as opposed to wave elevation in front of the buoy. These pressure measurements are also more accurate and easier to obtain. A singular arc control is implemented to compute the steady-state control using the estimated excitation force. The estimated excitation force is expressed in the Laplace domain and substituted in the control, before the latter is transformed to the time domain. Numerical simulations are presented for a Bretschneider wave case study.

  15. Analytical estimation of control rod shadowing effect for excess reactivity measurement of HTTR

    International Nuclear Information System (INIS)

    Nakano, Masaaki; Fujimoto, Nozomu; Yamashita, Kiyonobu

    1999-01-01

    The fuel addition method is generally used for the excess reactivity measurement of the initial core. The control rod shadowing effect for the excess reactivity measurement has been estimated analytically for the High Temperature Engineering Test Reactor (HTTR). 3-dimensional whole core analyses were carried out. The movements of control rods in the measurements were simulated in the calculation. It was made clear that the value of excess reactivity strongly depends on the combinations of measuring control rods and compensating control rods. The differences in excess reactivity between combinations come from the control rod shadowing effect. The shadowing effect is reduced by using several measuring and compensating control rods to prevent their deep insertion into the core. The measured excess reactivity in the experiments is, however, smaller than the estimated value with shadowing effect. (author)

  16. Measuring, calculating and estimating PEP's parasitic mode loss parameters

    International Nuclear Information System (INIS)

    Weaver, J.N.

    1981-01-01

    This note discusses various ways the parasitic mode losses from a bunched beam to a vacuum chamber can be measured, calculated or estimated. A listing of the parameter, k, for the various PEP ring components is included. A number of formulas for calculating multiple and single pass losses are discussed and evaluated for several cases. 25 refs., 1 fig., 1 tab

  17. A comparison of two measures of HIV diversity in multi-assay algorithms for HIV incidence estimation.

    Directory of Open Access Journals (Sweden)

    Matthew M Cousins

    Full Text Available Multi-assay algorithms (MAAs) can be used to estimate HIV incidence in cross-sectional surveys. We compared the performance of two MAAs that use HIV diversity as one of four biomarkers for analysis of HIV incidence. Both MAAs included two serologic assays (LAg-Avidity assay and BioRad-Avidity assay), HIV viral load, and an HIV diversity assay. HIV diversity was quantified using either a high resolution melting (HRM) diversity assay that does not require HIV sequencing (HRM score for a 239 base pair env region) or sequence ambiguity (the percentage of ambiguous bases in a 1,302 base pair pol region). Samples were classified as MAA positive (likely from individuals with recent HIV infection) if they met the criteria for all of the assays in the MAA. The following performance characteristics were assessed: (1) the proportion of samples classified as MAA positive as a function of duration of infection, (2) the mean window period, (3) the shadow (the time period before sample collection that is being assessed by the MAA), and (4) the accuracy of cross-sectional incidence estimates for three cohort studies. The proportion of samples classified as MAA positive as a function of duration of infection was nearly identical for the two MAAs. The mean window period was 141 days for the HRM-based MAA and 131 days for the sequence ambiguity-based MAA. The shadows for both MAAs were <1 year. Both MAAs provided cross-sectional HIV incidence estimates that were very similar to longitudinal incidence estimates based on HIV seroconversion. MAAs that include the LAg-Avidity assay, the BioRad-Avidity assay, HIV viral load, and HIV diversity can provide accurate HIV incidence estimates. Sequence ambiguity measures obtained using a commercially-available HIV genotyping system can be used as an alternative to HRM scores in MAAs for cross-sectional HIV incidence estimation.

  18. The estimation of effective doses using measurement of several relevant physical parameters from radon exposures

    International Nuclear Information System (INIS)

    Ridzikova, A; Fronka, A.; Maly, B.; Moucka, L.

    2003-01-01

    In the present investigation, we study the dose-relevant factors obtained from continuous monitoring in real homes in order to obtain a more accurate estimation of the effective dose from 222Rn. The dose-relevant parameters include the radon concentration, the equilibrium factor (f), the fraction (fp) of unattached radon decay products and the real-time occupancy of people in the home. The measurements yield the time courses of radon concentration, on which the estimation of effective doses is based, together with an assessment of the real-time occupancy of people indoors. Our analysis found that the annual effective dose is lower than the effective dose estimated according to the ICRP recommendation from an integral measurement that includes only the average radon concentration. This analysis of effective dose estimation using measurements of several physical parameters was made in only one case, and for better specification it is important to measure in houses with different real occupancies. (authors)
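
    The dose calculation implied by these parameters can be sketched as the familiar product of concentration, equilibrium factor, occupancy time and a dose conversion factor; the coefficient below (about 9 nSv per Bq h m-3 of equilibrium-equivalent concentration) is only a commonly quoted illustrative value, not the one used by the authors.

      def annual_effective_dose(c_rn_bq_m3, eq_factor, occupancy_h, dcf_nsv=9.0):
          """Rough annual effective dose (mSv) from indoor radon exposure."""
          eec_exposure = c_rn_bq_m3 * eq_factor * occupancy_h   # Bq h m^-3 (EEC exposure)
          return eec_exposure * dcf_nsv * 1e-6                  # nSv -> mSv

      # hypothetical dwelling: 120 Bq/m^3, F = 0.4, 5000 h/y spent indoors
      dose = annual_effective_dose(120.0, 0.4, 5000.0)          # about 2.2 mSv/y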

  19. Using plot experiments to test the validity of mass balance models employed to estimate soil redistribution rates from 137Cs and 210Pbex measurements

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.

    2012-01-01

    Information on rates of soil loss from agricultural land is a key requirement for assessing both on-site soil degradation and potential off-site sediment problems. Many models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution as a function of the local topography, hydrometeorology, soil type and land management, but empirical data remain essential for validating and calibrating such models and prediction procedures. Direct measurements using erosion plots are, however, costly and the results obtained relate to a small enclosed area, which may not be representative of the wider landscape. In recent years, the use of fallout radionuclides and more particularly caesium-137 ( 137 Cs) and excess lead-210 ( 210 Pb ex ) has been shown to provide a very effective means of documenting rates of soil loss and soil and sediment redistribution in the landscape. Several of the assumptions associated with the theoretical conversion models used with such measurements remain essentially unvalidated. This contribution describes the results of a measurement programme involving five experimental plots located in southern Italy, aimed at validating several of the basic assumptions commonly associated with the use of mass balance models for estimating rates of soil redistribution on cultivated land from 137 Cs and 210 Pb ex measurements. Overall, the results confirm the general validity of these assumptions and the importance of taking account of the fate of fresh fallout. However, further work is required to validate the conversion models employed in using fallout radionuclide measurements to document soil redistribution in the landscape and this could usefully direct attention to different environments and to the validation of the final estimates of soil redistribution rate as well as the assumptions of the models employed. - Highlights: ► Soil erosion is an important threat to the long-term sustainability of agriculture.

  20. Estimates of Shear Stress and Measurements of Water Levels in the Lower Fox River near Green Bay, Wisconsin

    Science.gov (United States)

    Westenbroek, Stephen M.

    2006-01-01

    Turbulent shear stress in the boundary layer of a natural river system largely controls the deposition and resuspension of sediment, as well as the longevity and effectiveness of granular-material caps used to cover and isolate contaminated sediments. This report documents measurements and calculations made in order to estimate shear stress and shear velocity on the Lower Fox River, Wisconsin. Velocity profiles were generated using an acoustic Doppler current profiler (ADCP) mounted on a moored vessel. This method of data collection yielded 158 velocity profiles on the Lower Fox River between June 2003 and November 2004. Of these profiles, 109 were classified as valid and were used to estimate the bottom shear stress and velocity using log-profile and turbulent kinetic energy methods. Estimated shear stress ranged from 0.09 to 10.8 dynes per centimeter squared. Estimated coefficients of friction ranged from 0.001 to 0.025. This report describes both the field and data-analysis methods used to estimate shear-stress parameters for the Lower Fox River. Summaries of the estimated values for bottom shear stress, shear velocity, and coefficient of friction are presented. Confidence intervals about the shear-stress estimates are provided.
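
    As an illustration of the log-profile method mentioned above, a fit of the law of the wall u(z) = (u*/kappa) ln(z/z0) to an ADCP velocity profile yields the shear velocity from the slope of u against ln(z), and the bed shear stress follows from tau = rho u*^2. The profile values below are invented for the sketch.

      import numpy as np

      KAPPA = 0.41        # von Karman constant
      RHO   = 1000.0      # water density, kg/m^3

      z = np.array([0.25, 0.5, 1.0, 2.0, 3.0])      # height above bed, m (hypothetical)
      u = np.array([0.18, 0.22, 0.26, 0.30, 0.32])  # velocity, m/s (hypothetical)

      slope, intercept = np.polyfit(np.log(z), u, 1)
      u_star = KAPPA * slope                        # shear velocity, m/s
      tau_pa = RHO * u_star ** 2                    # bed shear stress, Pa
      tau_dyn_cm2 = 10.0 * tau_pa                   # 1 Pa = 10 dynes per cm^2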

  1. Using Indirect Turbulence Measurements for Real-Time Parameter Estimation in Turbulent Air

    Science.gov (United States)

    Martos, Borja; Morelli, Eugene A.

    2012-01-01

    The use of indirect turbulence measurements for real-time estimation of parameters in a linear longitudinal dynamics model in atmospheric turbulence was studied. It is shown that measuring the atmospheric turbulence makes it possible to treat the turbulence as a measured explanatory variable in the parameter estimation problem. Commercial off-the-shelf sensors were researched and evaluated, then compared to air data booms. Sources of colored noise in the explanatory variables resulting from typical turbulence measurement techniques were identified and studied. A major source of colored noise in the explanatory variables was identified as frequency dependent upwash and time delay. The resulting upwash and time delay corrections were analyzed and compared to previous time shift dynamic modeling research. Simulation data as well as flight test data in atmospheric turbulence were used to verify the time delay behavior. Recommendations are given for follow on flight research and instrumentation.

  2. Robust estimation of partially linear models for longitudinal data with dropouts and measurement error.

    Science.gov (United States)

    Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing

    2016-12-20

    Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Center of mass movement estimation using an ambulatory measurement system

    NARCIS (Netherlands)

    Schepers, H. Martin; Veltink, Petrus H.

    2007-01-01

    Center of Mass (CoM) displacement, an important variable to characterize human walking, was estimated in this study using an ambulatory measurement system. The ambulatory system was compared to an optical reference system. Root-mean-square differences between the magnitudes of the CoM appeared to be

  4. Estimation of Apollo Lunar Dust Transport using Optical Extinction Measurements

    Science.gov (United States)

    Lane, John E.; Metzger, Philip T.

    2015-04-01

    A technique to estimate mass erosion rate of surface soil during landing of the Apollo Lunar Module (LM) and total mass ejected due to the rocket plume interaction is proposed and tested. The erosion rate is proportional to the product of the second moment of the lofted particle size distribution N(D), and third moment of the normalized soil size distribution S(D), divided by the integral of S(D)·D²/v(D), where D is particle diameter and v(D) is the vertical component of particle velocity. The second moment of N(D) is estimated by optical extinction analysis of the Apollo cockpit video. Because of the similarity between mass erosion rate of soil as measured by optical extinction and rainfall rate as measured by radar reflectivity, traditional NWS radar/rainfall correlation methodology can be applied to the lunar soil case where various S(D) models are assumed corresponding to specific lunar sites.
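
    One plausible reading of the proportionality described in words above, written out with the symbols defined in the abstract (this is a restatement for clarity, not an equation quoted from the paper):

      \dot{m} \;\propto\; \frac{\displaystyle \int N(D)\,D^{2}\,dD \;\int S(D)\,D^{3}\,dD}{\displaystyle \int \frac{S(D)\,D^{2}}{v(D)}\,dD}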

  5. Comparing Evapotranspiration Rates Estimated from Atmospheric Flux and TDR Soil Moisture Measurements

    DEFF Research Database (Denmark)

    Schelde, Kirsten; Ringgaard, Rasmus; Herbst, Mathias

    2011-01-01

    limit estimate (disregarding dew evaporation) of evapotranspiration on dry days. During a period of 7 wk, the two independent measuring techniques were applied in a barley (Hordeum vulgare L.) field, and six dry periods were identified. Measurements of daily root zone soil moisture depletion were...

  6. Estimation of sex from the anthropometric ear measurements of a Sudanese population.

    Science.gov (United States)

    Ahmed, Altayeb Abdalla; Omer, Nosyba

    2015-09-01

    The external ear and its prints have multifaceted roles in medico-legal practice, e.g., identification and facial reconstruction. Furthermore, its norms are essential in the diagnosis of congenital anomalies and the design of hearing aids. Body part dimensions vary in different ethnic groups, so the most accurate statistical estimations of biological attributes are developed using population-specific standards. Sudan lacks comprehensive data about ear norms; moreover, there is a universal rarity in assessing the possibility of sex estimation from ear dimensions using robust statistical techniques. Therefore, this study attempts to establish data for normal adult Sudanese Arabs, assessing the existence of asymmetry and developing a population-specific equation for sex estimation. The study sample comprised 200 healthy Sudanese Arab volunteers (100 males and 100 females) in the age range of 18–30 years. The physiognomic ear length and width, lobule length and width, and conchal length and width measurements were obtained by direct anthropometry, using a digital sliding caliper. Moreover, indices and asymmetry were assessed. Data were analyzed using basic descriptive statistics and discriminant function analyses employing jackknife validations of classification results. All linear dimensions used were sexually dimorphic except lobular lengths. Some of the variables and indices show asymmetry. Ear dimensions showed cross-validated sex classification accuracy ranging between 60.5% and 72%. Hence, the ear measurements cannot be used as an effective tool in the estimation of sex. However, in the absence of other more reliable means, it still can be considered a supportive trait in sex estimation. Further, asymmetry should be considered in identification from the ear measurements. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  7. Evaluating uncertainty estimates in hydrologic models: borrowing measures from the forecast verification community

    Directory of Open Access Journals (Sweden)

    K. J. Franz

    2011-11-01

    Full Text Available The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community is still performing model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we overview existing probabilistic forecast verification methods and apply the methods to evaluate and compare model ensembles produced from two different parameter uncertainty estimation methods: the Generalized Likelihood Uncertainty Estimation (GLUE) and the Shuffled Complex Evolution Metropolis (SCEM). Model ensembles are generated for the National Weather Service SACramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the Southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the presented probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of model performance associated with parameter uncertainty. Application of these methods requires no information in addition to what is already available as part of traditional model validation methodology and considers the entire ensemble or uncertainty range in the approach.

  8. Estimating the measurement uncertainty in forensic blood alcohol analysis.

    Science.gov (United States)

    Gullberg, Rod G

    2012-04-01

    For many reasons, forensic toxicologists are being asked to determine and report their measurement uncertainty in blood alcohol analysis. While understood conceptually, the elements and computations involved in determining measurement uncertainty are generally foreign to most forensic toxicologists. Several established and well-documented methods are available to determine and report the uncertainty in blood alcohol measurement. A straightforward bottom-up approach is presented that includes: (1) specifying the measurand, (2) identifying the major components of uncertainty, (3) quantifying the components, (4) statistically combining the components and (5) reporting the results. A hypothetical example is presented that employs reasonable estimates for forensic blood alcohol analysis assuming headspace gas chromatography. These computations are easily employed in spreadsheet programs as well. Determining and reporting measurement uncertainty is an important element in establishing fitness-for-purpose. Indeed, the demand for such computations and information from the forensic toxicologist will continue to increase.
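
    Steps (3) and (4) of the bottom-up approach amount to quantifying each component as a relative standard uncertainty, combining them in quadrature and multiplying by a coverage factor; the components and numbers in the sketch below are hypothetical, not recommended values for any laboratory.

      import math

      components = {                              # relative standard uncertainties
          "calibrator reference value": 0.004,
          "method reproducibility":     0.015,
          "dilution / pipetting":       0.006,
          "sampling / homogeneity":     0.008,
      }

      u_combined = math.sqrt(sum(u ** 2 for u in components.values()))   # root-sum-square
      U_expanded = 2.0 * u_combined               # coverage factor k = 2 (~95 % coverage)

      result = 0.142                              # measured BAC, g/100 mL (hypothetical)
      print(f"{result:.3f} g/100 mL +/- {U_expanded * result:.3f} g/100 mL (k = 2)")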

  9. Water storage change estimation from in situ shrinkage measurements of clay soils

    NARCIS (Netherlands)

    Brake, te B.; Ploeg, van der M.J.; Rooij, de G.H.

    2013-01-01

    The objective of this study is to assess the applicability of clay soil elevation change measurements to estimate soil water storage changes, using a simplified approach. We measured moisture contents in aggregates by EC-5 sensors, and in multiple aggregate and inter-aggregate spaces (bulk soil) by

  10. Measurement and estimation of maximum skin dose to the patient for different interventional procedures

    International Nuclear Information System (INIS)

    Cheng Yuxi; Liu Lantao; Wei Kedao; Yu Peng; Yan Shulin; Li Tianchang

    2005-01-01

    Objective: To determine the dose distribution and maximum skin dose to the patient for four interventional procedures: coronary angiography (CA), hepatic angiography (HA), radiofrequency ablation (RF) and cerebral angiography (CAG), and to estimate the definitive effect of radiation on skin. Methods: Skin dose was measured using LiF: Mg, Cu, P TLD chips. A total of 9 measuring points were chosen on the back of the patient with two TLDs placed at each point for the CA, HA and RF interventional procedures, whereas during the CAG procedure two TLDs were placed at one point each on the postero-anterior (PA) and lateral (LAT) sides, respectively. Results: The maximum skin dose to the patient was 1683.91 mGy for the HA procedure with a mean value of 607.29 mGy. The maximum skin dose at the PA point was 959.3 mGy for the CAG with a mean value of 418.79 mGy, while the maximum and the mean doses at the LAT point were 704 mGy and 191.52 mGy, respectively. For the RF procedure the maximum dose was 853.82 mGy and the mean was 219.67 mGy. For the CA procedure the maximum dose was 456.1 mGy and the mean was 227.63 mGy. Conclusion: The dose values measured in this study are only estimates and cannot provide the exact maximum skin dose, because it is impractical to cover the skin with a very large number of TLDs; as the dose distribution is continuous, a small area of skin exposed to a high dose could therefore be missed. (authors)

  11. Can administrative health utilisation data provide an accurate diabetes prevalence estimate for a geographical region?

    Science.gov (United States)

    Chan, Wing Cheuk; Papaconstantinou, Dean; Lee, Mildred; Telfer, Kendra; Jo, Emmanuel; Drury, Paul L; Tobias, Martin

    2018-05-01

    To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory results and to develop an improved algorithm for estimating diabetes prevalence at a population level. The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory results stored in the Auckland regional laboratory result repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive predictive value of the each of the VDR algorithm rules individually and as a combination. The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive predictive value of 76% and negative predictive value of 98%. The modified VDR algorithm has improved the positive predictive value by 6.1% and the specificity by 1.4% with modest reductions in sensitivity of 2.2% and negative predictive value of 0.3%. At an aggregated level the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe. The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more accurate diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long term condition register constructed from both laboratory results and administrative data. Copyright © 2018 Elsevier B.V. All rights reserved.
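
    The four validity statistics quoted above follow from a simple 2 x 2 cross-tabulation of register-assigned against laboratory-based diabetes status; the sketch below shows the arithmetic with invented counts, not the study's data.

      def diagnostic_metrics(tp, fp, fn, tn):
          """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv":         tp / (tp + fp),
              "npv":         tn / (tn + fn),
          }

      # hypothetical register-vs-laboratory counts
      metrics = diagnostic_metrics(tp=82_000, fp=26_000, fn=10_500, tn=1_050_000)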

  12. Relationship between parental estimate and an objective measure of child television watching

    Directory of Open Access Journals (Sweden)

    Roemmich James N

    2006-11-01

    Full Text Available Abstract Many young children have televisions in their bedrooms, which may influence the relationship between parental estimate and objective measures of child television usage/week. Parental estimates of child television time of eighty 4–7 year old children (6.0 ± 1.2 years) at the 75th BMI percentile or greater (90.8 ± 6.8 BMI percentile) were compared to an objective measure of television time obtained from TV Allowance™ devices attached to every television in the home over a three week period. Results showed that parents overestimate their child's television time compared to an objective measure when no television is present in the bedroom by 4 hours/week (25.4 ± 11.5 vs. 21.4 ± 9.1) in comparison to underestimating television time by over 3 hours/week (26.5 ± 17.2 vs. 29.8 ± 14.4) when the child has a television in their bedroom (p = 0.02). Children with a television in their bedroom spend more objectively measured hours in television time than children without a television in their bedroom (29.8 ± 14.2 versus 21.4 ± 9.1, p = 0.003). Research on child television watching should take into account television watching in bedrooms, since it may not be adequately assessed by parental estimates.

  13. Using marginal structural measurement-error models to estimate the long-term effect of antiretroviral therapy on incident AIDS or death.

    Science.gov (United States)

    Cole, Stephen R; Jacobson, Lisa P; Tien, Phyllis C; Kingsley, Lawrence; Chmiel, Joan S; Anastos, Kathryn

    2010-01-01

    To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus-positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding.

  14. Estimating Contact Exposure in Football Using the Head Impact Exposure Estimate

    OpenAIRE

    Kerr, Zachary Y.; Littleton, Ashley C.; Cox, Leah M.; DeFreese, J.D.; Varangis, Eleanna; Lynall, Robert C.; Schmidt, Julianne D.; Marshall, Stephen W.; Guskiewicz, Kevin M.

    2015-01-01

    Over the past decade, there has been significant debate regarding the effect of cumulative subconcussive head impacts on short and long-term neurological impairment. This debate remains unresolved, because valid epidemiological estimates of athletes' total contact exposure are lacking. We present a measure to estimate the total hours of contact exposure in football over the majority of an athlete's lifespan. Through a structured oral interview, former football players provided information rel...

  15. Global Precipitation Measurement (GPM) Core Observatory Falling Snow Estimates

    Science.gov (United States)

    Skofronick Jackson, G.; Kulie, M.; Milani, L.; Munchak, S. J.; Wood, N.; Levizzani, V.

    2017-12-01

    Retrievals of falling snow from space represent an important data set for understanding and linking the Earth's atmospheric, hydrological, and energy cycles. Estimates of falling snow must be captured to obtain the true global precipitation water cycle, snowfall accumulations are required for hydrological studies, and without knowledge of the frozen particles in clouds one cannot adequately understand the energy and radiation budgets. This work focuses on comparing the first stable falling snow retrieval products (released May 2017) for the Global Precipitation Measurement (GPM) Core Observatory (GPM-CO), which was launched February 2014, and carries both an active dual frequency (Ku- and Ka-band) precipitation radar (DPR) and a passive microwave radiometer (GPM Microwave Imager-GMI). Five separate GPM-CO falling snow retrieval algorithm products are analyzed including those from DPR Matched (Ka+Ku) Scan, DPR Normal Scan (Ku), DPR High Sensitivity Scan (Ka), combined DPR+GMI, and GMI. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new, the different on-orbit instruments don't capture all snow rates equally, and retrieval algorithms differ. Thus a detailed comparison among the GPM-CO products elucidates advantages and disadvantages of the retrievals. GPM and CloudSat global snowfall evaluation exercises are natural investigative pathways to explore, but caution must be undertaken when analyzing these datasets for comparative purposes. This work includes outlining the challenges associated with comparing GPM-CO to CloudSat satellite snow estimates due to the different sampling, algorithms, and instrument capabilities. We will highlight some factors and assumptions that can be altered or statistically normalized and applied in an effort to make comparisons between GPM and CloudSat global satellite falling snow products as equitable as possible.

  16. Measuring Cross-Section and Estimating Uncertainties with the fissionTPC

    Energy Technology Data Exchange (ETDEWEB)

    Bowden, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manning, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sangiorgio, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Seilhan, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-30

    The purpose of this document is to outline the prescription for measuring fission cross-sections with the NIFFTE fissionTPC and estimating the associated uncertainties. As such it will serve as a work planning guide for NIFFTE collaboration members and facilitate clear communication of the procedures used to the broader community.

  17. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  18. Power system observability and dynamic state estimation for stability monitoring using synchrophasor measurements

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Kai; Qi, Junjian; Kang, Wei

    2016-08-01

    Growing penetration of intermittent resources such as renewable generations increases the risk of instability in a power grid. This paper introduces the concept of observability and its computational algorithms for a power grid monitored by the wide-area measurement system (WAMS) based on synchrophasors, e.g. phasor measurement units (PMUs). The goal is to estimate real-time states of generators, especially for potentially unstable trajectories, the information that is critical for the detection of rotor angle instability of the grid. The paper studies the number and siting of synchrophasors in a power grid so that the state of the system can be accurately estimated in the presence of instability. An unscented Kalman filter (UKF) is adopted as a tool to estimate the dynamic states that are not directly measured by synchrophasors. The theory and its computational algorithms are illustrated in detail by using a 9-bus 3-generator power system model and then tested on a 140-bus 48-generator Northeast Power Coordinating Council power grid model. Case studies on those two systems demonstrate the performance of the proposed approach using a limited number of synchrophasors for dynamic state estimation for stability assessment and its robustness against moderate inaccuracies in model parameters.

  19. A Mixed WLS Power System State Estimation Method Integrating a Wide-Area Measurement System and SCADA Technology

    Directory of Open Access Journals (Sweden)

    Tao Jin

    2018-02-01

    Full Text Available To address the issue that the phasor measurement units (PMUs) of the wide area measurement system (WAMS) are not sufficient for static state estimation in most existing power systems, this paper proposes a mixed power system weighted least squares (WLS) state estimation method integrating a wide-area measurement system and supervisory control and data acquisition (SCADA) technology. The hybrid calculation model is established by incorporating phasor measurements (including the node voltage phasors and branch current phasors) and the results of the traditional state estimator in a post-processing estimator. The performance assessment is discussed through setting up mathematical models of the distribution network. Based on PMU placement optimization and bias analysis, the proposed method was shown to be accurate and reliable in simulations of different cases. Furthermore, the simulated calculations show this method greatly improves the accuracy and stability of the state estimation solution, compared with the traditional WLS state estimation.
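
    The WLS building block referred to above solves the normal equations of a weighted linear(ized) measurement model; a minimal sketch, with an invented two-state example mixing measurements of different accuracies, is given below (it is not the paper's hybrid PMU/SCADA estimator).

      import numpy as np

      def wls_state_estimate(H, z, sigmas):
          """Solve (H^T W H) x = H^T W z with W = diag(1/sigma^2)."""
          W = np.diag(1.0 / np.asarray(sigmas) ** 2)
          G = H.T @ W @ H                   # gain matrix
          return np.linalg.solve(G, H.T @ W @ z)

      H = np.array([[1.0,  0.0],
                    [0.0,  1.0],
                    [1.0, -1.0]])
      z = np.array([1.02, 0.97, 0.04])
      x_hat = wls_state_estimate(H, z, sigmas=[0.01, 0.01, 0.02])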

  20. Using "1"3"7Cs measurements to estimate soil erosion rates in the Pčinja and South Morava River Basins, southeastern Serbia

    International Nuclear Information System (INIS)

    Petrović, Jelena; Dragović, Snežana; Dragović, Ranko; Đorđević, Milan; Đokić, Mrđan; Zlatković, Bojan; Walling, Desmond

    2016-01-01

    The need for reliable assessments of soil erosion rates in Serbia has directed attention to the potential for using 137Cs measurements to derive estimates of soil redistribution rates. Since, to date, this approach has not been applied in southeastern Serbia, a reconnaissance study was undertaken to confirm its viability. The need to take account of the occurrence of substantial Chernobyl fallout was seen as a potential problem. Samples for 137Cs measurement were collected from a zone of uncultivated soils in the watersheds of the Pčinja and South Morava Rivers, an area with known high soil erosion rates. Two theoretical conversion models, the profile distribution (PD) model and the diffusion and migration (D&M) model, were used to derive estimates of soil erosion and deposition rates from the 137Cs measurements. The estimates of soil redistribution rates derived by using the PD and D&M models were found to differ substantially, and this difference was ascribed to the assumptions of the simpler PD model that cause it to overestimate rates of soil loss. The results provided by the D&M model were judged to be more reliable. - Highlights: • 137Cs measurements are employed to estimate soil erosion and deposition rates in southeastern Serbia. • Estimates of annual soil loss by the profile distribution (PD) and diffusion and migration (D&M) models differ significantly. • Differences were ascribed to the assumptions of the simpler PD model, which cause it to overestimate rates of soil loss. • The study confirmed the potential for using 137Cs measurements to estimate soil erosion rates in Serbia.
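
    As a rough illustration of how a profile distribution (PD) type model converts a 137Cs inventory deficit into an erosion rate, the sketch below uses the form commonly quoted in the 137Cs conversion-model literature; the relaxation depth, reference inventory and sampling-point inventory are hypothetical placeholders, not values from this study.

```python
import numpy as np

def pd_model_erosion_rate(A_sample, A_ref, h0, t, P=1.0):
    """Profile distribution (PD) model for uncultivated soil (form commonly
    quoted in the 137Cs literature; parameter values here are illustrative).

    A_sample : measured 137Cs inventory at the sampling point (Bq m-2)
    A_ref    : local reference inventory (Bq m-2)
    h0       : profile shape (relaxation mass depth) factor (kg m-2)
    t        : sampling year
    P        : particle size correction factor (dimensionless)

    Returns the mean annual soil loss in t ha-1 yr-1 (positive = erosion).
    """
    X = 100.0 * (A_ref - A_sample) / A_ref          # % inventory reduction
    if X <= 0:
        raise ValueError("point shows deposition; PD erosion form not applicable")
    # mass depth removed since 1963, converted to an annual rate
    # (1 kg m-2 yr-1 = 10 t ha-1 yr-1)
    return 10.0 / ((t - 1963) * P) * h0 * (-np.log(1.0 - X / 100.0))

# Hypothetical numbers, for illustration only
print(pd_model_erosion_rate(A_sample=1800.0, A_ref=2500.0, h0=4.0, t=2014))
```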

  1. Automated procedure for volumetric measurement of metastases. Estimation of tumor burden

    International Nuclear Information System (INIS)

    Fabel, M.; Bolte, H.

    2008-01-01

    Cancer is a common and increasing disease worldwide. Therapy monitoring in oncologic patient care requires accurate and reliable measurement methods for evaluating the tumor burden. RECIST (response evaluation criteria in solid tumors) and WHO criteria are still the current standards for therapy response evaluation, with inherent disadvantages due to considerable interobserver variation in the manual diameter estimations. Volumetric analysis of, e.g., lung, liver and lymph node metastases promises to be a more accurate, precise and objective method for tumor burden estimation. (orig.) [de]

  2. Using (137)Cs measurements to estimate soil erosion rates in the Pčinja and South Morava River Basins, southeastern Serbia.

    Science.gov (United States)

    Petrović, Jelena; Dragović, Snežana; Dragović, Ranko; Đorđević, Milan; Đokić, Mrđan; Zlatković, Bojan; Walling, Desmond

    2016-07-01

    The need for reliable assessments of soil erosion rates in Serbia has directed attention to the potential for using (137)Cs measurements to derive estimates of soil redistribution rates. Since, to date, this approach has not been applied in southeastern Serbia, a reconnaissance study was undertaken to confirm its viability. The need to take account of the occurrence of substantial Chernobyl fallout was seen as a potential problem. Samples for (137)Cs measurement were collected from a zone of uncultivated soils in the watersheds of the Pčinja and South Morava Rivers, an area with known high soil erosion rates. Two theoretical conversion models, the profile distribution (PD) model and the diffusion and migration (D&M) model, were used to derive estimates of soil erosion and deposition rates from the (137)Cs measurements. The estimates of soil redistribution rates derived by using the PD and D&M models were found to differ substantially, and this difference was ascribed to the assumptions of the simpler PD model that cause it to overestimate rates of soil loss. The results provided by the D&M model were judged to be more reliable. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information

    International Nuclear Information System (INIS)

    Burr, T.; Croft, S.; Krieger, T.; Martin, K.; Norman, C.; Walsh, S.

    2016-01-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors
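
    The sketch below illustrates, under simplifying assumptions, the kind of calculation involved in case (3), classical regression followed by inversion with negligible error in predictors: a linear calibration is fitted to standards, inverted for a new response, and the calibration covariance is propagated to the inferred value by the delta method. It is a generic textbook construction, not the refined estimator developed in the paper.

```python
import numpy as np

# Minimal sketch (not the paper's exact derivation): fit a classical linear
# calibration y = a + b*x to standards, invert it for an unknown item, and
# propagate calibration and measurement uncertainty to the inferred x.
def calibrate(x_std, y_std):
    X = np.column_stack([np.ones_like(x_std), x_std])
    beta, res, *_ = np.linalg.lstsq(X, y_std, rcond=None)
    dof = len(x_std) - 2
    s2 = res[0] / dof                                # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)                # covariance of (a, b)
    return beta, cov, s2

def invert(y_new, beta, cov, s2):
    a, b = beta
    x_hat = (y_new - a) / b
    # delta-method propagation using gradients of x w.r.t. (a, b)
    g = np.array([-1.0 / b, -(y_new - a) / b**2])
    var_x = s2 / b**2 + g @ cov @ g                  # new-measurement + calibration terms
    return x_hat, np.sqrt(var_x)

rng = np.random.default_rng(1)
x_std = np.linspace(0.0, 10.0, 8)                    # hypothetical standards
y_std = 2.0 + 1.5 * x_std + rng.normal(0, 0.1, x_std.size)
beta, cov, s2 = calibrate(x_std, y_std)
print(invert(8.5, beta, cov, s2))                    # estimate and 1-sigma uncertainty
```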

  4. Effects of Measurement Errors on Individual Tree Stem Volume Estimates for the Austrian National Forest Inventory

    Science.gov (United States)

    Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens. Schadauer

    2014-01-01

    National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...

  5. Block volume estimation from the discontinuity spacing measurements of Mesozoic limestone quarries, Karaburun Peninsula, Turkey.

    Science.gov (United States)

    Elci, Hakan; Turk, Necdet

    2014-01-01

    Block volumes are generally estimated by analyzing the discontinuity spacing measurements obtained either from scan lines placed over rock exposures or from borehole cores. Discontinuity spacing measurements made at the Mesozoic limestone quarries in the Karaburun Peninsula were used to estimate the average block volumes that could be produced from them using the methods suggested in the literature. The Block Quality Designation (BQD) ratio method proposed by the authors was found to give block volumes of the same order as the volumetric joint count (J(v)) method. Moreover, the dimensions of the 2378 blocks produced between 2009 and 2011 in the working quarries were recorded. Assuming that each block surface is a discontinuity, the mean block volume (V(b)), the mean volumetric joint count (J(vb)) and the mean block shape factor of the blocks were determined and compared with the mean in situ block volume (V(in)) and volumetric joint count (J(vi)) values estimated from the in situ discontinuity measurements. The established relations are presented as a chart to be used in practice for estimating the mean volume of blocks that can be obtained from a quarry site by analyzing the rock mass discontinuity spacing measurements.

  6. Block Volume Estimation from the Discontinuity Spacing Measurements of Mesozoic Limestone Quarries, Karaburun Peninsula, Turkey

    Directory of Open Access Journals (Sweden)

    Hakan Elci

    2014-01-01

    Full Text Available Block volumes are generally estimated by analyzing the discontinuity spacing measurements obtained either from scan lines placed over rock exposures or from borehole cores. Discontinuity spacing measurements made at the Mesozoic limestone quarries in the Karaburun Peninsula were used to estimate the average block volumes that could be produced from them using the methods suggested in the literature. The Block Quality Designation (BQD) ratio method proposed by the authors was found to give block volumes of the same order as the volumetric joint count (Jv) method. Moreover, the dimensions of the 2378 blocks produced between 2009 and 2011 in the working quarries were recorded. Assuming that each block surface is a discontinuity, the mean block volume (Vb), the mean volumetric joint count (Jvb) and the mean block shape factor of the blocks were determined and compared with the mean in situ block volume (Vin) and volumetric joint count (Jvi) values estimated from the in situ discontinuity measurements. The established relations are presented as a chart to be used in practice for estimating the mean volume of blocks that can be obtained from a quarry site by analyzing the rock mass discontinuity spacing measurements.
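
    For orientation, the sketch below shows one common way such quantities are computed: the volumetric joint count Jv as the sum of the reciprocals of the mean joint-set spacings, and a mean block volume approximated as Vb = beta * Jv^-3 with a block shape factor beta. The spacings and the value beta = 36, often quoted for roughly equidimensional blocks, are illustrative assumptions rather than values from this study.

```python
# Illustrative sketch only: neither the joint-set spacings nor the shape
# factor below come from the abstract; they are placeholders.
def volumetric_joint_count(mean_spacings_m):
    """Jv (joints per metre) as the sum of reciprocal mean joint-set spacings."""
    return sum(1.0 / s for s in mean_spacings_m)

def mean_block_volume(jv, beta=36.0):
    """Approximate mean block volume Vb = beta * Jv**-3 (m^3)."""
    return beta * jv**-3.0

spacings = [0.8, 1.1, 1.5]        # hypothetical joint-set spacings (m)
jv = volumetric_joint_count(spacings)
print(f"Jv = {jv:.2f} m^-1, Vb ~ {mean_block_volume(jv):.2f} m^3")
```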

  7. NUMATH: a nuclear material holdup estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.

    1982-01-01

    NUMATH provides inventory estimation by utilizing previous inventory measurements, operating data, and, where available, on-line process measurements. For the present time, NUMATH's purpose is to provide a reasonable, near-real-time estimate of material inventory until accurate inventory determination can be obtained from chemical analysis. Ultimately, it is intended that NUMATH will further utilize on-line analyzers and more advanced calculational techniques to provide more accurate inventory determinations and estimates

  8. Estimation of release of tritium from measurements of air concentrations in reactor building of PHWR

    International Nuclear Information System (INIS)

    Purohit, R.G.; Sarkar, P.K.

    2010-01-01

    In this paper an attempt has been made to estimate tritium releases from measured air concentrations of tritium at various locations in the Reactor Building (RB). Design data of the Kaiga Generating Station and sample measurements of tritium concentrations at various locations in the RB and discharges over a fortnight were used. A comparison has also been made with actual measurements. It was observed that the estimated and actual measurements of tritium release agree well on some days, while on others the difference is large.

  9. Estimation of the thermal diffusion coefficient in fusion plasmas taking frequency measurement uncertainties into account

    International Nuclear Information System (INIS)

    Van Berkel, M; Hogeweij, G M D; Van den Brand, H; De Baar, M R; Zwart, H J; Vandersteen, G

    2014-01-01

    In this paper, the estimation of the thermal diffusivity from perturbative experiments in fusion plasmas is discussed. The measurements used to estimate the thermal diffusivity suffer from stochastic noise. Accurate estimation of the thermal diffusivity should take this into account. It will be shown that formulas found in the literature often result in a thermal diffusivity that has a bias (a difference between the estimated value and the actual value that remains even if more measurements are added) or have an unnecessarily large uncertainty. This will be shown by modeling a plasma using only diffusion as heat transport mechanism and measurement noise based on ASDEX Upgrade measurements. The Fourier coefficients of a temperature perturbation will exhibit noise from the circular complex normal distribution (CCND). Based on Fourier coefficients distributed according to a CCND, it is shown that the resulting probability density function of the thermal diffusivity is an inverse non-central chi-squared distribution. The thermal diffusivity that is found by sampling this distribution will always be biased, and averaging of multiple estimated diffusivities will not necessarily improve the estimation. Confidence bounds are constructed to illustrate the uncertainty in the diffusivity using several formulas that are equivalent in the noiseless case. Finally, a different method of averaging, that reduces the uncertainty significantly, is suggested. The methodology is also extended to the case where damping is included, and it is explained how to include the cylindrical geometry. (paper)

  10. Crustal composition in the Hidaka Metamorphic Belt estimated from seismic velocity by laboratory measurements

    Science.gov (United States)

    Yamauchi, K.; Ishikawa, M.; Sato, H.; Iwasaki, T.; Toyoshima, T.

    2015-12-01

    To understand the dynamics of the lithosphere in subduction systems, knowledge of rock composition is essential. However, the rock composition of the overriding plate is still poorly understood. To estimate the rock composition of the lithosphere, an effective method is to compare the elastic wave velocities measured under high pressure and temperature conditions with the seismic velocities obtained by active-source experiments and earthquake observation. Due to an arc-arc collision in central Hokkaido, middle to lower crust is exposed along the Hidaka Metamorphic Belt (HMB), providing exceptional opportunities to study the crustal composition of an island arc. Across the HMB, a P-wave velocity model has been constructed by refraction/wide-angle reflection seismic profiling (Iwasaki et al., 2004). Furthermore, because of the interpretation of the crustal structure (Ito, 2000), we can follow a continuous path from the surface to the middle-lower crust. We collected representative rock samples from the HMB and measured ultrasonic P-wave (Vp) and S-wave (Vs) velocities under pressures up to 1.0 GPa in a temperature range from 25 to 400 °C. For example, the Vp values measured at 25 °C and 0.5 GPa are 5.88 km/s for the granite (74.29 wt.% SiO2), 6.02-6.34 km/s for the tonalites (66.31-68.92 wt.% SiO2), 6.34 km/s for the gneiss (64.69 wt.% SiO2), 6.41-7.05 km/s for the amphibolites (50.06-51.13 wt.% SiO2), and 7.42 km/s for the mafic granulite (50.94 wt.% SiO2). The Vp of the tonalites showed a correlation with SiO2 (wt.%). Comparing with the velocity profiles across the HMB (Iwasaki et al., 2004), we estimate that the lower to middle crust consists of amphibolite and tonalite, and the estimated acoustic impedance contrast between them suggests the existence of a clear reflective boundary, which accords well with the obtained seismic reflection profile (Iwasaki et al., 2014). We obtain the same tendency from comparing the measured Vp/Vs ratio with the Vp/Vs ratio structure model

  11. Effect of large weight reductions on measured and estimated kidney function

    DEFF Research Database (Denmark)

    von Scholten, Bernt Johan; Persson, Frederik; Svane, Maria S

    2017-01-01

    GFR (creatinine-based equations), whereas measured GFR (mGFR) and cystatin C-based eGFR would be unaffected if adjusted for body surface area. METHODS: Prospective, intervention study including 19 patients. All attended a baseline visit before gastric bypass surgery followed by a visit six months post-surgery. m...... for body surface area was unchanged. Estimates of GFR based on creatinine overestimate renal function likely due to changes in muscle mass, whereas cystatin C based estimates are unaffected. TRIAL REGISTRATION: ClinicalTrials.gov, NCT02138565 . Date of registration: March 24, 2014....

  12. Estimating Contact Exposure in Football Using the Head Impact Exposure Estimate.

    Science.gov (United States)

    Kerr, Zachary Y; Littleton, Ashley C; Cox, Leah M; DeFreese, J D; Varangis, Eleanna; Lynall, Robert C; Schmidt, Julianne D; Marshall, Stephen W; Guskiewicz, Kevin M

    2015-07-15

    Over the past decade, there has been significant debate regarding the effect of cumulative subconcussive head impacts on short- and long-term neurological impairment. This debate remains unresolved because valid epidemiological estimates of athletes' total contact exposure are lacking. We present a measure to estimate the total hours of contact exposure in football over the majority of an athlete's lifespan. Through a structured oral interview, former football players provided information related to primary position played and participation in games and practice contacts during the pre-season, regular season, and post-season of each year of their high school, college, and professional football careers. Spring football for college was also included. We calculated contact exposure estimates for 64 former football players (n = 32 college football only, n = 32 professional and college football). The head impact exposure estimate (HIEE) discriminated between individuals who stopped after college football and individuals who played professional football (p < 0.001). The HIEE measure was independent of concussion history (p = 0.82). Estimating total hours of contact exposure may allow for the detection of differences between individuals with variation in subconcussive impacts, regardless of concussion history. This measure is valuable for the surveillance of subconcussive impacts and their associated potential negative effects.

  13. Estimation of the POD function and the LOD of a qualitative microbiological measurement method.

    Science.gov (United States)

    Wilrich, Cordula; Wilrich, Peter-Theodor

    2009-01-01

    Qualitative microbiological measurement methods in which the measurement results are either 0 (microorganism not detected) or 1 (microorganism detected) are discussed. The performance of such a measurement method is described by its probability of detection as a function of the contamination (CFU/g or CFU/mL) of the test material, or by the LOD(p), i.e., the contamination that is detected (measurement result 1) with a specified probability p. A complementary log-log model was used to statistically estimate these performance characteristics. An intralaboratory experiment for the detection of Listeria monocytogenes in various food matrixes illustrates the method. The estimate of LOD50% is compared with the Spearman-Kaerber method.
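
    The sketch below shows one plausible way to fit such a complementary log-log model by maximum likelihood and to invert it for LOD(p); the assumed model form POD(c) = 1 - exp(-exp(a + b*ln c)) and the simulated spiking data are illustrative and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of a complementary log-log POD model (assumed form, not the paper's code):
# POD(c) = 1 - exp(-exp(a + b*ln(c))), fitted by maximum likelihood to 0/1
# detection results y at contamination levels c (CFU/g). LOD_p follows by inversion.
def neg_log_lik(params, c, y):
    a, b = params
    eta = a + b * np.log(c)
    p = 1.0 - np.exp(-np.exp(eta))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1.0 - p))

def lod(p, a, b):
    """Contamination detected with probability p under the fitted model."""
    return np.exp((np.log(-np.log(1.0 - p)) - a) / b)

# Hypothetical spiking experiment: 10 replicates at each of several levels
c = np.repeat([0.5, 1.0, 2.0, 4.0, 8.0], 10)
rng = np.random.default_rng(2)
true_p = 1.0 - np.exp(-np.exp(-0.5 + 1.2 * np.log(c)))
y = rng.binomial(1, true_p)

fit = minimize(neg_log_lik, x0=[0.0, 1.0], args=(c, y), method="Nelder-Mead")
a_hat, b_hat = fit.x
print("LOD50% ~", lod(0.50, a_hat, b_hat), "CFU/g")
```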

  14. Process measures or patient reported experience measures (PREMs) for comparing performance across providers? A study of measures related to access and continuity in Swedish primary care.

    Science.gov (United States)

    Glenngård, Anna H; Anell, Anders

    2018-01-01

    Aim: To study (a) the covariation between patient reported experience measures (PREMs) and registered process measures of access and continuity when ranking providers in a primary care setting, and (b) whether registered process measures or PREMs provided more or less information about potential linkages between levels of access and continuity and explaining variables. Access and continuity are important objectives in primary care. They can be measured through registered process measures or PREMs. These measures do not necessarily converge in terms of outcomes. Patient views are affected by factors not necessarily reflecting the quality of services. Results from surveys are often uncertain due to low response rates, particularly in vulnerable groups. Process measures, on the other hand, may be influenced by registration practices and are often easier to manipulate. With increased transparency and use of quality measures for management and governance purposes, knowledge about the pros and cons of using different measures to assess performance across providers is important. Four regression models were developed with registered process measures and PREMs of access and continuity as dependent variables. Independent variables were characteristics of providers as well as geographical location and degree of competition facing providers. Data were taken from two large Swedish county councils. Findings: Although ranking of providers is sensitive to the measure used, the results suggest that providers performing well with respect to one measure also tended to perform well with respect to the other. As process measures are easier and quicker to collect, they may be looked upon as the preferred option. PREMs were better than process measures when exploring factors that contributed to variation in performance across providers in our study; however, if the purpose of comparison is continuous learning and development of services, a combination of PREMs and

  15. Measurement-Based Transmission Line Parameter Estimation with Adaptive Data Selection Scheme

    DEFF Research Database (Denmark)

    Li, Changgang; Zhang, Yaping; Zhang, Hengxu

    2017-01-01

    Accurate parameters of transmission lines are critical for power system operation and control decision making. Transmission line parameter estimation based on measured data is an effective way to enhance the validity of the parameters. This paper proposes a multi-point transmission line parameter...

  16. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques

    International Nuclear Information System (INIS)

    Pontailler, J.-Y.; Hymus, G.J.; Drake, B.G.

    2003-01-01

    This study took place in an evergreen scrub oak ecosystem in Florida. Vegetation reflectance was measured in situ with a laboratory-made sensor in the red (640-665 nm) and near-infrared (750-950 nm) bands to calculate the normalized difference vegetation index (NDVI) and derive the leaf area index (LAI). LAI estimates from this technique were compared with two other nondestructive techniques, intercepted photosynthetically active radiation (PAR) and hemispherical photographs, in four contrasting 4-m² plots in February 2000 and two 4-m² plots in June 2000. We used Beer's law to derive LAI from PAR interception and gap fraction distribution to derive LAI from photographs. The plots were harvested manually after the measurements to determine a 'true' LAI value and to calculate a light extinction coefficient (k). The technique based on Beer's law was affected by a large variation of the extinction coefficient, owing to the larger impact of branches in winter when LAI was low. Hemispherical photographs provided satisfactory estimates, slightly overestimated in winter because of the impact of branches or underestimated in summer because of foliage clumping. NDVI provided the best fit, showing only saturation in the densest plot (LAI = 3.5). We conclude that in situ measurement of NDVI is an accurate and simple technique to nondestructively assess LAI in experimental plots or in crops if saturation remains acceptable. (author)
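
    A minimal sketch of the two simplest calculations behind the techniques compared here is given below: NDVI from red and near-infrared reflectances, and a Beer's-law inversion of PAR interception. The numerical inputs and the linear NDVI-to-LAI calibration are placeholders; the study's own fitted relationships are not reproduced.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from band reflectances."""
    return (nir - red) / (nir + red)

def lai_from_par(par_below, par_above, k):
    """Beer's law inversion: LAI = -ln(I/I0) / k, with k the extinction coefficient."""
    return -np.log(par_below / par_above) / k

def lai_from_ndvi(ndvi_value, a, b):
    """Simple empirical calibration LAI = a * NDVI + b (coefficients are site-specific)."""
    return a * ndvi_value + b

# Hypothetical inputs, for illustration only
print(ndvi(red=0.08, nir=0.42))                                  # ~0.68
print(lai_from_par(par_below=150.0, par_above=1400.0, k=0.6))    # ~3.7
```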

  17. Uncertainty in techno-economic estimates of cellulosic ethanol production due to experimental measurement uncertainty

    Directory of Open Access Journals (Sweden)

    Vicari Kristin J

    2012-04-01

    Full Text Available Abstract Background Cost-effective production of lignocellulosic biofuels remains a major financial and technical challenge at the industrial scale. A critical tool in biofuels process development is the techno-economic (TE) model, which calculates biofuel production costs using a process model and an economic model. The process model solves mass and energy balances for each unit, and the economic model estimates capital and operating costs from the process model based on economic assumptions. The process model inputs include experimental data on the feedstock composition and intermediate product yields for each unit. These experimental yield data are calculated from primary measurements. Uncertainty in these primary measurements is propagated to the calculated yields, to the process model, and ultimately to the economic model. Thus, outputs of the TE model have a minimum uncertainty associated with the uncertainty in the primary measurements. Results We calculate the uncertainty in the Minimum Ethanol Selling Price (MESP) estimate for lignocellulosic ethanol production via a biochemical conversion process: dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis and co-fermentation of the resulting sugars to ethanol. We perform a sensitivity analysis on the TE model and identify the feedstock composition and conversion yields from three unit operations (xylose from pretreatment, glucose from enzymatic hydrolysis, and ethanol from fermentation) as the most important variables. The uncertainty in the pretreatment xylose yield arises from multiple measurements, whereas the glucose and ethanol yields from enzymatic hydrolysis and fermentation, respectively, are dominated by a single measurement: the fraction of insoluble solids (fIS) in the biomass slurries. Conclusions We calculate a $0.15/gal uncertainty in MESP from the TE model due to uncertainties in primary measurements. This result sets a lower bound on the error bars of

  18. Estimating intrinsic and extrinsic noise from single-cell gene expression measurements

    Science.gov (United States)

    Fu, Audrey Qiuyan; Pachter, Lior

    2017-01-01

    Gene expression is stochastic and displays variation (“noise”) both within and between cells. Intracellular (intrinsic) variance can be distinguished from extracellular (extrinsic) variance by applying the law of total variance to data from two-reporter assays that probe expression of identically regulated gene pairs in single cells. We examine established formulas [Elowitz, M. B., A. J. Levine, E. D. Siggia and P. S. Swain (2002): “Stochastic gene expression in a single cell,” Science, 297, 1183–1186.] for the estimation of intrinsic and extrinsic noise and provide interpretations of them in terms of a hierarchical model. This allows us to derive alternative estimators that minimize bias or mean squared error. We provide a geometric interpretation of these results that clarifies the interpretation in [Elowitz, M. B., A. J. Levine, E. D. Siggia and P. S. Swain (2002): “Stochastic gene expression in a single cell,” Science, 297, 1183–1186.]. We also demonstrate through simulation and re-analysis of published data that the distribution assumptions underlying the hierarchical model have to be satisfied for the estimators to produce sensible results, which highlights the importance of normalization. PMID:27875323
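
    A small sketch of the established plug-in estimators referred to in the abstract is given below (the bias- and MSE-minimizing alternatives derived by the authors are not reproduced); the simulated two-reporter data are purely illustrative.

```python
import numpy as np

def elowitz_noise(c1, c2):
    """Plug-in two-reporter estimators of intrinsic and extrinsic noise (squared),
    in the form examined in the paper (Elowitz et al. 2002)."""
    m1, m2 = c1.mean(), c2.mean()
    eta_int2 = np.mean((c1 - c2) ** 2) / (2.0 * m1 * m2)     # intrinsic
    eta_ext2 = (np.mean(c1 * c2) - m1 * m2) / (m1 * m2)      # extrinsic
    return eta_int2, eta_ext2, eta_int2 + eta_ext2           # total

# Simulated two-reporter data: a shared (extrinsic) cell-to-cell factor plus
# independent (intrinsic) fluctuations for each reporter.
rng = np.random.default_rng(3)
extrinsic = rng.lognormal(mean=0.0, sigma=0.2, size=5000)
c1 = extrinsic * rng.lognormal(0.0, 0.1, 5000)
c2 = extrinsic * rng.lognormal(0.0, 0.1, 5000)
print(elowitz_noise(c1, c2))
```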

  19. Estimating Wet Bulb Globe Temperature Using Standard Meteorological Measurements

    International Nuclear Information System (INIS)

    Hunter, C.H.

    1999-01-01

    The heat stress management program at the Department of Energy's Savannah River Site (SRS) requires implementation of protective controls on outdoor work based on observed values of wet bulb globe temperature (WBGT). To ensure continued compliance with heat stress program requirements, a computer algorithm was developed which calculates an estimate of WBGT using standard meteorological measurements. In addition, scripts were developed to generate a calculation every 15 minutes and post the results to an Intranet web site.
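
    The SRS algorithm itself is not described in the abstract, so the sketch below only shows the standard outdoor WBGT weighting that such an algorithm ultimately evaluates, assuming the natural wet-bulb and globe temperatures have already been estimated from standard meteorological measurements; the input values are hypothetical.

```python
# Standard outdoor (sun-exposed) WBGT weighting. The natural wet-bulb (t_nwb)
# and globe (t_g) temperatures are assumed to have been estimated beforehand
# from temperature, humidity, wind and solar radiation data; that estimation
# step, which is what the SRS algorithm addresses, is not shown here.
def wbgt_outdoor(t_nwb, t_g, t_db):
    """WBGT (deg C) = 0.7*Tnwb + 0.2*Tg + 0.1*Tdb for sun-exposed work."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

# Hypothetical values (deg C)
print(wbgt_outdoor(t_nwb=24.5, t_g=45.0, t_db=33.0))   # ~29.5 deg C
```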

  20. Chapter 21: Estimating Net Savings - Common Practices. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Violette, Daniel M. [Navigant, Boulder, CO (United States); Rathbun, Pamela [Tetra Tech, Madison, WI (United States)

    2017-11-02

    This chapter focuses on the methods used to estimate net energy savings in evaluation, measurement, and verification (EM&V) studies for energy efficiency (EE) programs. The chapter provides a definition of net savings, which remains an unsettled topic both within the EE evaluation community and across the broader public policy evaluation community, particularly in the context of attribution of savings to a program. The chapter differs from the measure-specific Uniform Methods Project (UMP) chapters in both its approach and work product. Unlike other UMP resources that provide recommended protocols for determining gross energy savings, this chapter describes and compares current industry practices for determining net energy savings but does not prescribe methods.

  1. Estimation and prediction of convection-diffusion-reaction systems from point measurement

    NARCIS (Netherlands)

    Vries, D.

    2008-01-01

    Different procedures with respect to estimation and prediction of systems characterized by convection, diffusion and reactions on the basis of point measurement data have been studied. Two applications of these convection-diffusion-reaction (CDR) systems have been used as a case study of the

  2. Estimation of the terrestrial gamma-ray levels from car-borne measurements

    International Nuclear Information System (INIS)

    Badran, H.M.

    1998-01-01

    The place-to-place variation of gamma radiation has been measured. The terrestrial gamma-ray levels were obtained with a portable NaI(Tl) detector. Gamma-ray levels were measured inside a car over a distance of about 220 km, from Norman to Tulsa, Oklahoma, USA. Simultaneous measurements were also carried out outside the vehicle and at distances of 1 m and 5 m from the car. A series of data was collected every 1 mile (∼1.6 km). The measurements were also repeated at different times under different conditions. The measured car-borne levels were correlated with the equivalent outdoor levels at 1 m above flat ground. The result permits a good estimation of the outdoor gamma-ray levels from the car measurements after correcting for vehicle shielding.

  3. TP89 - SIRZ Decomposition Spectral Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Seetho, Isacc M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Azevedo, Steve [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, Jerel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, William D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martz, Jr., Harry E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-12-08

    The primary objective of this test plan is to provide X-ray CT measurements of known materials for the purposes of generating and testing MicroCT and EDS spectral estimates. These estimates are to be used in subsequent Ze/RhoE decomposition analyses of acquired data.

  4. Estimation of piping temperature fluctuations based on external strain measurements

    International Nuclear Information System (INIS)

    Morilhat, P.; Maye, J.P.

    1993-01-01

    Due to the difficulty of carrying out measurements on the inner side of nuclear reactor piping subjected to thermal transients, temperature and stress variations in the pipe walls are estimated by means of external thermocouples and strain gauges. This inverse problem is solved by spectral analysis. Since the wall harmonic transfer function (the response to a harmonic load) is known, the inner-side signal is obtained by convolving the inverse transfer function of the system with the external measurement. This use of the strain measurement enables detection of internal temperature fluctuations in a frequency range beyond the scope of the thermocouples. (authors). 5 figs., 3 refs

  5. Estimate of the uncertainty in measurement for the determination of mercury in seafood by TDA AAS.

    Science.gov (United States)

    Torres, Daiane Placido; Olivares, Igor R B; Queiroz, Helena Müller

    2015-01-01

    An approach is proposed for estimating the uncertainty in measurement for the determination of mercury in seafood by thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS), considering the individual sources related to the different steps of the method under evaluation as well as the uncertainties estimated from the validation data. The method has been fully optimized and validated in an official laboratory of the Ministry of Agriculture, Livestock and Food Supply of Brazil, in order to comply with national and international food regulations and quality assurance, and has been accredited under the ISO/IEC 17025 norm since 2010. The estimate of the uncertainty in measurement in the present work was based on six sources of uncertainty for mercury determination in seafood by TDA AAS, following the validation process: linear least squares regression, repeatability, intermediate precision, correction factor of the analytical curve, sample mass, and standard reference solution. Those that most influenced the uncertainty in measurement were sample mass, repeatability, intermediate precision and the calibration curve. The estimate of uncertainty in measurement obtained in the present work was 13.39%, which complies with European Regulation EC 836/2011. This figure represents a realistic estimate of routine conditions, since it fairly encompasses the dispersion obtained from the value attributed to the sample and the value measured by the laboratory analysts. From this outcome, it is possible to infer that the validation data (based on the calibration curve, recovery and precision), together with the variation in sample mass, can offer a proper estimate of uncertainty in measurement.
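
    The sketch below illustrates, in GUM style, how relative standard uncertainties from the six sources listed above could be combined in quadrature; the individual component values are placeholders for illustration, since only the combined 13.39% figure is reported in the abstract.

```python
import math

# GUM-style combination sketch: relative standard uncertainties from the six
# sources named in the abstract are combined in quadrature. The numerical
# values below are placeholders, not the paper's component budget.
components = {
    "calibration_curve_regression": 0.040,
    "repeatability":                0.030,
    "intermediate_precision":       0.030,
    "curve_correction_factor":      0.010,
    "sample_mass":                  0.035,
    "standard_reference_solution":  0.010,
}
u_combined = math.sqrt(sum(u**2 for u in components.values()))
U_expanded = 2.0 * u_combined          # k = 2 expansion, ~95% coverage
print(f"u_c = {100*u_combined:.2f}%  U(k=2) = {100*U_expanded:.2f}%")
```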

  6. Psychometrics of an original measure of barriers to providing family planning information: Implications for social service providers.

    Science.gov (United States)

    Bell, Melissa M; Newhill, Christina E

    2017-07-01

    Social service professionals can face challenges in the course of providing family planning information to their clients. This article reports findings from a study that developed an original 27-item measure, the Reproductive Counseling Obstacle Scale (RCOS) designed to measure such obstacles based conceptually on Bandura's social cognitive theory (1986). We examine the reliability and factor structure of the RCOS using a sample of licensed social workers (N = 197). A 20-item revised version of the RCOS was derived using principal component factor analysis. Results indicate that barriers to discussing family planning, as measured by the RCOS, appear to be best represented by a two-factor solution, reflecting self-efficacy/interest and perceived professional obligation/moral concerns. Implications for practice and future research are discussed.

  7. Precipitation Estimation Using Combined Radar/Radiometer Measurements Within the GPM Framework

    Science.gov (United States)

    Hou, Arthur

    2012-01-01

    satellite of JAXA, (3) the Multi-Frequency Microwave Scanning Radiometer (MADRAS) and the multi-channel microwave humidity sounder (SAPHIR) on the French-Indian Megha- Tropiques satellite, (4) the Microwave Humidity Sounder (MHS) on the National Oceanic and Atmospheric Administration (NOAA)-19, (5) MHS instruments on MetOp satellites launched by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), (6) the Advanced Technology Microwave Sounder (ATMS) on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP), and (7) ATMS instruments on the NOAA-NASA Joint Polar Satellite System (JPSS) satellites. Data from Chinese and Russian microwave radiometers may also become available through international collaboration under the auspices of the Committee on Earth Observation Satellites (CEOS) and Group on Earth Observations (GEO). The current generation of global rainfall products combines observations from a network of uncoordinated satellite missions using a variety of merging techniques. GPM will provide next-generation precipitation products characterized by: (1) more accurate instantaneous precipitation estimate (especially for light rain and cold-season solid precipitation), (2) intercalibrated microwave brightness temperatures from constellation radiometers within a consistent framework, and (3) unified precipitation retrievals from constellation radiometers using a common a priori hydrometeor database constrained by combined radar/radiometer measurements provided by the GPM Core Observatory.

  8. State estimation for large-scale wastewater treatment plants.

    Science.gov (United States)

    Busch, Jan; Elixmann, David; Kühl, Peter; Gerkens, Carine; Schlöder, Johannes P; Bock, Hans G; Marquardt, Wolfgang

    2013-09-01

    Many relevant process states in wastewater treatment are not measurable, or their measurements are subject to considerable uncertainty. This poses a serious problem for process monitoring and control. Model-based state estimation can provide estimates of the unknown states and increase the reliability of measurements. In this paper, an integrated approach is presented for the optimization-based sensor network design and the estimation problem. Using the ASM1 model in the reference scenario BSM1, a cost-optimal sensor network is designed and the prominent estimators EKF and MHE are evaluated. Very good estimation results are found for the system comprising 78 states, requiring sensor networks of only moderate complexity. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Estimating random transverse velocities in the fast solar wind from EISCAT Interplanetary Scintillation measurements

    Directory of Open Access Journals (Sweden)

    A. Canals

    2002-09-01

    Full Text Available Interplanetary scintillation measurements can yield estimates of a large number of solar wind parameters, including bulk flow speed, variation in bulk velocity along the observing path through the solar wind and random variation in transverse velocity. This last parameter is of particular interest, as it can indicate the flux of low-frequency Alfvén waves, and the dissipation of these waves has been proposed as an acceleration mechanism for the fast solar wind. Analysis of IPS data is, however, a significantly unresolved problem and a variety of a priori assumptions must be made in interpreting the data. Furthermore, the results may be affected by the physical structure of the radio source and by variations in the solar wind along the scintillation ray path. We have used observations of simple point-like radio sources made with EISCAT between 1994 and 1998 to obtain estimates of random transverse velocity in the fast solar wind. The results obtained with various a priori assumptions made in the analysis are compared, and we hope thereby to be able to provide some indication of the reliability of our estimates of random transverse velocity and the variation of this parameter with distance from the Sun. Key words. Interplanetary physics (MHD waves and turbulence; solar wind plasma; instruments and techniques)

  10. Estimating random transverse velocities in the fast solar wind from EISCAT Interplanetary Scintillation measurements

    Directory of Open Access Journals (Sweden)

    A. Canals

    Full Text Available Interplanetary scintillation measurements can yield estimates of a large number of solar wind parameters, including bulk flow speed, variation in bulk velocity along the observing path through the solar wind and random variation in transverse velocity. This last parameter is of particular interest, as it can indicate the flux of low-frequency Alfvén waves, and the dissipation of these waves has been proposed as an acceleration mechanism for the fast solar wind. Analysis of IPS data is, however, a significantly unresolved problem and a variety of a priori assumptions must be made in interpreting the data. Furthermore, the results may be affected by the physical structure of the radio source and by variations in the solar wind along the scintillation ray path. We have used observations of simple point-like radio sources made with EISCAT between 1994 and 1998 to obtain estimates of random transverse velocity in the fast solar wind. The results obtained with various a priori assumptions made in the analysis are compared, and we hope thereby to be able to provide some indication of the reliability of our estimates of random transverse velocity and the variation of this parameter with distance from the Sun.

    Key words. Interplanetary physics (MHD waves and turbulence; solar wind plasma; instruments and techniques)

  11. Thyroid doses in Belarus resulting from the Chernobyl accident: comparison of the estimates based on direct thyroid measurements and on measurements of 131I in milk

    International Nuclear Information System (INIS)

    Shinkarev, Sergey; Gavrilin, Yury; Khrouch, Valery; Savkin, Mikhail; Bouville, Andre; Luckyanov, Nicholas

    2008-01-01

    A substantial increase in childhood cancer cases observed in Belarus, Ukraine and Russia after the Chernobyl accident has been associated with thyroid exposure to radioiodines following the accident. A large number of direct thyroid measurements (i.e. measurements of the exposure rate near the thyroid of the subject) were conducted in Belarus during the few weeks after the accident. Individual thyroid doses based on the results of the direct thyroid measurements were estimated for about 126,000 Belarusian residents, and settlement-average thyroid doses for adults were calculated for 426 contaminated settlements in the Gomel and Mogilev Oblasts. Another set of settlement-average thyroid doses for adults was estimated from the results of activity measurements in milk samples for 28 settlements (with not less than 2 spectrometric measurements) and 155 settlements (with not less than 5 total beta-activity measurements) in the Gomel and Mogilev Oblasts. Concentrations of 131I in milk were derived from these measurements. In the estimation of this set of thyroid doses, it was assumed that adults consumed 0.5 L per day of locally produced milk. The two sets of dose estimates were compared for 47 settlements for which both a dose estimate based on thyroid measurements and a dose estimate based on either spectrometric or radiometric milk data were available. The settlement-average thyroid doses based on milk activity measurements were higher than those based on direct thyroid measurements by a factor of 1.8 for total beta-activity measurements (30 settlements compared) and by a factor of 2.4 for spectrometric measurements (17 settlements). This systematic difference can be explained by overestimation of the milk consumption rate used in the calculation of the milk-based thyroid doses and/or by the application of individual countermeasures by people. (author)

  12. Estimation of potassium concentration in coconut water by beta radioactivity measurement

    International Nuclear Information System (INIS)

    Reddy, P.J.; Narayani, K.; Bhade, S.P.D.; Anilkumar, S.; Kolekar, R.V.; Singh, Rajvir; Pradeepkumar, K.S.

    2014-01-01

    Potassium is widely distributed in soil and in all vegetables, fruits and animal tissues. Approximately half the radioactivity found in humans comes from 40K. Potassium is an essential element in our diet, since it is required for proper nerve and muscle function, as well as for maintaining the fluid balance of cells and heart rhythm. Potassium enters the body mainly through the consumption of fruits, vegetables and other foods. Tender coconut water, which is rich in potassium, is widely consumed as a natural refreshing drink. A simple way to determine 40K activity is gamma-ray spectrometry; however, the low abundance of the 40K gamma photon makes this technique less sensitive than gross beta measurement. Many analytical methods have been reported for potassium estimation, but they are time-consuming and destructive in nature. An alternative way to estimate 40K from its beta activity is the Cerenkov counting technique using a liquid scintillation analyzer, which also achieves a much lower detection limit, allowing for greater precision. In this work, we compared two methods for determining the potassium concentration in tender and matured coconut water by measuring 40K. One is a non-scintillator method based on measurement of the Cerenkov radiation generated by the high-energy beta particles of 40K. The second method is based on beta activity measurement using a low-background gas flow counter
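
    As a back-of-the-envelope illustration (not the paper's procedure), a measured 40K activity concentration can be converted to a potassium concentration using the specific 40K activity of natural potassium, roughly 31 Bq per gram of K; the measured activity used below is hypothetical.

```python
# Natural potassium has ~0.0117% 40K (T1/2 ~ 1.25e9 yr), which corresponds to
# a specific activity of roughly 31 Bq of 40K per gram of potassium.
SPECIFIC_ACTIVITY_K40 = 31.0   # Bq per gram of natural potassium (approximate)

def potassium_mg_per_litre(k40_bq_per_litre):
    """Convert a 40K activity concentration (Bq/L) to potassium content (mg/L)."""
    grams_k_per_litre = k40_bq_per_litre / SPECIFIC_ACTIVITY_K40
    return 1000.0 * grams_k_per_litre

# Hypothetical measured activity concentration in coconut water
print(potassium_mg_per_litre(62.0))    # ~2000 mg/L
```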

  13. Surface Runoff Estimation Using SMOS Observations, Rain-gauge Measurements and Satellite Precipitation Estimations. Comparison with Model Predictions

    Science.gov (United States)

    Garcia Leal, Julio A.; Lopez-Baeza, Ernesto; Khodayar, Samiro; Estrela, Teodoro; Fidalgo, Arancha; Gabaldo, Onofre; Kuligowski, Robert; Herrera, Eddy

    Surface runoff is defined as the amount of water that originates from precipitation, does not infiltrate due to soil saturation and therefore flows over the surface. A good estimate of runoff is useful for the design of drainage systems, structures for flood control and soil utilisation. For runoff estimation there exist different methods, such as the (i) rational method, (ii) isochrone method, (iii) triangular hydrograph, (iv) non-dimensional SCS hydrograph, (v) Temez hydrograph, (vi) kinematic wave model, represented by the dynamic and kinematic equations for a uniform precipitation regime, and (vii) SCS-CN (Soil Conservation Service Curve Number) model. This work presents a way of estimating precipitation runoff with the SCS-CN model, using SMOS (Soil Moisture and Ocean Salinity) mission soil moisture observations and rain-gauge measurements, as well as satellite precipitation estimates. The area of application is the Jucar River Basin Authority area, where one of the objectives is to apply the SCS-CN model in a spatially distributed way. The results were compared to simulations performed with the 7-km COSMO-CLM (COnsortium for Small-scale MOdelling, COSMO model in CLimate Mode) model. The use of SMOS soil moisture as input to the COSMO-CLM model is expected to improve model simulations.
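
    For reference, the sketch below implements the standard SCS-CN runoff equation named in the abstract (Q = (P - Ia)^2 / (P - Ia + S), with S = 25400/CN - 254 mm and Ia = 0.2*S); the rainfall depth and curve number are hypothetical, and the study's spatially distributed, SMOS-adjusted implementation is not reproduced.

```python
# Sketch of the standard SCS-CN runoff equation (the general method named in the
# abstract, not the study's calibrated implementation). In practice CN would be
# adjusted for antecedent moisture, e.g. using SMOS soil moisture.
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Return direct runoff Q (mm) for rainfall P (mm) and curve number CN."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = ia_ratio * s                 # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Hypothetical event: 60 mm of rain on a catchment with CN = 78
print(scs_cn_runoff(60.0, 78))        # ~18 mm of runoff
```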

  14. Estimation of diffuse from measured global solar radiation

    International Nuclear Information System (INIS)

    Moriarty, W.W.

    1991-01-01

    A data set of quality controlled radiation observations from stations scattered throughout Australia was formed and further screened to remove residual doubtful observations. It was then divided into groups by solar elevation, and used to find average relationships for each elevation group between relative global radiation (clearness index - the measured global radiation expressed as a proportion of the radiation on a horizontal surface at the top of the atmosphere) and relative diffuse radiation. Clear-cut relationships were found, which were then fitted by polynomial expressions giving the relative diffuse radiation as a function of relative global radiation and solar elevation. When these expressions were used to estimate the diffuse radiation from the global, the results had a slightly smaller spread of errors than those from an earlier technique given by Spencer. It was found that the errors were related to cloud amount, and further relationships were developed giving the errors as functions of global radiation, solar elevation, and the fraction of sky obscured by high cloud and by opaque (low and middle level) cloud. When these relationships were used to adjust the first estimates of diffuse radiation, there was a considerable reduction in the number of large errors

  15. Brittleness estimation from seismic measurements in unconventional reservoirs: Application to the Barnett shale

    Science.gov (United States)

    Perez Altimar, Roderick

    Brittleness is a key characteristic for effective reservoir stimulation and is mainly controlled by mineralogy in unconventional reservoirs. Unfortunately, there is no universally accepted means of predicting brittleness from measurements made in wells or from surface seismic data. Brittleness indices (BI) are based on mineralogy, while brittleness average estimates are based on Young's modulus and Poisson's ratio. I evaluate two of the more popular brittleness estimation techniques and apply them to a Barnett Shale seismic survey in order to estimate its geomechanical properties. Using specialized logging tools such as the elemental capture tool, density, and P- and S-wave sonic logs calibrated to previous core descriptions and laboratory measurements, I create a survey-specific BI template in Young's modulus versus Poisson's ratio or, alternatively, lambda-rho versus mu-rho space. I use this template to predict BI from elastic parameters computed from surface seismic data, providing a continuous estimate of BI in the Barnett Shale survey. Extracting lambda-rho and mu-rho values at microseismic event locations, I compute the brittleness index from the template and find that most microseismic events occur in the more brittle part of the reservoir. My template is validated through a suite of microseismic experiments that shows most events occurring in brittle zones, fewer events in the ductile shale, and fewer events still in the limestone fracture barriers. Estimated ultimate recovery (EUR) is an estimate of the expected total production of oil and/or gas for the economic life of a well and is widely used in the evaluation of resource play reserves. In the literature it is possible to find several approaches for forecasting purposes and economic analyses. However, the extension to newer infill wells is somewhat challenging because production forecasts in unconventional reservoirs are a function of both completion effectiveness and reservoir quality. For shale gas reservoirs
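
    The sketch below shows a Rickman-style brittleness average of the kind the abstract refers to, in which Young's modulus and Poisson's ratio are rescaled between assumed end-member values and averaged; the end members and input values are illustrative placeholders, not the survey-specific template derived in the dissertation.

```python
# Sketch of a "brittleness average" of the Rickman type (one of the two
# approaches the abstract refers to). The end-member values below are
# placeholders, not those calibrated for this survey.
def brittleness_average(E, nu, E_min=1.0, E_max=8.0, nu_min=0.15, nu_max=0.40):
    """Average of normalized Young's modulus and Poisson's ratio, in percent."""
    E_norm = 100.0 * (E - E_min) / (E_max - E_min)          # stiffer -> more brittle
    nu_norm = 100.0 * (nu_max - nu) / (nu_max - nu_min)     # lower ratio -> more brittle
    return 0.5 * (E_norm + nu_norm)

# E in 10^6 psi (as commonly used with this formulation); hypothetical values
print(brittleness_average(E=5.5, nu=0.22))
```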

  16. Measures to reduce car-fleet consumption - Estimation of effects

    International Nuclear Information System (INIS)

    Iten, R.; Hammer, S.; Keller, M.; Schmidt, N.; Sammer, K.; Wuestenhagen, R.

    2005-09-01

    This comprehensive report for the Swiss Federal Office of Energy (SFOE) reviews the results of a study that estimated the effects of measures to be taken to reduce the fuel consumption of vehicle fleets as part of the SwissEnergy programme. The research aimed to estimate the effects of the Energy Label on energy consumption and to assess the results to be expected from the introduction of a bonus-malus system. Questions reviewed include the effect of fuel consumption data on vehicle purchase decisions, the effects of the Energy Label on consumption, awareness of other appropriate information sources, the possible effects of a bonus-malus system and how the effectiveness of the Energy Label could be improved. The answers and results obtained are reviewed and commented on. Finally, an overall appraisal of the situation is presented and recommendations for increasing the effectiveness of the Energy Label are made

  17. Estimating drizzle drop size and precipitation rate using two-colour lidar measurements

    Directory of Open Access Journals (Sweden)

    C. D. Westbrook

    2010-06-01

    Full Text Available A method to estimate the size and liquid water content of drizzle drops using lidar measurements at two wavelengths is described. The method exploits the differential absorption of infrared light by liquid water at 905 nm and 1.5 μm, which leads to a different backscatter cross section for water drops larger than ≈50 μm. The ratio of backscatter measured from drizzle samples below cloud base at these two wavelengths (the colour ratio) provides a measure of the median volume drop diameter D0. This is a strong effect: for D0=200 μm, a colour ratio of ≈6 dB is predicted. Once D0 is known, the measured backscatter at 905 nm can be used to calculate the liquid water content (LWC) and other moments of the drizzle drop distribution.

    The method is applied to observations of drizzle falling from stratocumulus and stratus clouds. High resolution (32 s, 36 m) profiles of D0, LWC and precipitation rate R are derived. The main sources of error in the technique are the need to assume a value for the dispersion parameter μ in the drop size spectrum (leading to at most a 35% error in R) and the influence of aerosol returns on the retrieval (≈10% error in R for the cases considered here). Radar reflectivities are also computed from the lidar data, and compared to independent measurements from a colocated cloud radar, offering independent validation of the derived drop size distributions.

  18. Measurement of circulation around wing-tip vortices and estimation of lift forces using stereo PIV

    Science.gov (United States)

    Asano, Shinichiro; Sato, Haru; Sakakibara, Jun

    2017-11-01

    Application of flapping flight to the development of an aircraft serving as a Mars probe and of small aircraft called MAVs (Micro Air Vehicles) is being considered, because the Reynolds number assumed for these aircraft is low and similar to that of insects and small birds flapping on Earth. However, it is difficult to measure the flow around an airfoil in flapping flight directly because of its three-dimensional and unsteady characteristics. Hence, there are attempts to estimate the flow field and aerodynamics by measuring the wake of the airfoil using PIV, for example the lift estimation method based on the wing-tip vortex. In this study, at angles of attack including those beyond stall, we measured the wing-tip vortex of an airfoil with a NACA 0015 cross-section and rectangular planform using stereo PIV. The circulation of the wing-tip vortex was calculated from the obtained velocity field, and the lift force was estimated based on the Kutta-Joukowski theorem. The validity of this estimation method was then examined by comparing the estimated lift force with the force balance data at various angles of attack. The experimental results will be presented at the conference.
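
    A minimal sketch of the estimation route described above is given below: circulation is evaluated from the velocity field in the measurement plane (here a synthetic Lamb-Oseen-like vortex standing in for stereo-PIV data) and the sectional lift follows from the Kutta-Joukowski theorem L' = rho * U_inf * Gamma; the flow parameters are hypothetical.

```python
import numpy as np

def circulation(u, v, x, y):
    """Circulation from the area integral of vorticity over the measurement plane."""
    dx, dy = x[0, 1] - x[0, 0], y[1, 0] - y[0, 0]
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    omega = dvdx - dudy                     # out-of-plane vorticity
    return np.sum(omega) * dx * dy

# Synthetic Lamb-Oseen-like vortex as a stand-in for stereo-PIV data
x, y = np.meshgrid(np.linspace(-0.1, 0.1, 101), np.linspace(-0.1, 0.1, 101))
gamma_true, r_c = 0.05, 0.01                # circulation (m^2/s), core radius (m)
r2 = x**2 + y**2 + 1e-12
u_theta = gamma_true / (2 * np.pi * np.sqrt(r2)) * (1 - np.exp(-r2 / r_c**2))
u = -u_theta * y / np.sqrt(r2)
v = u_theta * x / np.sqrt(r2)

rho, U_inf = 1.2, 10.0                      # air density (kg/m^3), freestream speed (m/s)
gamma = circulation(u, v, x, y)
print("Gamma ~", gamma, "m^2/s, L' ~", rho * U_inf * gamma, "N/m")
```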

  19. The assessment of Global Precipitation Measurement estimates over the Indian subcontinent

    Science.gov (United States)

    Murali Krishna, U. V.; Das, Subrata Kumar; Deshpande, Sachin M.; Doiphode, S. L.; Pandithurai, G.

    2017-08-01

    Accurate, real-time precipitation estimation is a challenging task for current and future spaceborne measurements and is essential for understanding the global hydrological cycle. Recently, the Global Precipitation Measurement (GPM) satellites were launched as a next-generation rainfall mission for observing global precipitation characteristics. The purpose of the GPM is to enhance the spatiotemporal resolution of global precipitation estimates. The main objective of the present study is to assess the rainfall products from the GPM, especially the Integrated Multi-satellitE Retrievals for GPM (IMERG) data, by comparing them with ground-based observations. Multi-temporal evaluations of rainfall at subdaily, diurnal, monthly, and seasonal scales were performed over the Indian subcontinent. The comparison shows that IMERG performed better than the Tropical Rainfall Measuring Mission (TRMM)-3B42, although both rainfall products underestimated rainfall compared to the ground-based measurements. The analyses also reveal that the TRMM-3B42 and IMERG data sets are able to represent the large-scale spatial features of monsoon rainfall but have region-specific biases. IMERG shows significant improvement in low rainfall estimates compared to TRMM-3B42 for selected regions. In the spatial distribution, IMERG shows higher rain rates compared to TRMM-3B42, due to its enhanced spatial and temporal resolution. In addition, the characteristics of the raindrop size distribution (DSD) obtained from the GPM mission dual-frequency precipitation radar are assessed over a complex mountain terrain site in the Western Ghats, India, using the DSD measured by a Joss-Waldvogel disdrometer.

  20. Connections of geometric measure of entanglement of pure symmetric states to quantum state estimation

    International Nuclear Information System (INIS)

    Chen Lin; Zhu Huangjun; Wei, Tzu-Chieh

    2011-01-01

    We study the geometric measure of entanglement (GM) of pure symmetric states related to rank 1 positive-operator-valued measures (POVMs) and establish a general connection with quantum state estimation theory, especially the maximum likelihood principle. Based on this connection, we provide a method for computing the GM of these states and demonstrate its additivity property under certain conditions. In particular, we prove the additivity of the GM of pure symmetric multiqubit states whose Majorana points under the Majorana representation are distributed within a half sphere, including all pure symmetric three-qubit states. We then introduce a family of symmetric states that are generated from mutually unbiased bases and derive an analytical formula for their GM. These states include Dicke states as special cases, which have already been realized in experiments. We also derive the GM of symmetric states generated from symmetric informationally complete POVMs (SIC POVMs) and use it to characterize all inequivalent SIC POVMs in three-dimensional Hilbert space that are covariant with respect to the Heisenberg-Weyl group. Finally, we describe an experimental scheme for creating the symmetric multiqubit states studied in this article and a possible scheme for measuring the permanent of the related Gram matrix.

  1. Effects of performance measure implementation on clinical manager and provider motivation.

    Science.gov (United States)

    Damschroder, Laura J; Robinson, Claire H; Francis, Joseph; Bentley, Douglas R; Krein, Sarah L; Rosland, Ann-Marie; Hofer, Timothy P; Kerr, Eve A

    2014-12-01

    Clinical performance measurement has been a key element of efforts to transform the Veterans Health Administration (VHA). However, there are a number of signs that current performance measurement systems used within and outside the VHA may be reaching the point of maximum benefit to care and, in some settings, may be resulting in negative consequences to care, including overtreatment and diminished attention to patient needs and preferences. Our research group has been involved in a long-standing partnership with the office responsible for clinical performance measurement in the VHA to understand and develop potential strategies to mitigate the unintended consequences of measurement. Our aim was to understand how the implementation of diabetes performance measures (PMs) influences management actions and day-to-day clinical practice. This is a mixed methods study design based on quantitative administrative data to select study facilities and qualitative data from semi-structured interviews. Sixty-two network-level and facility-level executives, managers, front-line providers and staff participated in the study. Qualitative content analyses were guided by a team-based consensus approach using verbatim interview transcripts. A published interpretive motivation theory framework is used to describe potential contributions of local implementation strategies to unintended consequences of PMs. Implementation strategies used by management affect providers' response to PMs, which in turn potentially undermines provision of high-quality patient-centered care. These include: 1) feedback reports to providers that are dissociated from a realistic capability to address performance gaps; 2) evaluative criteria set by managers that are at odds with patient-centered care; and 3) pressure created by managers' narrow focus on gaps in PMs that is viewed as more punitive than motivating. Next steps include working with VHA leaders to develop and test implementation approaches to help

  2. Block Volume Estimation from the Discontinuity Spacing Measurements of Mesozoic Limestone Quarries, Karaburun Peninsula, Turkey

    OpenAIRE

    Elci, Hakan; Turk, Necdet

    2014-01-01

    Block volumes are generally estimated by analyzing the discontinuity spacing measurements obtained either from the scan lines placed over the rock exposures or the borehole cores. Discontinuity spacing measurements made at the Mesozoic limestone quarries in Karaburun Peninsula were used to estimate the average block volumes that could be produced from them using the suggested methods in the literature. The Block Quality Designation (BQD) ratio method proposed by the authors has been found to ...

  3. Standard error of measurement of 5 health utility indexes across the range of health for use in estimating reliability and responsiveness.

    Science.gov (United States)

    Palta, Mari; Chen, Han-Yang; Kaplan, Robert M; Feeny, David; Cherepanov, Dasha; Fryback, Dennis G

    2011-01-01

    Standard errors of measurement (SEMs) of health-related quality of life (HRQoL) indexes are not well characterized. SEM is needed to estimate responsiveness statistics, and is a component of reliability. To estimate the SEM of 5 HRQoL indexes. The National Health Measurement Study (NHMS) was a population-based survey. The Clinical Outcomes and Measurement of Health Study (COMHS) provided repeated measures. A total of 3844 randomly selected adults from the noninstitutionalized population aged 35 to 89 y in the contiguous United States and 265 cataract patients. The SF-6D (from the SF-36v2™), QWB-SA, EQ-5D, HUI2, and HUI3 were included. An item-response theory approach captured joint variation in indexes into a composite construct of health (theta). The authors estimated 1) the test-retest standard deviation (SEM-TR) from COMHS, 2) the structural standard deviation (SEM-S) around theta from NHMS, and 3) reliability coefficients. SEM-TR was 0.068 (SF-6D), 0.087 (QWB-SA), 0.093 (EQ-5D), 0.100 (HUI2), and 0.134 (HUI3), whereas SEM-S was 0.071, 0.094, 0.084, 0.074, and 0.117, respectively. These yield reliability coefficients of 0.66 (COMHS) and 0.71 (NHMS) for the SF-6D, 0.59 and 0.64 for the QWB-SA, 0.61 and 0.70 for the EQ-5D, 0.64 and 0.80 for the HUI2, and 0.75 and 0.77 for the HUI3, respectively. The SEM varied across levels of health, especially for the HUI2, HUI3, and EQ-5D, and was influenced by ceiling effects. Limitations: repeated measures were 5 mo apart, and the estimated theta contained measurement error. The 2 types of SEM are similar and substantial for all the indexes and vary across health.
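
    The SEM relates to reliability through reliability = 1 − SEM²/SD², with SD the standard deviation of index scores in the study population. The snippet below illustrates only this textbook relation; the SD value is a placeholder assumption, not a figure reported in the study.

    ```python
    import math

    def reliability_from_sem(sem: float, sd_total: float) -> float:
        """Classical test theory: reliability = 1 - SEM^2 / SD_total^2."""
        return 1.0 - (sem / sd_total) ** 2

    def sem_from_reliability(reliability: float, sd_total: float) -> float:
        """Inverse relation: SEM = SD_total * sqrt(1 - reliability)."""
        return sd_total * math.sqrt(1.0 - reliability)

    # Hypothetical example: an index with SEM = 0.09 and population SD = 0.15
    print(round(reliability_from_sem(0.09, 0.15), 2))   # -> 0.64
    print(round(sem_from_reliability(0.64, 0.15), 3))   # -> 0.09
    ```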

  4. Kalman-filtered compressive sensing for high resolution estimation of anthropogenic greenhouse gas emissions from sparse measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Lee, Jina; Lefantzi, Sophia; Yadav, Vineet; Michalak, Anna M.; van Bloemen Waanders, Bart Gustaaf; McKenna, Sean Andrew

    2013-09-01

    The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely-underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels and covariance structures derived from easily-observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of Gaussian assumptions inherent in them. We find that the assumption does not impact the estimates of mean ffCO2 source strengths appreciably, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study whether the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion, e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.
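
    As a generic illustration of the sparse-reconstruction idea (not the authors' inversion algorithm), the sketch below solves an ℓ1-regularised least-squares problem with the iterative shrinkage-thresholding algorithm (ISTA) on a small synthetic problem; the problem sizes, sensing matrix, and regularisation weight are all invented.

    ```python
    import numpy as np

    def soft_threshold(x, thresh):
        return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

    def ista(A, b, lam, n_iter=500):
        """Iterative shrinkage-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = soft_threshold(x - step * A.T @ (A @ x - b), lam * step)
        return x

    # Hypothetical toy problem: 40 "observations" of a 200-coefficient field with 5 active sources
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 200))
    x_true = np.zeros(200)
    x_true[rng.choice(200, 5, replace=False)] = rng.uniform(1.0, 3.0, 5)
    b = A @ x_true + 0.01 * rng.standard_normal(40)

    x_hat = ista(A, b, lam=0.1)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.5))
    ```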

  5. Oxygen transfer rate estimation in oxidation ditches from clean water measurements.

    Science.gov (United States)

    Abusam, A; Keesman, K J; Meinema, K; Van Straten, G

    2001-06-01

    Standard methods for the determination of oxygen transfer rate are based on assumptions that are not valid for oxidation ditches. This paper presents a realistic and simple new method to be used in the estimation of oxygen transfer rate in oxidation ditches from clean water measurements. The new method uses a loop-of-CSTRs model, which can be easily incorporated within control algorithms, for modelling oxidation ditches. Further, this method assumes zero oxygen transfer rates (KLa) in the unaerated CSTRs. Application of a formal estimation procedure to real data revealed that the aeration constant (k = KLaVA, where VA is the volume of the aerated CSTR) can be determined significantly more accurately than KLa and VA. Therefore, the new method estimates k instead of KLa. From application to real data, this method proved to be more accurate than the commonly used Dutch standard method (STORA, 1980).
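
    In a clean-water test on a single aerated, well-mixed tank, the dissolved-oxygen response follows C(t) = Cs − (Cs − C0)·exp(−KLa·t), so KLa (and hence an aeration constant k = KLa·VA) can be obtained by nonlinear least squares. The sketch below shows only that generic fitting step; it is not the loop-of-CSTRs model of the paper, and the data and volume are invented placeholders.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def do_response(t, kla, c_s, c_0):
        """Clean-water DO re-aeration curve for one well-mixed tank."""
        return c_s - (c_s - c_0) * np.exp(-kla * t)

    # Hypothetical DO measurements [mg/L] at times t [h]
    t = np.linspace(0.0, 2.0, 25)
    kla_true, c_s_true, c_0_true = 4.0, 9.1, 1.0
    c_obs = do_response(t, kla_true, c_s_true, c_0_true)
    c_obs += 0.05 * np.random.default_rng(1).standard_normal(t.size)

    (kla, c_s, c_0), _ = curve_fit(do_response, t, c_obs, p0=[1.0, 8.0, 0.5])
    v_aerated = 150.0   # assumed aerated-compartment volume [m^3]
    print(f"KLa ~ {kla:.2f} 1/h, aeration constant k = KLa*V_A ~ {kla * v_aerated:.0f} m^3/h")
    ```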

  6. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques

    Energy Technology Data Exchange (ETDEWEB)

    Pontailler, J.-Y. [Univ. Paris-Sud XI, Dept. d' Ecophysiologie Vegetale, Orsay Cedex (France); Hymus, G.J.; Drake, B.G. [Smithsonian Environmental Research Center, Kennedy Space Center, Florida (United States)

    2003-06-01

    This study took place in an evergreen scrub oak ecosystem in Florida. Vegetation reflectance was measured in situ with a laboratory-made sensor in the red (640-665 nm) and near-infrared (750-950 nm) bands to calculate the normalized difference vegetation index (NDVI) and derive the leaf area index (LAI). LAI estimates from this technique were compared with two other nondestructive techniques, intercepted photosynthetically active radiation (PAR) and hemispherical photographs, in four contrasting 4 m² plots in February 2000 and two 4 m² plots in June 2000. We used Beer's law to derive LAI from PAR interception and gap fraction distribution to derive LAI from photographs. The plots were harvested manually after the measurements to determine a 'true' LAI value and to calculate a light extinction coefficient (k). The technique based on Beer's law was affected by a large variation of the extinction coefficient, owing to the larger impact of branches in winter when LAI was low. Hemispherical photographs provided satisfactory estimates, slightly overestimated in winter because of the impact of branches or underestimated in summer because of foliage clumping. NDVI provided the best fit, showing only saturation in the densest plot (LAI = 3.5). We conclude that in situ measurement of NDVI is an accurate and simple technique to nondestructively assess LAI in experimental plots or in crops if saturation remains acceptable. (author)
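
    NDVI is formed from the red and near-infrared reflectances as (NIR − Red)/(NIR + Red), and Beer's law inverts the intercepted PAR fraction to LAI = −ln(1 − fIPAR)/k. The snippet below illustrates both relations with invented reflectances, fIPAR, and extinction coefficient, not data from the Florida site.

    ```python
    import math

    def ndvi(red: float, nir: float) -> float:
        """Normalized difference vegetation index from band reflectances."""
        return (nir - red) / (nir + red)

    def lai_from_par_interception(f_ipar: float, k: float) -> float:
        """Invert Beer's law, I/I0 = exp(-k * LAI): LAI = -ln(1 - fIPAR) / k."""
        return -math.log(1.0 - f_ipar) / k

    # Hypothetical example values
    print(round(ndvi(red=0.05, nir=0.45), 3))                        # -> 0.8
    print(round(lai_from_par_interception(f_ipar=0.85, k=0.6), 2))   # LAI ~ 3.16
    ```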

  7. Progress on Poverty? New Estimates of Historical Trends Using an Anchored Supplemental Poverty Measure

    Science.gov (United States)

    Wimer, Christopher; Fox, Liana; Garfinkel, Irwin; Kaushal, Neeraj; Waldfogel, Jane

    2016-01-01

    This study examines historical trends in poverty using an anchored version of the U.S. Census Bureau’s recently developed Research Supplemental Poverty Measure (SPM) estimated back to 1967. Although the SPM is estimated each year using a quasi-relative poverty threshold that varies over time with changes in families’ expenditures on a core basket of goods and services, this study explores trends in poverty using an absolute, or anchored, SPM threshold. We believe the anchored measure offers two advantages. First, setting the threshold at the SPM’s 2012 levels and estimating it back to 1967, adjusted only for changes in prices, is more directly comparable to the approach taken in official poverty statistics. Second, it allows for a better accounting of the roles that social policy, the labor market, and changing demographics play in trends in poverty rates over time, given that changes in the threshold are held constant. Results indicate that unlike official statistics that have shown poverty rates to be fairly flat since the 1960s, poverty rates have dropped by 40 % when measured using a historical anchored SPM over the same period. Results obtained from comparing poverty rates using a pretax/pretransfer measure of resources versus a posttax/posttransfer measure of resources further show that government policies, not market incomes, are driving the declines observed over time. PMID:27352076

  8. Progress on Poverty? New Estimates of Historical Trends Using an Anchored Supplemental Poverty Measure.

    Science.gov (United States)

    Wimer, Christopher; Fox, Liana; Garfinkel, Irwin; Kaushal, Neeraj; Waldfogel, Jane

    2016-08-01

    This study examines historical trends in poverty using an anchored version of the U.S. Census Bureau's recently developed Research Supplemental Poverty Measure (SPM) estimated back to 1967. Although the SPM is estimated each year using a quasi-relative poverty threshold that varies over time with changes in families' expenditures on a core basket of goods and services, this study explores trends in poverty using an absolute, or anchored, SPM threshold. We believe the anchored measure offers two advantages. First, setting the threshold at the SPM's 2012 levels and estimating it back to 1967, adjusted only for changes in prices, is more directly comparable to the approach taken in official poverty statistics. Second, it allows for a better accounting of the roles that social policy, the labor market, and changing demographics play in trends in poverty rates over time, given that changes in the threshold are held constant. Results indicate that unlike official statistics that have shown poverty rates to be fairly flat since the 1960s, poverty rates have dropped by 40 % when measured using a historical anchored SPM over the same period. Results obtained from comparing poverty rates using a pretax/pretransfer measure of resources versus a post-tax/post-transfer measure of resources further show that government policies, not market incomes, are driving the declines observed over time.

  9. A computationally inexpensive model for estimating dimensional measurement uncertainty due to x-ray computed tomography instrument misalignments

    Science.gov (United States)

    Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim

    2018-06-01

    The recent emergence of advanced manufacturing techniques such as additive manufacturing and an increased demand on the integrity of components have motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, there is a need for metrological research to accelerate the acceptance of CT as a measuring instrument. The accuracy in CT-based measurements is vulnerable to the instrument geometrical configuration during data acquisition, namely the relative position and orientation of x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of geometrical parameters. Quantification and propagation of uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.

  10. Estimating the Wind Resource in Uttarakhand: Comparison of Dynamic Downscaling with Doppler Lidar Wind Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Lundquist, J. K. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Pukayastha, A. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Martin, C. [Univ. of Colorado, Boulder, CO (United States); Newsom, R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-03-01

    Previous estimates of the wind resources in Uttarakhand, India, suggest minimal wind resources in this region. To explore whether or not the complex terrain in fact provides localized regions of wind resource, the authors of this study employed a dynamic downscaling method with the Weather Research and Forecasting model, providing detailed estimates of winds at approximately 1 km resolution in the finest nested simulation.

  11. Measurement of total risk of spontaneous abortion: the virtue of conditional risk estimation

    DEFF Research Database (Denmark)

    Modvig, J; Schmidt, L; Damsgaard, M T

    1990-01-01

    The concepts, methods, and problems of measuring spontaneous abortion risk are reviewed. The problems touched on include the process of pregnancy verification, the changes in risk by gestational age and maternal age, and the presence of induced abortions. Methods used in studies of spontaneous abortion risk include biochemical assays as well as life table technique, although the latter appears in two different forms. The consequences of using either of these are discussed. It is concluded that no study design so far is appropriate for measuring the total risk of spontaneous abortion from early conception to the end of the 27th week. It is proposed that pregnancy may be considered to consist of two or three specific periods and that different study designs should concentrate on measuring the conditional risk within each period. A careful estimate using this principle leads to an estimate of total...
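
    If the pregnancy is divided into periods with an estimated conditional risk in each, the total risk follows as the complement of surviving every period, total = 1 − Π(1 − r_i). A minimal sketch with invented period risks:

    ```python
    def total_risk(conditional_risks):
        """Total risk of loss given per-period conditional risks r_i:
        total = 1 - product(1 - r_i)."""
        survive = 1.0
        for r in conditional_risks:
            survive *= 1.0 - r
        return 1.0 - survive

    # Hypothetical conditional risks for early, middle and late periods
    print(round(total_risk([0.20, 0.05, 0.02]), 3))   # -> 0.255
    ```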

  12. Direct Measurement of Tree Height Provides Different Results on the Assessment of LiDAR Accuracy

    Directory of Open Access Journals (Sweden)

    Emanuele Sibona

    2016-12-01

    In this study, airborne laser scanning-based and traditional field-based survey methods for tree height estimation are assessed by using one hundred felled trees as a reference dataset. Comparisons between remote sensing and field-based methods were applied to four circular permanent plots located in the western Italian Alps and established within the Alpine Space project NewFor. Remote sensing (Airborne Laser Scanning, ALS), traditional field-based (indirect measurement, IND), and direct measurement of felled trees (DIR) methods were compared by using summary statistics, linear regression models, and variation partitioning. Our results show that tree height estimates by Airborne Laser Scanning (ALS) approximated the real heights (DIR) of felled trees. Considering the species separately, Larix decidua was the species that showed the smallest mean absolute difference (0.95 m) between remote sensing (ALS) and direct field (DIR) data, followed by Picea abies and Pinus sylvestris (1.13 m and 1.04 m, respectively). Our results cannot be generalized to ALS surveys with low pulse density (<5/m²) and with view angles far from zero (nadir). We observed that tree height estimation by laser scanner is closer to actual tree heights (DIR) than traditional field-based survey, and this was particularly valid for tall trees with conical shape crowns.

  13. Improved Forest Biomass and Carbon Estimations Using Texture Measures from WorldView-2 Satellite Data

    Directory of Open Access Journals (Sweden)

    Sandra Eckert

    2012-03-01

    Accurate estimation of aboveground biomass and carbon stock has gained importance in the context of the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol. In order to develop improved forest stratum–specific aboveground biomass and carbon estimation models for humid rainforest in northeast Madagascar, this study analyzed texture measures derived from WorldView-2 satellite data. A forest inventory was conducted to develop stratum-specific allometric equations for dry biomass. On this basis, carbon was calculated by applying a conversion factor. After satellite data preprocessing, vegetation indices, principal components, and texture measures were calculated. The strength of their relationships with the stratum-specific plot data was analyzed using Pearson’s correlation. Biomass and carbon estimation models were developed by performing stepwise multiple linear regression. Pearson’s correlation coefficients revealed that (a) texture measures correlated more with biomass and carbon than spectral parameters, and (b) correlations were stronger for degraded forest than for non-degraded forest. For degraded forest, the texture measures of Correlation, Angular Second Moment, and Contrast, derived from the red band, contributed to the best estimation model, which explained 84% of the variability in the field data (relative RMSE = 6.8%). For non-degraded forest, the vegetation index EVI and the texture measures of Variance, Mean, and Correlation, derived from the newly introduced coastal blue band, both NIR bands, and the red band, contributed to the best model, which explained 81% of the variability in the field data (relative RMSE = 11.8%). These results indicate that estimation of tropical rainforest biomass/carbon, based on very high resolution satellite data, can be improved by (a) developing and applying forest stratum–specific models, and (b) including textural information in addition to spectral information.
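
    Texture measures such as Contrast, Correlation and Angular Second Moment are usually derived from a grey-level co-occurrence matrix (GLCM). A minimal sketch of that computation with scikit-image is below (function names follow recent scikit-image releases, where they are spelled graycomatrix/graycoprops; the image patch is a random placeholder, not WorldView-2 data).

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    # Placeholder 8-bit single-band image patch (e.g. a red-band window around a plot)
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

    # GLCM for a 1-pixel offset in four directions
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)

    # Average each texture property over the four directions
    for prop in ("contrast", "correlation", "ASM", "homogeneity"):
        print(prop, graycoprops(glcm, prop).mean())
    ```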

  14. A Review of Sea State Estimation Procedures Based on Measured Vessel Responses

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2016-01-01

    The operation of ships requires careful monitoring of the related costs while, at the same time, ensuring a high level of safety. A ship’s performance with respect to safety and fuel efficiency may be compromised by the encountered waves. Consequently, it is important to estimate the surrounding ... for shipboard SSE using measured vessel responses, resembling the concept of traditional wave rider buoys. Moreover, newly developed ideas for shipboard sea state estimation are introduced. The presented material is all based on the author’s personal experience, developed within extensive work on the subject.

  15. Training anesthesiology residents in providing anesthesia for awake craniotomy: learning curves and estimate of needed case load.

    Science.gov (United States)

    Bilotta, Federico; Titi, Luca; Lanni, Fabiana; Stazi, Elisabetta; Rosa, Giovanni

    2013-08-01

    To measure the learning curves of residents in anesthesiology in providing anesthesia for awake craniotomy, and to estimate the case load needed to achieve a "good-excellent" level of competence. Prospective study. Operating room of a university hospital. 7 volunteer residents in anesthesiology. Residents underwent a dedicated training program of clinical characteristics of anesthesia for awake craniotomy. The program was divided into three tasks: local anesthesia, sedation-analgesia, and intraoperative hemodynamic management. The learning curve for each resident for each task was recorded over 10 procedures. Quantitative assessment of the individual's ability was based on the resident's self-assessment score and the attending anesthesiologist's judgment, and rated by modified 12 mm Likert scale, reported ability score visual analog scale (VAS). This ability VAS score ranged from 1 to 12 (ie, very poor, mild, moderate, sufficient, good, excellent). The number of requests for advice also was recorded (ie, resident requests for practical help and theoretical notions to accomplish the procedures). Each task had a specific learning rate; the number of procedures necessary to achieve "good-excellent" ability with confidence, as determined by the recorded results, were 10 procedures for local anesthesia, 15 to 25 procedures for sedation-analgesia, and 20 to 30 procedures for intraoperative hemodynamic management. Awake craniotomy is an approach used increasingly in neuroanesthesia. A dedicated training program based on learning specific tasks and building confidence with essential features provides "good-excellent" ability. © 2013 Elsevier Inc. All rights reserved.

  16. Sound Power Estimation by Laser Doppler Vibration Measurement Techniques

    Directory of Open Access Journals (Sweden)

    G.M. Revel

    1998-01-01

    The aim of this paper is to propose simple and quick methods for the determination of the sound power emitted by a vibrating surface, by using non-contact vibration measurement techniques. In order to calculate the acoustic power by vibration data processing, two different approaches are presented. The first is based on the method proposed in the Standard ISO/TR 7849, while the second is based on the superposition theorem. A laser-Doppler scanning vibrometer has been employed for vibration measurements. Laser techniques open up new possibilities in this field because of their high spatial resolution and their non-intrusivity. The technique has been applied here to estimate the acoustic power emitted by a loudspeaker diaphragm. Results have been compared with those from a commercial Boundary Element Method (BEM) software and experimentally validated by acoustic intensity measurements. Predicted and experimental results seem to be in agreement (differences lower than 1 dB), thus showing that the proposed techniques can be employed as rapid solutions for many practical and industrial applications. Uncertainty sources are addressed and their effect is discussed.
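
    For orientation, the ISO/TR 7849-style estimate obtains the radiated sound power from the spatially averaged mean-square surface velocity as W = ρ·c·S·σ·⟨v²⟩, with σ the radiation factor (set to 1 here as a simplifying assumption). A minimal sketch with invented velocity values, not measurements from the loudspeaker experiment:

    ```python
    import numpy as np

    def sound_power_from_velocity(v_rms_points, area, rho=1.21, c=343.0, sigma=1.0):
        """Radiated sound power W = rho * c * S * sigma * <v^2>,
        where <v^2> is the spatial average of the squared RMS normal velocity."""
        v2_mean = np.mean(np.asarray(v_rms_points) ** 2)
        return rho * c * area * sigma * v2_mean

    # Hypothetical RMS normal velocities [m/s] from a vibrometer scan of a 0.05 m^2 surface
    v_rms = [2.1e-3, 1.8e-3, 2.4e-3, 1.5e-3]
    W = sound_power_from_velocity(v_rms, area=0.05)
    print(f"sound power ~ {W:.2e} W ({10 * np.log10(W / 1e-12):.1f} dB re 1 pW)")
    ```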

  17. Neutron H*(10) estimation and measurements around 18MV linac.

    Science.gov (United States)

    Cerón Ramírez, Pablo Víctor; Díaz Góngora, José Antonio Irán; Paredes Gutiérrez, Lydia Concepción; Rivera Montalvo, Teodoro; Vega Carrillo, Héctor René

    2016-11-01

    Thermoluminescent dosimetry, analytical techniques and Monte Carlo calculations were used to estimate the dose of neutron radiation in a treatment room with an 18 MV linear electron accelerator. Measurements were carried out with neutron ambient dose monitors consisting of pairs of thermoluminescent dosimeters, TLD 600 (⁶LiF:Mg,Ti) and TLD 700 (⁷LiF:Mg,Ti), placed inside paraffin spheres. The measurements allowed the use of the NCRP 151 equations; these expressions are useful for finding the relevant dosimetric quantities. In addition, the photoneutrons produced by the linac head were calculated with the MCNPX code, taking into account the geometry and composition of the principal parts of the linac head. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Estimation and empirical properties of a firm-year measure of accounting conservatism

    OpenAIRE

    Khan, Mozaffar Nayim; Watts, Ross Leslie

    2009-01-01

    We estimate a firm-year measure of accounting conservatism, examine its empirical properties as a metric, and illustrate applications by testing new hypotheses that shed further light on the nature and effects of conservatism. The results are consistent with the measure, C_Score, capturing variation in conservatism and also predicting asymmetric earnings timeliness at horizons of up to three years ahead. Cross-sectional hypothesis tests suggest firms with longer investment cycles, higher idio...

  19. The Euler equation with habits and measurement errors: Estimates on Russian micro data

    Directory of Open Access Journals (Sweden)

    Khvostova Irina

    2016-01-01

    This paper presents estimates of the consumption Euler equation for Russia. The estimation is based on micro-level panel data and accounts for the heterogeneity of agents’ preferences and measurement errors. The presence of multiplicative habits is checked using the Lagrange multiplier (LM) test in a generalized method of moments (GMM) framework. We obtain estimates of the elasticity of intertemporal substitution and of the subjective discount factor, which are consistent with the theoretical model and can be used for the calibration and the Bayesian estimation of dynamic stochastic general equilibrium (DSGE) models for the Russian economy. We also show that the effects of habit formation are not significant. The hypotheses of multiplicative habits (external, internal, and both external and internal) are not supported by the data.

  20. Measurements of the UVR protection provided by hats used at school.

    Science.gov (United States)

    Gies, Peter; Javorniczky, John; Roy, Colin; Henderson, Stuart

    2006-01-01

    The importance of protection against solar ultraviolet radiation (UVR) in childhood has led to SunSmart policies at Australian schools, in particular primary schools, where children are encouraged and in many cases required to wear hats at school. Hat styles change regularly, and the UVR protection provided by some of the hat types currently used and recommended for sun protection by the various Australian state cancer councils had not been previously evaluated. The UVR protection of the hats was measured using UVR sensitive polysulphone film badges attached to different facial sites on rotating headforms. The sun protection type hats included in this study were broad-brimmed hats, "bucket hats" and legionnaires hats. Baseball caps, which are very popular, were also included. The broad-brimmed hats and bucket hats provided the most UVR protection for the six different sites about the face and head. Legionnaires hats also provided satisfactory UVR protection, but the caps did not provide UVR protection to many of the facial sites. The highest measured UVR protection factors for facial sites other than the forehead were 8 to 10, indicating that, while some hats can be effective, they need to be used in combination with other forms of UVR protection.

  1. Connecting Satellite-Based Precipitation Estimates to Users

    Science.gov (United States)

    Huffman, George J.; Bolvin, David T.; Nelkin, Eric

    2018-01-01

    Beginning in 1997, the Merged Precipitation Group at NASA Goddard has distributed gridded global precipitation products built by combining satellite and surface gauge data. This started with the Global Precipitation Climatology Project (GPCP), then the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA), and recently the Integrated Multi-satellitE Retrievals for the Global Precipitation Measurement (GPM) mission (IMERG). This 20+-year (and on-going) activity has yielded an important set of insights and lessons learned for making state-of-the-art precipitation data accessible to the diverse communities of users. Merged-data products critically depend on the input sensors and the retrieval algorithms providing accurate, reliable estimates, but it is also important to provide ancillary information that helps users determine suitability for their application. We typically provide fields of estimated random error, and recently reintroduced the quality index concept at user request. Also at user request we have added a (diagnostic) field of estimated precipitation phase. Over time, increasingly more ancillary fields have been introduced for intermediate products that give expert users insight into the detailed performance of the combination algorithm, such as individual merged microwave and microwave-calibrated infrared estimates, the contributing microwave sensor types, and the relative influence of the infrared estimate.

  2. Estimation of aerosol particle number distribution with Kalman Filtering – Part 2: Simultaneous use of DMPS, APS and nephelometer measurements

    Directory of Open Access Journals (Sweden)

    T. Viskari

    2012-12-01

    Extended Kalman Filter (EKF) is used to estimate particle size distributions from observations. The focus here is on the practical application of EKF to simultaneously merge information from different types of experimental instruments. Every 10 min, the prior state estimate is updated with size-segregating measurements from Differential Mobility Particle Sizer (DMPS) and Aerodynamic Particle Sizer (APS) as well as integrating measurements from a nephelometer. Error covariances are approximate in our EKF implementation. The observation operator assumes a constant particle density and refractive index. The state estimates are compared to particle size distributions that are a composite of DMPS and APS measurements. The impact of each instrument on the size distribution estimate is studied. Kalman Filtering of DMPS and APS yielded a temporally consistent state estimate. This state estimate is continuous over the overlapping size range of DMPS and APS. Inclusion of the integrating measurements further reduces the effect of measurement noise. Even with the present approximations, EKF is shown to be a very promising method to estimate particle size distribution with observations from different types of instruments.
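
    At each update, the prior state (here, the discretised size distribution) is corrected with new observations through the standard Kalman measurement update. The sketch below shows only that generic linear(ised) update step; the observation matrix, covariances, and values are placeholder assumptions, not the DMPS/APS/nephelometer operators of the study.

    ```python
    import numpy as np

    def kalman_update(x_prior, P_prior, y, H, R):
        """Measurement update: combine prior state x_prior (covariance P_prior)
        with observation y = H x + noise (covariance R)."""
        S = H @ P_prior @ H.T + R                    # innovation covariance
        K = P_prior @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_post = x_prior + K @ (y - H @ x_prior)
        P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
        return x_post, P_post

    # Toy example: 3-bin "size distribution", 2 observations
    x_prior = np.array([100.0, 50.0, 10.0])
    P_prior = np.diag([400.0, 100.0, 25.0])
    H = np.array([[1.0, 0.0, 0.0],                   # instrument 1 sees bin 1 only
                  [0.0, 1.0, 1.0]])                  # instrument 2 integrates bins 2+3
    R = np.diag([25.0, 16.0])
    y = np.array([110.0, 70.0])

    x_post, P_post = kalman_update(x_prior, P_prior, y, H, R)
    print(x_post)
    ```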

  3. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    Science.gov (United States)

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  4. Permeability estimation from NMR diffusion measurements in reservoir rocks.

    Science.gov (United States)

    Balzarini, M; Brancolini, A; Gossenberg, P

    1998-01-01

    It is well known that in restricted geometries, such as in porous media, the apparent diffusion coefficient (D) of the fluid depends on the observation time. From the time dependence of D, interesting information can be derived to characterise geometrical features of the porous media that are relevant in oil industry applications. In particular, the permeability can be related to the surface-to-volume ratio (S/V), estimated from the short time behaviour of D(t), and to the connectivity of the pore space, which is probed by the long time behaviour of D(t). The stimulated spin-echo pulse sequence, with pulsed magnetic field gradients, has been used to measure the diffusion coefficients on various homogeneous and heterogeneous sandstone samples. It is shown that the petrophysical parameters obtained by our measurements are in good agreement with those yielded by conventional laboratory techniques (gas permeability and electrical conductivity). Although the diffusing time is limited by T1, eventually preventing an observation of the real asymptotic behaviour, and the surface-to-volume ratio measured by nuclear magnetic resonance is different from the value obtained by BET because of the different length scales probed, the measurement remains reliable and not very time-consuming.
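
    At short observation times, the restricted diffusion coefficient follows the Mitra expansion D(t)/D0 ≈ 1 − (4/(9√π))·√(D0·t)·(S/V), so S/V can be read off a straight-line fit of 1 − D(t)/D0 against √(D0·t). A minimal sketch of that fit, using synthetic values rather than measured data:

    ```python
    import numpy as np

    def surface_to_volume(t, d_t, d0):
        """Fit the Mitra short-time expansion
        D(t)/D0 = 1 - (4 / (9*sqrt(pi))) * sqrt(D0*t) * (S/V)
        and return the estimated surface-to-volume ratio S/V [1/m]."""
        x = np.sqrt(d0 * np.asarray(t))
        y = 1.0 - np.asarray(d_t) / d0
        slope = np.sum(x * y) / np.sum(x * x)        # least-squares line through the origin
        return slope * 9.0 * np.sqrt(np.pi) / 4.0

    # Hypothetical data: bulk D0 of water ~ 2.3e-9 m^2/s, times in seconds
    d0 = 2.3e-9
    t = np.array([1e-3, 2e-3, 5e-3, 10e-3])
    s_over_v_true = 1.0e5                            # 1/m, i.e. ~10 micron pores
    d_t = d0 * (1.0 - 4.0 / (9.0 * np.sqrt(np.pi)) * np.sqrt(d0 * t) * s_over_v_true)

    print(f"estimated S/V ~ {surface_to_volume(t, d_t, d0):.3e} 1/m")
    ```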

  5. Estimation of Uncertainty in Aerosol Concentration Measured by Aerosol Sampling System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Chan; Song, Yong Jae; Jung, Woo Young; Lee, Hyun Chul; Kim, Gyu Tae; Lee, Doo Yong [FNC Technology Co., Yongin (Korea, Republic of)

    2016-10-15

    FNC Technology Co., Ltd has developed test facilities for aerosol generation, mixing, sampling and measurement under high pressure and high temperature conditions. The aerosol generation system is connected to the aerosol mixing system, which injects a SiO₂/ethanol mixture. In the sampling system, a glass fiber membrane filter has been used to measure the average mass concentration. Based on the experimental results using a main carrier gas of a steam and air mixture, the uncertainty estimation of the sampled aerosol concentration was performed by applying the Gaussian error propagation law. FNC Technology Co., Ltd. has developed experimental facilities for aerosol measurement under high pressure and high temperature. The purpose of the tests is to develop a commercial test module for an aerosol generation, mixing and sampling system applicable to the environmental industry and to safety related systems in nuclear power plants. For the uncertainty calculation of the aerosol concentration, the value of the sampled aerosol concentration is not measured directly, but must be calculated from other quantities. The uncertainty of the sampled aerosol concentration is a function of the flow rates of air and steam, the sampled mass, the sampling time, the condensed steam mass and their absolute errors. These variables propagate to the combination of variables in the function. Using operating parameters and their single errors from the aerosol test cases performed at FNC, the uncertainty of the aerosol concentration evaluated by the Gaussian error propagation law is less than 1%. The results of the uncertainty estimation in the aerosol sampling system will be utilized for the system performance data.
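
    Since the sampled concentration is computed from other measured quantities, first-order Gaussian error propagation combines the independent relative errors in quadrature. The sketch below illustrates only that generic propagation step for a simple mass-per-volume form C = m/V with placeholder numbers; it is not the full multi-variable function used in the tests.

    ```python
    import math

    def concentration_uncertainty(mass, sigma_mass, volume, sigma_volume):
        """First-order Gaussian error propagation for C = m / V with independent errors:
        sigma_C = C * sqrt((sigma_m / m)^2 + (sigma_V / V)^2)."""
        c = mass / volume
        rel = math.sqrt((sigma_mass / mass) ** 2 + (sigma_volume / volume) ** 2)
        return c, c * rel

    # Hypothetical sampled filter mass [mg] and sampled gas volume [m^3]
    c, sigma_c = concentration_uncertainty(mass=12.0, sigma_mass=0.05,
                                            volume=0.30, sigma_volume=0.002)
    print(f"C = {c:.1f} mg/m^3 +/- {sigma_c:.2f} ({100 * sigma_c / c:.2f}%)")
    ```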

  6. An Implementation of Error Minimization Position Estimate in Wireless Inertial Measurement Unit using Modification ZUPT

    Directory of Open Access Journals (Sweden)

    Adytia Darmawan

    2016-12-01

    Position estimation using a WIMU (Wireless Inertial Measurement Unit) is one of the emerging technologies in the field of indoor positioning systems. A WIMU can detect movement and does not depend on GPS signals. The position is then estimated using a modified ZUPT (Zero Velocity Update) method that uses Filter Magnitude Acceleration (FMA), Variance Magnitude Acceleration (VMA) and Angular Rate (AR) estimation. The performance of this method was evaluated on a six-legged robot navigation system. Experimental results show that the combination of VMA-AR gives the best position estimation.
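
    The essence of a zero-velocity update is to detect rest phases and reset the integrated velocity there, so that integration drift cannot accumulate. The sketch below uses a crude 1-D rest detector (all recent acceleration samples near zero) as a stand-in for the paper's FMA/VMA/AR detection; the window length, threshold, and data are placeholder assumptions.

    ```python
    import numpy as np

    def zupt_positions(acc, dt, window=10, rest_threshold=0.05):
        """Integrate 1-D acceleration to position, applying a zero-velocity update
        (velocity reset) whenever all recent samples look like rest (|a| ~ 0)."""
        vel = np.zeros(len(acc))
        pos = np.zeros(len(acc))
        for i in range(1, len(acc)):
            lo = max(0, i - window)
            at_rest = np.all(np.abs(acc[lo:i + 1]) < rest_threshold)
            vel[i] = 0.0 if at_rest else vel[i - 1] + acc[i] * dt
            pos[i] = pos[i - 1] + vel[i] * dt
        return pos

    # Placeholder accelerometer signal: rest, a 1-s motion burst, then rest again
    dt = 0.01
    t_burst = np.linspace(0.0, 1.0, 100)
    acc = np.concatenate([np.zeros(100), 2.0 * np.sin(np.pi * t_burst), np.zeros(100)])
    acc += 0.01 * np.random.default_rng(0).standard_normal(acc.size)
    print(f"final position ~ {zupt_positions(acc, dt)[-1]:.2f} m")
    ```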

  7. Comparative study of speed estimators with highly noisy measurement signals for Wind Energy Generation Systems

    Energy Technology Data Exchange (ETDEWEB)

    Carranza, O. [Escuela Superior de Computo, Instituto Politecnico Nacional, Av. Juan de Dios Batiz S/N, Col. Lindavista, Del. Gustavo A. Madero 7738, D.F. (Mexico); Figueres, E.; Garcera, G. [Grupo de Sistemas Electronicos Industriales, Departamento de Ingenieria Electronica, Universidad Politecnica de Valencia, Camino de Vera S/N, 7F, 46020 Valencia (Spain); Gonzalez, L.G. [Departamento de Ingenieria Electronica, Universidad de los Andes, Merida (Venezuela)

    2011-03-15

    This paper presents a comparative study of several speed estimators used to implement a sensorless speed control loop in Wind Energy Generation Systems driven by power factor correction three-phase boost rectifiers. This rectifier topology reduces the low frequency harmonic content of the generator currents and, consequently, the generator power factor approaches unity whereas undesired vibrations of the mechanical system decrease. For implementation of the speed estimators, the compared techniques start from the measurement of electrical variables like currents and voltages, which contain low frequency harmonics of the fundamental frequency of the wind generator, as well as switching frequency components due to the boost rectifier. In this noisy environment, the performance of the following estimation techniques has been analyzed: Synchronous Reference Frame Phase Locked Loop, speed reconstruction by measuring the dc current and voltage of the rectifier, and speed estimation by means of both an Extended Kalman Filter and a Linear Kalman Filter. (author)

  8. Estimation of road profile variability from measured vehicle responses

    Science.gov (United States)

    Fauriat, W.; Mattrand, C.; Gayton, N.; Beakou, A.; Cembrzynski, T.

    2016-05-01

    When assessing the statistical variability of fatigue loads acting throughout the life of a vehicle, the question of the variability of road roughness naturally arises, as both quantities are strongly related. For car manufacturers, gathering information on the environment in which vehicles evolve is a long and costly but necessary process to adapt their products to durability requirements. In the present paper, a data processing algorithm is proposed in order to estimate the road profiles covered by a given vehicle, from the dynamic responses measured on this vehicle. The algorithm based on Kalman filtering theory aims at solving a so-called inverse problem, in a stochastic framework. It is validated using experimental data obtained from simulations and real measurements. The proposed method is subsequently applied to extract valuable statistical information on road roughness from an existing load characterisation campaign carried out by Renault within one of its markets.

  9. Uncertainty in CH4 and N2O emission estimates from a managed fen meadow using EC measurements

    International Nuclear Information System (INIS)

    Kroon, P.S.; Hensen, A.; Van 't Veen, W.H.; Vermeulen, A.T.; Jonker, H.

    2009-02-01

    The overall uncertainty in annual flux estimates derived from chamber measurements may be as high as 50% due to the temporal and spatial variability in the fluxes. As even a large number of chamber plots still cover typically less than 1% of the total field area, the field-scale integrated emission necessarily remains a matter of speculation. High frequency micrometeorological methods are a good option for obtaining integrated estimates on a hectare scale with a continuous coverage in time. Instrumentation is now becoming available that meets the requirements for CH4 and N2O eddy covariance (EC) measurements. A system consisting of a quantum cascade laser (QCL) spectrometer and a sonic anemometer has recently been proven to be suitable for performing EC measurements. This study analyses the EC flux measurements of CH4 and N2O and their corrections, such as calibration, Webb-correction, and corrections for high and low frequency losses, and assesses the magnitude of the uncertainties associated with the precision of the measurement instruments, the measurement set-up and the methodology. The uncertainty of a single EC flux measurement and of daily, monthly and 3-monthly average EC fluxes is estimated. In addition, the cumulative emission of C-CH4 and N-N2O and their uncertainties are determined over several fertilizing events at a dairy farm site in the Netherlands. These fertilizing events are selected from the continuous EC flux measurements from August 2006 to September 2008. The EC flux uncertainties are compared with the overall uncertainty in annual flux estimates derived from chamber measurements. It will be shown that EC flux measurements can decrease the overall uncertainty in annual flux estimates.

  10. Uncertainty in CH4 and N2O emission estimates from a managed fen meadow using EC measurements

    Energy Technology Data Exchange (ETDEWEB)

    Kroon, P.S.; Hensen, A.; Van ' t Veen, W.H.; Vermeulen, A.T. [ECN Biomass, Coal and Environment, Petten (Netherlands); Jonker, H. [Delft University of Technology, Delft (Netherlands)

    2009-02-15

    The overall uncertainty in annual flux estimates derived from chamber measurements may be as high as 50% due to the temporal and spatial variability in the fluxes. As even a large number of chamber plots still cover typically less than 1% of the total field area, the field-scale integrated emission necessarily remains a matter of speculation. High frequency micrometeorological methods are a good option for obtaining integrated estimates on a hectare scale with a continuous coverage in time. Instrumentation is now becoming available that meets the requirements for CH4 and N2O eddy covariance (EC) measurements. A system consisting of a quantum cascade laser (QCL) spectrometer and a sonic anemometer has recently been proven to be suitable for performing EC measurements. This study analyses the EC flux measurements of CH4 and N2O and their corrections, such as calibration, Webb-correction, and corrections for high and low frequency losses, and assesses the magnitude of the uncertainties associated with the precision of the measurement instruments, the measurement set-up and the methodology. The uncertainty of a single EC flux measurement and of daily, monthly and 3-monthly average EC fluxes is estimated. In addition, the cumulative emission of C-CH4 and N-N2O and their uncertainties are determined over several fertilizing events at a dairy farm site in the Netherlands. These fertilizing events are selected from the continuous EC flux measurements from August 2006 to September 2008. The EC flux uncertainties are compared with the overall uncertainty in annual flux estimates derived from chamber measurements. It will be shown that EC flux measurements can decrease the overall uncertainty in annual flux estimates.

  11. A Comparison of Two Measures of HIV Diversity in Multi-Assay Algorithms for HIV Incidence Estimation

    Science.gov (United States)

    Cousins, Matthew M.; Konikoff, Jacob; Sabin, Devin; Khaki, Leila; Longosz, Andrew F.; Laeyendecker, Oliver; Celum, Connie; Buchbinder, Susan P.; Seage, George R.; Kirk, Gregory D.; Moore, Richard D.; Mehta, Shruti H.; Margolick, Joseph B.; Brown, Joelle; Mayer, Kenneth H.; Kobin, Beryl A.; Wheeler, Darrell; Justman, Jessica E.; Hodder, Sally L.; Quinn, Thomas C.; Brookmeyer, Ron; Eshleman, Susan H.

    2014-01-01

    Background: Multi-assay algorithms (MAAs) can be used to estimate HIV incidence in cross-sectional surveys. We compared the performance of two MAAs that use HIV diversity as one of four biomarkers for analysis of HIV incidence. Methods: Both MAAs included two serologic assays (LAg-Avidity assay and BioRad-Avidity assay), HIV viral load, and an HIV diversity assay. HIV diversity was quantified using either a high resolution melting (HRM) diversity assay that does not require HIV sequencing (HRM score for a 239 base pair env region) or sequence ambiguity (the percentage of ambiguous bases in a 1,302 base pair pol region). Samples were classified as MAA positive (likely from individuals with recent HIV infection) if they met the criteria for all of the assays in the MAA. The following performance characteristics were assessed: (1) the proportion of samples classified as MAA positive as a function of duration of infection, (2) the mean window period, (3) the shadow (the time period before sample collection that is being assessed by the MAA), and (4) the accuracy of cross-sectional incidence estimates for three cohort studies. Results: The proportion of samples classified as MAA positive as a function of duration of infection was nearly identical for the two MAAs. The mean window period was 141 days for the HRM-based MAA and 131 days for the sequence ambiguity-based MAA. The shadows for both MAAs were ... Both MAAs yielded cross-sectional HIV incidence estimates that were very similar to longitudinal incidence estimates based on HIV seroconversion. Conclusions: MAAs that include the LAg-Avidity assay, the BioRad-Avidity assay, HIV viral load, and HIV diversity can provide accurate HIV incidence estimates. Sequence ambiguity measures obtained using a commercially-available HIV genotyping system can be used as an alternative to HRM scores in MAAs for cross-sectional HIV incidence estimation. PMID:24968135

  12. Setting the light conditions for measuring root transparency for age-at-death estimation methods.

    Science.gov (United States)

    Adserias-Garriga, Joe; Nogué-Navarro, Laia; Zapico, Sara C; Ubelaker, Douglas H

    2018-03-01

    Age-at-death estimation is one of the main goals in forensic identification, being an essential parameter to determine the biological profile and narrowing the possibility of identification in cases involving missing persons and unidentified bodies. The study of dental tissues has long been considered a proper tool for age estimation, with several age estimation methods based on them. Dental age estimation methods can be divided into three categories: tooth formation and development, post-formation changes, and histological changes. While tooth formation and growth changes are important for fetal and infant consideration, when the end of dental and skeletal growth is achieved, post-formation or biochemical changes can be applied. Lamendin et al. in J Forensic Sci 37:1373-1379, (1992) developed an adult age estimation method based on root transparency and periodontal recession. The regression formula demonstrated its accuracy for 40- to 70-year-old individuals. Later on, Prince and Ubelaker in J Forensic Sci 47(1):107-116, (2002) evaluated the effects of ancestry and sex and incorporated root height into the equation, developing four new regression formulas for males and females of African and European ancestry. Even though root transparency is a key element in the method, the conditions for measuring this element have not been established. The aim of the present study is to set the light conditions, measured in lumens, that offer greater accuracy when applying the Lamendin et al. method as modified by Prince and Ubelaker. The results must also be taken into account in the application of other age estimation methodologies that use root transparency to estimate age-at-death.
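
    For reference, the Lamendin et al. regression is commonly quoted as Age ≈ 0.18·P + 0.42·T + 25.53, with P (periodontosis height) and T (root transparency height) expressed as percentages of root height; the coefficients in the sketch below follow that commonly cited form and should be checked against the original paper before use. The measurement values in the example are invented.

    ```python
    def lamendin_age(periodontosis_mm, transparency_mm, root_height_mm):
        """Adult age estimate from the commonly cited Lamendin et al. (1992) regression:
        Age = 0.18 * P + 0.42 * T + 25.53,
        with P and T the periodontosis and transparency heights as % of root height."""
        p = periodontosis_mm * 100.0 / root_height_mm
        t = transparency_mm * 100.0 / root_height_mm
        return 0.18 * p + 0.42 * t + 25.53

    # Hypothetical single-rooted tooth measurements [mm]
    print(round(lamendin_age(periodontosis_mm=3.0, transparency_mm=6.0,
                             root_height_mm=13.0), 1))   # ~ 49.1 years
    ```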

  13. Measuring what's missing

    DEFF Research Database (Denmark)

    Jones, Edward Samuel

    2016-01-01

    mass. Using the UNDP's Human Development Index, the empirical performance of such coverage metrics is compared to alternative measures of convergence. The former are advantageous -- they provide probabilistic estimates of simulation coverage and permit calculation of strict bounds on estimates...

  14. Estimation of signal intensity for online measurement X-ray pinhole camera

    International Nuclear Information System (INIS)

    Dong Jianjun; Liu Shenye; Yang Guohong; Yu Yanning

    2009-01-01

    The signal intensity was estimated for an online-measurement X-ray pinhole camera with a CCD as the recording equipment. The X-ray signal intensity counts after attenuation by Be filters of varied thickness and by flat mirrors of different materials were estimated using the energy spectrum of a certain laser prototype and the quantum efficiency curve of the PI-SX1300 CCD camera. The calculated results indicate that Be filters no thicker than 200 μm can only reduce the signal intensity by one order of magnitude, as can an Au flat mirror with a 3 degree incident angle and Ni, C and Si flat mirrors with a 5 degree incident angle, but for both attenuation methods the signal intensity counts remain beyond the saturation counts of the CCD camera. We also calculated the attenuation of signal intensity for Be filters of different thickness combined with flat mirrors; the results indicate that the combination of a Be filter with a thickness between 20 and 40 μm and an Au flat mirror with a 3 degree incident angle or a Ni flat mirror with a 5 degree incident angle is a good choice for the attenuation of the signal intensity. (authors)

  15. Water storage change estimation from in situ shrinkage measurements of clay soils

    NARCIS (Netherlands)

    Brake, te B.; Ploeg, van der M.J.; Rooij, de G.H.

    2012-01-01

    Water storage in the unsaturated zone is a major determinant of the hydrological behaviour of the soil, but methods to quantify soil water storage are limited. The objective of this study is to assess the applicability of clay soil surface elevation change measurements to estimate soil water storage

  16. The Total Deviation Index estimated by Tolerance Intervals to evaluate the concordance of measurement devices

    Directory of Open Access Journals (Sweden)

    Ascaso Carlos

    2010-04-01

    Background: In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices of normally-distributed measurements, and describe its utility to evaluate inter- and intra-rater agreement if more than one reading per subject is available for each device. Methods: We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices, and thereafter we derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds of the coverage probability. Results: The approach is illustrated in a real case example where the agreement between two instruments, a handle mercury sphygmomanometer device and an OMRON 711 automatic device, is assessed in a sample of 384 subjects where measures of systolic blood pressure were taken twice by each device. A simulation study procedure is implemented to evaluate and compare the accuracy of the approach to two already established methods, showing that the TI approximation produces accurate empirical confidence levels which are reasonably close to the nominal confidence level. Conclusions: The method proposed is straightforward since the TDI estimate is derived directly from a probability interval of a normally-distributed variable in its original scale, without further transformations. Thereafter, a natural way of making inferences about this estimate is to derive the appropriate TI. Constructions of TI based on normal populations are implemented in most standard statistical packages, thus making it simpler for any practitioner to implement our proposal to assess agreement.
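
    Under a normal model for the paired differences, the TDI for a proportion p is the value t with P(−t ≤ D ≤ t) = p. The sketch below computes only that point estimate from sample moments on placeholder data; it does not reproduce the paper's tolerance-interval inference, which adds a confidence bound on this quantity.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    def tdi_normal(differences, p=0.90):
        """Point estimate of the total deviation index TDI(p): the value t such that
        a proportion p of paired differences D ~ N(mu, sigma^2) fall within [-t, t]."""
        d = np.asarray(differences, dtype=float)
        mu, sigma = d.mean(), d.std(ddof=1)

        def coverage_minus_p(t):
            return norm.cdf((t - mu) / sigma) - norm.cdf((-t - mu) / sigma) - p

        upper = abs(mu) + 10 * sigma      # bracket that certainly contains the root
        return brentq(coverage_minus_p, 1e-9, upper)

    # Hypothetical paired differences between two devices [mmHg]
    rng = np.random.default_rng(2)
    diffs = rng.normal(1.5, 4.0, size=200)
    print(f"TDI(0.90) ~ {tdi_normal(diffs):.1f} mmHg")
    ```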

  17. Comparing computing formulas for estimating concentration ratios

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Simpson, J.C.

    1984-03-01

    This paper provides guidance on the choice of computing formulas (estimators) for estimating concentration ratios and other ratio-type measures of radionuclides and other environmental contaminant transfers between ecosystem components. Mathematical expressions for the expected value of three commonly used estimators (arithmetic mean of ratios, geometric mean of ratios, and the ratio of means) are obtained when the multivariate lognormal distribution is assumed. These expressions are used to explain why these estimators will not in general give the same estimate of the average concentration ratio. They illustrate that the magnitude of the discrepancies depends on the magnitude of measurement biases, and on the variances and correlations associated with spatial heterogeneity and measurement errors. This paper also reports on a computer simulation study that compares the accuracy of eight computing formulas for estimating a ratio relationship that is constant over time and/or space. Statistical models appropriate for both controlled spiking experiments and observational field studies for either normal or lognormal distributions are considered. 24 references, 15 figures, 7 tables

  18. A continuous-time adaptive particle filter for estimations under measurement time uncertainties with an application to a plasma-leucine mixed effects model.

    Science.gov (United States)

    Krengel, Annette; Hauth, Jan; Taskinen, Marja-Riitta; Adiels, Martin; Jirstrand, Mats

    2013-01-19

When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. In this kind of inference, uncertainties in the time when the measurements were taken are often neglected, but especially in applications from the life sciences, this kind of error can considerably influence the estimation results. As an example in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is coming to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are by themselves not ready to handle uncertainties in measurement times. In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduce mixed effects in the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example
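The following is a minimal sketch of the underlying idea of treating measurement times as uncertain within a bootstrap particle filter; it is not the authors' MTU-PF (no adaptive stepsize, no mixed effects), and the state model, noise levels and helper names are illustrative assumptions.

```python
# Minimal sketch: bootstrap particle filter in which each particle hypothesises
# its own true measurement time (illustrative model, not the MTU-PF itself).
import numpy as np

rng = np.random.default_rng(2)

def propagate(x, dt, a=0.5, q=0.2):
    """Euler step of a simple mean-reverting state model (illustrative)."""
    return x - a * x * dt + np.sqrt(q * dt) * rng.standard_normal(np.shape(x))

def mtu_particle_filter(nominal_times, observations, n_particles=2000,
                        time_sd=0.05, obs_sd=0.3):
    x = rng.standard_normal(n_particles)                 # initial particle cloud
    t_now = 0.0
    estimates = []
    for t_nom, y in zip(nominal_times, observations):
        # each particle assumes a jittered true measurement time
        t_meas = np.maximum(t_nom + time_sd * rng.standard_normal(n_particles), t_now)
        x = propagate(x, t_meas - t_now)                 # particle-specific time steps
        w = np.exp(-0.5 * ((y - x) / obs_sd) ** 2)       # Gaussian observation likelihood
        w /= w.sum()
        estimates.append(np.sum(w * x))
        x = x[rng.choice(n_particles, n_particles, p=w)] # resample
        t_now = t_nom                                    # re-sync to the nominal grid
    return np.array(estimates)

times = np.linspace(0.2, 5.0, 25)
obs = np.exp(-0.5 * times) + 0.3 * rng.standard_normal(times.size)
print(mtu_particle_filter(times, obs)[:5])
```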

  19. Comparison of Measured and Estimated CT Organ Doses for Modulated and Fixed Tube Current:: A Human Cadaver Study.

    Science.gov (United States)

    Padole, Atul; Deedar Ali Khawaja, Ranish; Otrakji, Alexi; Zhang, Da; Liu, Bob; Xu, X George; Kalra, Mannudeep K

    2016-05-01

The aim of this study was to compare the directly measured and the estimated computed tomography (CT) organ doses obtained from commercial radiation dose-tracking (RDT) software for CT performed with modulated tube current or automatic exposure control (AEC) technique and fixed tube current (mAs). With institutional review board (IRB) approval, ionization chambers were surgically implanted in a human cadaver (88 years old, male, 68 kg) in six locations: the liver, stomach, colon, left kidney, small intestine, and urinary bladder. The cadaver was scanned with a routine abdomen pelvis protocol on a 128-slice, dual-source multidetector computed tomography (MDCT) scanner using both AEC and fixed mAs. The effective and quality reference mAs of 100, 200, and 300 were used for AEC and fixed mAs, respectively. Scanning was repeated three times for each setting, and measured and estimated organ doses (from RDT software) were recorded (N = 3*3*2 = 18). Mean CTDIvol for AEC and fixed mAs were 4, 8, 13 mGy and 7, 14, 21 mGy, respectively. Most estimated organ doses were significantly greater (P < 0.01) than the measured organ doses for both AEC and fixed mAs. At AEC, the mean estimated organ doses (for six organs) were 14.7 mGy compared to mean measured organ doses of 12.3 mGy. Similarly, at fixed mAs, the mean estimated organ doses (for six organs) were 24 mGy compared to measured organ doses of 22.3 mGy. The differences between the measured and estimated organ doses were higher for the AEC technique compared to fixed mAs for most organs (P < 0.01). Most CT organ doses estimated from RDT software are greater than directly measured organ doses, particularly when the AEC technique is used for CT scanning. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  20. Facing a Problem of Electrical Energy Quality in Ship Networks-measurements, Estimation, Control

    Institute of Scientific and Technical Information of China (English)

    Tomasz Tarasiuk; Janusz Mindykowski; Xiaoyan Xu

    2003-01-01

In this paper, electrical energy quality and its indices in ship electric networks are introduced, especially the meaning of electrical energy quality terms in voltage and active and reactive power distribution indices. Methods for measuring marine electrical energy indices are then introduced in detail, and a microprocessor measurement-diagnosis system with measurement and control functions is designed. Afterwards, estimation and control of electrical power quality in marine electrical power networks are introduced. Finally, based on the existing methods for measurement and control of electrical power quality in ship power networks, improvements to these methods are proposed.

  1. The Effectiveness of Using Limited Gauge Measurements for Bias Adjustment of Satellite-Based Precipitation Estimation over Saudi Arabia

    Science.gov (United States)

    Alharbi, Raied; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan

    2018-01-01

Precipitation is a key input variable for hydrological and climate studies. Rain gauges are capable of providing reliable precipitation measurements at point scale. However, the uncertainty of rain measurements increases when the rain gauge network is sparse. Satellite-based precipitation estimations appear to be an alternative source of precipitation measurements, but they are influenced by systematic bias. In this study, a method for removing the bias from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) over a region where the rain gauge network is sparse is investigated. The method consists of monthly empirical quantile mapping, climate classification, and inverse-distance weighting. Daily PERSIANN-CCS is selected to test the capability of the method for removing the bias over Saudi Arabia during the period 2010 to 2016. The first six years (2010-2015) are used for calibration and 2016 is used for validation. The results show that the yearly correlation coefficient was improved by 12% and the yearly mean bias was reduced by 93% during the validation year. The root mean square error was reduced by 73% during the validation year. The correlation coefficient, the mean bias, and the root mean square error show that the proposed method removes the bias in PERSIANN-CCS effectively, so that the method can be applied to other regions where the rain gauge network is sparse.
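A minimal sketch of the empirical quantile-mapping step (only one of the three components of the method; climate classification and inverse-distance weighting are omitted) might look as follows, assuming co-located daily satellite and gauge series for the calibration period; all arrays are synthetic stand-ins rather than PERSIANN-CCS or gauge data.

```python
# Minimal sketch of empirical quantile mapping for bias adjustment of daily
# satellite precipitation against gauge data (illustrative arrays only).
import numpy as np

def quantile_map(sat_cal, gauge_cal, sat_new):
    """Map new satellite values through the gauge CDF at matching quantiles."""
    sat_sorted = np.sort(sat_cal)
    gauge_sorted = np.sort(gauge_cal)
    # empirical non-exceedance probability of each new satellite value
    probs = np.searchsorted(sat_sorted, sat_new, side="right") / len(sat_sorted)
    probs = np.clip(probs, 0.0, 1.0)
    return np.quantile(gauge_sorted, probs)

rng = np.random.default_rng(3)
gauge_cal = rng.gamma(0.8, 6.0, 500)                      # calibration gauge rainfall (mm/day)
sat_cal = np.clip(0.6 * gauge_cal + rng.normal(0, 1.0, 500), 0, None)  # biased satellite
corrected = quantile_map(sat_cal, gauge_cal, sat_cal[:10])
print(corrected)
```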

  2. iPad-assisted measurements of duration estimation in psychiatric patients and healthy control subjects.

    Directory of Open Access Journals (Sweden)

    Irene Preuschoff

Handheld devices with touchscreen controls have become widespread in the general population. In this study, we examined the duration estimates (explicit timing) made by patients in a major general hospital and healthy control subjects using a custom iPad application. We methodically assessed duration estimates using this novel device. We found that both psychiatric and non-psychiatric patients significantly overestimated time periods compared with healthy control subjects, who estimated elapsed time very precisely. The use of touchscreen-based methodologies can provide valuable information about patients.

  3. Medium change based image estimation from application of inverse algorithms to coda wave measurements

    Science.gov (United States)

    Zhan, Hanyu; Jiang, Hanwan; Jiang, Ruinian

    2018-03-01

Perturbations act as extra scatterers and cause coda waveform distortions; thus, coda waves, with their long propagation times and travel paths, are sensitive to micro-defects in strongly heterogeneous media such as concrete. In this paper, we apply varied external loads to a life-size concrete slab which contains multiple existing micro-cracks, and a set of sources and receivers is installed to collect coda wave signals. The waveform decorrelation coefficients (DC) at different loads are calculated for all available source-receiver pair measurements. Inversions of the DC results are then applied to estimate the associated distribution density values in three-dimensional regions through a kernel sensitivity model and least-squares algorithms, which leads to images indicating the positions of the micro-cracks. This work provides an efficient, non-destructive approach to detect internal defects and damage in large concrete structures.

  4. Comparison of a mobile application to estimate percentage body fat to other non-laboratory based measurements

    Directory of Open Access Journals (Sweden)

    Shaw Matthew P.

    2017-02-01

Study aim: The measurement of body composition is important from a population perspective as it is a variable associated with a person's health, and also from a sporting perspective as it can be used to evaluate training. This study aimed to examine the reliability of a mobile application that estimates body composition by digitising a two-dimensional image. Materials and methods: Thirty participants (15 men and 15 women) volunteered to have their percentage body fat (%BF) estimated via three different methods (skinfold measurements, SFM; bio-electrical impedance, BIA; LeanScreenTM mobile application, LSA). Intra-method reproducibility was assessed using intra-class correlation coefficients (ICC), coefficient of variation (CV) and typical error of measurement (TEM). The average measurement for each method was also compared. Results: There were no significant differences between the methods for estimated %BF (p = 0.818) and the reliability of each method as assessed via ICC was good (≥0.974). However, the absolute reproducibility, as measured by CV and TEM, was much better in SFM and BIA (≤1.07 and ≤0.37, respectively) compared with LSA (CV 6.47, TEM 1.6). Conclusion: LSA may offer an alternative to other field-based measures for practitioners; however, individual variance should be considered to develop an understanding of the minimal worthwhile change, as it may not be suitable for a one-off measurement.
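As a small illustration, the repeatability statistics named above can be computed from two repeated %BF estimates per participant roughly as follows; the TEM formula used (square root of the mean squared trial difference divided by two) and the sample values are assumptions for illustration.

```python
# Minimal sketch of typical error of measurement (TEM) and coefficient of
# variation (CV) from two repeated %BF estimates per participant.
import numpy as np

def tem_and_cv(trial1, trial2):
    trial1, trial2 = np.asarray(trial1, float), np.asarray(trial2, float)
    diff = trial1 - trial2
    tem = np.sqrt(np.sum(diff ** 2) / (2 * len(diff)))    # typical error of measurement
    grand_mean = np.mean(np.concatenate([trial1, trial2]))
    cv = 100.0 * tem / grand_mean                          # CV expressed in %
    return tem, cv

trial1 = [22.1, 18.4, 30.2, 25.6, 15.9]   # illustrative %BF, first measurement
trial2 = [21.5, 19.0, 31.1, 25.0, 16.4]   # illustrative %BF, repeat measurement
print(tem_and_cv(trial1, trial2))
```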

  5. Estimating the acute health effects of coarse particulate matter accounting for exposure measurement error.

    Science.gov (United States)

    Chang, Howard H; Peng, Roger D; Dominici, Francesca

    2011-10-01

    In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.

  6. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Directory of Open Access Journals (Sweden)

    Tweya Hannock

    2012-07-01

Background: Routine monitoring of patients on antiretroviral therapy (ART) is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records as compared to estimates of retention based on standardized paper-based or electronic cohort reports. Methods: Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April-June] and quarter three [July-September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter with intent to estimate the number of patients retained on ART. Information on the time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results: Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among

  7. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

Modern observation technology has verified that measurement errors can be proportional to the true values of the measured quantities, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada (2000) on multiplicative error models to analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
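A minimal sketch of one ingredient discussed above, a least-squares adjustment in which the error variance is proportional to the squared signal, solved by iterative reweighting, is shown below; the straight-line model, noise level and variable names are illustrative and do not reproduce the paper's three LS adjustments.

```python
# Minimal sketch: weighted least squares with multiplicative (value-proportional)
# errors, solved by iteratively reweighting with the current fitted values.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(1.0, 10.0, 50)
true = 2.0 + 0.8 * t
y = true * (1.0 + 0.05 * rng.standard_normal(t.size))    # multiplicative noise

A = np.column_stack([np.ones_like(t), t])
beta = np.linalg.lstsq(A, y, rcond=None)[0]               # ordinary LS start
for _ in range(10):
    fitted = A @ beta
    w = 1.0 / fitted ** 2                                 # variance ~ (signal)^2
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
print(beta)
```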

  8. Measuring self-control problems: a structural estimation

    NARCIS (Netherlands)

    Bucciol, A.

    2012-01-01

We adopt a two-stage Method of Simulated Moments to estimate the preference parameters in a life-cycle consumption-saving model augmented with temptation disutility. Our approach estimates the parameters from a comparison of simulated moments with empirical moments observed in the US Survey

  9. Influence of temporally variable groundwater flow conditions on point measurements and contaminant mass flux estimations

    DEFF Research Database (Denmark)

    Rein, Arno; Bauer, S; Dietrich, P

    2009-01-01

Monitoring of contaminant concentrations, e.g., for the estimation of mass discharge or contaminant degradation rates, often is based on point measurements at observation wells. In addition to the problem that point measurements may not be spatially representative, a further complication may ari...

  10. Digital photography provides a fast, reliable, and noninvasive method to estimate anthocyanin pigment concentration in reproductive and vegetative plant tissues.

    Science.gov (United States)

    Del Valle, José C; Gallardo-López, Antonio; Buide, Mª Luisa; Whittall, Justen B; Narbona, Eduardo

    2018-03-01

    Anthocyanin pigments have become a model trait for evolutionary ecology as they often provide adaptive benefits for plants. Anthocyanins have been traditionally quantified biochemically or more recently using spectral reflectance. However, both methods require destructive sampling and can be labor intensive and challenging with small samples. Recent advances in digital photography and image processing make it the method of choice for measuring color in the wild. Here, we use digital images as a quick, noninvasive method to estimate relative anthocyanin concentrations in species exhibiting color variation. Using a consumer-level digital camera and a free image processing toolbox, we extracted RGB values from digital images to generate color indices. We tested petals, stems, pedicels, and calyces of six species, which contain different types of anthocyanin pigments and exhibit different pigmentation patterns. Color indices were assessed by their correlation to biochemically determined anthocyanin concentrations. For comparison, we also calculated color indices from spectral reflectance and tested the correlation with anthocyanin concentration. Indices perform differently depending on the nature of the color variation. For both digital images and spectral reflectance, the most accurate estimates of anthocyanin concentration emerge from anthocyanin content-chroma ratio, anthocyanin content-chroma basic, and strength of green indices. Color indices derived from both digital images and spectral reflectance strongly correlate with biochemically determined anthocyanin concentration; however, the estimates from digital images performed better than spectral reflectance in terms of r 2 and normalized root-mean-square error. This was particularly noticeable in a species with striped petals, but in the case of striped calyces, both methods showed a comparable relationship with anthocyanin concentration. Using digital images brings new opportunities to accurately quantify the
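As a rough illustration of how such indices are derived from digital images, the sketch below computes a "strength of green" style index from RGB pixel values; the exact index definition (green fraction of the RGB sum) is an assumption for illustration and may differ from the authors' formula.

```python
# Minimal sketch of deriving a colour index from RGB pixel values sampled from a
# petal or stem region of a digital photograph (index definition is assumed).
import numpy as np

def strength_of_green(rgb_pixels):
    """rgb_pixels: (N, 3) array of R, G, B values from the sampled tissue region."""
    rgb = np.asarray(rgb_pixels, dtype=float)
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.mean(g / (r + g + b))

# e.g. pixels sampled from a pale-pink petal region
pixels = np.array([[180, 120, 140], [175, 110, 135], [190, 130, 150]])
print(strength_of_green(pixels))
# The index would then be regressed against biochemically determined anthocyanin
# concentration to assess how well it predicts pigment content.
```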

  11. Atmospheric Turbulence Estimates from a Pulsed Lidar

    Science.gov (United States)

    Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.

    2013-01-01

    Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.

  12. Estimations of natural variability between satellite measurements of trace species concentrations

    Science.gov (United States)

    Sheese, P.; Walker, K. A.; Boone, C. D.; Degenstein, D. A.; Kolonjari, F.; Plummer, D. A.; von Clarmann, T.

    2017-12-01

    In order to validate satellite measurements of atmospheric states, it is necessary to understand the range of random and systematic errors inherent in the measurements. On occasions where the measurements do not agree within those errors, a common "go-to" explanation is that the unexplained difference can be chalked up to "natural variability". However, the expected natural variability is often left ambiguous and rarely quantified. This study will look to quantify the expected natural variability of both O3 and NO2 between two satellite instruments: ACE-FTS (Atmospheric Chemistry Experiment - Fourier Transform Spectrometer) and OSIRIS (Optical Spectrograph and Infrared Imaging System). By sampling the CMAM30 (30-year specified dynamics simulation of the Canadian Middle Atmosphere Model) climate chemistry model throughout the upper troposphere and stratosphere at times and geolocations of coincident ACE-FTS and OSIRIS measurements at varying coincidence criteria, height-dependent expected values of O3 and NO2 variability will be estimated and reported on. The results could also be used to better optimize the coincidence criteria used in satellite measurement validation studies.

  13. MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.

    Science.gov (United States)

    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne

    2014-01-01

    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement
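A minimal sketch of the basic shrinkage idea behind reliability adjustment, pulling an observed physician rate toward the group average in proportion to its reliability, is given below; the variance components, case counts and function name are illustrative, and a production implementation would use the hierarchical Bayesian machinery discussed above.

```python
# Minimal sketch: reliability-adjusted performance rate via shrinkage toward the
# provider-group mean (illustrative variance components and case counts).
def reliability_adjust(observed_rate, n_cases, group_mean, between_var, within_var):
    """Shrink toward the group mean in proportion to measurement reliability."""
    reliability = between_var / (between_var + within_var / n_cases)
    return reliability * observed_rate + (1.0 - reliability) * group_mean

# a physician with few cases is shrunk strongly; one with many cases barely moves
print(reliability_adjust(0.12, n_cases=15,  group_mean=0.06,
                         between_var=0.0004, within_var=0.05))
print(reliability_adjust(0.12, n_cases=400, group_mean=0.06,
                         between_var=0.0004, within_var=0.05))
```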

  14. A Robust WLS Power System State Estimation Method Integrating a Wide-Area Measurement System and SCADA Technology

    Directory of Open Access Journals (Sweden)

    Tao Jin

    2015-04-01

With the development of modern society, the scale of power systems has increased rapidly, and their structure and modes of operation are becoming more complex. It is therefore increasingly important for dispatchers to know the state parameters of the power network accurately through state estimation. This paper proposes a robust power system WLS state estimation method integrating a wide-area measurement system (WAMS) and SCADA technology, incorporating phasor measurements and the results of the traditional state estimator in a post-processing estimator, which greatly reduces the scale of the non-linear estimation problem as well as the number of iterations and the processing time per iteration. The paper first analyzes the wide-area state estimation model in detail; then, because least squares does not account for bad data and outliers, it proposes a robust weighted least squares (WLS) method that combines a robust estimation principle with least squares through equivalent weights. The performance assessment is discussed by setting up mathematical models of the distribution network. The proposed method was shown to be accurate and reliable by simulations and experiments.
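A minimal sketch of a robust WLS estimator with Huber-type equivalent weights is shown below on a tiny linear measurement model; it stands in for, and does not reproduce, the hybrid WAMS/SCADA estimator described above, and all matrices and noise levels are illustrative.

```python
# Minimal sketch: robust WLS via iteratively reweighted least squares with
# Huber-type "equivalent weights" that downweight bad data.
import numpy as np

def robust_wls(H, z, sigma, huber_k=1.5, iters=20):
    w = 1.0 / sigma ** 2                                  # base measurement weights
    x = np.linalg.lstsq(H * np.sqrt(w)[:, None], z * np.sqrt(w), rcond=None)[0]
    for _ in range(iters):
        r = (z - H @ x) / sigma                           # standardised residuals
        eq = np.minimum(1.0, huber_k / np.maximum(np.abs(r), 1e-12))
        W = np.diag(w * eq)                               # equivalent weights
        x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
    return x

rng = np.random.default_rng(5)
x_true = np.array([1.0, 0.5])
H = rng.standard_normal((30, 2))                          # linear measurement model
sigma = np.full(30, 0.05)
z = H @ x_true + sigma * rng.standard_normal(30)
z[3] += 2.0                                               # one piece of bad data
print(robust_wls(H, z, sigma))
```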

  15. Bayesian estimation of isotopic age differences

    International Nuclear Information System (INIS)

    Curl, R.L.

    1988-01-01

    Isotopic dating is subject to uncertainties arising from counting statistics and experimental errors. These uncertainties are additive when an isotopic age difference is calculated. If large, they can lead to no significant age difference by classical statistics. In many cases, relative ages are known because of stratigraphic order or other clues. Such information can be used to establish a Bayes estimate of age difference which will include prior knowledge of age order. Age measurement errors are assumed to be log-normal and a noninformative but constrained bivariate prior for two true ages in known order is adopted. True-age ratio is distributed as a truncated log-normal variate. Its expected value gives an age-ratio estimate, and its variance provides credible intervals. Bayesian estimates of ages are different and in correct order even if measured ages are identical or reversed in order. For example, age measurements on two samples might both yield 100 ka with coefficients of variation of 0.2. Bayesian estimates are 22.7 ka for age difference with a 75% credible interval of [4.4, 43.7] ka
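A crude Monte Carlo analogue of the order-constrained estimate, log-normal measurement errors with rejection of samples that violate the known age order, is sketched below using the 100 ka, CV = 0.2 example; it approximates rather than reproduces the paper's analytic truncated log-normal treatment.

```python
# Minimal sketch: order-constrained Bayesian age-difference estimate by rejection
# sampling with log-normal measurement errors (values mirror the 100 ka example).
import numpy as np

rng = np.random.default_rng(6)
measured = (100.0, 100.0)            # ka; sample believed younger listed first
cv = 0.2
sigma_log = np.sqrt(np.log(1.0 + cv ** 2))

n = 200_000
t_young = measured[0] * np.exp(sigma_log * rng.standard_normal(n))
t_old   = measured[1] * np.exp(sigma_log * rng.standard_normal(n))
keep = t_old >= t_young              # stratigraphic order known a priori
diff = (t_old - t_young)[keep]

print("posterior mean age difference (ka):", diff.mean())
print("75% credible interval (ka):", np.percentile(diff, [12.5, 87.5]))
```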

  16. Spacecraft Trajectory Estimation Using a Sampled-Data Extended Kalman Filter with Range-Only Measurements

    National Research Council Canada - National Science Library

    Erwin, R. S; Bernstein, Dennis S

    2005-01-01

.... In this paper we use a sampled-data extended Kalman filter to estimate the trajectory of a target satellite when only range measurements are available from a constellation of orbiting spacecraft...

  17. On the effect of correlated measurements on the performance of distributed estimation

    KAUST Repository

    Ahmed, Mohammed

    2013-06-01

We address the distributed estimation of an unknown scalar parameter in Wireless Sensor Networks (WSNs). Sensor nodes transmit their noisy observations over a multiple-access channel to a Fusion Center (FC) that reconstructs the source parameter. The received signal is corrupted by noise and channel fading, so the FC objective is to minimize the Mean-Square Error (MSE) of the estimate. In this paper, we assume sensor node observations to be correlated with the source signal and correlated with each other as well. The correlation coefficient between two observations decays exponentially with the separation distance. The effect of the distance-based correlation on the estimation quality is demonstrated and compared with the case of fully correlated observations. Moreover, a closed-form expression for the outage probability is derived and its dependency on the correlation coefficients is investigated. Numerical simulations are provided to verify our analytic results. © 2013 IEEE.
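The effect of distance-decaying correlation can be illustrated with a simplified model in which each sensor observes the source plus noise whose covariance decays exponentially with separation; the sketch below computes the best linear unbiased estimate and its MSE under that assumption, and is not the paper's fading multiple-access-channel formulation.

```python
# Minimal sketch: BLUE of a scalar source from sensor observations whose noise
# correlation decays exponentially with inter-sensor distance (illustrative model).
import numpy as np

rng = np.random.default_rng(7)
positions = rng.uniform(0, 100, size=12)                  # sensor locations (m)
d = np.abs(positions[:, None] - positions[None, :])       # pairwise distances
corr_length = 20.0
C = 0.5 ** 2 * np.exp(-d / corr_length)                   # noise covariance

ones = np.ones(len(positions))
Cinv = np.linalg.inv(C)
weights = Cinv @ ones / (ones @ Cinv @ ones)              # BLUE fusion weights
mse = 1.0 / (ones @ Cinv @ ones)                          # estimator MSE

theta = 3.0                                               # unknown source value
obs = theta + rng.multivariate_normal(np.zeros(len(positions)), C)
print("estimate:", weights @ obs, " MSE with correlated noise:", mse)
print("MSE if noise were independent:", 0.5 ** 2 / len(positions))
```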

  18. An Experimental Study of Energy Consumption in Buildings Providing Ancillary Services

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yashen [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Afshari, Sina [University of Michigan; Wolfe, John [University of Michigan; Nazir, Md Salman [University of Michigan; Hiskens, Ian A. [University of Michigan; Johnson, Jeremiah X. [University of Michigan; Mathieu, Johanna L. [University of Michigan; Barnes, Arthur K. [Los Alamos National Laboratory; Geller, Drew A. [Los Alamos National Laboratory; Backhaus, Scott N. [Los Alamos National Laboratory

    2017-10-03

    Heating, ventilation, and air conditioning (HVAC) systems in commercial buildings can provide ancillary services (AS) to the power grid, but by providing AS their energy consumption may increase. This inefficiency is evaluated using round-trip efficiency (RTE), which is defined as the ratio between the decrease and the increase in the HVAC system's energy consumption compared to the baseline consumption as a result of providing AS. This paper evaluates the RTE of a 30,000 m2 commercial building providing AS. We propose two methods to estimate the HVAC system's settling time after an AS event based on temperature and the air flow measurements from the building. Experimental data gathered over a 4-month period are used to calculate the RTE for AS signals of various waveforms, magnitudes, durations, and polarities. The results indicate that the settling time estimation algorithm based on the air flow measurements obtains more accurate results compared to the temperature-based algorithm. Further, we study the impact of the AS signal shape parameters on the RTE and discuss the practical implications of our findings.

  19. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  20. Estimation of daily global solar irradiation by coupling ground measurements of bright sunshine hours to satellite imagery

    International Nuclear Information System (INIS)

    Ener Rusen, Selmin; Hammer, Annette; Akinoglu, Bulent G.

    2013-01-01

In this work, the current version of the satellite-based HELIOSAT method and ground-based linear Ångström–Prescott type relations are used in combination. The first approach is based on a correlation between daily bright sunshine hours (s) and the cloud index (n). In the second approach, a new correlation is proposed between daily solar irradiation and daily data of s and n, based on a physical parameterization. The performance of the two proposed combined models is tested against conventional methods. We also test the use of the obtained correlation coefficients for nearby locations. Our results show that using sunshine duration together with the cloud index is quite satisfactory for estimating daily horizontal global solar irradiation. We propose to use the new approaches to estimate daily global irradiation when bright sunshine hours data are available for the location of interest, provided that some regression coefficients are determined using the data of a nearby station. In addition, if surface data for a nearby location do not exist, then it is recommended to use satellite models like HELIOSAT or the new approaches instead of the Ångström-type models. - Highlights: • Satellite imagery together with surface measurements in solar radiation estimation. • The new coupled and conventional models (satellite and ground-based) are analyzed. • New models result in highly accurate estimation of daily global solar irradiation
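As a minimal sketch of the ground-based ingredient, an Ångström–Prescott type relation H/H0 = a + b·(s/S0) can be calibrated by linear regression on daily data as below; the arrays are illustrative stand-ins for measured clearness-index and sunshine records, and the coupling with the satellite cloud index n is not shown.

```python
# Minimal sketch: calibrating a linear Angstrom-Prescott relation
# H/H0 = a + b * (s/S0) from daily ground data (illustrative arrays).
import numpy as np

rng = np.random.default_rng(8)
s_over_S0 = rng.uniform(0.2, 0.95, 365)                          # relative sunshine duration
H_over_H0 = 0.25 + 0.50 * s_over_S0 + rng.normal(0, 0.03, 365)   # daily clearness index

b, a = np.polyfit(s_over_S0, H_over_H0, 1)                       # slope b, intercept a
print(f"a = {a:.3f}, b = {b:.3f}")

# Daily global irradiation at a nearby site would then be H = H0 * (a + b * s/S0);
# the combined models above additionally bring in the satellite cloud index n.
```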

  1. Can i just check...? Effects of edit check questions on measurement error and survey estimates

    NARCIS (Netherlands)

    Lugtig, Peter; Jäckle, Annette

    2014-01-01

Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to

  2. Method for Estimating Evaporative Potential (IM/CLO) from ASTM Standard Single Wind Velocity Measures

    Science.gov (United States)

    2016-08-10

USARIEM Technical Report T16-14: Method for Estimating Evaporative Potential (IM/CLO) from ASTM Standard Single Wind Velocity Measures. Adam W. Potter, Biophysics and Biomedical Modeling Division, U.S. Army Research Institute of Environmental Medicine.

  3. A Scale Elasticity Measure for Directional Distance Function and its Dual: Theory and DEA Estimation

    OpenAIRE

    Valentin Zelenyuk

    2012-01-01

In this paper we focus on a scale elasticity measure based on the directional distance function for multi-output, multi-input technologies, explore its fundamental properties and show its equivalence with the input-oriented and output-oriented scale elasticity measures. We also establish a duality relationship between the scale elasticity measure based on the directional distance function and the scale elasticity measure based on the profit function. Finally, we discuss the estimation issues of the scale...

  4. Creatinine measurements often yielded false estimates of progression in chronic renal failure

    International Nuclear Information System (INIS)

    Walser, M.; Drew, H.H.; LaFrance, N.D.

    1988-01-01

    In 9 of 22 observation periods (lasting an average of 15 months) in 17 patients with moderate to severe chronic renal failure (GFR 4 to 23 ml/min), rates of progression as estimated from the linear regression on time of 24-hour creatinine clearance (b1) differed significantly from rates of progression as estimated from the regression on time of urinary clearance of 99mTc-DTPA (b2), during all or part of the period of observation. b1 exceeded b2 in four cases and was less than b2 in the other five. Thus there were gradual changes in the fractional tubular secretion of creatinine in individual patients, in both directions. Owing to these changes, measurements of creatinine clearance gave erroneous impressions of the rate or existence of progression during all or a portion of the period of observation in nearly half of these patients. In the 22 studies as a group, using the entire periods of observation, b1 indicated significantly more rapid progression (by 0.18 +/- 0.06 ml/min/month, P less than 0.01) than did b2, and had a significantly greater variance. Measurements of progression based on the rate of change of reciprocal plasma creatinine (multiplied by an average rate of urinary creatinine excretion in each study) were equally misleading, even though less variable. We conclude that sequential creatinine measurements are often misleading as measures of progression and should, when feasible, be replaced by urinary clearance of isotopes in following patients with chronic renal failure

  5. An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements

    Science.gov (United States)

    Kang, D.

    2015-12-01

In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean with a large number of samples is used in place of the ensemble mean. However, in many situations the samples of data are taken from multiple levels. It is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows the MOST. An optimized estimate is obtained when the results yield a minimum cost function, defined as a weighted summation of the error variances at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat using a radiosonde on a tethered balloon, where temperature and relative humidity profiles in the lowest 50 m were made repeatedly in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantify the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.

  6. Estimating Parameters in Physical Models through Bayesian Inversion: A Complete Example

    KAUST Repository

    Allmaras, Moritz

    2013-02-07

    All mathematical models of real-world phenomena contain parameters that need to be estimated from measurements, either for realistic predictions or simply to understand the characteristics of the model. Bayesian statistics provides a framework for parameter estimation in which uncertainties about models and measurements are translated into uncertainties in estimates of parameters. This paper provides a simple, step-by-step example-starting from a physical experiment and going through all of the mathematics-to explain the use of Bayesian techniques for estimating the coefficients of gravity and air friction in the equations describing a falling body. In the experiment we dropped an object from a known height and recorded the free fall using a video camera. The video recording was analyzed frame by frame to obtain the distance the body had fallen as a function of time, including measures of uncertainty in our data that we describe as probability densities. We explain the decisions behind the various choices of probability distributions and relate them to observed phenomena. Our measured data are then combined with a mathematical model of a falling body to obtain probability densities on the space of parameters we seek to estimate. We interpret these results and discuss sources of errors in our estimation procedure. © 2013 Society for Industrial and Applied Mathematics.
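A minimal sketch of the same Bayesian-inversion idea on synthetic data, a grid posterior over gravity g and a linear drag coefficient k given noisy fallen-distance observations, is shown below; the drag model, noise level and grid are assumptions for illustration, not the paper's video-derived data or choice of priors.

```python
# Minimal sketch: grid-based Bayesian posterior over gravity g and a linear drag
# coefficient k from noisy fallen-distance observations (synthetic data).
import numpy as np

def fallen_distance(t, g, k):
    """Distance fallen under gravity with linear drag, v(t) = (g/k)(1 - e^(-kt))."""
    return (g / k) * (t - (1.0 - np.exp(-k * t)) / k)

rng = np.random.default_rng(9)
t_obs = np.linspace(0.1, 1.0, 10)
d_obs = fallen_distance(t_obs, 9.81, 0.3) + rng.normal(0, 0.02, t_obs.size)

g_grid = np.linspace(8.5, 11.0, 200)
k_grid = np.linspace(0.01, 1.0, 200)
G, K = np.meshgrid(g_grid, k_grid, indexing="ij")

# flat prior on the grid; Gaussian measurement noise with sd 0.02 m
log_post = np.zeros_like(G)
for t, d in zip(t_obs, d_obs):
    log_post += -0.5 * ((d - fallen_distance(t, G, K)) / 0.02) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum()

print(f"posterior mean g ~= {np.sum(post * G):.2f} m/s^2, "
      f"k ~= {np.sum(post * K):.2f} 1/s")
```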

  7. Validation of a simple evaporation-transpiration scheme (SETS) to estimate evaporation using micro-lysimeter measurements

    Science.gov (United States)

    Ghazanfari, Sadegh; Pande, Saket; Savenije, Hubert

    2014-05-01

Several methods exist to estimate E and T. The Penman-Monteith or Priestley-Taylor methods, along with the Jarvis scheme for estimating vegetation resistance, are commonly used to estimate these fluxes as a function of land cover, atmospheric forcing and soil moisture content. In this study, a simple evaporation-transpiration method is developed based on the MOSAIC Land Surface Model that explicitly accounts for soil moisture. Soil evaporation and transpiration estimated by SETS are validated on a single column of soil profile with measured evaporation data from three micro-lysimeters located at the Ferdowsi University of Mashhad synoptic station, Iran, for the year 2005. SETS is run using both implicit and explicit computational schemes. Results show that the implicit scheme estimates the vapor flux close to that of the explicit scheme. The mean difference between the implicit and explicit schemes is -0.03 mm/day. The paired t-test of the mean difference (p-value = 0.042 and t-value = 2.04) shows that there is no significant difference between the two methods. The sum of soil evaporation and transpiration from SETS is also compared with the P-M equation and the micro-lysimeter measurements. SETS predicts the actual evaporation with a lower bias (1.24 mm/day) than P-M (1.82 mm/day) and with an R2 value of 0.82.

  8. A mathematical method for verifying the validity of measured information about the flows of energy resources based on the state estimation theory

    Science.gov (United States)

    Pazderin, A. V.; Sof'in, V. V.; Samoylenko, V. O.

    2015-11-01

    Efforts aimed at improving energy efficiency in all branches of the fuel and energy complex shall be commenced with setting up a high-tech automated system for monitoring and accounting energy resources. Malfunctions and failures in the measurement and information parts of this system may distort commercial measurements of energy resources and lead to financial risks for power supplying organizations. In addition, measurement errors may be connected with intentional distortion of measurements for reducing payment for using energy resources on the consumer's side, which leads to commercial loss of energy resource. The article presents a universal mathematical method for verifying the validity of measurement information in networks for transporting energy resources, such as electricity and heat, petroleum, gas, etc., based on the state estimation theory. The energy resource transportation network is represented by a graph the nodes of which correspond to producers and consumers, and its branches stand for transportation mains (power lines, pipelines, and heat network elements). The main idea of state estimation is connected with obtaining the calculated analogs of energy resources for all available measurements. Unlike "raw" measurements, which contain inaccuracies, the calculated flows of energy resources, called estimates, will fully satisfy the suitability condition for all state equations describing the energy resource transportation network. The state equations written in terms of calculated estimates will be already free from residuals. The difference between a measurement and its calculated analog (estimate) is called in the estimation theory an estimation remainder. The obtained large values of estimation remainders are an indicator of high errors of particular energy resource measurements. By using the presented method it is possible to improve the validity of energy resource measurements, to estimate the transportation network observability, to eliminate

  9. Measurement of natural radionuclides in Malaysian bottled mineral water and consequent health risk estimation

    Science.gov (United States)

    Priharti, W.; Samat, S. B.; Yasir, M. S.

    2015-09-01

The radionuclides 226Ra, 232Th and 40K were measured in ten mineral water samples, and from the activities obtained, the ingestion doses for infants, children and adults were calculated and the cancer risk for adults was estimated. Results showed that the calculated ingestion doses for the three age categories are much lower than the average worldwide ingestion exposure of 0.29 mSv/y and the estimated cancer risk is much lower than the cancer risk of 8.40 × 10-3 (estimated from the total natural radiation dose of 2.40 mSv/y). The present study concludes that the bottled mineral water produced in Malaysia is safe for daily human consumption.
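The committed-dose arithmetic behind such estimates is simple: the annual dose is the sum over radionuclides of activity concentration times annual intake times the ingestion dose coefficient. The sketch below illustrates this; the concentrations, intake and dose coefficients are illustrative placeholders (of the order of published ICRP adult values), not the study's measured data.

```python
# Minimal sketch of an annual ingestion dose estimate from drinking water
# (all numbers are illustrative placeholders, not the study's values).

# activity concentration in water (Bq/L) for 226Ra, 232Th and 40K
concentration = {"Ra-226": 0.010, "Th-232": 0.005, "K-40": 0.20}
# assumed adult ingestion dose coefficients (Sv/Bq), order of magnitude of ICRP values
dose_coeff = {"Ra-226": 2.8e-7, "Th-232": 2.3e-7, "K-40": 6.2e-9}
annual_intake_litres = 730.0          # roughly 2 L/day

dose_sv = sum(concentration[n] * annual_intake_litres * dose_coeff[n]
              for n in concentration)
print(f"annual ingestion dose ~= {dose_sv * 1e3:.4f} mSv/y")
```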

  10. Iterative image reconstruction for positron emission tomography based on a detector response function estimated from point source measurements

    International Nuclear Information System (INIS)

    Tohme, Michel S; Qi Jinyi

    2009-01-01

    The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed positron emission tomography (PET) images. For efficient computation in reconstruction, the system model in PET can be factored into a product of a geometric projection matrix and sinogram blurring matrix, where the former is often computed based on analytical calculation, and the latter is estimated using Monte Carlo simulations. Direct measurement of a sinogram blurring matrix is difficult in practice because of the requirement of a collimated source. In this work, we propose a method to estimate the 2D blurring kernels from uncollimated point source measurements. Since the resulting sinogram blurring matrix stems from actual measurements, it can take into account the physical effects in the photon detection process that are difficult or impossible to model in a Monte Carlo (MC) simulation, and hence provide a more accurate system model. Another advantage of the proposed method over MC simulation is that it can easily be applied to data that have undergone a transformation to reduce the data size (e.g., Fourier rebinning). Point source measurements were acquired with high count statistics in a relatively fine grid inside the microPET II scanner using a high-precision 2D motion stage. A monotonically convergent iterative algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm takes advantage of the rotational symmetry of the PET scanner and explicitly models the detector block structure. The resulting sinogram blurring matrix is incorporated into a maximum a posteriori (MAP) image reconstruction algorithm. The proposed method has been validated using a 3 x 3 line phantom, an ultra-micro resolution phantom and a 22 Na point source superimposed on a warm background. The results of the proposed method show improvements in both resolution and contrast ratio when compared with the MAP

  11. Estimate of Radiosonde Dry Bias From Far-Infrared Measurements on the Antarctic Plateau

    Science.gov (United States)

    Rizzi, R.; Maestri, T.; Arosio, C.

    2018-03-01

    The experimental data set of downwelling radiance spectra measured at the ground in clear conditions during 2013 by a Far-Infrared Fourier Transform Spectrometer at Dome-C, Antarctica, presented in Rizzi et al. (2016, https://doi.org/10.1002/2016JD025341) is used to estimate the effect of solar heating of the radiosonde humidity sensor, called dry bias. The effect is quite evident comparing residuals for the austral summer and winter clear cases and can be modeled by an increase of the water vapor concentration at all levels by about 15%. Such an estimate has become possible only after a new version of the simulation code and spectroscopic data has become available, which has substantially improved the modeling of water vapor absorption in the far infrared. The negative yearly spectral bias reported in Rizzi et al. (2016, https://doi.org/10.1002/2016JD025341) is in fact greatly reduced when compared to the same measurement data set.

  12. Inertial Measurement Units-Based Probe Vehicles: Automatic Calibration, Trajectory Estimation, and Context Detection

    KAUST Repository

    Mousa, Mustafa

    2017-12-06

    Most probe vehicle data is generated using satellite navigation systems, such as the Global Positioning System (GPS), Globalnaya navigatsionnaya sputnikovaya Sistema (GLONASS), or Galileo systems. However, because of their high cost, relatively high position uncertainty in cities, and low sampling rate, a large quantity of satellite positioning data is required to estimate traffic conditions accurately. To address this issue, we introduce a new type of traffic monitoring system based on inexpensive inertial measurement units (IMUs) as probe sensors. IMUs as traffic probes pose unique challenges in that they need to be precisely calibrated, do not generate absolute position measurements, and their position estimates are subject to accumulating errors. In this paper, we address each of these challenges and demonstrate that the IMUs can reliably be used as traffic probes. After discussing the sensing technique, we present an implementation of this system using a custom-designed hardware platform, and validate the system with experimental data.

  13. Inertial Measurement Units-Based Probe Vehicles: Automatic Calibration, Trajectory Estimation, and Context Detection

    KAUST Repository

    Mousa, Mustafa; Sharma, Kapil; Claudel, Christian G.

    2017-01-01

    Most probe vehicle data is generated using satellite navigation systems, such as the Global Positioning System (GPS), Globalnaya navigatsionnaya sputnikovaya Sistema (GLONASS), or Galileo systems. However, because of their high cost, relatively high position uncertainty in cities, and low sampling rate, a large quantity of satellite positioning data is required to estimate traffic conditions accurately. To address this issue, we introduce a new type of traffic monitoring system based on inexpensive inertial measurement units (IMUs) as probe sensors. IMUs as traffic probes pose unique challenges in that they need to be precisely calibrated, do not generate absolute position measurements, and their position estimates are subject to accumulating errors. In this paper, we address each of these challenges and demonstrate that the IMUs can reliably be used as traffic probes. After discussing the sensing technique, we present an implementation of this system using a custom-designed hardware platform, and validate the system with experimental data.

  14. Aquifer Recharge Estimation In Unsaturated Porous Rock Using Darcian And Geophysical Methods.

    Science.gov (United States)

    Nimmo, J. R.; De Carlo, L.; Masciale, R.; Turturro, A. C.; Perkins, K. S.; Caputo, M. C.

    2016-12-01

Within the unsaturated zone a constant downward gravity-driven flux of water commonly exists at depths ranging from a few meters to tens of meters depending on climate, medium, and vegetation. In this case a steady-state application of Darcy's law can provide recharge rate estimates. We have applied an integrated approach that combines field geophysical measurements with laboratory hydraulic property measurements on core samples to produce accurate estimates of steady-state aquifer recharge, or, in cases where episodic recharge also occurs, the steady component of recharge. The method requires (1) measurement of the water content existing in the deep unsaturated zone at the location of a core sample retrieved for lab measurements, and (2) measurement of the core sample's unsaturated hydraulic conductivity over a range of water content that includes the value measured in situ. Both types of measurements must be done with high accuracy. Darcy's law applied with the measured unsaturated hydraulic conductivity and gravitational driving force provides recharge estimates. Aquifer recharge was estimated using Darcian and geophysical methods at a deep porous rock (calcarenite) experimental site in Canosa, southern Italy. Electrical Resistivity Tomography (ERT) and Vertical Electrical Sounding (VES) profiles were collected from the land surface to the water table to provide data for Darcian recharge estimation. Volumetric water content was estimated from resistivity profiles using a laboratory-derived calibration function based on Archie's law for rock samples from the experimental site, where the electrical conductivity of the rock was related to the porosity and water saturation. Multiple-depth core samples were evaluated using the Quasi-Steady Centrifuge (QSC) method to obtain hydraulic conductivity (K), matric potential (ψ), and water content (θ) estimates within this profile. Laboratory-determined unsaturated hydraulic conductivity ranged from 3.90 x 10-9 to 1.02 x 10-5 m
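A minimal sketch of the Darcian step: under a unit (gravity-only) hydraulic gradient the steady downward flux equals the unsaturated hydraulic conductivity at the field water content, so recharge follows from interpolating the laboratory K(θ) curve at the geophysically estimated θ. All values below are illustrative, not the site's measurements.

```python
# Minimal sketch: steady Darcian recharge as K(theta) under a unit gradient,
# with K interpolated (in log space) from illustrative laboratory core data.
import numpy as np

# laboratory K(theta) curve from core samples (water content, K in m/s)
theta_lab = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
K_lab     = np.array([3.9e-9, 4.0e-8, 3.5e-7, 2.1e-6, 1.0e-5])

theta_field = 0.06                      # water content inferred from ERT/VES profile
# interpolate log10(K) because K varies over orders of magnitude
K_field = 10 ** np.interp(theta_field, theta_lab, np.log10(K_lab))

recharge_m_per_yr = K_field * 3600 * 24 * 365
print(f"steady recharge ~= {recharge_m_per_yr * 1000:.0f} mm/yr")
```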

  15. Measuring the quality of provided services for patients with chronic kidney disease.

    Science.gov (United States)

    Bahadori, Mohammadkarim; Raadabadi, Mehdi; Heidari Jamebozorgi, Majid; Salesi, Mahmood; Ravangard, Ramin

    2014-09-01

The healthcare organizations need to develop and implement quality improvement plans for their survival and success. Measuring quality in the competitive healthcare environment is an undeniable necessity for these organizations and will lead to improved patient satisfaction. This study aimed to measure the quality of provided services for patients with chronic kidney disease in Kerman in 2014. This cross-sectional, descriptive-analytic study was performed from 23 January 2014 to 14 February 2014 in four hemodialysis centers in Kerman. All of the patients on chronic hemodialysis (n = 195) who were referred to these four centers were selected and studied using the census method. The required data were collected using the SERVQUAL questionnaire, consisting of two parts: questions related to the patients' demographic characteristics, and 28 items to measure the patients' expectations and perceptions of the five dimensions of service quality, including tangibility, reliability, responsiveness, assurance, and empathy. The collected data were analyzed using SPSS 21.0 through statistical tests including the independent-samples t test, one-way ANOVA, and paired-samples t test. The results showed that the means of patients' expectations were more than their perceptions of the quality of provided services in all dimensions, which indicated that there were gaps in all dimensions. The highest and lowest mean negative gaps were related to empathy (-0.52 ± 0.48) and tangibility (-0.29 ± 0.51). In addition, among the studied patients' demographic characteristics and the five dimensions of service quality, only the difference between the patients' income levels and the gap in assurance was statistically significant. The expectations of patients on hemodialysis were more than their perceptions of provided services. The healthcare providers and employees should pay more attention to the patients' opinions and comments and use their feedback to solve the workplace problems and

  16. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  17. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    International Nuclear Information System (INIS)

    Zhang, Xiaole; Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu

    2017-01-01

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurement. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. The negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. Accurate a priori ratio accelerates the analysis process, which obtains satisfactory results with only limited number of measurements, otherwise it needs more measurements to generate reasonable estimations. The suppression of negative estimation effectively improves the performance, especially for the situation with poor a priori information, where it is more prone to the generation of negative values.
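
    At its core, the sequential reconstruction summarized above is a recursive best-linear-unbiased (Kalman-type) update in which the background error covariance carries the a priori nuclide ratios. The sketch below shows one such update plus the artificial zero-observation step for negative components; it is a schematic reading of the abstract, not the JRodos implementation, and every name and number in it is an assumption.

```python
import numpy as np

def sequential_update(x_b, B, H, y, R):
    """One Kalman-type update of the emission-rate state vector.

    x_b : background (a priori) state, e.g. stacked nuclide emission rates
    B   : background error covariance; off-diagonal terms can encode the
          a priori nuclide ratios mentioned in the abstract
    H   : observation operator mapping emission rates to gamma dose rates
    y   : measured gamma dose rates
    R   : observation error covariance
    """
    S = H @ B @ H.T + R                      # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)           # gain
    x_a = x_b + K @ (y - H @ x_b)            # analysis state
    B_a = (np.eye(len(x_b)) - K @ H) @ B     # analysis covariance
    return x_a, B_a

def suppress_negatives(x, B, sigma_zero=1e3):
    """Nudge negative components toward zero by assimilating artificial
    zero-observations with large uncertainty, as the abstract describes
    (illustrative only; the large variance keeps the correction gentle)."""
    for i in np.where(x < 0)[0]:
        h = np.zeros((1, len(x)))
        h[0, i] = 1.0
        x, B = sequential_update(x, B, h, np.array([0.0]),
                                 np.array([[sigma_zero ** 2]]))
    return x, B

# Hypothetical two-nuclide example: a priori ratio 10:1 encoded in B.
x_b = np.array([1.0e12, 1.0e11])                       # Bq/s, first guess
B = np.array([[1.0e24, 1.0e23], [1.0e23, 1.0e22]])     # strongly correlated
H = np.array([[2.0e-12, 8.0e-12]])                     # dose-rate response
y = np.array([5.0])                                     # measured dose rate
x_a, B_a = sequential_update(x_b, B, H, y, np.array([[0.25]]))
x_a, B_a = suppress_negatives(x_a, B_a)
print(x_a)   # the a priori 10:1 ratio is preserved by the correlated B
```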

  18. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiaole, E-mail: zhangxiaole10@outlook.com [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, D-76021 (Germany); Institute of Public Safety Research, Department of Engineering Physics, Tsinghua University, Beijing, 100084 (China); Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, D-76021 (Germany)

    2017-03-05

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurement. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. The negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. Accurate a priori ratio accelerates the analysis process, which obtains satisfactory results with only limited number of measurements, otherwise it needs more measurements to generate reasonable estimations. The suppression of negative estimation effectively improves the performance, especially for the situation with poor a priori information, where it is more prone to the generation of negative values.

  19. Combining hazard, exposure and social vulnerability to provide lessons for flood risk management

    NARCIS (Netherlands)

    Koks, E.E.; Jongman, B.; Husby, T.G.; Botzen, W.J.W.

    2015-01-01

    Flood risk assessments provide inputs for the evaluation of flood risk management (FRM) strategies. Traditionally, such risk assessments provide estimates of loss of life and economic damage. However, the effect of policy measures aimed at reducing risk also depends on the capacity of households to

  20. Target Tracking in 3-D Using Estimation Based Nonlinear Control Laws for UAVs

    Directory of Open Access Journals (Sweden)

    Mousumi Ahmed

    2016-02-01

    Full Text Available This paper presents an estimation-based backstepping-like control law design for an Unmanned Aerial Vehicle (UAV) to track a moving target in 3-D space. A ground-based sensor or an onboard seeker antenna provides range, azimuth angle, and elevation angle measurements to a chaser UAV that implements an extended Kalman filter (EKF) to estimate the full state of the target. A nonlinear controller then utilizes this estimated target state and the chaser’s state to provide speed, flight path, and course/heading angle commands to the chaser UAV. Tracking performance with respect to measurement uncertainty is evaluated for three cases: (1) stationary white noise; (2) stationary colored noise; and (3) non-stationary (range-correlated) white noise. Furthermore, in an effort to improve tracking performance, the measurement model is made more realistic by taking into consideration range-dependent uncertainties in the measurements, i.e., as the chaser closes in on the target, measurement uncertainties are reduced in the EKF, thus providing the UAV with more accurate control commands. Simulation results for these cases are shown to illustrate target state estimation and trajectory tracking performance.

  1. Combining tracer flux ratio methodology with low-flying aircraft measurements to estimate dairy farm CH4 emissions

    Science.gov (United States)

    Daube, C.; Conley, S.; Faloona, I. C.; Yacovitch, T. I.; Roscioli, J. R.; Morris, M.; Curry, J.; Arndt, C.; Herndon, S. C.

    2017-12-01

    Livestock activity, through enteric fermentation of feed and anaerobic digestion of waste, contributes significantly to the methane budget of the United States (EPA, 2016). Studies question the reported magnitude of these methane sources (Miller et al., 2013), calling for more detailed research on agricultural animals (Hristov, 2014). Tracer flux ratio is an attractive experimental method to bring to this problem because it does not rely on estimates of atmospheric dispersion. Collection of data occurred during one week at two dairy farms in central California (June 2016). Each farm varied in size, layout, head count, and general operation. The tracer flux ratio method involves releasing ethane on-site with a known flow rate to serve as a tracer gas. Downwind mixed enhancements in ethane (from the tracer) and methane (from the dairy) were measured, and their ratio was used to infer the unknown methane emission rate from the farm. An instrumented van drove transects downwind of each farm on public roads while tracer gases were released on-site, employing the tracer flux ratio methodology to assess simultaneous methane and tracer gas plumes. Flying circles around each farm, a small instrumented aircraft made measurements to perform a mass balance evaluation of methane gas. In the course of these two different methane quantification techniques, we were able to validate yet a third method: tracer flux ratio measured via aircraft. Ground-based tracer release rates were applied to the aircraft-observed methane-to-ethane ratios, yielding whole-site methane emission rates. Never before has the tracer flux ratio method been executed with aircraft measurements. Estimates from this new application closely resemble results from the standard ground-based technique to within their respective uncertainties. Incorporating this new dimension into the tracer flux ratio methodology provides additional context for local plume dynamics and validation of both ground and flight-based data.
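
    The core arithmetic of the tracer flux ratio method is a one-line scaling: the unknown methane emission rate equals the metered tracer release rate multiplied by the mole ratio of the downwind enhancements and the molar-mass ratio. A hedged sketch with hypothetical numbers (not the California farm data):

```python
def tracer_flux_ratio_emission(q_tracer_kg_h, d_ch4_ppb, d_c2h6_ppb,
                               m_ch4=16.04, m_c2h6=30.07):
    """Infer a site's CH4 emission rate from a known ethane tracer release.

    q_tracer_kg_h : ethane release rate (kg/h), metered on site
    d_ch4_ppb     : downwind CH4 enhancement above background (ppb)
    d_c2h6_ppb    : downwind ethane enhancement above background (ppb)

    Because both plumes disperse together, the mole ratio of the enhancements
    equals the ratio of the emission rates, so no dispersion model is needed:
        Q_CH4 = Q_C2H6 * (dCH4 / dC2H6) * (M_CH4 / M_C2H6)
    """
    return q_tracer_kg_h * (d_ch4_ppb / d_c2h6_ppb) * (m_ch4 / m_c2h6)

# Hypothetical plume enhancements:
print(tracer_flux_ratio_emission(q_tracer_kg_h=1.5, d_ch4_ppb=250.0,
                                 d_c2h6_ppb=40.0))   # ~5 kg CH4 per hour
```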

  2. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.

  3. A Fast Multimodal Ectopic Beat Detection Method Applied for Blood Pressure Estimation Based on Pulse Wave Velocity Measurements in Wearable Sensors.

    Science.gov (United States)

    Pflugradt, Maik; Geissdoerfer, Kai; Goernig, Matthias; Orglmeister, Reinhold

    2017-01-14

    Automatic detection of ectopic beats has become a thoroughly researched topic, with literature providing manifold proposals typically incorporating morphological analysis of the electrocardiogram (ECG). Although being well understood, its utilization is often neglected, especially in practical monitoring situations like online evaluation of signals acquired in wearable sensors. Continuous blood pressure estimation based on pulse wave velocity considerations is a prominent example, which depends on careful fiducial point extraction and is therefore seriously affected during periods of increased occurring extrasystoles. In the scope of this work, a novel ectopic beat discriminator with low computational complexity has been developed, which takes advantage of multimodal features derived from ECG and pulse wave relating measurements, thereby providing additional information on the underlying cardiac activity. Moreover, the blood pressure estimations' vulnerability towards ectopic beats is closely examined on records drawn from the Physionet database as well as signals recorded in a small field study conducted in a geriatric facility for the elderly. It turns out that a reliable extrasystole identification is essential to unsupervised blood pressure estimation, having a significant impact on the overall accuracy. The proposed method further convinces by its applicability to battery driven hardware systems with limited processing power and is a favorable choice when access to multimodal signal features is given anyway.

  4. Pursuing atmospheric water vapor retrieval through NDSA measurements between two LEO satellites: evaluation of estimation errors in spectral sensitivity measurements

    Science.gov (United States)

    Facheris, L.; Cuccoli, F.; Argenti, F.

    2008-10-01

    NDSA (Normalized Differential Spectral Absorption) is a novel differential measurement method to estimate the total content of water vapor (IWV, Integrated Water Vapor) along a tropospheric propagation path between two Low Earth Orbit (LEO) satellites. A transmitter onboard the first LEO satellite and a receiver onboard the second one are required. The NDSA approach is based on the simultaneous estimate of the total attenuations at two relatively close frequencies in the Ku/K bands and of a "spectral sensitivity parameter" that can be directly converted into IWV. The spectral sensitivity has the potential to emphasize the water vapor contribution, to cancel out all spectrally flat unwanted contributions and to limit the impairments due to tropospheric scintillation. Based on a previous Monte Carlo simulation approach, through which we analyzed the measurement accuracy of the spectral sensitivity parameter at three different and complementary frequencies, in this work we examine such accuracy for a particularly critical atmospheric status as simulated through the pressure, temperature and water vapor profiles measured by a high resolution radiosonde. We confirm the validity of an approximate expression of the accuracy and discuss the problems that may arise when tropospheric water vapor concentration is lower than expected.

  5. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    Muhammad Zahid Rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), the relative least squares method (RELS), the ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values of the parameters and different sample sizes.
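
    A small Monte Carlo sketch of the kind of comparison described above: draw samples from a two-parameter exponential distribution, apply competing estimators, and score them by squared error. Only the moment and maximum-likelihood estimators are shown here, and the parameter values, sample size and error metric are arbitrary choices rather than the paper's settings.

```python
import numpy as np

def compare_exponential_estimators(mu=2.0, sigma=1.5, n=30, n_rep=2000, seed=1):
    """Monte Carlo comparison of two estimators of the two-parameter
    (location mu, scale sigma) exponential distribution, scored by the
    combined squared error over both parameters."""
    rng = np.random.default_rng(seed)
    sq_err = {"ME": [], "MLE": []}
    for _ in range(n_rep):
        x = mu + rng.exponential(sigma, n)
        # Moment estimators: E[X] = mu + sigma, SD[X] = sigma.
        s = x.std(ddof=1)
        sq_err["ME"].append((x.mean() - s - mu) ** 2 + (s - sigma) ** 2)
        # Maximum likelihood: mu_hat = min(x), sigma_hat = mean(x) - min(x).
        mu_hat = x.min()
        sq_err["MLE"].append((mu_hat - mu) ** 2 + (x.mean() - mu_hat - sigma) ** 2)
    return {k: float(np.mean(v)) for k, v in sq_err.items()}

print(compare_exponential_estimators())
```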

  6. Water storage change estimation from in situ shrinkage measurements of clay soils

    Directory of Open Access Journals (Sweden)

    B. te Brake

    2013-05-01

    Full Text Available The objective of this study is to assess the applicability of clay soil elevation change measurements to estimate soil water storage changes, using a simplified approach. We measured moisture contents in aggregates by EC-5 sensors, and in multiple aggregate and inter-aggregate spaces (bulk soil) by CS616 sensors. In a long dry period, the assumption of constant isotropic shrinkage proved invalid and a soil-moisture-dependent geometry factor was applied. The relative overestimation made by assuming constant isotropic shrinkage in the linear (basic) shrinkage phase was 26.4% (17.5 mm) for the actively shrinking layer between 0 and 60 cm. Aggregate-scale water storage and volume change revealed a linear relation for layers ≥ 30 cm depth. The range of basic shrinkage in the bulk soil was limited by delayed drying of deep soil layers, and maximum water loss in the structural shrinkage phase was 40% of total water loss in the 0–60 cm layer, and over 60% in deeper layers. In the dry period, fitted slopes of the ΔV–ΔW relationship ranged from 0.41 to 0.56 (EC-5) and 0.42 to 0.55 (CS616). Under a dynamic drying and wetting regime, slopes ranged from 0.21 to 0.38 (EC-5) and 0.22 to 0.36 (CS616). Alternating shrinkage and incomplete swelling resulted in limited volume change relative to water storage change. The slope of the ΔV–ΔW relationship depended on the drying regime, measurement scale and combined effect of different soil layers. Therefore, solely relying on surface level elevation changes to infer soil water storage changes will lead to large underestimations. Recent and future developments might provide a basis for application of shrinkage relations to field situations, but in situ observations will be required to do so.

  7. Spectral estimates of net radiation and soil heat flux

    International Nuclear Information System (INIS)

    Daughtry, C.S.T.; Kustas, W.P.; Moran, M.S.; Pinter, P.J. Jr.; Jackson, R.D.; Brown, P.W.; Nichols, W.D.; Gay, L.W.

    1990-01-01

    Conventional methods of measuring surface energy balance are point measurements and represent only a small area. Remote sensing offers a potential means of measuring outgoing fluxes over large areas at the spatial resolution of the sensor. The objective of this study was to estimate net radiation (Rn) and soil heat flux (G) using remotely sensed multispectral data acquired from an aircraft over large agricultural fields. Ground-based instruments measured Rn and G at nine locations along the flight lines. Incoming fluxes were also measured by ground-based instruments. Outgoing fluxes were estimated using remotely sensed data. Remote Rn, estimated as the algebraic sum of incoming and outgoing fluxes, slightly underestimated Rn measured by the ground-based net radiometers. The mean absolute errors for remote Rn minus measured Rn were less than 7%. Remote G, estimated as a function of a spectral vegetation index and remote Rn, slightly overestimated measured G; however, the mean absolute error for remote G was 13%. Some of the differences between measured and remote values of Rn and G are associated with differences in instrument designs and measurement techniques. The root mean square error for available energy (Rn - G) was 12%. Thus, methods using both ground-based and remotely sensed data can provide reliable estimates of the available energy which can be partitioned into sensible and latent heat under non advective conditions

  8. Estimating air emissions from a remediation of a petroleum sump using direct measurement and modeling

    International Nuclear Information System (INIS)

    Schmidt, C.E.

    1991-01-01

    A technical approach was developed for the remediation of a petroleum sump near a residential neighborhood. The approach evolved around sludge handling/in-situ solidification and on-site disposal. As part of the development of the engineering approach, a field investigation and modeling program was conducted to predict air emissions from the proposed remediation. Field measurements using the EPA recommended surface isolation flux chamber were conducted to represent each major activity or air exposure involving waste at the site. Air emissions from freshly disturbed petroleum waste, along with engineering estimates were used to predict emissions from each phase of the engineering approach. This paper presents the remedial approach and the measurement/modeling technologies used to predict air toxic emissions from the remediation. Emphasis will be placed on the measurement approaches used in obtaining the emission rate data and the assumptions used in the modeling to estimate emissions from engineering scenarios

  9. Measurement of natural radionuclides in Malaysian bottled mineral water and consequent health risk estimation

    Energy Technology Data Exchange (ETDEWEB)

    Priharti, W.; Samat, S. B.; Yasir, M. S. [School of Applied Physics, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia)

    2015-09-25

    The radionuclides 226Ra, 232Th and 40K were measured in ten mineral water samples; from the radioactivity obtained, the ingestion doses for infants, children and adults were calculated and the cancer risk for adults was estimated. Results showed that the calculated ingestion doses for the three age categories are much lower than the average worldwide ingestion exposure of 0.29 mSv/y, and the estimated cancer risk is much lower than the cancer risk of 8.40 × 10-3 (estimated from the total natural radiation dose of 2.40 mSv/y). The present study concludes that the bottled mineral water produced in Malaysia is safe for daily human consumption.

  10. Quantum process estimation via generic two-body correlations

    International Nuclear Information System (INIS)

    Mohseni, M.; Rezakhani, A. T.; Barreiro, J. T.; Kwiat, P. G.; Aspuru-Guzik, A.

    2010-01-01

    Performance of quantum process estimation is naturally limited by fundamental, random, and systematic imperfections of preparations and measurements. These imperfections may lead to considerable errors in the process reconstruction because standard data-analysis techniques usually presume ideal devices. Here, by utilizing generic auxiliary quantum or classical correlations, we provide a framework for the estimation of quantum dynamics via a single measurement apparatus. By construction, this approach can be applied to quantum tomography schemes with calibrated faulty-state generators and analyzers. Specifically, we present a generalization of the work begun by M. Mohseni and D. A. Lidar [Phys. Rev. Lett. 97, 170501 (2006)] with an imperfect Bell-state analyzer. We demonstrate that for several physically relevant noisy preparations and measurements, classical correlations and a small data-processing overhead suffice to accomplish the full system identification. Furthermore, we provide the optimal input states whereby the error amplification due to inversion of the measurement data is minimal.

  11. Formulation of uncertainty relation of error and disturbance in quantum measurement by using quantum estimation theory

    International Nuclear Information System (INIS)

    Yu Watanabe; Masahito Ueda

    2012-01-01

    Full text: When we try to obtain information about a quantum system, we need to perform a measurement on the system. The measurement process causes unavoidable state change. Heisenberg discussed a thought experiment of the position measurement of a particle by using a gamma-ray microscope, and found a trade-off relation between the error of the measured position and the disturbance in the momentum caused by the measurement process. The trade-off relation epitomizes the complementarity in quantum measurements: we cannot perform a measurement of an observable without causing disturbance in its canonically conjugate observable. However, at the time Heisenberg found the complementarity, quantum measurement theory was not yet established, and Kennard and Robertson's inequality was erroneously interpreted as a mathematical formulation of the complementarity. Kennard and Robertson's inequality actually implies the indeterminacy of the quantum state: non-commuting observables cannot have definite values simultaneously. However, Kennard and Robertson's inequality reflects the inherent nature of a quantum state alone, and does not concern any trade-off relation between the error and disturbance in the measurement process. In this talk, we report a resolution to the complementarity in quantum measurements. First, we find that it is necessary to involve the estimation process from the outcome of the measurement for quantifying the error and disturbance in the quantum measurement. We clarify the implicitly involved estimation process in Heisenberg's gamma-ray microscope and other measurement schemes, and formulate the error and disturbance for an arbitrary quantum measurement by using quantum estimation theory. The error and disturbance are defined in terms of the Fisher information, which gives the upper bound of the accuracy of the estimation. Second, we obtain uncertainty relations between the measurement errors of two observables [1], and between the error and disturbance in the

  12. Estimation of lean and fat composition of pork ham using image processing measurements

    Science.gov (United States)

    Jia, Jiancheng; Schinckel, Allan P.; Forrest, John C.

    1995-01-01

    This paper presents a method of estimating the lean and fat composition of pork ham from cross-sectional area measurements using image processing technology. The relationship between ham lean and fat mass and the ham lean and fat areas was studied, and prediction equations for pork ham composition based on the ham cross-sectional area measurements were developed. The results show that ham lean weight was related to the ham lean area (r = .75) and was highly related to the product of ham total weight times percentage ham lean area (r = .96), while ham fat weight was related to the product of ham total weight times percentage ham fat area (r = .88). The best combination of independent variables for estimating ham lean weight was trimmed wholesale ham weight and percentage ham fat area, with a coefficient of determination of 92%. The best combination of independent variables for estimating ham fat weight was trimmed wholesale ham weight and percentage ham fat area, with a coefficient of determination of 78%. Prediction equations with either two or three independent variables did not significantly increase the accuracy of prediction. The results of this study indicate that the weight of ham lean and fat could be predicted from ham cross-sectional area measurements using image analysis in combination with wholesale ham weight.

  13. Methodology for generating waste volume estimates

    International Nuclear Information System (INIS)

    Miller, J.Q.; Hale, T.; Miller, D.

    1991-09-01

    This document describes the methodology that will be used to calculate waste volume estimates for site characterization and remedial design/remedial action activities at each of the DOE Field Office, Oak Ridge (DOE-OR) facilities. This standardized methodology is designed to ensure consistency in waste estimating across the various sites and organizations that are involved in environmental restoration activities. The criteria and assumptions that are provided for generating these waste estimates will be implemented across all DOE-OR facilities and are subject to change based on comments received and actual waste volumes measured during future sampling and remediation activities. 7 figs., 8 tabs

  14. Development of a low-maintenance measurement approach to continuously estimate methane emissions: A case study.

    Science.gov (United States)

    Riddick, S N; Hancock, B R; Robinson, A D; Connors, S; Davies, S; Allen, G; Pitt, J; Harris, N R P

    2018-03-01

    The chemical breakdown of organic matter in landfills represents a significant source of methane gas (CH4). Current estimates suggest that landfills are responsible for between 3% and 19% of global anthropogenic emissions. The net CH4 emissions resulting from biogeochemical processes and their modulation by microbes in landfills are poorly constrained by imprecise knowledge of environmental constraints. The uncertainty in absolute CH4 emissions from landfills is therefore considerable. This study investigates a new method to estimate the temporal variability of CH4 emissions using meteorological and CH4 concentration measurements downwind of a landfill site in Suffolk, UK, from July to September 2014, taking advantage of the statistics that such a measurement approach offers versus shorter-term, but more complex and instantaneously accurate, flux snapshots. Methane emissions were calculated from CH4 concentrations measured 700 m from the perimeter of the landfill, with observed concentrations ranging from background to 46.4 ppm. Using an atmospheric dispersion model, we estimate a mean emission flux of 709 µg m-2 s-1 over this period, with a maximum value of 6.21 mg m-2 s-1, reflecting the wide natural variability in biogeochemical and other environmental controls on net site emission. The emissions calculated suggest that meteorological conditions have an influence on the magnitude of CH4 emissions. We also investigate the factors responsible for the large variability observed in the estimated CH4 emissions, and suggest that the largest component arises from uncertainty in the spatial distribution of CH4 emissions within the landfill area. The results determined using the low-maintenance approach discussed in this paper suggest that a network of cheaper, less precise CH4 sensors could be used to measure a continuous CH4 emission time series from a landfill site, something that is not practical using far-field approaches such as tracer release methods.
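
    Whatever dispersion model is used, the inversion step in a study like this reduces to scaling a unit-emission model prediction so that it reproduces the observed downwind enhancement. A minimal sketch of that step, with hypothetical numbers rather than the Suffolk data:

```python
def emission_from_unit_plume(c_obs_ppm, c_bg_ppm, c_model_unit_ppm):
    """Scale a unit-emission dispersion-model prediction to match observations.

    c_obs_ppm        : CH4 measured downwind (ppm)
    c_bg_ppm         : background CH4 (ppm)
    c_model_unit_ppm : concentration predicted at the sensor for a unit
                       emission rate (ppm per (ug m-2 s-1)), from whichever
                       dispersion model is in use

    The inferred area-averaged flux is the observed enhancement divided by the
    model response to a unit flux; meteorology enters only via the forward model.
    """
    return (c_obs_ppm - c_bg_ppm) / c_model_unit_ppm

# Hypothetical example: a 2.1 ppm enhancement and a model response of
# 0.004 ppm per ug m-2 s-1 imply a flux of ~525 ug m-2 s-1.
print(emission_from_unit_plume(4.0, 1.9, 0.004))
```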

  15. Reactor building indoor wireless network channel quality estimation using RSSI measurement of wireless sensor network

    International Nuclear Information System (INIS)

    Merat, S.

    2008-01-01

    Expanding wireless communication network reception inside reactor buildings (RB) and service wings (SW) has always been a technical challenge for the operations service team. This is driven by the volume of metal equipment inside the Reactor Buildings (RB) that blocks and partially shields the signal throughout the link. In this study, to improve wireless reception inside the Reactor Building (RB), an experimental model using an indoor localization mesh based on IEEE 802.15 is developed to implement a wireless sensor network. This experimental model estimates the distance between different nodes by measuring the RSSI (Received Signal Strength Indicator). The validity of the estimation techniques is then verified using triangulation and RSSI measurements, taking into account the physical environmental obstacles that block signal transmission. (author)
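
    A minimal sketch of the distance-from-RSSI and triangulation steps mentioned above, assuming a log-distance path-loss model. The reference power and path-loss exponent are assumptions and would be strongly site-specific inside a metal-rich reactor building.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_1m_dbm=-45.0, path_loss_exp=2.7):
    """Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10*n*log10(d).

    Both the 1 m reference power and the exponent n are assumed values that
    would have to be calibrated on site.
    """
    return 10.0 ** ((rssi_1m_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(anchors, distances):
    """Least-squares position from >= 3 anchor nodes and estimated ranges."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    x0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - x0)
    b = d0**2 - d[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Hypothetical anchor layout (metres) and received powers (dBm):
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [rssi_to_distance(r) for r in (-62.0, -58.0, -66.0)]
print(trilaterate(anchors, dists))   # estimated (x, y) of the mobile node
```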

  16. Reactor building indoor wireless network channel quality estimation using RSSI measurement of wireless sensor network

    Energy Technology Data Exchange (ETDEWEB)

    Merat, S. [Wardrop Engineering Inc., Toronto, Ontario (Canada)

    2008-07-01

    Expanding wireless communication network reception inside reactor buildings (RB) and service wings (SW) has always been a technical challenge for the operations service team. This is driven by the volume of metal equipment inside the Reactor Buildings (RB) that blocks and partially shields the signal throughout the link. In this study, to improve wireless reception inside the Reactor Building (RB), an experimental model using an indoor localization mesh based on IEEE 802.15 is developed to implement a wireless sensor network. This experimental model estimates the distance between different nodes by measuring the RSSI (Received Signal Strength Indicator). The validity of the estimation techniques is then verified using triangulation and RSSI measurements, taking into account the physical environmental obstacles that block signal transmission. (author)

  17. Contemporary group estimates adjusted for climatic effects provide a finer definition of the unknown environmental challenges experienced by growing pigs.

    Science.gov (United States)

    Guy, S Z Y; Li, L; Thomson, P C; Hermesch, S

    2017-12-01

    Environmental descriptors derived from mean performances of contemporary groups (CGs) are assumed to capture any known and unknown environmental challenges. The objective of this paper was to obtain a finer definition of the unknown challenges, by adjusting CG estimates for the known climatic effects of monthly maximum air temperature (MaxT), minimum air temperature (MinT) and monthly rainfall (Rain). As the unknown component could include infection challenges, these refined descriptors may help to better model varying responses of sire progeny to environmental infection challenges for the definition of disease resilience. Data were recorded from 1999 to 2013 at a piggery in south-east Queensland, Australia (n = 31,230). Firstly, CG estimates of average daily gain (ADG) and backfat (BF) were adjusted for MaxT, MinT and Rain, which were fitted as splines. In the models used to derive CG estimates for ADG, MaxT and MinT were significant variables. The models that contained these significant climatic variables had CG estimates with a lower variance compared to models without significant climatic variables. Variance component estimates were similar across all models, suggesting that these significant climatic variables accounted for some known environmental variation captured in CG estimates. No climatic variables were significant in the models used to derive the CG estimates for BF. These CG estimates were used to categorize environments. There was no observable sire by environment interaction (Sire×E) for ADG when using the environmental descriptors based on CG estimates on BF. For the environmental descriptors based on CG estimates of ADG, there was significant Sire×E only when MinT was included in the model (p = .01). Therefore, this new definition of the environment, preadjusted by MinT, increased the ability to detect Sire×E. While the unknown challenges captured in refined CG estimates need verification for infection challenges, this may provide a

  18. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    Storage reliability, which measures the ability of products in a dormant state to keep their required functions, is studied in this paper. For certain types of products, storage reliability may not be 100% at the beginning of storage, unlike operational reliability, because of possible initial failures that are normally neglected in storage reliability models. In this paper, a new integrated technique, in which a non-parametric measure based on E-Bayesian estimates of current failure probabilities is combined with a parametric measure based on the exponential reliability function, is proposed to estimate and predict the storage reliability of products with possible initial failures. The non-parametric method is used to estimate the number of failed products and the reliability at each testing time, and the parametric method is used to estimate the initial reliability and the failure rate of the stored product. The proposed method takes into consideration that reliability test data of storage products, including units left unexamined before and during the storage process, are available, which provides more accurate estimates of both the initial failure probability and the storage failure probability. For storage reliability prediction, which is the main concern in this field, the non-parametric estimates of failure numbers can be used in the parametric models for the failure process in storage. For the case of exponential models, the assessment and prediction method for storage reliability is presented in this paper. Finally, a numerical example is given to illustrate the method. Furthermore, a detailed comparison between the proposed and traditional methods, examining the rationality of assessment and prediction of the storage reliability, is investigated. The results should be useful for planning a storage environment, decision-making concerning the maximum length of storage, and identifying the production quality. - Highlights:

  19. Considering sampling strategy and cross-section complexity for estimating the uncertainty of discharge measurements using the velocity-area method

    Science.gov (United States)

    Despax, Aurélien; Perret, Christian; Garçon, Rémy; Hauet, Alexandre; Belleville, Arnaud; Le Coz, Jérôme; Favre, Anne-Catherine

    2016-02-01

    Streamflow time series provide baseline data for many hydrological investigations. Errors in the data mainly occur through uncertainty in gauging (measurement uncertainty) and uncertainty in the determination of the stage-discharge relationship based on gaugings (rating curve uncertainty). As the velocity-area method is the measurement technique typically used for gaugings, it is fundamental to estimate its level of uncertainty. Different methods are available in the literature (ISO 748, Q+, IVE), all with their own limitations and drawbacks. Among the terms forming the combined relative uncertainty in measured discharge, the uncertainty component relating to the limited number of verticals often accounts for a large part of the relative uncertainty. It should therefore be estimated carefully. In the ISO 748 standard, proposed values of this uncertainty component depend only on the number of verticals, without considering their distribution with respect to the depth and velocity cross-sectional profiles. The Q+ method is sensitive to a user-defined parameter, while it is questionable whether the IVE method is applicable to stream-gaugings performed with a limited number of verticals. To address the limitations of existing methods, this paper presents a new methodology, called FLow Analog UnceRtainty Estimation (FLAURE), to estimate the uncertainty component relating to the limited number of verticals. High-resolution reference gaugings (with 31 and more verticals) are used to assess the uncertainty component through a statistical analysis. Instead of purely randomly subsampling the verticals of these reference stream-gaugings, a subsampling method is developed in a way that mimics the behavior of a hydrometric technician. A sampling quality index (SQI) is suggested and appears to be a more explanatory variable than the number of verticals. This index takes into account the spacing between verticals and the variation of unit flow between two verticals. To compute the

  20. What do we measure when we measure cell-associated HIV RNA.

    Science.gov (United States)

    Pasternak, Alexander O; Berkhout, Ben

    2018-01-29

    Cell-associated (CA) HIV RNA has received much attention in recent years as a surrogate measure of the efficiency of HIV latency reversion and because it may provide an estimate of the viral reservoir size. This review provides an update on some recent insights in the biology and clinical utility of this biomarker. We discuss a number of important considerations to be taken into account when interpreting CA HIV RNA measurements, as well as different methods to measure this biomarker.

  1. Measured soil water concentrations of cadmium and zinc in plant pots and estimated leaching outflows from contaminated soils

    DEFF Research Database (Denmark)

    Holm, P.E.; Christensen, T.H.

    1998-01-01

    Soil water concentrations of cadmium and zinc were measured in plant pots with 15 contaminated soils which differed in origin, texture, pH (5.1-7.8) and concentrations of cadmium (0.2-17 mg Cd kg(-1)) and zinc (36-1300 mg Zn kg(-1)). The soil waters contained total concentrations of 0.5 to 17 µg ... to 0.1% per year of the total soil content of cadmium and zinc. The measured soil water concentrations of cadmium and zinc did not correlate linearly with the corresponding soil concentrations but correlated fairly well with concentrations measured in Ca(NO3)2 extracts of the soils and with soil ... water concentrations estimated from soil concentrations and pH. Such concentration estimates may be useful for estimating amounts of cadmium and zinc being leached from soils....

  2. Estimating the Uncertainty of Tensile Strength Measurement for A Photocured Material Produced by Additive Manufacturing

    Directory of Open Access Journals (Sweden)

    Adamczak Stanisław

    2014-08-01

    Full Text Available The aim of this study was to estimate the measurement uncertainty for a material produced by additive manufacturing. The material investigated was FullCure 720 photocured resin, which was used to fabricate tensile specimens with a Connex 350 3D printer based on PolyJet technology. The tensile strength of the specimens, established through static tensile testing, was used to determine the measurement uncertainty. There is a need for extensive research into the performance of model materials obtained via 3D printing, as they have not been studied as thoroughly as metal alloys or plastics, the most common structural materials. In this analysis, the measurement uncertainty was estimated using a larger number of samples than usual, i.e., thirty instead of the typical ten. The results can be very useful to engineers who design models and finished products using this material. The investigations also show how wide the scatter of results is.

  3. Estimation of the volatility distribution of organic aerosol combining thermodenuder and isothermal dilution measurements

    Directory of Open Access Journals (Sweden)

    E. E. Louvaris

    2017-10-01

    Full Text Available A method is developed following the work of Grieshop et al. (2009) for the determination of the organic aerosol (OA) volatility distribution combining thermodenuder (TD) and isothermal dilution measurements. The approach was tested in experiments that were conducted in a smog chamber using organic aerosol (OA) produced during meat charbroiling. A TD was operated at temperatures ranging from 25 to 250 °C with a 14 s centerline residence time coupled to a high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) and a scanning mobility particle sizer (SMPS). In parallel, a dilution chamber filled with clean air was used to dilute isothermally the aerosol of the larger chamber by approximately a factor of 10. The OA mass fraction remaining was measured as a function of temperature in the TD and as a function of time in the isothermal dilution chamber. These two sets of measurements were used together to estimate the volatility distribution of the OA and its effective vaporization enthalpy and accommodation coefficient. In the isothermal dilution experiments approximately 20 % of the OA evaporated within 15 min. Almost all the OA evaporated in the TD at approximately 200 °C. The resulting volatility distributions suggested that around 60–75 % of the cooking OA (COA) at concentrations around 500 µg m−3 consisted of low-volatility organic compounds (LVOCs), 20–30 % of semivolatile organic compounds (SVOCs), and around 10 % of intermediate-volatility organic compounds (IVOCs). The estimated effective vaporization enthalpy of COA was 100 ± 20 kJ mol−1 and the effective accommodation coefficient was 0.06–0.07. Addition of the dilution measurements to the TD data results in a lower uncertainty of the estimated vaporization enthalpy as well as the SVOC content of the OA.

  4. Estimation of the volatility distribution of organic aerosol combining thermodenuder and isothermal dilution measurements

    Science.gov (United States)

    Louvaris, Evangelos E.; Karnezi, Eleni; Kostenidou, Evangelia; Kaltsonoudis, Christos; Pandis, Spyros N.

    2017-10-01

    A method is developed following the work of Grieshop et al. (2009) for the determination of the organic aerosol (OA) volatility distribution combining thermodenuder (TD) and isothermal dilution measurements. The approach was tested in experiments that were conducted in a smog chamber using organic aerosol (OA) produced during meat charbroiling. A TD was operated at temperatures ranging from 25 to 250 °C with a 14 s centerline residence time coupled to a high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) and a scanning mobility particle sizer (SMPS). In parallel, a dilution chamber filled with clean air was used to dilute isothermally the aerosol of the larger chamber by approximately a factor of 10. The OA mass fraction remaining was measured as a function of temperature in the TD and as a function of time in the isothermal dilution chamber. These two sets of measurements were used together to estimate the volatility distribution of the OA and its effective vaporization enthalpy and accommodation coefficient. In the isothermal dilution experiments approximately 20 % of the OA evaporated within 15 min. Almost all the OA evaporated in the TD at approximately 200 °C. The resulting volatility distributions suggested that around 60-75 % of the cooking OA (COA) at concentrations around 500 µg m-3 consisted of low-volatility organic compounds (LVOCs), 20-30 % of semivolatile organic compounds (SVOCs), and around 10 % of intermediate-volatility organic compounds (IVOCs). The estimated effective vaporization enthalpy of COA was 100 ± 20 kJ mol-1 and the effective accommodation coefficient was 0.06-0.07. Addition of the dilution measurements to the TD data results in a lower uncertainty of the estimated vaporization enthalpy as well as the SVOC content of the OA.

  5. Estimation of (n,f) Cross-Sections by Measuring Reaction Probability Ratios

    Energy Technology Data Exchange (ETDEWEB)

    Plettner, C; Ai, H; Beausang, C W; Bernstein, L A; Ahle, L; Amro, H; Babilon, M; Burke, J T; Caggiano, J A; Casten, R F; Church, J A; Cooper, J R; Crider, B; Gurdal, G; Heinz, A; McCutchan, E A; Moody, K; Punyon, J A; Qian, J; Ressler, J J; Schiller, A; Williams, E; Younes, W

    2005-04-21

    Neutron-induced reaction cross-sections on unstable nuclei are inherently difficult to measure due to target activity and the low intensity of neutron beams. In an alternative approach, named the 'surrogate' technique, one measures the decay probability of the same compound nucleus produced using a stable beam on a stable target to estimate the neutron-induced reaction cross-section. As an extension of the surrogate method, the authors introduce a new technique of measuring the fission probabilities of two different compound nuclei as a ratio, which has the advantage of removing most of the systematic uncertainties. The method was benchmarked by measuring the probability of deuteron-induced fission events in coincidence with protons, and forming the ratio P(236U(d,pf))/P(238U(d,pf)), which serves as a surrogate for the known cross-section ratio 236U(n,f)/238U(n,f). In addition, the P(238U(d,d'f))/P(236U(d,d'f)) ratio, as a surrogate for the 237U(n,f)/235U(n,f) cross-section ratio, was measured for the first time over an unprecedented range of excitation energies.
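
    Schematically, the surrogate-ratio idea can be written as below. This is a sketch of the relation implied by the abstract (not the full Weisskopf-Ewing argument): the neutron-induced cross-section ratio is approximated by the measured fission-probability ratio of the same compound nuclei formed via (d,p), with systematic uncertainties common to both measurements cancelling in the ratio.

```latex
\[
  \frac{\sigma_{n,f}\!\left(^{236}\mathrm{U}\right)(E_n)}
       {\sigma_{n,f}\!\left(^{238}\mathrm{U}\right)(E_n)}
  \;\approx\;
  \left.
  \frac{P\!\left(^{236}\mathrm{U}(d,pf)\right)}
       {P\!\left(^{238}\mathrm{U}(d,pf)\right)}
  \right|_{E^{*}}
\]
% E* is the compound-nucleus excitation energy matched to the neutron energy E_n.
```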

  6. Continuous estimates of dynamic cerebral autoregulation: influence of non-invasive arterial blood pressure measurements

    International Nuclear Information System (INIS)

    Panerai, R B; Smith, S M; Rathbone, W E; Samani, N J; Sammons, E L; Bentley, S; Potter, J F

    2008-01-01

    Temporal variability of parameters which describe dynamic cerebral autoregulation (CA), usually quantified by the short-term relationship between arterial blood pressure (BP) and cerebral blood flow velocity (CBFV), could result from continuous adjustments in physiological regulatory mechanisms or could be the result of artefacts in methods of measurement, such as the use of non-invasive measurements of BP in the finger. In 27 subjects (61 ± 11 years old) undergoing coronary artery angioplasty, BP was continuously recorded at rest with the Finapres device and in the ascending aorta (Millar catheter, BPAO), together with bilateral transcranial Doppler ultrasound in the middle cerebral artery, surface ECG and transcutaneous CO2. Dynamic CA was expressed by the autoregulation index (ARI), ranging from 0 (absence of CA) to 9 (best CA). Time-varying, continuous estimates of ARI (ARI(t)) were obtained with an autoregressive moving-average (ARMA) model applied to a 60 s sliding data window. No significant differences were observed in the accuracy and precision of ARI(t) between estimates derived from the Finapres and BPAO. Highly significant correlations were obtained between ARI(t) estimates from the right and left middle cerebral artery (MCA) (Finapres r = 0.60 ± 0.20; BPAO r = 0.56 ± 0.22) and also between the ARI(t) estimates from the Finapres and BPAO (right MCA r = 0.70 ± 0.22; left MCA r = 0.74 ± 0.22). Surrogate data showed that ARI(t) was highly sensitive to the presence of noise in the CBFV signal, with both the bias and dispersion of estimates increasing for lower values of ARI(t). This effect could explain the sudden drops of ARI(t) to zero as reported previously. Simulated sudden changes in ARI(t) can be detected by the Finapres, but the bias and variability of estimates also increase for lower values of ARI. In summary, the Finapres does not distort time-varying estimates of dynamic CA obtained with a sliding window combined with an ARMA model

  7. Factors Affecting Estimated Fetal Weight Measured by Ultrasound

    Directory of Open Access Journals (Sweden)

    Hasan Energin

    2016-06-01

    Full Text Available Objective: In this study, we aimed to evaluate the factors that affect the accuracy of fetal weight estimation by ultrasound. Methods: This study was conducted in the antenatal outpatient clinic and perinatology inpatient clinic of a tertiary hospital between June 2011 and January 2012. The data were obtained from 165 pregnant women. Inclusion criteria were: no additional diseases, and giving birth within 48 hours after the ultrasound. The same physician performed all ultrasound examinations. Age, height, weight, obstetric history and obstetric follow-up findings were recorded. Results: Fetal gender, fetal presentation, presence of meconium in the amniotic fluid, and maternal parity did not significantly affect the accuracy of fetal weight estimation by ultrasound. The mean difference between estimated fetal weight and birth weight was 104.48 ± 84 g in nulliparas and 94.2 ± 81 g in multiparas (p=0.44); the mean difference was 98.22 ± 79 g in male babies and 98.15 ± 86 g in female babies (p=0.99). The mean difference between estimated fetal weight and birth weight was 96.92 ± 81 g in babies with cephalic presentation and 110.9 ± 90 g in babies with breech presentation (p=0.53); this difference was 95.36 ± 79 g in babies with meconium-stained amniotic fluid and 98.82 ± 83 g in babies with clear amniotic fluid (p=0.83). Conclusion: Fetal weight estimation is one of the key points in the obstetrician's intrapartum management, and it is important to estimate fetal weight accurately. In our study, consistent with the literature, we observed that fetal gender, meconium presence in the amniotic fluid, fetal presentation, and maternal parity do not significantly affect the accuracy of fetal weight estimation by ultrasound.

  8. Simultaneous and multi-point measurement of ammonia emanating from human skin surface for the estimation of whole body dermal emission rate.

    Science.gov (United States)

    Furukawa, Shota; Sekine, Yoshika; Kimura, Keita; Umezawa, Kazuo; Asai, Satomi; Miyachi, Hayato

    2017-05-15

    Ammonia is one of the odorous gases and a possible source of odor in the indoor environment. However, little is known about the actual emission rate of ammonia from the human skin surface. This study therefore aimed to estimate the whole-body dermal emission rate of ammonia by simultaneous and multi-point measurement of ammonia emission fluxes employing a passive flux sampler - ion chromatography system. First, the emission fluxes of ammonia were non-invasively measured for ten volunteers at 13 sampling positions set in the 13 anatomical regions classified by Kurazumi et al. The measured emission fluxes were then converted to partial emission rates using the body surface areas estimated from the weights and heights of the volunteers and the partial area fractions of the 13 body regions. Subsequent summation of the partial emission rates provided the whole-body dermal emission rate of ammonia. The results ranged from 2.9 to 12 mg h-1, with an average of 5.9 ± 3.2 mg h-1 per person for the ten healthy young volunteers. The values were much greater than those from human breath, and thus the dermal emission of ammonia was found to be a more significant odor source than breath exhalation in the indoor environment. Copyright © 2017 Elsevier B.V. All rights reserved.
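
    The whole-body figure quoted above is essentially a weighted sum: each regional flux times that region's share of the total skin area, with the total area derived from height and weight. The sketch below assumes the DuBois surface-area formula and uses three made-up regions instead of the study's thirteen; the fluxes and area fractions are illustrative only.

```python
def dubois_bsa(weight_kg, height_cm):
    """DuBois body-surface-area formula (m^2); it is an assumption that this
    particular formula is the one used in the study."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def whole_body_emission(fluxes_ug_m2_h, region_fractions, weight_kg, height_cm):
    """Sum regional emission rates: flux_i x (fraction_i x total BSA).

    fluxes_ug_m2_h   : measured NH3 flux per body region (ug m-2 h-1)
    region_fractions : fraction of total skin area per region (sums to ~1)
    Returns the whole-body rate in mg h-1.
    """
    bsa = dubois_bsa(weight_kg, height_cm)
    total_ug_h = sum(f * frac * bsa
                     for f, frac in zip(fluxes_ug_m2_h, region_fractions))
    return total_ug_h / 1000.0

# Hypothetical three-region example (the study uses 13 regions):
print(whole_body_emission([3000, 4500, 2500], [0.4, 0.35, 0.25], 65, 170))
```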

  9. Measurement-based perturbation theory and differential equation parameter estimation with applications to satellite gravimetry

    Science.gov (United States)

    Xu, Peiliang

    2018-06-01

    The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and German Research Center for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. Such Earth's gravitational products have found widest possible multidisciplinary applications in Earth Sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the conditions of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in the Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are incorrect mathematically and not permitted physically. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is groundless, mathematically and physically. Given the Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose the measurement-based perturbation theory and derive global uniformly convergent solutions to the Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models. Since the solutions are global uniformly convergent, theoretically speaking

  10. Absorbed dose estimates from a single measurement one to three days after the administration of 177Lu-DOTATATE/-TOC.

    Science.gov (United States)

    Hänscheid, Heribert; Lapa, Constantin; Buck, Andreas K; Lassmann, Michael; Werner, Rudolf A

    2017-01-01

    To retrospectively analyze the accuracy of absorbed dose estimates from a single measurement of the activity concentrations in tumors and relevant organs one to three days after the administration of 177Lu-DOTA-TATE/TOC, assuming tissue-specific effective half-lives. Activity kinetics in 54 kidneys, 30 neuroendocrine tumor lesions, 25 livers, and 27 spleens were deduced from series of planar images in 29 patients. After adaptation of mono- or bi-exponential fit functions to the measured data, it was analyzed for each fit function how precisely the time integral can be estimated from fixed tissue-specific half-lives and a single measurement at 24, 48, or 72 h after the administration. For the kidneys, assuming a fixed tissue-specific half-life of 50 h, the deviations of the estimate from the actual integral were, median (5% percentile, 95% percentile): -3% (-15%; +16%) for measurements after 24 h, +2% (-9%; +12%) for measurements after 48 h, and 0% (-2%; +12%) for measurements after 72 h. The corresponding values for the other tissues, assuming fixed tissue-specific half-lives of 67 h for liver and spleen and 77 h for tumors, were +2% (-25%; +20%) for measurements after 24 h, +2% (-16%; +17%) for measurements after 48 h, and +2% (-11%; +10%) for measurements after 72 h. Especially for the kidneys, which often represent the dose-limiting organ, but also for liver, spleen, and neuroendocrine tumors, a meaningful absorbed dose estimate is possible from a single measurement 2, or more preferably 3, days after the administration of 177Lu-DOTA-TATE/-TOC, assuming fixed tissue-specific effective half-lives. Schattauer GmbH.
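
    The single-time-point estimate described above amounts to assuming a mono-exponential washout with a fixed effective half-life, extrapolating the measured activity back to time zero, and integrating analytically; multiplying the resulting time-integrated activity by the appropriate S value then yields the absorbed dose. A hedged numerical sketch (illustrative numbers, not patient data):

```python
import math

def time_integrated_activity(a_meas_mbq, t_meas_h, t_eff_h):
    """Time-integrated activity (MBq*h) from one measurement, assuming a
    mono-exponential washout with a fixed tissue-specific effective half-life.

    a_meas_mbq : activity (or activity concentration) measured at t_meas_h
    t_eff_h    : assumed effective half-life, e.g. ~50 h for kidneys as in the
                 abstract above (those organ values are study-specific).
    """
    lam = math.log(2.0) / t_eff_h
    a0 = a_meas_mbq * math.exp(lam * t_meas_h)   # extrapolate back to t = 0
    return a0 / lam                              # integral of a0*exp(-lam*t), 0..inf

# Kidney-style example: 15 MBq measured 72 h after administration, T_eff = 50 h
print(time_integrated_activity(15.0, 72.0, 50.0))   # ~2.9e3 MBq*h
```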

  11. Estimating Concentrations of Road-Salt Constituents in Highway-Runoff from Measurements of Specific Conductance

    Science.gov (United States)

    Granato, Gregory E.; Smith, Kirk P.

    1999-01-01

    Discrete or composite samples of highway runoff may not adequately represent in-storm water-quality fluctuations because continuous records of water stage, specific conductance, pH, and temperature of the runoff indicate that these properties fluctuate substantially during a storm. Continuous records of water-quality properties can be used to maximize the information obtained about the stormwater runoff system being studied and can provide the context needed to interpret analyses of water samples. Concentrations of the road-salt constituents calcium, sodium, and chloride in highway runoff were estimated from theoretical and empirical relations between specific conductance and the concentrations of these ions. These relations were examined using the analysis of 233 highway-runoff samples collected from August 1988 through March 1995 at four highway-drainage monitoring stations along State Route 25 in southeastern Massachusetts. Theoretically, the specific conductance of a water sample is the sum of the individual conductances attributed to each ionic species in solution, that is, the product of the concentration of each ion in milliequivalents per liter (meq/L) multiplied by its equivalent ionic conductance at infinite dilution, thereby establishing the principle of superposition. Superposition provides an estimate of actual specific conductance that is within measurement error throughout the conductance range of many natural waters, with errors of less than ±5 percent below 1,000 microsiemens per centimeter (µS/cm) and ±10 percent between 1,000 and 4,000 µS/cm if all major ionic constituents are accounted for. A semi-empirical method (adjusted superposition) was used to adjust for concentration effects (superposition-method prediction errors at high and low concentrations) and to relate measured specific conductance to that calculated using superposition. The adjusted superposition method, which was developed to interpret the State Route 25 highway-runoff records, accounts for
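
    The superposition principle used here is a direct sum: each ion contributes its concentration in meq/L times its equivalent ionic conductance at infinite dilution, and the sum lands directly in µS/cm. The sketch below uses standard 25 °C textbook conductances and a naive NaCl-only inversion; the paper's adjusted-superposition corrections and road-salt regressions are not reproduced.

```python
# Equivalent ionic conductances at infinite dilution, 25 C (S cm^2 per eq).
# Standard textbook values, not the coefficients fitted in the report.
LAMBDA_0 = {"Ca": 59.5, "Na": 50.1, "Cl": 76.3}

def superposition_sc(meq_per_l):
    """Estimated specific conductance (uS/cm) as the sum of ionic contributions.

    meq_per_l : dict of ion -> concentration in milliequivalents per litre.
    Because 1 meq/L x 1 S cm^2/eq = 1 uS/cm, the sum is already in uS/cm.
    """
    return sum(LAMBDA_0[ion] * c for ion, c in meq_per_l.items())

def na_cl_from_sc(sc_us_cm):
    """Invert the relation for a pure NaCl (road-salt) solution, where Na+ and
    Cl- are present in equal equivalents (illustrative only)."""
    meq = sc_us_cm / (LAMBDA_0["Na"] + LAMBDA_0["Cl"])
    return {"Na_mg_per_l": meq * 23.0, "Cl_mg_per_l": meq * 35.45}

print(superposition_sc({"Na": 4.3, "Cl": 4.3, "Ca": 0.5}))   # ~573 uS/cm
print(na_cl_from_sc(573.0))
```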

  12. A practical method of estimating stature of bedridden female nursing home patients.

    Science.gov (United States)

    Muncie, H L; Sobal, J; Hoopes, J M; Tenney, J H; Warren, J W

    1987-04-01

    Accurate measurement of stature is important for the determination of several nutritional indices as well as body surface area (BSA) for the normalization of creatinine clearances. Direct standing measurement of stature of bedridden elderly nursing home patients is impossible, and stature as recorded in the chart may not be valid. An accurate stature obtained by summing five segmental measurements was compared to the stature recorded in the patient's chart and calculated estimates of stature from measurement of a long bone (humerus, tibia, knee height). Estimation of stature from measurement of knee height was highly correlated (r = 0.93) to the segmental measurement of stature while estimates from other long-bone measurements were less highly correlated (r = 0.71 to 0.81). Recorded chart stature was poorly correlated (r = 0.37). Measurement of knee height provides a simple, quick, and accurate means of estimating stature for bedridden females in nursing homes.

  13. Optimal phase estimation with arbitrary a priori knowledge

    International Nuclear Information System (INIS)

    Demkowicz-Dobrzanski, Rafal

    2011-01-01

    The optimal-phase estimation strategy is derived when partial a priori knowledge on the estimated phase is available. The solution is found with the help of the most famous result from the entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.

  14. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    Directory of Open Access Journals (Sweden)

    Corina J. Logan

    2015-06-01

    There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex.

  15. An online Vce measurement and temperature estimation method for high power IGBT module in normal PWM operation

    DEFF Research Database (Denmark)

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon

    2014-01-01

    An on-state collector-emitter voltage (Vce) measurement, and thereby an estimation of the spatially averaged temperature, for a high power IGBT module is presented while the power converter is in operation. The proposed measurement circuit is able to measure both the high- and low-side IGBT and anti-parallel diode...

  16. Estimating the re-identification risk of clinical data sets

    Directory of Open Access Journals (Sweden)

    Dankar Fida

    2012-07-01

    Background De-identification is a common way to protect patient privacy when disclosing clinical data for secondary purposes, such as research. One type of attack that de-identification protects against is linking the disclosed patient data with public and semi-public registries. Uniqueness is a commonly used measure of re-identification risk under this attack. If uniqueness can be measured accurately then the risk from this kind of attack can be managed. In practice, it is often not possible to measure uniqueness directly, therefore it must be estimated. Methods We evaluated the accuracy of uniqueness estimators on clinically relevant data sets. Four candidate estimators were identified because they were evaluated in the past and found to have good accuracy or because they were new and not evaluated comparatively before: the Zayatz estimator, slide negative binomial estimator, Pitman’s estimator, and mu-argus. A Monte Carlo simulation was performed to evaluate the uniqueness estimators on six clinically relevant data sets. We varied the sampling fraction and the uniqueness in the population (the value being estimated). The median relative error and inter-quartile range of the uniqueness estimates were measured across 1000 runs. Results There was no single estimator that performed well across all of the conditions. We developed a decision rule which selected between the Pitman, slide negative binomial and Zayatz estimators depending on the sampling fraction and the difference between estimates. This decision rule had the best consistent median relative error across multiple conditions and data sets. Conclusion This study identified an accurate decision rule that can be used by health privacy researchers and disclosure control professionals to estimate uniqueness in clinical data sets. The decision rule provides a reliable way to measure re-identification risk.

  17. Estimation of CO2 emissions from fossil fuel burning by using satellite measurements of co-emitted gases: a new method and its application to the European region

    Science.gov (United States)

    Berezin, Evgeny V.; Konovalov, Igor B.; Ciais, Philippe; Broquet, Gregoire

    2014-05-01

    Accurate estimates of emissions of carbon dioxide (CO2), which is a major greenhouse gas, are required for understanding the thermal balance of the atmosphere and for predicting climate change. International and regional CO2 emission inventories are usually compiled by following the 'bottom-up' approach on the basis of available statistical information about fossil fuel consumption. Such information may be rather uncertain, leading to uncertainties in the emission estimates. One of the possible ways to understand and reduce this uncertainty is to use satellite measurements in the framework of the inverse modeling approach; however, information on CO2 emissions, which is currently provided by direct satellite measurements of CO2, remains very limited. The main goal of this study is to develop a CO2 emission estimation method based on using satellite measurements of co-emitted species, such as NOx (represented by NO2 in the satellite measurements) and CO. Due to the short lifetime of NOx and the relatively low background concentration of CO, the observed column amounts of NO2 and CO are typically higher over regions with strong emission sources than over remote regions. Therefore, satellite measurements of these species can provide useful information on the spatial distribution and temporal evolution of major emission sources. The method's basic idea (which is similar to the ideas already exploited in the earlier studies [1, 2]) is to combine this information with available estimates of emission factors for all of the species considered. The method assumes optimization of the total CO2 emissions from the two major aggregated sectors of the economy. CO2 emission estimates derived from independent satellite measurements of the different species are combined in a probabilistic way by taking into account their uncertainties. The CHIMERE chemistry transport model is used to simulate the relationship between NOx (CO) emissions and NO2 (CO) columns from the OMI (IASI
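
    The probabilistic combination step can be illustrated with a small sketch; inverse-variance weighting is shown here as a generic stand-in, and the numbers are hypothetical rather than results from the study:

      # Illustrative combination of two independent CO2 emission estimates
      # (e.g., one derived from NO2 columns and one from CO columns) by
      # inverse-variance weighting. This shows the general probabilistic idea;
      # the numbers are hypothetical and the study's exact scheme may differ.
      def combine(estimates, sigmas):
          weights = [1.0 / s ** 2 for s in sigmas]
          total = sum(weights)
          mean = sum(w * e for w, e in zip(weights, estimates)) / total
          return mean, total ** -0.5

      est_no2, sig_no2 = 1.10, 0.15   # CO2 emission estimate via NO2 (relative units)
      est_co, sig_co = 0.95, 0.25     # CO2 emission estimate via CO
      print(combine([est_no2, est_co], [sig_no2, sig_co]))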

  18. Summary of groundwater-recharge estimates for Pennsylvania

    Science.gov (United States)

    Stuart O. Reese,; Risser, Dennis W.

    2010-01-01

    Groundwater recharge is water that infiltrates through the subsurface to the zone of saturation beneath the water table. Because recharge is a difficult parameter to quantify, it is typically estimated from measurements of other parameters like streamflow and precipitation. This report provides a general overview of processes affecting recharge in Pennsylvania and presents estimates of recharge rates from studies at various scales. The most common method for estimating recharge in Pennsylvania has been to estimate base flow from measurements of streamflow and assume that base flow (expressed in inches over the basin) approximates recharge. Statewide estimates of mean annual groundwater recharge were developed by relating base flow to basin characteristics of HUC10 watersheds (a fifth-level classification that uses 10 digits to define unique hydrologic units) using a regression equation. The regression analysis indicated that mean annual precipitation, average daily maximum temperature, percent of sand in soil, percent of carbonate rock in the watershed, and average stream-channel slope were significant factors in explaining the variability of groundwater recharge across the Commonwealth. Several maps are included in this report to illustrate the principal factors affecting recharge and provide additional information about the spatial distribution of recharge in Pennsylvania. The maps portray the patterns of precipitation, temperature, and prevailing winds across Pennsylvania’s varied physiography; illustrate the error associated with recharge estimates; and show the spatial variability of recharge as a percent of precipitation. National, statewide, regional, and local values of recharge, based on numerous studies, are compiled to allow comparison of estimates from various sources. Together these plates provide a synopsis of groundwater-recharge estimates and factors in Pennsylvania. Areas that receive the most recharge are typically those that get the most

  19. A nonintrusive temperature measuring system for estimating deep body temperature in bed.

    Science.gov (United States)

    Sim, S Y; Lee, W K; Baek, H J; Park, K S

    2012-01-01

    Deep body temperature is an important indicator that reflects a person's overall physiological state. Existing deep body temperature monitoring systems are too invasive to be applied to awake patients for long periods. Therefore, we proposed a nonintrusive deep body temperature measuring system. To estimate deep body temperature nonintrusively, a dual-heat-flux probe and double-sensor probes were embedded in a neck pillow. When a patient uses the neck pillow to rest, the deep body temperature can be assessed using one of the thermometer probes embedded in the neck pillow. We could estimate deep body temperature in 3 different sleep positions. Also, to reduce the initial response time of the dual-heat-flux thermometer, which measures body temperature in the supine position, we applied a curve-fitting method to one subject, and thereby we could obtain the deep body temperature within a minute. This result shows the possibility that the system can be used as a practical temperature monitoring system with an appropriate curve-fitting model. In the next study, we will try to establish a general fitting model that can be applied to all subjects. In addition, we are planning to extract meaningful health information, such as sleep structure analysis, from the deep body temperature data acquired with this system.

  20. Magnetic resonance measurement of turbulent kinetic energy for the estimation of irreversible pressure loss in aortic stenosis.

    Science.gov (United States)

    Dyverfeldt, Petter; Hope, Michael D; Tseng, Elaine E; Saloner, David

    2013-01-01

    The authors sought to measure the turbulent kinetic energy (TKE) in the ascending aorta of patients with aortic stenosis and to assess its relationship to irreversible pressure loss. Irreversible pressure loss caused by energy dissipation in post-stenotic flow is an important determinant of the hemodynamic significance of aortic stenosis. The simplified Bernoulli equation used to estimate pressure gradients often misclassifies the ventricular overload caused by aortic stenosis. The current gold standard for estimation of irreversible pressure loss is catheterization, but this method is rarely used due to its invasiveness. Post-stenotic pressure loss is largely caused by dissipation of turbulent kinetic energy into heat. Recent developments in magnetic resonance flow imaging permit noninvasive estimation of TKE. The study was approved by the local ethics review board and all subjects gave written informed consent. Three-dimensional cine magnetic resonance flow imaging was used to measure TKE in 18 subjects (4 normal volunteers, 14 patients with aortic stenosis with and without dilation). For each subject, the peak total TKE in the ascending aorta was compared with a pressure loss index. The pressure loss index was based on a previously validated theory relating pressure loss to measures obtainable by echocardiography. The total TKE did not appear to be related to global flow patterns visualized based on magnetic resonance-measured velocity fields. The TKE was significantly higher in patients with aortic stenosis than in normal volunteers (p < 0.001). The peak total TKE in the ascending aorta was strongly correlated with the pressure loss index (R² = 0.91). Peak total TKE in the ascending aorta correlated strongly with irreversible pressure loss estimated by a well-established method. Direct measurement of TKE by magnetic resonance flow imaging may, with further validation, be used to estimate irreversible pressure loss in aortic stenosis. Copyright © 2013 American
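
    For context, the simplified Bernoulli equation referred to above relates the estimated pressure gradient to the peak jet velocity; in its common clinical form (pressure gradient in mmHg, velocity in m/s):

      % Simplified Bernoulli relation (clinical convention: pressure gradient in
      % mmHg, peak jet velocity in m/s).
      \Delta P \approx 4\, v_{\max}^{2}

    Because this relation reflects only the peak velocity, it cannot distinguish recoverable from irreversible pressure loss, which is the motivation for the TKE-based approach above.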

  1. Linear estimation of coherent structures in wall-bounded turbulence at Re τ = 2000

    Science.gov (United States)

    Oehler, S.; Garcia–Gutiérrez, A.; Illingworth, S.

    2018-04-01

    The estimation problem for a fully-developed turbulent channel flow at Re τ = 2000 is considered. Specifically, a Kalman filter is designed using a Navier–Stokes-based linear model. The estimator uses time-resolved velocity measurements at a single wall-normal location (provided by DNS) to estimate the time-resolved velocity field at other wall-normal locations. The estimator is able to reproduce the largest scales with reasonable accuracy for a range of wavenumber pairs, measurement locations and estimation locations. Importantly, the linear model is also able to predict with reasonable accuracy the performance that will be achieved by the estimator when applied to the DNS. A more practical estimation scheme using the shear stress at the wall as measurement is also considered. The estimator is still able to estimate the largest scales with reasonable accuracy, although the estimator’s performance is reduced.

  2. Development of electrical efficiency measurement techniques for 10 kW-class SOFC system: Part II. Uncertainty estimation

    International Nuclear Information System (INIS)

    Tanaka, Yohei; Momma, Akihiko; Kato, Ken; Negishi, Akira; Takano, Kiyonami; Nozaki, Ken; Kato, Tohru

    2009-01-01

    Uncertainty of electrical efficiency measurement was investigated for a 10 kW-class SOFC system using town gas. Uncertainty of the heating value measured by the gas chromatography method on a mole basis was estimated as ±0.12% at the 95% level of confidence. Micro-gas chromatography with/without CH4 quantification may be able to reduce the uncertainty of measurement. Calibration and uncertainty estimation methods are proposed for flow-rate measurement of town gas with thermal mass-flow meters or controllers. With adequate calibration of the flowmeters, the flow rate of town gas or natural gas at 35 standard liters per minute can be measured within a relative uncertainty of ±1.0% at the 95% level of confidence. Uncertainty of power measurement can be as low as ±0.14% when a precise wattmeter is used and calibrated properly. It is clarified that the electrical efficiency of non-pressurized 10 kW-class SOFC systems can be measured within ±1.0% relative uncertainty at the 95% level of confidence with the developed techniques when the SOFC systems are operated relatively stably
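
    A minimal sketch of how such component uncertainties combine for an efficiency defined as electrical power divided by the product of fuel flow rate and heating value; the quadrature combination shown is generic and the figures simply mirror the magnitudes quoted above, not the authors' detailed uncertainty budget:

      # Illustrative propagation of relative uncertainties (95% level) for an
      # efficiency defined as electrical power / (fuel flow rate x heating value),
      # assuming independent error sources combined in quadrature.
      import math

      u_power = 0.14     # % relative uncertainty of the power measurement
      u_flow = 1.0       # % relative uncertainty of the fuel flow rate
      u_heating = 0.12   # % relative uncertainty of the heating value

      u_eff = math.sqrt(u_power ** 2 + u_flow ** 2 + u_heating ** 2)
      print(f"relative uncertainty of efficiency ~ {u_eff:.2f} %")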

  3. Spot urine sodium measurements do not accurately estimate dietary sodium intake in chronic kidney disease12

    Science.gov (United States)

    Dougher, Carly E; Rifkin, Dena E; Anderson, Cheryl AM; Smits, Gerard; Persky, Martha S; Block, Geoffrey A; Ix, Joachim H

    2016-01-01

    Background: Sodium intake influences blood pressure and proteinuria, yet the impact on long-term outcomes is uncertain in chronic kidney disease (CKD). Accurate assessment is essential for clinical and public policy recommendations, but few large-scale studies use 24-h urine collections. Recent studies that used spot urine sodium and associated estimating equations suggest that they may provide a suitable alternative, but their accuracy in patients with CKD is unknown. Objective: We compared the accuracy of 4 equations [the Nerbass, INTERSALT (International Cooperative Study on Salt, Other Factors, and Blood Pressure), Tanaka, and Kawasaki equations] that use spot urine sodium to estimate 24-h sodium excretion in patients with moderate to advanced CKD. Design: We evaluated the accuracy of spot urine sodium to predict mean 24-h urine sodium excretion over 9 mo in 129 participants with stage 3–4 CKD. Spot morning urine sodium was used in 4 estimating equations. Bias, precision, and accuracy were assessed and compared across each equation. Results: The mean age of the participants was 67 y, 52% were female, and the mean estimated glomerular filtration rate was 31 ± 9 mL · min–1 · 1.73 m–2. The mean ± SD number of 24-h urine collections was 3.5 ± 0.8/participant, and the mean 24-h sodium excretion was 168.2 ± 67.5 mmol/d. Although the Tanaka equation demonstrated the least bias (mean: −8.2 mmol/d), all 4 equations had poor precision and accuracy. The INTERSALT equation demonstrated the highest accuracy but derived an estimate only within 30% of mean measured sodium excretion in only 57% of observations. Bland-Altman plots revealed systematic bias with the Nerbass, INTERSALT, and Tanaka equations, underestimating sodium excretion when intake was high. Conclusion: These findings do not support the use of spot urine specimens to estimate dietary sodium intake in patients with CKD and research studies enriched with patients with CKD. The parent data for this

  4. Regional inversion of CO2 ecosystem fluxes from atmospheric measurements. Reliability of the uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell' Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)

    2013-07-01

    The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5° resolution are applied for the western European domain where ~50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38 %. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than

  5. Estimating parameters of a forest ecosystem C model with measurements of stocks and fluxes as joint constraints

    Science.gov (United States)

    Andrew D. Richardson; Mathew Williams; David Y. Hollinger; David J.P. Moore; D. Bryan Dail; Eric A. Davidson; Neal A. Scott; Robert S. Evans; Holly. Hughes

    2010-01-01

    We conducted an inverse modeling analysis, using a variety of data streams (tower-based eddy covariance measurements of net ecosystem exchange, NEE, of CO2, chamber-based measurements of soil respiration, and ancillary ecological measurements of leaf area index, litterfall, and woody biomass increment) to estimate parameters and initial carbon (C...

  6. Estimation of Snow Parameters from Dual-Wavelength Airborne Radar

    Science.gov (United States)

    Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew

    1997-01-01

    Estimation of snow characteristics from airborne radar measurements would complement in-situ measurements. While in-situ data provide more detailed information than radar, they are limited in their space-time sampling. In the absence of significant cloud water contents, dual-wavelength radar data can be used to estimate 2 parameters of a drop size distribution if the snow density is assumed. To estimate, rather than assume, a snow density is difficult, however, and represents a major limitation in the radar retrieval. There are a number of ways that this problem can be investigated: direct comparisons with in-situ measurements, examination of the large scale characteristics of the retrievals and their comparison to cloud model outputs, use of LDR measurements, and comparisons to the theoretical results of Passarelli (1978) and others. In this paper we address the first approach and, in part, the second.

  7. Estimating the angular velocity of a rigid body moving in the plane from tangential and centripetal acceleration measurements

    International Nuclear Information System (INIS)

    Cardou, Philippe; Angeles, Jorge

    2008-01-01

    Two methods are available for the estimation of the angular velocity of a rigid body from point-acceleration measurements: (i) the time-integration of the angular acceleration and (ii) the square-rooting of the centripetal acceleration. The inaccuracy of the first method is due mainly to the accumulation of the error on the angular acceleration throughout the time-integration process, which does not prevent it from being used successfully in crash tests with dummies, since these experiments never last more than one second. On the other hand, the error resulting from the second method is stable through time, but the method becomes inaccurate whenever the rigid-body angular velocity approaches zero, which occurs in many applications. In order to take advantage of the complementarity of these two methods, a fusion of their estimates is proposed. To this end, the accelerometer measurements are modeled as exact signals contaminated with bias errors and Gaussian white noise. The relations between the variables at stake are written in the form of a nonlinear state-space system in which the angular velocity and the angular acceleration are state variables. Consequently, a minimum-variance-error estimate of the state vector is obtained by means of extended Kalman filtering. The performance of the proposed estimation method is assessed by means of simulation. Apparently, the resulting estimation method is more robust than the existing accelerometer-only methods and competitive with gyroscope measurements. Moreover, it allows the identification and the compensation of any bias error in the accelerometer measurements, which is a significant advantage over gyroscopes
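
    A minimal Python sketch of the two individual estimates described above, on synthetic data; the fixed-weight blend at the end is only a stand-in for the extended Kalman filter fusion proposed in the paper, and all signal parameters are hypothetical:

      # Two complementary angular-velocity estimates for planar rigid-body
      # motion from point-acceleration measurements (illustrative sketch;
      # the paper fuses the two with an extended Kalman filter).
      import numpy as np

      rng = np.random.default_rng(0)
      dt = 1e-3                                         # sample period, s
      t = np.arange(0.0, 1.0, dt)
      omega_true = 5.0 + 2.0 * np.sin(2 * np.pi * t)    # rad/s
      alpha_true = 4.0 * np.pi * np.cos(2 * np.pi * t)  # rad/s^2 (derivative of omega_true)
      r = 0.1                                           # accelerometer distance from the axis, m

      # Simulated noisy tangential and centripetal accelerations
      a_tan = alpha_true * r + rng.normal(0.0, 0.05, t.size)
      a_cen = omega_true ** 2 * r + rng.normal(0.0, 0.05, t.size)

      # (i) time-integration of the angular acceleration (drifts over time)
      omega_int = omega_true[0] + np.cumsum(a_tan / r) * dt

      # (ii) square root of the centripetal acceleration (sign-less, noisy near zero)
      omega_sqrt = np.sqrt(np.clip(a_cen, 0.0, None) / r)

      # Naive fixed-weight blend as a stand-in for the Kalman fusion
      omega_fused = 0.5 * omega_int + 0.5 * omega_sqrt
      print(np.mean(np.abs(omega_fused - omega_true)))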

  8. Classical and modern power spectrum estimation for tune measurement in CSNS RCS

    International Nuclear Information System (INIS)

    Yang Xiaoyu; Xu Taoguang; Fu Shinian; Zeng Lei; Bian Xiaojuan

    2013-01-01

    Precise measurement of betatron tune is required for good operating condition of CSNS RCS. The fractional part of betatron tune is important and it can be measured by analyzing the signals of beam position from the appointed BPM. Usually these signals are contaminated during the acquisition process, therefore several power spectrum methods are used to improve the frequency resolution. In this article classical and modern power spectrum methods are used. In order to compare their performance, the results of simulation data and IQT data from J-PARC RCS are discussed. It is shown that modern power spectrum estimation has better performance than the classical ones, though the calculation is more complex. (authors)

  9. Estimation of Total Tree Height from Renewable Resources Evaluation Data

    Science.gov (United States)

    Charles E. Thomas

    1981-01-01

    Many ecological, biological, and genetic studies use the measurement of total tree height. Until recently, the Southern Forest Experiment Station's inventory procedures through Renewable Resources Evaluation (RRE) have not included total height measurements. This note provides equations to estimate total height based on other RRE measurements.

  10. Kinetic parameter estimation from attenuated SPECT projection measurements

    International Nuclear Information System (INIS)

    Reutter, B.W.; Gullberg, G.T.

    1998-01-01

    Conventional analysis of dynamically acquired nuclear medicine data involves fitting kinetic models to time-activity curves generated from regions of interest defined on a temporal sequence of reconstructed images. However, images reconstructed from the inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system can contain artifacts that lead to biases in the estimated kinetic parameters. To overcome this problem the authors investigated the estimation of kinetic parameters directly from projection data by modeling the data acquisition process. To accomplish this it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated transverse slice, kinetic parameters were estimated for simple one-compartment models for three myocardial regions of interest, as well as for the liver. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated data had biases ranging between 1-63%. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Predicted uncertainties (standard deviations) of the parameters obtained for 500,000 detected events ranged between 2-31% for the myocardial uptake parameters and 2-23% for the myocardial washout parameters

  11. An improved estimator for the hydration of fat-free mass from in vivo measurements subject to additive technical errors

    International Nuclear Information System (INIS)

    Kinnamon, Daniel D; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L; Lipsitz, Stuart R

    2010-01-01

    The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not
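
    A small Monte Carlo sketch (with hypothetical error magnitudes) of the bias mechanism described above: with additive errors on both TBW and FFM, the mean of individual ratios drifts upward from the true hydration fraction. This only illustrates the bias; it is not the instrumental variables estimator proposed in the paper:

      # Monte Carlo illustration (hypothetical values) of the bias in the
      # mean-of-ratios estimator of the hydration fraction HF = TBW/FFM when
      # both quantities carry additive technical errors.
      import numpy as np

      rng = np.random.default_rng(0)
      hf_true = 0.732                              # assumed true hydration fraction
      ffm = rng.uniform(30.0, 60.0, 100000)        # true fat-free mass, kg
      tbw = hf_true * ffm                          # true total body water, kg

      ffm_meas = ffm + rng.normal(0.0, 3.0, ffm.size)   # additive technical errors
      tbw_meas = tbw + rng.normal(0.0, 2.0, tbw.size)

      mean_of_ratios = np.mean(tbw_meas / ffm_meas)     # biased upward (Jensen's inequality)
      ratio_of_means = np.mean(tbw_meas) / np.mean(ffm_meas)
      print(mean_of_ratios, ratio_of_means, hf_true)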

  12. Inclusion estimation from a single electrostatic boundary measurement

    DEFF Research Database (Denmark)

    Karamehmedovic, Mirza; Knudsen, Kim

    2013-01-01

    We present a numerical method for the detection and estimation of perfectly conducting inclusions in conducting homogeneous host media in . The estimation is based on the evaluation of an indicator function that depends on a single pair of Cauchy data (electric potential and current) given at the...

  13. Ecosystem services provided by bats.

    Science.gov (United States)

    Kunz, Thomas H; Braun de Torrez, Elizabeth; Bauer, Dana; Lobova, Tatyana; Fleming, Theodore H

    2011-03-01

    Ecosystem services are the benefits obtained from the environment that increase human well-being. Economic valuation is conducted by measuring the human welfare gains or losses that result from changes in the provision of ecosystem services. Bats have long been postulated to play important roles in arthropod suppression, seed dispersal, and pollination; however, only recently have these ecosystem services begun to be thoroughly evaluated. Here, we review the available literature on the ecological and economic impact of ecosystem services provided by bats. We describe dietary preferences, foraging behaviors, adaptations, and phylogenetic histories of insectivorous, frugivorous, and nectarivorous bats worldwide in the context of their respective ecosystem services. For each trophic ensemble, we discuss the consequences of these ecological interactions on both natural and agricultural systems. Throughout this review, we highlight the research needed to fully determine the ecosystem services in question. Finally, we provide a comprehensive overview of economic valuation of ecosystem services. Unfortunately, few studies estimating the economic value of ecosystem services provided by bats have been conducted to date; however, we outline a framework that could be used in future studies to more fully address this question. Consumptive goods provided by bats, such as food and guano, are often exchanged in markets where the market price indicates an economic value. Nonmarket valuation methods can be used to estimate the economic value of nonconsumptive services, including inputs to agricultural production and recreational activities. Information on the ecological and economic value of ecosystem services provided by bats can be used to inform decisions regarding where and when to protect or restore bat populations and associated habitats, as well as to improve public perception of bats. © 2011 New York Academy of Sciences.

  14. Estimating product-to-product variations in metal forming using force measurements

    Science.gov (United States)

    Havinga, Jos; van den Boogaard, Ton

    2017-10-01

    The limits of production accuracy of metal forming processes can be stretched by the development of control systems for compensation of product-to-product variations. Such systems require the use of measurements from each semi-finished product. These measurements must be used to estimate the final quality of each product. We propose to predict part of the product-to-product variations in multi-stage forming processes based on force measurements from previous process stages. The reasoning is that final product properties as well as process forces are expected to be correlated since they are both affected by material and process variation. In this study, an approach to construct a moving window process model based on historical data from the process is presented. These regression models can be built and updated in real-time during production. The approach is tested with data from a demonstrator process with cutting, deep drawing and bending stages. It is shown that part of the product-to-product variations in the process can be predicted with the developed process model.

  15. Lightweight, Miniature Inertial Measurement System

    Science.gov (United States)

    Tang, Liang; Crassidis, Agamemnon

    2012-01-01

    A miniature, lighter-weight, and highly accurate inertial navigation system (INS) is coupled with GPS receivers to provide stable and highly accurate positioning, attitude, and inertial measurements while being subjected to highly dynamic maneuvers. In contrast to conventional methods that use extensive, ground-based, real-time tracking and control units that are expensive, large, and require excessive amounts of power to operate, this method focuses on the development of an estimator that makes use of a low-cost, miniature accelerometer array fused with traditional measurement systems and GPS. Through the use of a position tracking estimation algorithm, onboard accelerometers are numerically integrated and transformed using attitude information to obtain an estimate of position in the inertial frame. Position and velocity estimates are subject to drift due to accelerometer sensor bias and high vibration over time, and so require integration with GPS information using a Kalman filter to provide highly accurate and reliable inertial tracking estimations. The method implemented here uses the local gravitational field vector. Upon determining the location of the local gravitational field vector relative to two consecutive sensors, the orientation of the device may then be estimated, and the attitude determined. Improved attitude estimates further enhance the inertial position estimates. The device can be powered either by batteries or by the power source onboard its target platforms. A DB9 port provides the I/O to external systems, and the device is designed to be mounted in a waterproof case for all-weather conditions.
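
    A one-dimensional Python sketch of the fusion idea described above: dead-reckoning from accelerometer samples corrected by intermittent GPS position fixes through a linear Kalman filter. The state-space model, noise levels, and bias values are hypothetical and far simpler than the onboard estimator:

      # 1-D illustration of accelerometer/GPS fusion: dead-reckon position and
      # velocity from acceleration samples, then correct with intermittent GPS
      # position fixes using a linear Kalman filter (a simplified stand-in for
      # the onboard estimator described above).
      import numpy as np

      dt = 0.01
      F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition for [pos, vel]
      B = np.array([[0.5 * dt**2], [dt]])       # acceleration input matrix
      H = np.array([[1.0, 0.0]])                # GPS observes position only
      Q = 1e-4 * np.eye(2)                      # process noise covariance
      R = np.array([[4.0]])                     # GPS position noise variance, m^2

      x = np.zeros((2, 1))                      # state estimate [pos; vel]
      P = np.eye(2)                             # estimate covariance

      rng = np.random.default_rng(0)
      true_pos, true_vel = 0.0, 0.0
      for k in range(1000):
          a_true = 0.5 * np.sin(0.02 * k)                 # hypothetical acceleration profile
          true_vel += a_true * dt
          true_pos += true_vel * dt
          a_meas = a_true + rng.normal(0.0, 0.05) + 0.02  # noise plus a small bias

          # Predict with the (biased) accelerometer sample
          x = F @ x + B * a_meas
          P = F @ P @ F.T + Q

          # Every second, correct with a noisy GPS position fix
          if k % 100 == 0:
              z = np.array([[true_pos + rng.normal(0.0, 2.0)]])
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)
              x = x + K @ (z - H @ x)
              P = (np.eye(2) - K @ H) @ P

      print(float(x[0, 0]), true_pos)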

  16. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children' s Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)

    2014-07-15

    previously published pediatric patient doses that accounted for patient size in their dose calculation, and was found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusions: For organs fully covered within the scan volume, the average correlation of SSDE and organ absolute dose was found to be better than ±10%. In addition, this study provides a complete list of organ dose correlation factors (CF_SSDE^organ) for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE.

  17. Bias in tensor based morphometry Stat-ROI measures may result in unrealistic power estimates.

    Science.gov (United States)

    Thompson, Wesley K; Holland, Dominic

    2011-07-01

    A series of reports have recently appeared using tensor based morphometry statistically-defined regions of interest, Stat-ROIs, to quantify longitudinal atrophy in structural MRIs from the Alzheimer's Disease Neuroimaging Initiative (ADNI). This commentary focuses on one of these reports, Hua et al. (2010), but the issues raised here are relevant to the others as well. Specifically, we point out a temporal pattern of atrophy in subjects with Alzheimer's disease and mild cognitive impairment whereby the majority of atrophy in two years occurs within the first 6 months, resulting in overall elevated estimated rates of change. Using publicly-available ADNI data, this temporal pattern is also found in a group of identically-processed healthy controls, strongly suggesting that methodological bias is corrupting the measures. The resulting bias seriously impacts the validity of conclusions reached using these measures; for example, sample size estimates reported by Hua et al. (2010) may be underestimated by a factor of five to sixteen. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Estimation of Dynamic Errors in Laser Optoelectronic Dimension Gauges for Geometric Measurement of Details

    Directory of Open Access Journals (Sweden)

    Khasanov Zimfir

    2018-01-01

    The article reviews the capabilities and particularities of an approach to improving the metrological characteristics of fiber-optic pressure sensors (FOPS) based on the estimation of dynamic errors in laser optoelectronic dimension gauges for geometric measurement of details. It is shown that the proposed criteria render new methods for the conjugation of optoelectronic converters in the dimension gauge for geometric measurements in order to reduce the speed and volume requirements for the Random Access Memory (RAM) of the video controller which processes the signal. It is found that the lower the relative error, the higher the interrogation speed of the CCD array. It is shown that, thus, the maximum achievable dynamic accuracy characteristics of the optoelectronic gauge are determined by the following conditions: the parameter stability of the electronic circuits in the CCD array and the microprocessor calculator; linearity of characteristics; and error dynamics and noise in all electronic circuits of the CCD array and microprocessor calculator.

  19. Estimation of Penetrated Bone Layers During Craniotomy via Bioimpedance Measurement.

    Science.gov (United States)

    Teichmann, Daniel; Rohe, Lucas; Niesche, Annegret; Mueller, Meiko; Radermacher, Klaus; Leonhardt, Steffen

    2017-04-01

    Craniotomy is the removal of a bone flap from the skull and is a first step in many neurosurgical interventions. During craniotomy, an efficient cut of the bone without injuring adjoining soft tissues is very critical. The aim of this study is to investigate the feasibility of estimating the currently penetrated cranial bone layer by means of bioimpedance measurement. A finite-element model was developed and a simulation study conducted. Simulations were performed at different positions along an elliptical cutting path and at three different operation areas. Finally, the validity of the simulation was demonstrated by an ex vivo experiment based on use of a bovine shoulder blade bone and a commercially available impedance meter. The curve of the absolute impedance and phase exhibits characteristic changes at the transition from one bone layer to the next, which can be used to determine the bone layer last penetrated by the cutting tool. The bipolar electrode configuration is superior to the monopolar measurement. A horizontal electrode arrangement at the tip of the cutting tool produces the best results. This study successfully demonstrates the feasibility of detecting the transition between cranial bone layers during craniotomy by bioimpedance measurements using electrodes located on the cutting tool. Based on the results of this study, bioimpedance measurement seems to be a promising option for intraoperative ad hoc information about the bone layer currently penetrated and could contribute to patient safety during neurosurgery.

  20. Comparison of NIS and NHIS/NIPRCS vaccination coverage estimates. National Immunization Survey. National Health Interview Survey/National Immunization Provider Record Check Study.

    Science.gov (United States)

    Bartlett, D L; Ezzati-Rice, T M; Stokley, S; Zhao, Z

    2001-05-01

    The National Immunization Survey (NIS) and the National Health Interview Survey (NHIS) produce national coverage estimates for children aged 19 months to 35 months. The NIS is a cost-effective, random-digit-dialing telephone survey that produces national and state-level vaccination coverage estimates. The National Immunization Provider Record Check Study (NIPRCS) is conducted in conjunction with the annual NHIS, which is a face-to-face household survey. As the NIS is a telephone survey, potential coverage bias exists as the survey excludes children living in nontelephone households. To assess the validity of estimates of vaccine coverage from the NIS, we compared 1995 and 1996 NIS national estimates with results from the NHIS/NIPRCS for the same years. Both the NIS and the NHIS/NIPRCS produce similar results. The NHIS/NIPRCS supports the findings of the NIS.

  1. Activity assays and immunoassays for plasma Renin and prorenin: information provided and precautions necessary for accurate measurement

    DEFF Research Database (Denmark)

    Campbell, Duncan J; Nussberger, Juerg; Stowasser, Michael

    2009-01-01

    into focus the differences in information provided by activity assays and immunoassays for renin and prorenin measurement and has drawn attention to the need for precautions to ensure their accurate measurement. CONTENT: Renin activity assays and immunoassays provide related but different information...... provided by these assays and of the precautions necessary to ensure their accuracy....

  2. Validating a mass balance accounting approach to using 7Be measurements to estimate event-based erosion rates over an extended period at the catchment scale

    Science.gov (United States)

    Porto, Paolo; Walling, Des E.; Cogliandro, Vanessa; Callegari, Giovanni

    2016-07-01

    Use of the fallout radionuclides cesium-137 and excess lead-210 offers important advantages over traditional methods of quantifying erosion and soil redistribution rates. However, both radionuclides provide information on longer-term (i.e., 50-100 years) average rates of soil redistribution. Beryllium-7, with its half-life of 53 days, can provide a basis for documenting short-term soil redistribution and it has been successfully employed in several studies. However, the approach commonly used introduces several important constraints related to the timing and duration of the study period. A new approach proposed by the authors that overcomes these constraints has been successfully validated using an erosion plot experiment undertaken in southern Italy. Here, a further validation exercise undertaken in a small (1.38 ha) catchment is reported. The catchment was instrumented to measure event sediment yields and beryllium-7 measurements were employed to document the net soil loss for a series of 13 events that occurred between November 2013 and June 2015. In the absence of significant sediment storage within the catchment's ephemeral channel system and of a significant contribution from channel erosion to the measured sediment yield, the estimates of net soil loss for the individual events could be directly compared with the measured sediment yields to validate the former. The close agreement of the two sets of values is seen as successfully validating the use of beryllium-7 measurements and the new approach to obtain estimates of net soil loss for a sequence of individual events occurring over an extended period at the scale of a small catchment.

  3. Husbandry Emissions Estimation: Fusion of Mobile Surface and Airborne Remote Sensing and Mobile Surface In Situ Measurements

    Science.gov (United States)

    Leifer, I.; Hall, J. L.; Melton, C.; Tratt, D. M.; Chang, C. S.; Buckland, K. N.; Frash, J.; Leen, J. B.; Van Damme, M.; Clarisse, L.

    2017-12-01

    Emissions of methane and ammonia from intensive animal husbandry are important drivers of climate and photochemical and aerosol pollution. Husbandry emission estimates are somewhat uncertain because of their dependence on practices, temperature, micro-climate, and other factors, leading to variations in emission factors up to an order-of-magnitude. Mobile in situ measurements are increasingly being applied to derive trace gas emissions by Gaussian plume inversion; however, inversion with incomplete information can lead to erroneous emissions and incorrect source location. Mobile in situ concentration and wind data and mobile remote sensing column data from the Chino Dairy Complex in the Los Angeles Basin were collected near simultaneously (within 1-10 s, depending on speed) while transecting plumes, approximately orthogonal to winds. This analysis included airborne remote sensing trace gas information. MISTIR collected vertical column FTIR data simultaneously with in situ concentration data acquired by the AMOG-Surveyor while both vehicles traveled in convoy. The column measurements are insensitive to the turbulence characterization needed in Gaussian plume inversion of concentration data and thus provide a flux reference for evaluating in situ data inversions. Four different approaches were used on inversions for a single dairy, and also for the aggregate dairy complex plume. Approaches were based on differing levels of "knowledge" used in the inversion from solely the in situ platform and a single gas to a combination of information from all platforms and multiple gases. Derived dairy complex fluxes differed significantly from those estimated by other studies of the Chino complex. Analysis of long term satellite data showed that this most likely results from seasonality effects, highlighting the pitfalls of applying annualized extensions of flux measurements to a single campaign instantiation.
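
    For reference, Gaussian plume inversion of concentration data typically rests on the standard plume model with ground reflection (our notation: emission rate Q, mean wind speed u, crosswind and vertical dispersion parameters σ_y and σ_z, effective source height H); inversion solves this relation for Q given measured concentrations C:

      % Standard Gaussian plume concentration field with ground reflection;
      % the inversion step solves for the emission rate Q given measured C.
      C(x, y, z) = \frac{Q}{2 \pi u \sigma_y \sigma_z}
        \exp\!\left(-\frac{y^{2}}{2 \sigma_y^{2}}\right)
        \left[ \exp\!\left(-\frac{(z - H)^{2}}{2 \sigma_z^{2}}\right)
             + \exp\!\left(-\frac{(z + H)^{2}}{2 \sigma_z^{2}}\right) \right]

    The sensitivity of σ_y and σ_z to the turbulence characterization is what makes the column measurements, which integrate over the vertical structure, a useful flux reference in the comparison above.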

  4. Focused ultrasound transducer spatial peak intensity estimation: a comparison of methods

    Science.gov (United States)

    Civale, John; Rivens, Ian; Shaw, Adam; ter Haar, Gail

    2018-03-01

    Characterisation of the spatial peak intensity at the focus of high intensity focused ultrasound transducers is difficult because of the risk of damage to hydrophone sensors at the high focal pressures generated. Hill et al (1994 Ultrasound Med. Biol. 20 259-69) provided a simple equation for estimating spatial-peak intensity for solid spherical bowl transducers using measured acoustic power and focal beamwidth. This paper demonstrates theoretically and experimentally that this expression is only strictly valid for spherical bowl transducers without a central (imaging) aperture. A hole in the centre of the transducer results in over-estimation of the peak intensity. Improved strategies for determining focal peak intensity from a measurement of total acoustic power are proposed. Four methods are compared: (i) a solid spherical bowl approximation (after Hill et al 1994 Ultrasound Med. Biol. 20 259-69), (ii) a numerical method derived from theory, (iii) a method using measured sidelobe to focal peak pressure ratio, and (iv) a method for measuring the focal power fraction (FPF) experimentally. Spatial-peak intensities were estimated for 8 transducers at three drive power levels: low (approximately 1 W), moderate (~10 W) and high (20-70 W). The calculated intensities were compared with those derived from focal peak pressure measurements made using a calibrated hydrophone. The FPF measurement method was found to provide focal peak intensity estimates that agreed most closely (within 15%) with the hydrophone measurements, followed by the pressure ratio method (within 20%). The numerical method was found to consistently over-estimate focal peak intensity (+40% on average), however, for transducers with a central hole it was more accurate than using the solid bowl assumption (+70% over-estimation). In conclusion, the ability to make use of an automated beam plotting system, and a hydrophone with good spatial resolution, greatly facilitates characterisation of the FPF, and

  5. Opportunities and challenges for evaluating precipitation estimates during GPM mission

    Energy Technology Data Exchange (ETDEWEB)

    Amitai, E. [George Mason Univ. and NASA Goddard Space Flight Center, Greenbelt, MD (United States); NASA Goddard Space Flight Center, Greenbelt, MD (United States); Llort, X.; Sempere-Torres, D. [GRAHI/Univ. Politecnica de Catalunya, Barcelona (Spain)

    2006-10-15

    Data assimilation in conjunction with numerical weather prediction and a variety of hydrologic applications now depend on satellite observations of precipitation. However, providing values of precipitation is not sufficient unless they are accompanied by the associated uncertainty estimates. The main approach of quantifying satellite precipitation uncertainties generally requires establishment of reliable uncertainty estimates for the ground validation rainfall products. This paper discusses several of the relevant validation concepts evolving from the tropical rainfall measuring mission (TRMM) era to the global precipitation measurement mission (GPM) era in the context of determining and reducing uncertainties of ground and space-based radar rainfall estimates. From comparisons of probability distribution functions of rain rates derived from TRMM precipitation radar and co-located ground based radar data - using the new NASA TRMM radar rainfall products (version 6) - this paper provides (1) a brief review of the importance of comparing pdfs of rain rate for statistical and physical verification of space-borne radar estimates of precipitation; (2) a brief review of how well the ground validation estimates compare to the TRMM radar retrieved estimates; and (3) discussion on opportunities and challenges to determine and reduce the uncertainties in space-based and ground-based radar estimates of rain rate distributions. (orig.)

  6. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)

  7. The variance of the locally measured Hubble parameter explained with different estimators

    DEFF Research Database (Denmark)

    Odderskov, Io Sandberg Hess; Hannestad, Steen; Brandbyge, Jacob

    2017-01-01

    We study the expected variance of measurements of the Hubble constant, H0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H0 from CMB measurements and the value measured in the local universe, these considerations are important in light...

  8. Estimating values for the moisture source load and buffering capacities from indoor climate measurements

    NARCIS (Netherlands)

    Schijndel, van A.W.M.

    2008-01-01

    The objective of this study is to investigate the potential for estimating values for the total size of human induced moisture source load and the total buffering (moisture storage) capacity of the interior objects with the use of relatively simple measurements and the use of heat, air, and moisture

  9. Comparison of Satellite Rainfall Estimates and Rain Gauge Measurements in Italy, and Impact on Landslide Modeling

    Directory of Open Access Journals (Sweden)

    Mauro Rossi

    2017-12-01

    Landslides can be triggered by intense or prolonged rainfall. Rain gauge measurements are commonly used to predict landslides even if satellite rainfall estimates are available. Recent research focuses on the comparison of satellite estimates and gauge measurements. The rain gauge data from the Italian network (collected in the system database “Verifica Rischio Frana”, VRF) are compared with the National Aeronautics and Space Administration (NASA) Tropical Rainfall Measuring Mission (TRMM) products. For the purpose, we couple point gauge and satellite rainfall estimates at individual grid cells, evaluating the correlation between gauge and satellite data in different morpho-climatological conditions. We then analyze the statistical distributions of both rainfall data types and the rainfall events derived from them. Results show that satellite data underestimate ground data, with the largest differences in mountainous areas. Power-law models are more appropriate to correlate gauge and satellite data. The gauge and satellite-based products exhibit different statistical distributions and the rainfall events derived from them differ. In conclusion, satellite rainfall cannot be directly compared with ground data, requiring local investigation to account for specific morpho-climatological settings. Results suggest that satellite data can be used for forecasting landslides only after performing a local scaling between satellite and ground data.
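
    A minimal sketch of fitting the power-law relation mentioned above between co-located gauge and satellite values, done here by ordinary least squares in log-log space; the data values are hypothetical, not taken from the VRF/TRMM comparison:

      # Fit a power-law relation gauge = a * satellite**b between co-located
      # satellite rainfall estimates and rain gauge measurements by ordinary
      # least squares on the log-transformed pairs. Data values are hypothetical.
      import numpy as np

      sat = np.array([1.2, 3.5, 6.0, 10.1, 18.4, 30.2])     # mm/day, satellite estimate
      gauge = np.array([1.6, 4.8, 8.9, 14.7, 27.9, 49.5])   # mm/day, rain gauge

      b, log_a = np.polyfit(np.log(sat), np.log(gauge), 1)
      a = float(np.exp(log_a))
      print(f"gauge ~ {a:.2f} * satellite**{b:.2f}")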

  10. Measurement of the incorporation rates of four amino acids into proteins for estimating bacterial production.

    Science.gov (United States)

    Servais, P

    1995-03-01

    In aquatic ecosystems, [(3)H]thymidine incorporation into bacterial DNA and [(3)H]leucine incorporation into proteins are usually used to estimate bacterial production. The incorporation rates of four amino acids (leucine, tyrosine, lysine, alanine) into proteins of bacteria were measured in parallel on natural freshwater samples from the basin of the river Meuse (Belgium). Comparison of the incorporation into proteins and into the total macromolecular fraction showed that these different amino acids were incorporated at more than 90% into proteins. From incorporation measurements at four subsaturated concentrations (range, 2-77 nM), the maximum incorporation rates were determined. Strong correlations (r > 0.91 for all the calculated correlations) were found between the maximum incorporation rates of the different tested amino acids over a range of two orders of magnitude of bacterial activity. Bacterial production estimates were calculated using theoretical and experimental conversion factors. The production estimates calculated from the incorporation rates of the four amino acids were in good concordance, especially when the experimental conversion factors were used (slope range, 0.91-1.11, and r > 0.91). This study suggests that the incorporation of various amino acids into proteins can be used to estimate bacterial production.

  11. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term.

  12. A New Proxy Measurement Algorithm with Application to the Estimation of Vertical Ground Reaction Forces Using Wearable Sensors.

    Science.gov (United States)

    Guo, Yuzhu; Storm, Fabio; Zhao, Yifan; Billings, Stephen A; Pavic, Aleksandar; Mazzà, Claudia; Guo, Ling-Zhong

    2017-09-22

    Measurement of the ground reaction forces (GRF) during walking is typically limited to laboratory settings, and only short observations using wearable pressure insoles have been reported so far. In this study, a new proxy measurement method is proposed to estimate the vertical component of the GRF (vGRF) from wearable accelerometer signals. The accelerations are used as the proxy variable. An orthogonal forward regression algorithm (OFR) is employed to identify the dynamic relationships between the proxy variables and the measured vGRF using pressure-sensing insoles. The obtained model, which represents the connection between the proxy variable and the vGRF, is then used to predict the latter. The results have been validated using pressure insoles data collected from nine healthy individuals under two outdoor walking tasks in non-laboratory settings. The results show that the vGRFs can be reconstructed with high accuracy (with an average prediction error of less than 5.0%) using only one wearable sensor mounted at the waist (L5, fifth lumbar vertebra). Proxy measures with different sensor positions are also discussed. Results show that the waist acceleration-based proxy measurement is more stable with less inter-task and inter-subject variability than the proxy measures based on forehead level accelerations. The proposed proxy measure provides a promising low-cost method for monitoring ground reaction forces in real-life settings and introduces a novel generic approach for replacing the direct determination of difficult to measure variables in many applications.

  13. A New Proxy Measurement Algorithm with Application to the Estimation of Vertical Ground Reaction Forces Using Wearable Sensors

    Directory of Open Access Journals (Sweden)

    Yuzhu Guo

    2017-09-01

    Full Text Available Measurement of the ground reaction forces (GRF) during walking is typically limited to laboratory settings, and only short observations using wearable pressure insoles have been reported so far. In this study, a new proxy measurement method is proposed to estimate the vertical component of the GRF (vGRF) from wearable accelerometer signals. The accelerations are used as the proxy variable. An orthogonal forward regression algorithm (OFR) is employed to identify the dynamic relationships between the proxy variables and the measured vGRF using pressure-sensing insoles. The obtained model, which represents the connection between the proxy variable and the vGRF, is then used to predict the latter. The results have been validated using pressure insoles data collected from nine healthy individuals under two outdoor walking tasks in non-laboratory settings. The results show that the vGRFs can be reconstructed with high accuracy (with an average prediction error of less than 5.0%) using only one wearable sensor mounted at the waist (L5, fifth lumbar vertebra). Proxy measures with different sensor positions are also discussed. Results show that the waist acceleration-based proxy measurement is more stable with less inter-task and inter-subject variability than the proxy measures based on forehead level accelerations. The proposed proxy measure provides a promising low-cost method for monitoring ground reaction forces in real-life settings and introduces a novel generic approach for replacing the direct determination of difficult to measure variables in many applications.

  14. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    Science.gov (United States)

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.

  15. Longshore sediment transport rate-measurement and estimation, central west coast of India

    Digital Repository Service at National Institute of Oceanography (India)

    SanilKumar, V.; Anand, N.M.; Chandramohan, P.; Naik, G.N.

    The longshore current generated by obliquely incident breaking waves plays an important role in transporting sediment in the surf zone. The longshore current velocity varies across the surf zone, reaching a maximum value close to the wave...

  16. Estimating the relative water content of leaves in a cotton canopy

    Science.gov (United States)

    Vanderbilt, Vern; Daughtry, Craig; Kupinski, Meredith; Bradley, Christine; French, Andrew; Bronson, Kevin; Chipman, Russell; Dahlgren, Robert

    2017-08-01

    Remotely sensing plant canopy water status remains a long-term goal of remote sensing research. Established approaches to estimating canopy water status — the Crop Water Stress Index, the Water Deficit Index and the Equivalent Water Thickness — involve measurements in the thermal or reflective infrared. Here we report plant water status estimates based upon analysis of polarized visible imagery of a cotton canopy measured by the ground-based Multi-Spectral Polarization Imager (MSPI). Such estimators potentially provide access to the plant hydrological photochemistry that manifests scattering and absorption effects in the visible spectral region.

  17. Atmospheric Inverse Estimates of Methane Emissions from Central California

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Chuanfeng; Andrews, Arlyn E.; Bianco, Laura; Eluszkiewicz, Janusz; Hirsch, Adam; MacDonald, Clinton; Nehrkorn, Thomas; Fischer, Marc L.

    2008-11-21

    Methane mixing ratios measured at a tall tower are compared to model predictions to estimate surface emissions of CH₄ in Central California for October-December 2007 using an inverse technique. Predicted CH₄ mixing ratios are calculated based on spatially resolved a priori CH₄ emissions and simulated atmospheric trajectories. The atmospheric trajectories, along with surface footprints, are computed using the Weather Research and Forecast (WRF) model coupled to the Stochastic Time-Inverted Lagrangian Transport (STILT) model. An uncertainty analysis is performed to provide quantitative uncertainties in estimated CH₄ emissions. Three inverse model estimates of CH₄ emissions are reported. First, linear regressions of modeled and measured CH₄ mixing ratios obtain slopes of 0.73 ± 0.11 and 1.09 ± 0.14 using California-specific and EDGAR 3.2 emission maps respectively, suggesting that actual CH₄ emissions were about 37 ± 21% higher than California-specific inventory estimates. Second, a Bayesian 'source' analysis suggests that livestock emissions are 63 ± 22% higher than the a priori estimates. Third, a Bayesian 'region' analysis is carried out for CH₄ emissions from 13 sub-regions, which shows that inventory CH₄ emissions from the Central Valley are underestimated and uncertainties in CH₄ emissions are reduced for sub-regions near the tower site, yielding best estimates of flux from those regions consistent with 'source' analysis results. The uncertainty reductions for regions near the tower indicate that a regional network of measurements will be necessary to provide accurate estimates of surface CH₄ emissions for multiple regions.

  18. Development of formulas for the estimation of renal depth and application in the measurement of glomerular filtration rate in Koreans

    International Nuclear Information System (INIS)

    Yoo, Ie Ryung; Kim, Sung Hoon; Chung, Yong An

    2000-01-01

    There is no established formula for estimating renal depths in Koreans. As a result, we undertook this study to develop a new formula, and to apply this formula in the calculation of glomerular filtration rate (GFR). We measured the renal depth (RD) on the abdominal CT obtained in 300 adults (M:F=167:133, mean age 50.9 years) without known renal diseases. The RDs measured by CT were compared with the estimated RDs based on the Tonnesen and Taylor equations. New formulas were derived from the measured RDs in 200 out of 300 patients based on several variables such as sex, age, weight, and height by multiple regression analysis. The RDs estimated from the new formulas were compared with the measured RDs in the remaining 100 patients as a control. In 48 patients who underwent Tc-99m DTPA renal scintigraphy, GFR was measured with three equations (new formula, Tonnesen and Taylor equations), respectively, and compared with each other. The mean values of the RDs measured from CT were 6.9 cm for the right kidney of the men (MRK), 6.7 cm for the left kidney of the men (MLK), 6.7 cm for the right kidney of the women (WRK), and 6.6 cm for the left kidney of the women (WLK). The RDs estimated from the Tonnesen equation were significantly shorter than those measured from CT. The newly derived formulas were 12.813 (weight/height) + 0.002 (age) + 2.264 for MRK, 15.344 (weight/height) + 0.011 (age) + 0.557 for MLK, 12.936 (weight/height) + 0.014 (age) + 1.462 for WRK and 13.488 (weight/height) + 0.019 (age) + 0.762 for WLK. The correlation coefficients of the RD measured from CT and estimated from the new formula were 0.529 in MRK, 0.729 in MLK, 0.601 in WRK, and 0.724 in WLK, respectively. The GFRs from the new formula were significantly higher than those from the Tonnesen equation and were the most similar to normal GFR values. We generated new formulas for estimating RD in Koreans from CT data. By adopting these formulas, we expect that GFR can be measured accurately by the Gates method
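
    The formulas quoted in the record can be applied directly; the sketch below implements them as written. The units (weight in kg, height in cm, age in years) are assumptions, since they are not stated in the record.

```python
def renal_depth_cm(sex, side, weight, height, age):
    """Renal depth (cm) from the regression formulas quoted in the abstract.

    Assumed units (not stated in the record): weight in kg, height in cm, age in years.
    """
    w_h = weight / height
    if sex == "M" and side == "R":
        return 12.813 * w_h + 0.002 * age + 2.264
    if sex == "M" and side == "L":
        return 15.344 * w_h + 0.011 * age + 0.557
    if sex == "F" and side == "R":
        return 12.936 * w_h + 0.014 * age + 1.462
    if sex == "F" and side == "L":
        return 13.488 * w_h + 0.019 * age + 0.762
    raise ValueError("sex must be 'M'/'F' and side 'R'/'L'")

# Example: 51-year-old man, 70 kg, 170 cm
print(round(renal_depth_cm("M", "R", 70, 170, 51), 2))  # right kidney depth
print(round(renal_depth_cm("M", "L", 70, 170, 51), 2))  # left kidney depth
```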

  19. Measurement of soil contamination by radionuclides due to the Fukushima Dai-ichi Nuclear Power Plant accident and associated estimated cumulative external dose estimation

    International Nuclear Information System (INIS)

    Endo, S.; Kimura, S.; Takatsuji, T.; Nanasawa, K.; Imanaka, T.; Shizuma, K.

    2012-01-01

    Soil sampling was carried out at an early stage of the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident. Samples were taken from areas around FDNPP, at four locations northwest of FDNPP, at four schools and in four cities, including Fukushima City. Radioactive contaminants in soil samples were identified and measured by using a Ge detector and included 129mTe, 129Te, 131I, 132Te, 132I, 134Cs, 136Cs, 137Cs, 140Ba and 140La. The highest soil depositions were measured to the northwest of FDNPP. From this soil deposition data, variations in dose rates over time and the cumulative external doses at the locations for 3 months and 1 y after deposition were estimated. At locations northwest of FDNPP, the external dose rate at 3 months after deposition was 4.8–98 μSv/h and the cumulative dose for 1 y was 51 to 1.0 × 10³ mSv; the highest values were at Futaba Yamada. At the four schools, which were used as evacuation shelters, and in the four urban cities, the external dose rate at 3 months after deposition ranged from 0.03 to 3.8 μSv/h and the cumulative doses for 1 y ranged from 3 to 40 mSv. The cumulative dose at Fukushima Niihama Park was estimated as the highest in the four cities. The estimated external dose rates and cumulative doses show that careful countermeasures and remediation will be needed as a result of the accident, and detailed measurements of radionuclide deposition densities in soil will be important input data to conduct these activities.

  20. Estimation of 244Cm intake by bioassay measurements following a contamination incident

    International Nuclear Information System (INIS)

    Thein, M.; Bogard, J.S.; Eckerman, K.F.

    1990-01-01

    An employee was contaminated with radioactive material consisting primarily of 244Cm and 246Cm as a consequence of handling a curium nitrate solution at a reprocessing facility. In vivo gamma analysis and in vitro (urine and fecal) bioassay measurements were performed. A sample of the curium solution from the workplace was obtained to confirm that the nitrate was the chemical form and to identify the isotopes of curium present. The mass ratio of 244Cm/246Cm was determined to be 91 to 7. Observed excretion rates were consistent with available information on curium. The results of the in vivo and in vitro measurements are presented and intake estimates for the incident are developed. (author) 11 refs.; 3 figs.; 2 tabs

  1. Measurement and estimated health risks of volatile organic compounds and polychlorinated biphenyls in air at the Hanford Site

    International Nuclear Information System (INIS)

    Patton, G.W.; Cooper, A.T.; Blanton, M.L.

    1994-10-01

    A variety of radioactive and nonradioactive chemicals have been released in effluent streams and discharged to waste disposal facilities during the nuclear materials production period at the Hanford Site. Extensive environmental surveillance for radioactive materials has occurred at Hanford; however, only limited information is available on the types and concentrations of organic pollutants potentially present. This report describes work performed to provide the Hanford Site Surface Environmental Surveillance Project with representative air concentration data for volatile organic compounds and polychlorinated biphenyls (PCBs). US Environmental Protection Agency (USEPA) volatile organic compound sampling methods evaluated for Hanford Site use were carbon-based adsorbent traps (TO-2) and Summa air canisters (TO-14). Polychlorinated biphenyls were sampled using USEPA method (TO-4), which uses glass fiber filters and polyurethane foam adsorbent beds to collect the PCBs. This report also presents results for environmental surveillance samples collected for volatile organic compound and PCB analyses from 1990 to 1993. All measured air concentrations of volatile organic compounds and PCBs were well below applicable maximum allowable concentration standards for air contaminants. Because of the lack of ambient air concentration standards, a conservative estimate is provided of the potential human health impacts from exposure to the ambient air concentrations measured on the Hanford Site.

  2. Estimating Aboveground Forest Carbon Stock of Major Tropical Forest Land Uses Using Airborne Lidar and Field Measurement Data in Central Sumatra

    Science.gov (United States)

    Thapa, R. B.; Watanabe, M.; Motohka, T.; Shiraishi, T.; shimada, M.

    2013-12-01

    including rubber, acacia, oil palm, and coconut. To cover these variations of forest type, eight LiDAR transects crossing 60 (1-ha size) field plots were acquired for calibrating the models. The field plots consisted of AFCS ranging from 4 - 161 Mg/ha. The calibrated general LiDAR-to-AFCS model predicted the AFCS with R2 = 0.87 and a root mean square error (RMSE) of 17.4 Mg/ha. The specific AFCS models provided carbon estimates, varying by forest type, with R2 ranging from 0.72 - 0.97 and uncertainty (RMSE) ranging from 1.4 - 10.7 Mg/ha. Using these models, AFCS maps were prepared for the LiDAR coverage, providing AFCS estimates for 8,000 ha and offering a larger set of ground sampling measurements for calibrating SAR-based carbon mapping models over a wider region of Sumatra.

  3. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    Science.gov (United States)

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

    Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of polydimethylsiloxane (PDMS), values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, indicating the anticipated better performance of the pp-LFER model relative to COSMO-RS. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris (4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. Copyright © 2016. Published by Elsevier Ltd.

  4. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    Science.gov (United States)

    Nair, S. P.; Righetti, R.

    2015-05-01

    Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require the fitting of temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationships for tissues undergoing creep compression have a non-linear relationship. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method to provide non-linear LSE parameter estimate reliability, which we call Resimulation of Noise (RoN). RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
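
    A minimal sketch of the resimulation-of-noise idea, applied here to fitting a mono-exponential strain-relaxation curve: fit once, resimulate noisy realizations around the fitted curve, refit, and report the spread. The model form, noise model, and parameter values are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, amplitude, tau):
    # Assumed mono-exponential strain decay with time constant tau.
    return amplitude * np.exp(-t / tau)

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)
data = model(t, 1.0, 2.5) + rng.normal(0, 0.05, t.size)   # one measured realization

# Step 1: ordinary LSE fit to the single experimental realization.
popt, _ = curve_fit(model, t, data, p0=[1.0, 1.0])
residual_std = np.std(data - model(t, *popt))

# Step 2: resimulate noise around the fitted curve and refit many times;
# the spread of the refitted parameters is the RoN reliability estimate.
taus = []
for _ in range(500):
    resampled = model(t, *popt) + rng.normal(0, residual_std, t.size)
    p, _ = curve_fit(model, t, resampled, p0=popt)
    taus.append(p[1])

print(f"fitted tau = {popt[1]:.3f}, RoN precision estimate (std) = {np.std(taus):.3f}")
```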

  5. Estimation of breeding values using selected pedigree records.

    Science.gov (United States)

    Morton, Richard; Howarth, Jordan M

    2005-06-01

    Fish bred in tanks or ponds cannot be easily tagged individually. The parentage of any individual may be determined by DNA fingerprinting, but this is sufficiently expensive that large numbers cannot be fingerprinted. The measurement of the objective trait can be made on a much larger sample relatively cheaply. This article deals with experimental designs for selecting individuals to be fingerprinted and for the estimation of the individual and family breeding values. The general setup provides estimates for both genetic effects regarded as fixed or random and for fixed effects due to known regressors. The family effects can be well estimated even when very small numbers are fingerprinted, provided that they are the individuals with the most extreme phenotypes.

  6. Improved Ribosome-Footprint and mRNA Measurements Provide Insights into Dynamics and Regulation of Yeast Translation

    Science.gov (United States)

    2016-02-11

    Ribosome-footprint profiling provides genome-wide snapshots of translation, but ... tend to slow translation. With the improved mRNA measurements, the variation attributable to translational control in exponentially growing yeast was ...

  7. Star-sensor-based predictive Kalman filter for satellite attitude estimation

    Institute of Scientific and Technical Information of China (English)

    林玉荣; 邓正隆

    2002-01-01

    A real-time attitude estimation algorithm, namely the predictive Kalman filter, is presented. This algorithm can accurately estimate the three-axis attitude of a satellite using only star sensor measurements. The implementation of the filter includes two steps: first, predicting the torque modeling error, and then estimating the attitude. Simulation results indicate that the predictive Kalman filter provides robust performance in the presence of both significant errors in the assumed model and in the initial conditions.
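
    For readers unfamiliar with the filtering step, a generic linear Kalman predict/update cycle is sketched below. This is an illustrative textbook filter with invented values, not the star-sensor predictive filter of the record (which additionally predicts the torque modeling error).

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1-D attitude-angle/rate example (constant-rate model, angle measured only).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-5 * np.eye(2)
R = np.array([[1e-3]])
x, P = np.zeros(2), np.eye(2)
for z in [0.02, 0.03, 0.05, 0.06]:           # simulated star-sensor angle readings (rad)
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print("estimated angle and rate:", np.round(x, 4))
```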

  8. Performance of the measures of processes of care for adults and service providers in rehabilitation settings.

    Science.gov (United States)

    Bamm, Elena L; Rosenbaum, Peter; Wilkins, Seanne; Stratford, Paul

    2015-01-01

    In recent years, client-centered care has been embraced as a new philosophy of care by many organizations around the world. Clinicians and researchers have identified the need for valid and reliable outcome measures that are easy to use to evaluate success of implementation of new concepts. The current study was developed to complete adaptation and field testing of the companion patient-reported measures of processes of care for adults (MPOC-A) and the service provider self-reflection measure of processes of care for service providers working with adult clients (MPOC-SP(A)). A validation study. In-patient rehabilitation facilities. MPOC-A and measure of processes of care for service providers working with adult clients (MPOC-SP(A)). Three hundred and eighty-four health care providers, 61 patients, and 16 family members completed the questionnaires. Good to excellent internal consistency (0.71-0.88 for health care professionals, 0.82-0.90 for patients, and 0.87-0.94 for family members), as well as moderate to good correlations between domains (0.40-0.78 for health care professionals and 0.52-0.84 for clients) supported internal reliability of the tools. Exploratory factor analysis of the MPOC-SP(A) responses supported the multidimensionality of the questionnaire. MPOC-A and MPOC-SP(A) are valid and reliable tools to assess patient and service-provider accounts, respectively, of the extent to which they experience, or are able to provide, client-centered service. Research should now be undertaken to explore in more detail the relationships between client experience and provider reports of their own behavior.

  9. Distributed weighted least-squares estimation with fast convergence for large-scale systems

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976

  10. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.

  11. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  12. Solar Irradiance Measurements Using Smart Devices: A Cost-Effective Technique for Estimation of Solar Irradiance for Sustainable Energy Systems

    Directory of Open Access Journals (Sweden)

    Hussein Al-Taani

    2018-02-01

    Full Text Available Solar irradiance measurement is a key component in estimating solar irradiation, which is necessary and essential to design sustainable energy systems such as photovoltaic (PV) systems. The measurement is typically done with sophisticated devices designed for this purpose. In this paper we propose a smartphone-aided setup to estimate the solar irradiance in a certain location. The setup is accessible, easy to use and cost-effective. The method we propose does not have the accuracy of an irradiance meter of high precision but has the advantage of being readily accessible on any smartphone. It could serve as a quick tool to estimate irradiance measurements in the preliminary stages of PV systems design. Furthermore, it could act as a cost-effective educational tool in sustainable energy courses where understanding solar radiation variations is an important aspect.

  13. Measurements of the solar UVR protection provided by shade structures in New Zealand primary schools.

    Science.gov (United States)

    Gies, Peter; Mackay, Christina

    2004-01-01

    To reduce ultraviolet radiation (UVR) exposure during childhood, shade structures are being erected in primary schools to provide areas where children can more safely undertake outdoor activities. This study, which evaluated the effectiveness of existing and purpose-built shade structures in providing solar UVR protection, was carried out on 29 such structures in 10 schools in New Zealand. Measurements of the direct and scattered solar UVR doses within the central region of the shade structures were made during the school lunch break period using UVR-sensitive polysulfone film badges. These measurements indicate that many of the structures had UVR protection factors (PF) of 4-8, which was sufficient to provide protection during the school lunch hour. However, of the 29 structures examined, only six would meet the suggested requirement of a UVR PF greater than 15 needed to provide all-day protection.
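
    The protection factor reported above is essentially the ratio of the unshaded dose to the dose measured under the structure; a minimal sketch under that assumption, with invented badge readings:

```python
def protection_factor(dose_ambient, dose_under_shade):
    """UVR protection factor: ambient (unshaded) dose divided by dose under the structure.

    Doses are the erythemally weighted exposures read from the polysulfone badges;
    the numbers below are invented for illustration.
    """
    return dose_ambient / dose_under_shade

ambient = 12.0        # arbitrary dose units over the lunch-break period
under_shade = 2.0
pf = protection_factor(ambient, under_shade)
print(f"PF = {pf:.1f} -> {'all-day' if pf > 15 else 'lunch-hour'} protection")
```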

  14. Estimation of magnetic field in a region from measurements of the field at discrete points

    International Nuclear Information System (INIS)

    Alexopoulos, Theodore; Dris, Manolis; Lucas, Demetrios.

    1984-12-01

    A method is given to estimate the magnetic field in a region from measurements of the field on its surface and in its interior. The method might be useful in high energy physics and other experiments that use large area magnets. (author)

  15. Development of a method for estimating oesophageal temperature by multi-locational temperature measurement inside the external auditory canal

    Science.gov (United States)

    Nakada, Hirofumi; Horie, Seichi; Kawanami, Shoko; Inoue, Jinro; Iijima, Yoshinori; Sato, Kiyoharu; Abe, Takeshi

    2017-09-01

    We aimed to develop a practical method to estimate oesophageal temperature by measuring multi-locational auditory canal temperatures. This method can be applied to prevent heatstroke by simultaneously and continuously monitoring the core temperatures of people working in hot environments. We asked 11 healthy male volunteers to exercise, generating 80 W for 45 min in a climatic chamber set at 24, 32 and 40 °C, at 50% relative humidity. We also exposed the participants to radiation at 32 °C. We continuously measured temperatures at the oesophagus, rectum and three different locations along the external auditory canal. We developed equations for estimating oesophageal temperatures from auditory canal temperatures and compared their fitness and errors. The rectal temperature increased or decreased faster than the oesophageal temperature at the start or end of exercise in all conditions. The estimated temperature showed good similarity with the oesophageal temperature, and the square of the correlation coefficient of the best-fitting model reached 0.904. We observed intermediate values between rectal and oesophageal temperatures during the rest phase. Even under the condition with radiation, the estimated oesophageal temperature demonstrated concordant movement with the measured oesophageal temperature, with approximately 0.1 °C overestimation. Our method measured temperatures at three different locations along the external auditory canal. We confirmed that the approach can credibly estimate the oesophageal temperature from 24 to 40 °C for people performing exercise in the same place in a windless environment.
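
    The record does not give the exact form of the estimation equations; a plausible sketch is an ordinary least-squares regression of oesophageal temperature on the three auditory canal temperatures, as below. All data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Invented training data: three auditory-canal temperatures and the oesophageal temperature (°C).
canal = 36.0 + rng.uniform(0.0, 1.5, size=(60, 3))
oesophageal = 0.3 * canal[:, 0] + 0.3 * canal[:, 1] + 0.35 * canal[:, 2] \
              + 0.7 + rng.normal(0, 0.05, 60)

# Fit T_oes ≈ b0 + b1*T1 + b2*T2 + b3*T3 with ordinary least squares.
X = np.column_stack([np.ones(len(canal)), canal])
coef, *_ = np.linalg.lstsq(X, oesophageal, rcond=None)

predicted = X @ coef
r_squared = 1 - np.sum((oesophageal - predicted) ** 2) / np.sum((oesophageal - oesophageal.mean()) ** 2)
print("coefficients:", np.round(coef, 3), " R^2 =", round(r_squared, 3))
```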

  16. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...

  17. A new ore reserve estimation method, Yang Chizhong filtering and inferential measurement method, and its application

    International Nuclear Information System (INIS)

    Wu Jingqin.

    1989-01-01

    Yang Chizhong filtering and inferential measurement method is a new method used for variable statistics of ore deposits. In order to apply this theory to estimate uranium ore reserves under the circumstances of regular or irregular prospecting grids, small ore bodies, few sampling points, and complex occurrence, the author has used this method to estimate the ore reserves in five ore bodies of two deposits and achieved satisfactory results. It is demonstrated that, compared with the traditional block measurement method, this method is simple and clear in formulation, convenient in application, rapid in calculation, accurate in results, less expensive, and offers high economic benefits. The procedure and experience in the application of this method and a preliminary evaluation of its results are mainly described

  18. Monitoring renal function in children with Fabry disease: comparisons of measured and creatinine-based estimated glomerular filtration rate

    NARCIS (Netherlands)

    Tøndel, Camilla; Ramaswami, Uma; Aakre, Kristin Moberg; Wijburg, Frits; Bouwman, Machtelt; Svarstad, Einar

    2010-01-01

    Studies on renal function in children with Fabry disease have mainly been done using estimated creatinine-based glomerular filtration rate (GFR). The aim of this study was to compare estimated creatinine-based GFR (eGFR) with measured GFR (mGFR) in children with Fabry disease and normal renal

  19. Measurement and estimation of photosynthetically active radiation from 1961 to 2011 in Central China

    International Nuclear Information System (INIS)

    Wang, Lunche; Gong, Wei; Li, Chen; Lin, Aiwen; Hu, Bo; Ma, Yingying

    2013-01-01

    Highlights: • 6-Year observations were used to show the temporal variability of PAR and PAR/G. • Dependence of PAR on clearness index was studied in model development. • Newly developed models performed very well at different time scales. • The new all-weather model provided good estimates of PAR at two other sites. • Long-term variations of PAR from 1961 to 2011 in Central China were analyzed. - Abstract: Measurements of photosynthetically active radiation (PAR) and global solar radiation (G) at WHU, Central China during 2006–2011 were used to investigate the seasonal characteristics of PAR and PAR/G (PAR fraction). Both PAR and PAR fraction showed similar seasonal features that peaked in summer and reached their lowest values in winter, with annual mean values of 22.39 mol m⁻² d⁻¹ and 1.9 mol MJ⁻¹ respectively. By analyzing the dependence of PAR on the cosine of the solar zenith angle and the clearness index at WHU, an efficient all-weather model was developed for estimating PAR values under various sky conditions, which also produced accepted estimations with high accuracy at Lhasa and Fukang. The PAR dataset was then reconstructed from G for 1961–2011 through the newly developed model. Annual mean daily PAR was about 23.12 mol m⁻² d⁻¹; there was a significant decreasing trend (11.2 mol m⁻² per decade) during the last 50 years in Central China, and the decreases were sharpest in summer (−24.67 mol m⁻² per decade) with relatively small decreases observed in spring. Meanwhile, results also revealed that PAR began to increase at a rate of 0.1 mol m⁻² per year from 1991 to 2011, which was consistent with the variation patterns of global solar radiation in the study area. The proposed all-weather PAR model would be of vital importance for ecological modeling, atmospheric environment, agricultural processes and solar energy application

  20. Traffic measurement for big network data

    CERN Document Server

    Chen, Shigang; Xiao, Qingjun

    2017-01-01

    This book presents several compact and fast methods for online traffic measurement of big network data. It describes challenges of online traffic measurement, discusses the state of the field, and provides an overview of the potential solutions to major problems. The authors introduce the problem of per-flow size measurement for big network data and present a fast and scalable counter architecture, called Counter Tree, which leverages a two-dimensional counter sharing scheme to achieve far better memory efficiency and significantly extend estimation range. Unlike traditional approaches to cardinality estimation problems that allocate a separated data structure (called estimator) for each flow, this book takes a different design path by viewing all the flows together as a whole: each flow is allocated with a virtual estimator, and these virtual estimators share a common memory space. A framework of virtual estimators is designed to apply the idea of sharing to an array of cardinality estimation solutions, achi...

  1. PREMATH: a Precious-Material Holdup Estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.; Bruns, D.D.

    1982-01-01

    A computer program, PREMATH (Precious Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. PREMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for and cataloged in container-oriented files. The estimated compositions represent material collected in applicable vessels - including consideration for material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, PREMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium and within 16% of the associated, measured material balances for thorium (a less valuable material than uranium) during steady-state process operation

  2. NUMATH: a nuclear-material-holdup estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.

    1981-01-01

    A computer program, NUMATH (Nuclear Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. NUMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for and cataloged in container-oriented files. The estimated compositions represent material collected in applicable vessels-including consideration for material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, NUMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium and within 16% of the associated, measured material balance for thorium during steady-state process operation

  3. GPS Estimates of Integrated Precipitable Water Aid Weather Forecasters

    Science.gov (United States)

    Moore, Angelyn W.; Gutman, Seth I.; Holub, Kirk; Bock, Yehuda; Danielson, David; Laber, Jayme; Small, Ivory

    2013-01-01

    Global Positioning System (GPS) meteorology provides enhanced density, low-latency (30-min resolution), integrated precipitable water (IPW) estimates to NOAA NWS (National Oceanic and Atmospheric Administration National Weather Service) Weather Forecast Offices (WFOs) to provide improved model and satellite data verification capability and more accurate forecasts of extreme weather such as flooding. An early activity of this project was to increase the number of stations contributing to the NOAA Earth System Research Laboratory (ESRL) GPS meteorology observing network in Southern California by about 27 stations. Following this, the Los Angeles/Oxnard and San Diego WFOs began using the enhanced GPS-based IPW measurements provided by ESRL in the 2012 and 2013 monsoon seasons. Forecasters found GPS IPW to be an effective tool in evaluating model performance, and in monitoring monsoon development between weather model runs for improved flood forecasting. GPS stations are multi-purpose, and routine processing for position solutions also yields estimates of tropospheric zenith delays, which can be converted into mm-accuracy PWV (precipitable water vapor) using in situ pressure and temperature measurements, the basis for GPS meteorology. NOAA ESRL has implemented this concept with a nationwide distribution of more than 300 "GPSMet" stations providing IPW estimates at sub-hourly resolution currently used in operational weather models in the U.S.
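
    The zenith-delay-to-PWV conversion mentioned here follows a standard relation (the widely used Bevis-type formulation); the sketch below uses commonly quoted constants, which should be treated as assumptions rather than the exact values used operationally by ESRL.

```python
# Convert a GPS zenith wet delay (ZWD) to precipitable water vapor (PWV).
# Constants are commonly quoted values (treat as assumptions), expressed per pascal
# so the units work out: PWV ≈ Pi * ZWD with Pi ≈ 0.15.
RHO_W = 1000.0      # density of liquid water, kg/m^3
R_V = 461.5         # specific gas constant of water vapor, J/(kg K)
K2_PRIME = 0.221    # K/Pa   (22.1 K/hPa)
K3 = 3.739e3        # K^2/Pa (3.739e5 K^2/hPa)

def pwv_mm(zwd_mm, mean_temp_k):
    """PWV (mm) from zenith wet delay (mm) and weighted mean atmospheric temperature (K)."""
    pi = 1.0e6 / (RHO_W * R_V * (K3 / mean_temp_k + K2_PRIME))
    return pi * zwd_mm

print(round(pwv_mm(zwd_mm=150.0, mean_temp_k=270.0), 1))  # ~23 mm of precipitable water
```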

  4. RNM and CRITER projects: providing access to environmental radioactivity measurements during crisis and in peacetime

    Energy Technology Data Exchange (ETDEWEB)

    Leprieur, F.; Couvez, C.; Manificat, G. [Institut de radioprotection et de surete nucleaire (France)]

    2014-07-01

    The multiplicity of actors and sources of information makes it difficult to centralize environmental radioactivity measurements and to provide access to experts and policy makers, but also to the general public. In the event of a radiological accident, many additional measurements will also be carried out in the field by those involved in crisis management. To address this problem, two projects were launched by IRSN with the aim of developing tools to centralize information on environmental radioactivity in normal situations (RNM project: National network of radioactivity measurements) and during radiological crises (CRITER project: Crisis and field). The RNM's mission is to contribute to the estimation of doses from ionizing radiation to which people are exposed and to inform the public. In order to achieve this goal, this network collects and makes available to the public the results of measurements of environmental radioactivity obtained in a normal situation by the French stakeholders. More than 18,000 measurements are transmitted each month by all producers to the RNM. After more than 4 years of operation, the database contains nearly 1,200,000 results. The opening in 2010 of the public web site (www.mesure-radioactivite.fr) was also a major step forward toward transparency and information. In case of radiological emergency, IRSN's mission is to centralize and process at the national level, in a database, all the results of measurements or analyses by all stakeholders throughout the crisis, in order to precisely determine the radiological situation of the environment, before, during and after the event. The CRITER project therefore involves the collection of all possible data from all potential sources, their transmission and organization, and the publication of the measurements in crisis or post-accident situations. The emergency nature of the situation requires near real-time data transmission, facilitated by the development of automatic sensors. For

  5. Development of realtime cognitive state estimator

    International Nuclear Information System (INIS)

    Takahashi, Makoto; Kitamura, Masashi; Yoshikaea, Hidekazu

    2004-01-01

    A realtime cognitive state estimator based on a set of physiological measures has been developed in order to provide valuable information on human behavior during interaction through the man-machine interface. An artificial neural network has been adopted to categorize the cognitive states, using the qualitative physiological data patterns as inputs. Laboratory experiments, in which the subjects' cognitive states were intentionally controlled by the task presented, were performed to obtain training data sets for the neural network. The developed system has been shown to be capable of estimating cognitive state with high accuracy, and its realtime estimation capability has also been confirmed through data processing experiments. (author)

  6. Estimating biophysical properties of coffee (Coffea canephora) plants with above-canopy field measurements, using CropSpec®

    Science.gov (United States)

    Putra, Bayu T. Widjaja; Soni, Peeyush; Morimoto, Eiji; Pujiyanto, Pujiyanto

    2018-04-01

    Remote sensing technologies have been applied to many crops, but tree crops like Robusta coffee (Coffea canephora) grown under shade conditions require additional attention when making above-canopy measurements. The objective of this study was to determine how well the chlorophyll and nitrogen status of Robusta coffee plants can be estimated with the laser-based (CropSpec®) active sensor. This study also identified appropriate vegetation indices for estimating nitrogen content by above-canopy measurement, using near-infrared and red-edge bands. Varying light intensity and different backgrounds of the plants were considered in developing the indices. Field experiments were conducted involving different non-destructive tools (CropSpec® and the SPAD-502 chlorophyll meter). Subsequently, Kjeldahl laboratory analyses were performed to determine the actual nitrogen content of the plants of different ages and field conditions used in the preceding non-destructive stage. Measurements for assessing the biophysical properties of the tree crop were undertaken. The usefulness of the near-infrared and red-edge bands from these sensors in measuring critical nitrogen levels of coffee plants by above-canopy measurement is investigated in this study.

  7. Proper orthogonal decomposition-based estimations of the flow field from particle image velocimetry wall-gradient measurements in the backward-facing step flow

    International Nuclear Information System (INIS)

    Nguyen, Thien Duy; Wells, John Craig; Mokhasi, Paritosh; Rempfer, Dietmar

    2010-01-01

    In this paper, particle image velocimetry (PIV) results from the recirculation zone of a backward-facing step flow, of which the Reynolds number is 2800 based on bulk velocity upstream of the step and step height (h = 16.5 mm), are used to demonstrate the capability of proper orthogonal decomposition (POD)-based measurement models. Three-component PIV velocity fields are decomposed by POD into a set of spatial basis functions and a set of temporal coefficients. The measurement models are built to relate the low-order POD coefficients, determined from an ensemble of 1050 PIV fields by the 'snapshot' method, to the time-resolved wall gradients, measured by a near-wall measurement technique called stereo interfacial PIV. These models are evaluated in terms of reconstruction and prediction of the low-order temporal POD coefficients of the velocity fields. In order to determine the estimation coefficients of the measurement models, linear stochastic estimation (LSE), quadratic stochastic estimation (QSE), principal component regression (PCR) and kernel ridge regression (KRR) are applied. We denote such approaches as LSE-POD, QSE-POD, PCR-POD and KRR-POD. In addition to comparing the accuracy of measurement models, we introduce multi-time POD-based estimations in which past and future information of the wall-gradient events is used separately or combined. The results show that the multi-time estimation approaches can improve the prediction process. Among these approaches, the proposed multi-time KRR-POD estimation with an optimized window of past wall-gradient information yields the best prediction. Such a multi-time KRR-POD approach offers a useful tool for real-time flow estimation of the velocity field based on wall-gradient data
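
    A compact sketch of the snapshot-POD plus linear stochastic estimation (LSE-POD) step described above: velocity snapshots are decomposed with an SVD, and a linear map from wall-gradient signals to the low-order POD coefficients is obtained by least squares. The data here are synthetic stand-ins, not the PIV measurements of the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n_snapshots, n_points, n_modes = 200, 500, 4

# Synthetic velocity snapshots (rows) and synchronous wall-gradient signals.
snapshots = rng.normal(size=(n_snapshots, n_points))
wall_gradients = rng.normal(size=(n_snapshots, 3))

# Snapshot POD via SVD of the mean-subtracted data matrix.
mean_field = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)
pod_coeffs = U[:, :n_modes] * s[:n_modes]      # temporal coefficients
pod_modes = Vt[:n_modes]                       # spatial basis functions

# LSE-POD: linear map from wall gradients to the low-order POD coefficients.
A = np.column_stack([np.ones(n_snapshots), wall_gradients])
coeff_map, *_ = np.linalg.lstsq(A, pod_coeffs, rcond=None)

# Estimate the velocity field from a new wall-gradient measurement.
new_gradient = np.array([1.0, 0.2, -0.5])
estimated_coeffs = np.concatenate([[1.0], new_gradient]) @ coeff_map
estimated_field = mean_field + estimated_coeffs @ pod_modes
print("estimated field shape:", estimated_field.shape)
```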

  8. A simple visual estimation of food consumption in carnivores.

    Directory of Open Access Journals (Sweden)

    Katherine R Potgieter

    Full Text Available Belly-size ratings or belly scores are frequently used in carnivore research as a method of rating whether and how much an animal has eaten. This method provides only a rough ordinal measure of fullness and does not quantify the amount of food an animal has consumed. Here we present a method for estimating the amount of meat consumed by individual African wild dogs Lycaon pictus. We fed 0.5 kg pieces of meat to wild dogs being temporarily held in enclosures and measured the corresponding change in belly size using lateral side photographs taken perpendicular to the animal. The ratio of belly depth to body length was positively related to the mass of meat consumed and provided a useful estimate of the consumption. Similar relationships could be calculated to determine amounts consumed by other carnivores, thus providing a useful tool in the study of feeding behaviour.

  9. High-global warming potential F-gas emissions in California: comparison of ambient-based versus inventory-based emission estimates, and implications of refined estimates.

    Science.gov (United States)

    Gallagher, Glenn; Zhan, Tao; Hsu, Ying-Kuang; Gupta, Pamela; Pederson, James; Croes, Bart; Blake, Donald R; Barletta, Barbara; Meinardi, Simone; Ashford, Paul; Vetter, Arnie; Saba, Sabine; Slim, Rayan; Palandre, Lionel; Clodic, Denis; Mathis, Pamela; Wagner, Mark; Forgie, Julia; Dwyer, Harry; Wolf, Katy

    2014-01-21

    To provide information for greenhouse gas reduction policies, the California Air Resources Board (CARB) inventories annual emissions of high-global-warming potential (GWP) fluorinated gases, the fastest growing sector of greenhouse gas (GHG) emissions globally. Baseline 2008 F-gas emissions estimates for selected chlorofluorocarbons (CFC-12), hydrochlorofluorocarbons (HCFC-22), and hydrofluorocarbons (HFC-134a) made with an inventory-based methodology were compared to emissions estimates made by ambient-based measurements. Significant discrepancies were found, with the inventory-based emissions methodology resulting in a systematic 42% under-estimation of CFC-12 emissions from older refrigeration equipment and older vehicles, and a systematic 114% overestimation of emissions for HFC-134a, a refrigerant substitute for phased-out CFCs. Initial, inventory-based estimates for all F-gas emissions had assumed that equipment is no longer in service once it reaches its average lifetime of use. Revised emission estimates using improved models for equipment age at end-of-life, inventories, and leak rates specific to California resulted in F-gas emissions estimates in closer agreement to ambient-based measurements. The discrepancies between inventory-based estimates and ambient-based measurements were reduced from -42% to -6% for CFC-12, and from +114% to +9% for HFC-134a.

  10. Areal rainfall estimation using moving cars - computer experiments including hydrological modeling

    Science.gov (United States)

    Rabiei, Ehsan; Haberlandt, Uwe; Sester, Monika; Fitzner, Daniel; Wallner, Markus

    2016-09-01

    The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rain rate. The optical sensors used in that study are designed for operating the windscreen wipers and showed promising results for rainfall measurement purposes. Their measurement accuracy has been quantified in laboratory experiments. Considering those errors explicitly, the main objective of this study is to investigate the benefit of using RCs for estimating areal rainfall. For that, computer experiments are carried out, where radar rainfall is considered as the reference and the other sources of data, i.e., RCs and rain gauges, are extracted from the radar data. Comparing the quality of areal rainfall estimation by RCs with rain gauges and reference data helps to investigate the benefit of the RCs. The value of this additional source of data is assessed not only for areal rainfall estimation performance but also for use in hydrological modeling. Considering measurement errors derived from laboratory experiments, the results show that the RCs provide useful additional information for areal rainfall estimation as well as for hydrological modeling. Moreover, when larger uncertainties were assumed for the RCs, they were still observed to be useful, up to a certain level, for areal rainfall estimation and discharge simulation.

  11. Kinetic parameter estimation from SPECT cone-beam projection measurements

    International Nuclear Information System (INIS)

    Huesman, Ronald H.; Reutter, Bryan W.; Zeng, G. Larry; Gullberg, Grant T.

    1998-01-01

    Kinetic parameters are commonly estimated from dynamically acquired nuclear medicine data by first reconstructing a dynamic sequence of images and subsequently fitting the parameters to time-activity curves generated from regions of interest overlaid upon the image sequence. Biased estimates can result from images reconstructed using inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system. If the SPECT data are acquired using cone-beam collimators wherein the gantry rotates so that the focal point of the collimators always remains in a plane, additional biases can arise from images reconstructed using insufficient, as well as truncated, projection samples. To overcome these problems we have investigated the estimation of kinetic parameters directly from SPECT cone-beam projection data by modelling the data acquisition process. To accomplish this it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated chest image volume, kinetic parameters were estimated for simple one-compartment models for four myocardial regions of interest. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated cone-beam data had biases ranging between 3-26% and 0-28%, respectively. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Statistical uncertainties of parameter estimates for 10 000 000 events ranged between 0.2-9% for the uptake parameters and between 0.3-6% for the washout parameters. (author)
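
    To make the conventional ROI-based step concrete, the sketch below fits a simple one-compartment uptake/washout model to a region time-activity curve. The model form, blood-input curve, and numbers are illustrative assumptions, not the simulation used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 60, 61)                       # minutes
blood_input = np.exp(-t / 15.0)                  # assumed blood (input) activity curve

def one_compartment(t, uptake, washout):
    """Tissue activity: uptake * convolution of blood input with exp(-washout * t)."""
    dt = t[1] - t[0]
    kernel = np.exp(-washout * t)
    return uptake * np.convolve(blood_input, kernel)[: len(t)] * dt

rng = np.random.default_rng(4)
true_curve = one_compartment(t, uptake=0.8, washout=0.05)
tac = true_curve + rng.normal(0, 0.01, t.size)   # noisy myocardial time-activity curve

popt, pcov = curve_fit(one_compartment, t, tac, p0=[0.5, 0.1])
print("uptake = %.3f, washout = %.3f (1/min)" % tuple(popt))
print("relative uncertainties:", np.round(np.sqrt(np.diag(pcov)) / popt, 3))
```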

  12. Cooperative Robot Localization Using Event-Triggered Estimation

    Science.gov (United States)

    Iglesias Echevarria, David I.

    It is known that multiple robot systems that need to cooperate to perform certain activities or tasks incur high energy costs that hinder their autonomous functioning and limit the benefits provided to humans by these kinds of platforms. This work presents a communications-based method for cooperative robot localization. Implementing concepts from event-triggered estimation, used with success in the field of wireless sensor networks but rarely applied to robot localization, agents send measurements to their neighbors only when the expected novelty of this information is high. Since all agents know the condition that triggers whether a measurement is sent, the lack of a measurement is itself informative and is fused into the state estimates. When agents receive neither direct nor indirect measurements of all others, they employ a covariance intersection fusion rule in order to keep the local covariance error metric bounded. A comprehensive analysis of the proposed algorithm and its estimation performance in a variety of scenarios is performed, and the algorithm is compared to similar cooperative localization approaches. Extensive simulations illustrate the effectiveness of the method.
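
    The covariance intersection rule mentioned above has a standard closed form: the fused information matrix is a convex combination of the two local information matrices, which keeps the fused covariance consistent even when the cross-correlation between the estimates is unknown. The sketch below is a generic two-estimate fusion with a simple trace-minimizing grid search over the weight; the example states and covariances are invented, and this is an illustration of the rule rather than the paper's full event-triggered algorithm.

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, n_grid=99):
    """Fuse two estimates whose cross-correlation is unknown.
    P^-1 = w * P_a^-1 + (1 - w) * P_b^-1, with w chosen to minimize trace(P)."""
    Pa_inv, Pb_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)
    best = None
    for w in np.linspace(0.01, 0.99, n_grid):
        P = np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv)
        if best is None or np.trace(P) < np.trace(best[1]):
            x = P @ (w * Pa_inv @ x_a + (1.0 - w) * Pb_inv @ x_b)
            best = (x, P)
    return best

# Two robots' position estimates of the same landmark, correlation unknown.
x_fused, P_fused = covariance_intersection(
    np.array([1.0, 2.0]), np.diag([0.5, 0.2]),
    np.array([1.2, 1.9]), np.diag([0.3, 0.4]))
print(x_fused)
print(P_fused)
```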

  13. General problems of metrology and indirect measuring in cardiology: error estimation criteria for indirect measurements of heart cycle phase durations

    Directory of Open Access Journals (Sweden)

    Konstantine K. Mamberger

    2012-11-01

    Aims: This paper treats general problems of metrology and indirect measurement methods in cardiology. It aims to identify error estimation criteria for indirect measurements of heart cycle phase durations. Materials and methods: A comparative analysis of an ECG of the ascending aorta recorded with the Hemodynamic Analyzer Cardiocode (HDA) lead versus conventional V3, V4, V5 and V6 lead ECGs is presented. Criteria for heart cycle phase boundaries are identified by graphical mathematical differentiation. Stroke volumes of blood (SV) calculated on the basis of the HDA phase duration measurements are compared with echocardiography data. Results: The comparative data obtained in the study show an average difference at the level of 1%. An innovative noninvasive measuring technology originally developed by a Russian R & D team makes it possible to measure stroke volume of blood with high accuracy. Conclusion: In practice, it is necessary to take into account possible measurement errors caused by hardware. Special attention should be paid to systematic errors.

  14. Size Estimates in Inverse Problems

    KAUST Repository

    Di Cristo, Michele

    2014-01-06

    Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem that is very useful in practical applications. When only a finite number of measurements is available, we try to recover some information about the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work that can be expressed with the available boundary data.

  15. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    Energy Technology Data Exchange (ETDEWEB)

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimating a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database and using a proxy database to estimate the results. This technique is common in statistics, but an important issue we address is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate estimate of the target database. We propose an algorithmic approach, based on the principles of information entropy, for determining the steps to select or compute the source databases from multiple summary databases. We show that the source databases with the largest number of cells in common provide the most accurate estimates, and we prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.

  16. Estimating drain flow from measured water table depth in layered soils under free and controlled drainage

    Science.gov (United States)

    Saadat, Samaneh; Bowling, Laura; Frankenberger, Jane; Kladivko, Eileen

    2018-01-01

    Long records of continuous drain flow are important for quantifying annual and seasonal changes in subsurface drainage flow from drained agricultural land. Missing data due to equipment malfunction and other challenges have limited the conclusions that can be drawn about annual flow, and thus nutrient loads, from field studies, including assessments of the effect of controlled drainage. Water table depth data may be available during gaps in flow data, providing a basis for filling missing drain flow records; the overall goal of this study was therefore to examine the potential to estimate drain flow from water table observations. The objectives were to evaluate how the shape of the relationship between drain flow and water table height above the drain varies with the soil hydraulic conductivity profile, to quantify how well the Hooghoudt equation represented the water table-drain flow relationship in five years of measured data at the Davis Purdue Agricultural Center (DPAC), and to determine the impact of controlled drainage on drain flow using the filled dataset. The shape of the drain flow-water table height relationship was found to depend on the selected hydraulic conductivity profile. Drain flow estimated with the Hooghoudt equation from measured water table height, for both free-draining and controlled periods, compared well with observed flow, with Nash-Sutcliffe efficiency values above 0.7 and 0.8 for the calibration and validation periods, respectively. Using this method, together with linear regression for the remaining gaps, a long-term drain flow record for a controlled drainage experiment at the DPAC was used to evaluate the impacts of controlled drainage on drain flow. In the controlled drainage sites, annual flow was 14-49% lower than under free drainage.
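
    One common statement of the steady-state Hooghoudt equation used above relates drain flow per unit area to the water table height midway between drains; the sketch below implements that form with illustrative placeholder values for the layered conductivities, equivalent depth, and drain spacing (not the DPAC field parameters).

```python
def hooghoudt_drain_flow(m, k_above, k_below, d_equiv, spacing):
    """Steady-state drain flow per unit area (m/day) from midpoint water table height m (m).

    q = (8 * k_below * d_equiv * m + 4 * k_above * m**2) / spacing**2
    k_above, k_below : hydraulic conductivity above / below drain depth (m/day)
    d_equiv          : Hooghoudt equivalent depth to the restricting layer (m)
    spacing          : drain spacing (m)
    """
    return (8.0 * k_below * d_equiv * m + 4.0 * k_above * m ** 2) / spacing ** 2

# Illustrative values only: water table 0.3 m above the drain in a layered profile.
print(hooghoudt_drain_flow(m=0.3, k_above=0.5, k_below=0.2, d_equiv=1.1, spacing=20.0))
```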

  17. Estimation of forest aboveground biomass and uncertainties by integration of field measurements, airborne LiDAR, and SAR and optical satellite data in Mexico.

    Science.gov (United States)

    Urbazaev, Mikhail; Thiel, Christian; Cremer, Felix; Dubayah, Ralph; Migliavacca, Mirco; Reichstein, Markus; Schmullius, Christiane

    2018-02-21

    Information on the spatial distribution of aboveground biomass (AGB) over large areas is needed for understanding and managing processes involved in the carbon cycle and for supporting international policies for climate change mitigation and adaptation. Furthermore, these products provide local stakeholders with important baseline data for the development of sustainable management strategies. The use of remote sensing data can provide spatially explicit information on AGB from local to global scales. In this study, we mapped national Mexican forest AGB using satellite remote sensing data and a machine learning approach. We modelled AGB using two scenarios: (1) an extensive national forest inventory (NFI), and (2) airborne Light Detection and Ranging (LiDAR) as reference data. Finally, we propagated uncertainties from the field measurements to the LiDAR-derived AGB and to the national wall-to-wall forest AGB map. The estimated AGB maps (NFI- and LiDAR-calibrated) showed similar goodness-of-fit statistics (R², root mean square error (RMSE)) at three different scales when compared to the independent validation data set. We observed different spatial patterns of AGB in dense tropical forests, where no or only limited NFI data were available, with higher AGB values in the LiDAR-calibrated map. We estimated much higher uncertainties in the AGB maps based on the two-stage up-scaling method (i.e., from field measurements to LiDAR and from LiDAR-based estimates to satellite imagery) compared to the traditional field-to-satellite up-scaling. By removing LiDAR-based AGB pixels with high uncertainties, it was possible to estimate national forest AGB with uncertainties similar to those obtained when calibrating with NFI data only. Since LiDAR data can be acquired much faster and over much larger areas than field inventory data, LiDAR is attractive for repetitive large-scale AGB mapping. In this study, we showed that two-stage up-scaling methods for AGB estimation over large areas need to be analyzed and validated

  18. Efficient multidimensional regularization for Volterra series estimation

    Science.gov (United States)

    Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan

    2018-05-01

    This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on novel ideas for transient removal in Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with that of white-box (physical) models.
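
    The abstract builds on regularized impulse-response estimation for linear systems; as a simplified point of reference, the sketch below shows a ridge-type regularized FIR estimate with a diagonal, exponentially decaying prior on synthetic data. The kernel choice and the data are invented for illustration; the paper's multidimensional Volterra regularization generalizes this idea to higher-order kernels.

```python
import numpy as np

def regularized_fir(u, y, order, lam=1.0, alpha=0.9):
    """FIR coefficients with a prior that penalizes late (slowly decaying) taps."""
    N = len(y)
    # Regression matrix of lagged inputs: column k holds u delayed by k samples.
    Phi = np.column_stack([np.concatenate([np.zeros(k), u[: N - k]]) for k in range(order)])
    D = np.diag(alpha ** -np.arange(order))          # growing penalty on later taps
    return np.linalg.solve(Phi.T @ Phi + lam * D, Phi.T @ y)

# Recover a short decaying impulse response from noisy input/output data.
rng = np.random.default_rng(0)
g_true = 0.8 ** np.arange(20)
u = rng.standard_normal(500)
y = np.convolve(u, g_true)[:500] + 0.1 * rng.standard_normal(500)
print(regularized_fir(u, y, order=20)[:5])           # first taps, close to 0.8**k
```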

  19. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    Science.gov (United States)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and the prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, the approach can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. The framework can therefore capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
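
    The likelihood/prior alternation described above can be illustrated on a much smaller problem than the paper's framework: a quadratic (Gaussian) likelihood and an L1 prior on first differences, split with ADMM so that one update handles the likelihood and the other handles the non-smooth prior by soft thresholding. The signal, penalty, and step size below are invented; this is a toy instance of the splitting idea, not the authors' Kalman-smoother-based consensus algorithm.

```python
import numpy as np

def admm_map_estimate(y, lam=2.0, rho=1.0, iters=200):
    """MAP estimate of x under 0.5*||y - x||^2 + lam*||D x||_1, with the split z = D x."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                 # first-difference operator
    x, z, u = y.copy(), np.zeros(n - 1), np.zeros(n - 1)
    A = np.eye(n) + rho * D.T @ D                  # fixed likelihood-plus-coupling system
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))          # likelihood-side update
        v = D @ x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prior-side soft threshold
        u = u + D @ x - z                                        # dual update
    return x

# Piecewise-constant latent series observed in Gaussian noise.
rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(50), 3.0 * np.ones(50), np.ones(50)])
y = truth + rng.standard_normal(truth.size)
print(np.round(admm_map_estimate(y)[45:55], 2))     # estimate around the first jump
```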

  20. Estimation of hospital efficiency--do different definitions and casemix measures for hospital output affect the results?

    Science.gov (United States)

    Vitikainen, Kirsi; Street, Andrew; Linna, Miika

    2009-02-01

    Hospital efficiency has been the subject of numerous health economics studies, but there is little evidence on how the chosen output and casemix measures affect the efficiency results. The aim of this study is to examine the robustness of efficiency results with respect to these factors. Comparison is made between activity and episode output measures, and between two different output grouping systems (Classic and FullDRG). Non-parametric data envelopment analysis is used as the analysis technique. The data consist of all public acute care hospitals in Finland in 2005 (n=40). Efficiency estimates were not found to be highly sensitive to the choice between episode and activity descriptions of output, but were more sensitive to the choice of DRG grouping system. Estimates are most sensitive to scale assumptions, with evidence of decreasing returns to scale in larger hospitals. Episode measures are generally to be preferred to activity measures because they better capture the patient pathway, while FullDRGs are preferred to Classic DRGs particularly because of the better description of outpatient output in the former grouping system. Attention should be paid to reducing the extent of scale inefficiency in Finland.
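
    The abstract does not spell out its DEA formulation; for readers unfamiliar with the method, the sketch below is the textbook input-oriented, constant-returns-to-scale envelopment linear program for one unit, solved with scipy on invented hospital data. The returns-to-scale assumption and the input/output definitions would differ in the actual study.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, unit):
    """Input-oriented CCR efficiency of one unit.
    X: (n_units, n_inputs), Y: (n_units, n_outputs); decision variables [theta, lambda_1..n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                               # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                                       # sum_j lam_j x_ij <= theta * x_i,unit
        A_ub.append(np.concatenate([[-X[unit, i]], X[:, i]]))
        b_ub.append(0.0)
    for r in range(s):                                       # sum_j lam_j y_rj >= y_r,unit
        A_ub.append(np.concatenate([[0.0], -Y[:, r]]))
        b_ub.append(-Y[unit, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Toy data: 4 hospitals, inputs (beds, staff), output (DRG-weighted episodes).
X = np.array([[100, 300], [120, 280], [80, 350], [150, 500]], float)
Y = np.array([[5000], [5200], [4500], [5100]], float)
print([round(dea_efficiency(X, Y, j), 3) for j in range(4)])
```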

  1. Optimized Estimation of Surface Layer Characteristics from Profiling Measurements

    Directory of Open Access Journals (Sweden)

    Doreene Kang

    2016-01-01

    New sampling techniques such as tethered-balloon-based measurements or small unmanned aerial vehicles are capable of providing multiple profiles of the Marine Atmospheric Surface Layer (MASL) in a short time period. It is desirable to obtain surface fluxes from these measurements, especially when direct flux measurements are difficult to obtain. The profiling data differ from the traditional mean profiles obtained at two or more fixed levels in the surface layer, from which surface fluxes of momentum, sensible heat, and latent heat are derived based on Monin-Obukhov Similarity Theory (MOST). This research develops an improved method to derive surface fluxes and the corresponding MASL mean profiles of wind, temperature, and humidity from the profiling measurements using least-squares optimization. This approach allows the use of all available independent data. We use a weighted cost function based on the framework of MOST, optimized with a quasi-Newton method. The approach was applied to seven sets of data collected from Monterey Bay. The derived fluxes and mean profiles show reasonable results. An empirical bias analysis is conducted using 1000 synthetic datasets to evaluate the robustness of the method.
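
    The cost-function idea above can be illustrated with the simplest case: fitting a neutral-stability logarithmic wind profile to many profiling samples by least squares with a quasi-Newton optimizer. Stability corrections, the temperature and humidity terms, and the weighting used in the paper are omitted here, and the profile data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

KARMAN = 0.4

def log_law(z, u_star, z0):
    """Neutral surface-layer wind profile u(z) = (u*/k) ln(z/z0)."""
    return (u_star / KARMAN) * np.log(z / z0)

def fit_surface_layer(z_obs, u_obs):
    """Least-squares fit of friction velocity u* and roughness length z0 to profile data."""
    def cost(params):
        u_star, log_z0 = params
        return np.sum((u_obs - log_law(z_obs, u_star, np.exp(log_z0))) ** 2)
    result = minimize(cost, x0=[0.3, np.log(0.01)], method="BFGS")
    u_star, log_z0 = result.x
    return u_star, np.exp(log_z0)

# Synthetic profiling flight: many heights sampled in one ascent, with sensor noise.
rng = np.random.default_rng(2)
z = np.linspace(2.0, 120.0, 40)
u = log_law(z, 0.35, 0.02) + rng.normal(0.0, 0.2, z.size)
print(fit_surface_layer(z, u))     # should recover roughly (0.35, 0.02)
```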

  2. Estimating snowpack density from Albedo measurement

    Science.gov (United States)

    James L. Smith; Howard G. Halverson

    1979-01-01

    Snow is a major source of water in the western United States. Data on snow depth and average snowpack density are used in mathematical models to predict water supply. In California, about 75 percent of the snow survey sites above 2750-meter elevation now used to collect data are in statutory wilderness areas. There is a need for a method of estimating the water content of a...

  3. Optimal estimation of entanglement in optical qubit systems

    International Nuclear Information System (INIS)

    Brida, Giorgio; Degiovanni, Ivo P.; Florio, Angela; Genovese, Marco; Meda, Alice; Shurupov, Alexander P.; Giorda, Paolo; Paris, Matteo G. A.

    2011-01-01

    We address the experimental determination of entanglement for systems made of a pair of polarization qubits. We exploit quantum estimation theory to derive optimal estimators, which are then implemented to achieve the ultimate bound on precision. In particular, we present a set of experiments aimed at measuring the amount of entanglement for states belonging to different families of pure and mixed two-qubit two-photon states. Our scheme is based on visibility measurements of quantum correlations and achieves the ultimate precision allowed by quantum mechanics in the limit of a Poissonian distribution of coincidence counts. Although optimal estimation of entanglement does not require full tomography of the states, we have also performed state reconstruction using two different sets of tomographic projectors and explicitly shown that they provide a less precise determination of entanglement. The use of optimal estimators also allows us to compare and statistically assess the different noise models used to describe decoherence effects occurring in the generation of entanglement.

  4. Surface Renewal Application for Estimating Evapotranspiration: A Review

    Directory of Open Access Journals (Sweden)

    Yongguang Hu

    2018-01-01

    The estimation of evapotranspiration (ET) is essential for meteorological modeling of surface exchange processes, as well as for the agricultural practice of irrigation management. A number of methods for estimating ET at different temporal scales and under different climatic conditions are constantly under investigation and improvement. One of these methods is surface renewal (SR). The purpose of this review is to present recent developments and applications of SR for ET measurement. The SR method is based on estimating the turbulent exchange of sensible heat flux between the plant canopy and the atmosphere caused by the instantaneous replacement of air parcels in contact with the surface. Additional measurements of net radiation and soil heat flux allow ET to be extracted using the shortened energy balance equation. The challenge, however, is the calibration of SR results against direct sensible heat flux measurements. For the classical SR method, only air temperature measured at high frequency is required. In addition, a new model suggests that the SR method could be exempted from calibration by measuring additional micrometeorological variables. However, further refinement of the SR method is required to provide more accurate results in the future.
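
    The shortened energy balance mentioned above is simple arithmetic once the surface renewal sensible heat flux is available; a common ramp-model form of that flux is sketched below. The calibration factor alpha, the ramp statistics, and the half-hourly values are placeholders for illustration.

```python
RHO_AIR = 1.2     # air density, kg m-3
CP_AIR = 1004.0   # specific heat of air at constant pressure, J kg-1 K-1

def sensible_heat_surface_renewal(ramp_amplitude, ramp_period, z, alpha=0.5):
    """Ramp-model surface renewal flux H = alpha * rho * cp * z * (A / tau), in W m-2.
    alpha is the calibration factor the review notes SR normally requires."""
    return alpha * RHO_AIR * CP_AIR * z * (ramp_amplitude / ramp_period)

def latent_heat_flux(net_radiation, soil_heat_flux, sensible_heat):
    """Shortened energy balance: LE = Rn - G - H (all in W m-2)."""
    return net_radiation - soil_heat_flux - sensible_heat

# Illustrative half-hour: 0.8 K temperature ramps every 60 s, measured at z = 2 m.
H = sensible_heat_surface_renewal(ramp_amplitude=0.8, ramp_period=60.0, z=2.0)
print(H, latent_heat_flux(net_radiation=450.0, soil_heat_flux=50.0, sensible_heat=H))
```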

  5. A matlab framework for estimation of NLME models using stochastic differential equations: applications for estimation of insulin secretion rates.

    Science.gov (United States)

    Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V

    2007-10-01

    The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals, which typically arise from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has previously been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. Its use is illustrated by two applications that focus on the ability of the model to estimate unknown inputs, a capability facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.

  6. Measurement error in mobile source air pollution exposure estimates due to residential mobility during pregnancy.

    Science.gov (United States)

    Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A

    2017-09-01

    Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (a weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and using only residence at birth were highly correlated (r_S > 0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but this varied by maternal characteristics and the prenatal exposure window of interest (ranging from -2% to -10% bias).
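
    The "complete residential history" exposure the study compares against is a time-weighted average over the addresses occupied during pregnancy; a minimal sketch of that bookkeeping is shown below with hypothetical concentrations and durations.

```python
def pregnancy_exposure(residence_periods):
    """Time-weighted average exposure over pregnancy.
    residence_periods: list of (days_at_address, modeled_pm25_at_address)."""
    total_days = sum(days for days, _ in residence_periods)
    return sum(days * pm for days, pm in residence_periods) / total_days

# Hypothetical mother who moved once during a 280-day pregnancy (PM2.5 in ug/m3).
history = [(180, 1.4), (100, 0.6)]
print(pregnancy_exposure(history))   # history-based estimate
print(history[-1][1])                # birth-address-only estimate, as in the proxy approach
```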

  7. Methods of protective measures analysis in agriculture on contaminated lands: estimation of effectiveness, intervention levels and comparison of different countermeasures

    International Nuclear Information System (INIS)

    Yatsalo, B.I.; Aleksakhin, R.M.

    1997-01-01

    Methodological aspects of the analysis of protective measures in agriculture during the long-term period of liquidation of the consequences of a nuclear accident are considered. Examples of estimating countermeasure effectiveness using cost-benefit analysis are discussed, together with methods for estimating intervention levels and examples of comparing protective measures using several effectiveness criteria.

  8. Measurement Errors and Uncertainties Theory and Practice

    CERN Document Server

    Rabinovich, Semyon G

    2006-01-01

    Measurement Errors and Uncertainties addresses the most important problems that physicists and engineers encounter when estimating errors and uncertainty. Building from the fundamentals of measurement theory, the author develops the theory of accuracy of measurements and offers a wealth of practical recommendations and examples of applications. This new edition covers a wide range of subjects, including: - basic concepts of metrology - characterization, standardization and calibration of measuring instruments - estimation of errors and uncertainty of single and multiple measurements - modern probability-based methods of estimating measurement uncertainty. With this new edition, the author completes the development of the new theory of indirect measurements. This theory provides more accurate and efficient methods for processing indirect measurement data. It eliminates the need to calculate the correlation coefficient - a stumbling block in measurement data processing - and offers for the first time a way to obtain...

  9. Coherence in quantum estimation

    Science.gov (United States)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds on the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space into two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ states. Finally, we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior characteristic of a large class of quantum phase transitions.

  10. Estimating Wear Of Installed Ball Bearings

    Science.gov (United States)

    Keba, John E.; Mcvey, Scott E.

    1993-01-01

    A simple inspection and measurement technique makes it possible to estimate wear of the balls in a ball bearing without removing the bearing from the shaft on which it is installed. To perform the measurement, one observes the bearing cage while turning the shaft by hand, obtaining an integral number of cage rotations and measuring, to the nearest 2 degrees, the number of shaft rotations producing those cage rotations. The ratio between the numbers of cage and shaft rotations depends only on the internal geometry of the bearing and the applied load. Changes in the turns ratio therefore reflect changes in the internal geometry of the bearing, provided the measurements are made under similar bearing loads. By assuming that all wear occurs on the balls, one computes an effective value for this wear from the change in the turns ratio.
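
    For a bearing with a stationary outer race, the cage-to-shaft turns ratio is the classical fundamental-train-frequency ratio, which depends on the ball diameter, pitch diameter, and contact angle; wear that reduces the ball diameter raises the ratio. The sketch below uses that textbook relation with invented dimensions; the original technique also accounts for the applied load, which is ignored here.

```python
import math

def cage_to_shaft_ratio(ball_diameter, pitch_diameter, contact_angle_deg=0.0):
    """Cage rotations per shaft rotation, stationary outer race:
    ratio = 0.5 * (1 - (d / D) * cos(alpha))."""
    return 0.5 * (1.0 - (ball_diameter / pitch_diameter)
                  * math.cos(math.radians(contact_angle_deg)))

def ball_wear_from_ratio(ratio_before, ratio_after, pitch_diameter, contact_angle_deg=0.0):
    """Ball diameter loss implied by a change in the measured turns ratio
    (assuming, as in the technique above, that all wear occurs on the balls)."""
    cos_a = math.cos(math.radians(contact_angle_deg))
    d_before = (1.0 - 2.0 * ratio_before) * pitch_diameter / cos_a
    d_after = (1.0 - 2.0 * ratio_after) * pitch_diameter / cos_a
    return d_before - d_after

# Illustrative bearing: 10 mm balls on a 50 mm pitch circle, 0.02 mm of ball wear.
r0 = cage_to_shaft_ratio(10.0, 50.0)
r1 = cage_to_shaft_ratio(9.98, 50.0)
print(r0, r1, ball_wear_from_ratio(r0, r1, 50.0))
```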

  11. Guideline to Estimate Decommissioning Costs

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Taesik; Kim, Younggook; Oh, Jaeyoung [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    The primary objective of this work is to provide guidelines for estimating decommissioning costs and to give stakeholders plausible information for understanding decommissioning activities in a reasonable manner, which eventually contributes to public acceptance of the nuclear power industry. Although decommissioning cost estimates have been made for a few commercial nuclear power plants, the differing technical, site-specific and economic assumptions used make it difficult to interpret those estimates and compare them with those for a given plant. Trustworthy cost estimates are crucial for planning a safe and economical decommissioning project. The typical approach is to break the decommissioning project down into a series of discrete and measurable work activities. Although plant-specific differences arising from economic and technical assumptions make it difficult for a licensee to produce reliable decommissioning cost estimates, cost estimation is one of the most crucial processes, since it encompasses the whole spectrum of activities from planning to the final evaluation of whether a decommissioning project has proceeded successfully from both safety and economic points of view. Hence, sustained effort is needed to carry out a decommissioning project successfully.

  12. Towards continuous global measurements and optimal emission estimates of NF3

    Science.gov (United States)

    Arnold, T.; Muhle, J.; Salameh, P.; Harth, C.; Ivy, D. J.; Weiss, R. F.

    2011-12-01

    We present an analytical method for the continuous in situ measurement of nitrogen trifluoride (NF3), an anthropogenic gas with a global warming potential of ~16,800 over a 100-year time horizon. NF3 is not included in national emissions reporting inventories under the United Nations Framework Convention on Climate Change (UNFCCC). However, it is a rapidly emerging greenhouse gas owing to emissions from a growing number of manufacturing facilities with increasing output and modern end-use applications, namely microcircuit etching and the production of flat panel displays and thin-film photovoltaic cells. Despite success in measuring the most volatile long-lived halogenated species such as CF4, the Medusa preconcentration GC/MS system of Miller et al. (2008) is unable to detect NF3 under remote operation. Using altered techniques of gas separation and chromatography after initial preconcentration, we are now able to make continuous atmospheric measurements of NF3 with average precisions NF3 produced. Emission factors are shown to have fallen over the last decade; however, rising production and end-use have caused the average global atmospheric concentration to double between 2005 and 2011, i.e., half the atmospheric NF3 present today originates from emissions after 2005. Finally, we show the first continuous in situ measurements from La Jolla, California, illustrating how global deployment of our technique could improve the temporal and spatial resolution of NF3 'top-down' emission estimates over the coming years. These measurements will be important for independent verification of emissions should NF3 be regulated under a new climate treaty.

  13. Estimation of spectral kurtosis

    Science.gov (United States)

    Sutawanir

    2017-03-01

    Rolling bearings are the most important elements in rotating machinery. Bearings frequently fall out of service for various reasons: heavy loads, unsuitable lubrication, ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has therefore attracted attention in the field of monitoring and fault diagnosis, as these signals carry rich information for early detection of bearing failures. Spectral kurtosis, SK, is a frequency-domain parameter indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike faults, making SK potentially useful for determining frequency bands dominated by bearing fault signals. SK can provide a measure of the distance of the analyzed bearing from a healthy one, and it provides information beyond that given by the power spectral density (psd). This paper explores the estimation of spectral kurtosis using the short-time Fourier transform, known as the spectrogram. The estimation of SK is similar to the estimation of the psd and falls into the class of model-free, plug-in estimators. Some numerical studies using simulations are discussed to support the methodology. The spectral kurtosis of some stationary signals is obtained analytically and used in the simulation study. Time-domain kurtosis has been a popular tool for detecting non-normality; spectral kurtosis extends it to the frequency domain. The relationship between time-domain and frequency-domain analysis is established through the Fourier transform relation between the autocovariance and the power spectrum, and the power spectral density is estimated through the periodogram. In this paper, short-time Fourier transform estimation of the spectral kurtosis is reviewed and a bearing fault (inner ring and outer ring) is simulated. The bearing response, power spectrum, and spectral kurtosis are plotted to
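
    A plug-in spectrogram estimate of the spectral kurtosis, as discussed above, averages the second and fourth moments of the short-time Fourier transform magnitude over time at each frequency. The sketch below applies it to a synthetic signal in which repetitive ring-down bursts (a crude stand-in for bearing impacts) are added to broadband noise; the sampling rate, burst parameters, and window length are invented.

```python
import numpy as np
from scipy.signal import stft

def spectral_kurtosis(x, fs, nperseg=256):
    """SK(f) = <|X(t, f)|^4>_t / <|X(t, f)|^2>_t^2 - 2, estimated from a spectrogram."""
    f, _, Z = stft(x, fs=fs, nperseg=nperseg)
    mag2 = np.abs(Z) ** 2
    return f, np.mean(mag2 ** 2, axis=1) / np.mean(mag2, axis=1) ** 2 - 2.0

# Broadband noise plus a 4 kHz ring-down every 50 ms (a crude fault surrogate).
fs = 20000
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1 / fs)
signal = 0.5 * rng.standard_normal(t.size)
burst_t = np.arange(0, 0.005, 1 / fs)
burst = 3.0 * np.exp(-burst_t / 0.001) * np.sin(2 * np.pi * 4000 * burst_t)
for start in range(0, t.size - burst.size, fs // 20):
    signal[start:start + burst.size] += burst

f, sk = spectral_kurtosis(signal, fs)
print(f[np.argmax(sk)])   # band where impulsiveness concentrates (near the 4 kHz bursts)
```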

  14. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    Science.gov (United States)

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper first describes the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/

  15. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python

    Directory of Open Access Journals (Sweden)

    Thomas V Wiecki

    2013-08-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper first describes the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs
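
    Both listings above describe the same toolbox; a minimal usage sketch, based on the package's documented interface as recalled here (the CSV file name, column layout, and condition mapping are illustrative assumptions), would look roughly as follows. The linked documentation remains the authoritative reference for the actual API.

```python
# Minimal HDDM session sketch; assumes the hddm package is installed and that
# data.csv has the columns the toolbox expects ('rt', 'response', 'subj_idx')
# plus a hypothetical condition column called 'stim'.
import hddm

data = hddm.load_csv('data.csv')
model = hddm.HDDM(data, depends_on={'v': 'stim'})   # let drift rate vary by condition
model.find_starting_values()                        # optional: better MCMC starting point
model.sample(2000, burn=200)                        # draw posterior samples
model.print_stats()                                 # posterior summaries for each parameter
```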

  16. Are estimates of wind characteristics based on measurements with Pitot tubes and GNSS receivers mounted on consumer-grade unmanned aerial vehicles applicable in meteorological studies?

    Science.gov (United States)

    Niedzielski, Tomasz; Skjøth, Carsten; Werner, Małgorzata; Spallek, Waldemar; Witek, Matylda; Sawiński, Tymoteusz; Drzeniecka-Osiadacz, Anetta; Korzystka-Muskała, Magdalena; Muskała, Piotr; Modzel, Piotr; Guzikowski, Jakub; Kryza, Maciej

    2017-09-01

    The objective of this paper is to show empirically that estimates of wind speed and wind direction based on measurements from Pitot tubes and GNSS receivers mounted on consumer-grade unmanned aerial vehicles (UAVs) may accurately approximate true wind parameters. The motivation for the study is that the growing number of commercial and scientific UAV operations may soon become a new source of data on wind speed and wind direction, with unprecedented spatial and temporal resolution. The feasibility study was carried out in an isolated mountain meadow, Polana Izerska, located in the Izera Mountains (SW Poland), during an experiment that aimed to compare wind characteristics measured by several instruments: three UAVs (swinglet CAM, eBee, Maja) equipped with Pitot tubes and GNSS receivers, wind speed and direction meters mounted at 2.5 and 10 m on a mast, a conventional weather station, and a vertical sodar. The three UAVs performed seven missions along spiral-like trajectories, most reaching 130 m above the take-off location. The estimates of wind speed and wind direction were found to agree between the UAVs. The time series of wind speed measured at 10 m were extrapolated to the flight altitudes recorded at a given time so that a comparison was feasible. It was found that the wind speed estimates provided by the UAVs on the basis of the Pitot tube/GNSS data are in agreement with measurements carried out using dedicated meteorological instruments. Discrepancies were recorded in the first and last phases of the UAV flights.
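
    The estimate rests on the wind-triangle relation: the horizontal wind vector is the GNSS-derived ground velocity minus the air velocity given by the Pitot airspeed along the aircraft heading. The sketch below applies that relation for a single sample, assuming the heading is available (e.g. from the autopilot) and neglecting sideslip; the numbers are illustrative.

```python
import numpy as np

def wind_from_uav(ground_speed, ground_track_deg, airspeed, heading_deg):
    """Horizontal wind = ground velocity (GNSS) - air velocity (Pitot + heading).
    Angles in degrees clockwise from north; returns (speed, direction the wind blows FROM)."""
    def vec(speed, angle_deg):
        a = np.radians(angle_deg)
        return np.array([speed * np.sin(a), speed * np.cos(a)])   # (east, north)
    wind = vec(ground_speed, ground_track_deg) - vec(airspeed, heading_deg)
    speed = float(np.hypot(wind[0], wind[1]))
    direction_from = float(np.degrees(np.arctan2(-wind[0], -wind[1])) % 360.0)
    return speed, direction_from

# UAV holding 14 m/s airspeed on heading 090, but tracking 097 at 16 m/s over ground.
print(wind_from_uav(ground_speed=16.0, ground_track_deg=97.0,
                    airspeed=14.0, heading_deg=90.0))
```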

  17. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    Science.gov (United States)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-07-01

    An extension of the point kinetics model is developed to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. The spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  18. Estimating the dynamics of groundwater input into the coastal zone via continuous radon-222 measurements

    International Nuclear Information System (INIS)

    Burnett, William C.; Dulaiova, Henrieta

    2003-01-01

    Submarine groundwater discharge (SGD) into the coastal zone has received increased attention in the last few years as it is now recognized that this process represents an important pathway for material transport. Assessing these material fluxes is difficult, as there is no simple means to gauge the water flux. To meet this challenge, we have explored the use of a continuous radon monitor to measure radon concentrations in coastal zone waters over time periods from hours to days. Changes in the radon inventories over time can be converted to fluxes after one makes allowances for tidal effects, losses to the atmosphere, and mixing with offshore waters. If one assumes that advective flow of radon-enriched groundwater (pore water) represents the main input of 222Rn to the coastal zone, the calculated radon fluxes may be converted to water fluxes by dividing by the estimated or measured 222Rn pore water activity. We have also used short-lived radium isotopes (223Ra and 224Ra) to assess mixing between near-shore and offshore waters in the manner pioneered by . During an experiment in the coastal Gulf of Mexico, we showed that the mixing loss derived from the 223Ra gradient agreed very favorably with the estimated range based on the calculated radon fluxes. This allowed an independent constraint on the mixing loss of radon, an important parameter in the mass balance approach. Groundwater discharge was also estimated independently by the radium isotopic approach and was within a factor of two of that determined by the continuous radon measurements and by an automated seepage meter deployed at the same site.
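
    The mass balance described above reduces to simple arithmetic once each term has been evaluated: the radon flux needed to support the observed inventory change plus the atmospheric and mixing losses is divided by the pore water activity to give the groundwater flux. All values in the sketch are placeholders; in practice the atmospheric and mixing terms come from gas-transfer relationships and the Ra-isotope gradients, respectively.

```python
def sgd_water_flux(d_inventory_dt, atmospheric_loss, mixing_loss, pore_water_activity):
    """Convert an excess 222Rn balance into a groundwater (SGD) flux.

    d_inventory_dt      : net change in water-column 222Rn inventory (Bq m-2 h-1),
                          already corrected for tidal water-level effects
    atmospheric_loss    : 222Rn evasion to the atmosphere (Bq m-2 h-1)
    mixing_loss         : loss to offshore mixing, e.g. constrained by 223Ra/224Ra (Bq m-2 h-1)
    pore_water_activity : 222Rn activity of the advecting pore water (Bq m-3)
    Returns the water flux in m3 m-2 h-1 (equivalently m/h).
    """
    total_radon_flux = d_inventory_dt + atmospheric_loss + mixing_loss
    return total_radon_flux / pore_water_activity

# Placeholder values only.
print(sgd_water_flux(d_inventory_dt=15.0, atmospheric_loss=5.0,
                     mixing_loss=10.0, pore_water_activity=3000.0))
```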

  19. Thermodynamic estimation: Ionic materials

    International Nuclear Information System (INIS)

    Glasser, Leslie

    2013-01-01

    Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of “double salts”, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy

  20. Deriving a light use efficiency estimation algorithm using in situ hyperspectral and eddy covariance measurements for a maize canopy in Northeast China.

    Science.gov (United States)

    Zhang, Feng; Zhou, Guangsheng

    2017-07-01

    We estimated the light use efficiency (LUE) via vegetation canopy chlorophyll content (CCC_canopy) based on in situ measurements of spectral reflectance, biophysical characteristics, ecosystem CO2 fluxes and micrometeorological factors over a maize canopy in Northeast China. The results showed that among the common chlorophyll-related vegetation indices (VIs), CCC_canopy had the most clearly exponential relationships with the red edge position (REP) (R² = .97, p < .001) and the normalized difference vegetation index (NDVI) (R² = .91, p < .001). In a comparison of the indicating performance of NDVI, the ratio vegetation index (RVI), the wide dynamic range vegetation index (WDRVI), and the 2-band enhanced vegetation index (EVI2) when estimating CCC_canopy using all possible combinations of two separate wavelengths in the range 400-1300 nm, EVI2[1214, 1259] and EVI2[726, 1248] were the better indicators, with R² values of .92 and .90 (p < .001). Remotely monitoring LUE through estimating CCC_canopy derived from field spectrometry data provided accurate prediction of midday gross primary productivity (GPP) in a rainfed maize agro-ecosystem (R² = .95, p < .001). This study provides a new paradigm for monitoring vegetation GPP based on the combination of LUE models with plant physiological properties.
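
    The two-band indices named above have standard closed forms; the sketch below evaluates NDVI and EVI2 for a pair of invented canopy reflectances. The abstract's optimized band pairs (e.g. EVI2[726, 1248]) simply substitute the reflectances at those wavelengths into the same expression.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def evi2(nir, red):
    """Two-band enhanced vegetation index: 2.5 * (NIR - Red) / (NIR + 2.4 * Red + 1)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

# Illustrative reflectances (fractions) for a closed maize canopy.
rho_red, rho_nir = 0.04, 0.45
print(ndvi(rho_nir, rho_red), evi2(rho_nir, rho_red))
```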