WorldWideScience

Sample records for measured providing estimates

  1. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time.
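
    The session objectives above lend themselves to a short illustration. The following is a minimal sketch, using illustrative placeholder data rather than any safeguards data set, of how a random error variance can be pooled from replicate measurements and how a systematic error (bias) variance can be estimated from repeated measurements of a standard.

```python
# Minimal sketch (not from the source): random-error variance from replicate
# measurements and systematic-error (bias) variance from standard measurements.
# All data values are illustrative placeholders.
import numpy as np

# Replicate measurements of the same items: rows = items, columns = replicates.
replicates = np.array([
    [10.02, 10.05, 9.98],
    [20.11, 20.07, 20.14],
    [15.03, 14.97, 15.01],
])

# Random-error variance: pooled within-item variance of the replicates.
random_error_variance = replicates.var(axis=1, ddof=1).mean()

# Repeated measurements of a certified standard with a known reference value.
standard_reference = np.array([5.00, 5.00, 5.00, 5.00])
standard_measured = np.array([5.03, 5.06, 5.01, 5.05])

# Bias on each standard run: the mean estimates the systematic error, and the
# run-to-run variance of the bias is a simple estimate of its variance.
biases = standard_measured - standard_reference
systematic_error_variance = biases.var(ddof=1)

print(f"random error variance:     {random_error_variance:.5f}")
print(f"mean bias:                 {biases.mean():.5f}")
print(f"systematic error variance: {systematic_error_variance:.5f}")
```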

  2. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variance to characterize the measurement error structure with biases varying over time is presented.

  3. Location Estimation using Delayed Measurements

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Nørgård, Peter Magnus

    1998-01-01

    When combining data from various sensors it is vital to acknowledge possible measurement delays. Furthermore, the sensor fusion algorithm, often a Kalman filter, should be modified in order to handle the delay. The paper examines different possibilities for handling delays and applies a new technique to a sensor fusion system for estimating the location of an autonomous guided vehicle. The system fuses encoder and vision measurements in an extended Kalman filter. Results from experiments in a real environment are reported.

  4. Measuring Prefered Services from Cloud Computing Providers ...

    African Journals Online (AJOL)

    pc

    2018, 10(5S), 207-212. 207. Measuring Prefered Services from ... Published online: 22 March 2018 .... and then introduces a general service selection and ranking model with QoS ..... To facilitate add, remove, and prioritize services in election.

  5. Adaptive measurement selection for progressive damage estimation

    Science.gov (United States)

    Zhou, Wenfan; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Chattopadhyay, Aditi; Peralta, Pedro

    2011-04-01

    Noise and interference in sensor measurements degrade the quality of data and have a negative impact on the performance of structural damage diagnosis systems. In this paper, a novel adaptive measurement screening approach is presented to automatically select the most informative measurements and use them intelligently for structural damage estimation. The method is implemented efficiently in a sequential Monte Carlo (SMC) setting using particle filtering. The noise suppression and improved damage estimation capability of the proposed method is demonstrated by an application to the problem of estimating progressive fatigue damage in an aluminum compact-tension (CT) sample using noisy PZT sensor measurements.
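
    The following is a minimal sketch of the sequential Monte Carlo (particle filtering) idea the abstract refers to; the damage-growth model, noise levels, and measurement function are illustrative assumptions, not the authors' models.

```python
# Minimal particle-filter sketch of progressive damage estimation from noisy
# measurements. Growth model, noise levels, and measurement model are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_particles = 500
steps = 40

# True (simulated) damage grows slowly; the sensor reads damage plus heavy noise.
true_damage = np.cumsum(rng.normal(0.05, 0.01, steps))
measurements = true_damage + rng.normal(0.0, 0.2, steps)

particles = np.zeros(n_particles)                 # initial damage hypotheses
weights = np.full(n_particles, 1.0 / n_particles)

estimates = []
for z in measurements:
    # Propagate each particle through the assumed damage-growth model.
    particles += rng.normal(0.05, 0.02, n_particles)
    # Weight by the likelihood of the noisy measurement.
    weights *= np.exp(-0.5 * ((z - particles) / 0.2) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    estimates.append(np.sum(weights * particles))

print("final true damage:", round(true_damage[-1], 3),
      "estimate:", round(estimates[-1], 3))
```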

  6. Estimating the Cost of Providing Foundational Public Health Services.

    Science.gov (United States)

    Mamaril, Cezar Brian C; Mays, Glen P; Branham, Douglas Keith; Bekemeier, Betty; Marlowe, Justin; Timsina, Lava

    2017-12-28

    To estimate the cost of resources required to implement a set of Foundational Public Health Services (FPHS) as recommended by the Institute of Medicine. A stochastic simulation model was used to generate probability distributions of input and output costs across 11 FPHS domains. We used an implementation attainment scale to estimate costs of fully implementing FPHS. We use data collected from a diverse cohort of 19 public health agencies located in three states that implemented the FPHS cost estimation methodology in their agencies during 2014-2015. The average agency incurred costs of $48 per capita implementing FPHS at their current attainment levels with a coefficient of variation (CV) of 16 percent. Achieving full FPHS implementation would require $82 per capita (CV=19 percent), indicating an estimated resource gap of $34 per capita. Substantial variation in costs exists across communities in resources currently devoted to implementing FPHS, with even larger variation in resources needed for full attainment. Reducing geographic inequities in FPHS may require novel financing mechanisms and delivery models that allow health agencies to have robust roles within the health system and realize a minimum package of public health services for the nation. © Health Research and Educational Trust.

  7. The uncertainties in estimating measurement uncertainties

    International Nuclear Information System (INIS)

    Clark, J.P.; Shull, A.H.

    1994-01-01

    All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper will discuss the concepts of measurement, measurement errors (accuracy or bias and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample having a matrix that is significantly different from the calibration standards' matrix). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements are reviewed in this report and recommendations made for improving measurement uncertainties.

  8. Uncertainty Measures of Regional Flood Frequency Estimators

    DEFF Research Database (Denmark)

    Rosbjerg, Dan; Madsen, Henrik

    1995-01-01

    Regional flood frequency models have different assumptions regarding homogeneity and inter-site independence. Thus, uncertainty measures of T-year event estimators are not directly comparable. However, having chosen a particular method, the reliability of the estimate should always be stated, e...

  9. Gender differences in pension wealth: estimates using provider data.

    Science.gov (United States)

    Johnson, R W; Sambamoorthi, U; Crystal, S

    1999-06-01

    Information from pension providers was examined to investigate gender differences in pension wealth at midlife. For full-time wage and salary workers approaching retirement age who had pension coverage, median pension wealth on the current job was 76% greater for men than women. Differences in wages, years of job tenure, and industry between men and women accounted for most of the gender gap in pension wealth on the current job. Less than one third of the wealth difference could not be explained by gender differences in education, demographics, or job characteristics. The less-advantaged employment situation of working women currently in midlife carries over into worse retirement income prospects. However, the gender gap in pensions is likely to narrow in the future as married women's employment experiences increasingly resemble those of men.

  10. Online wave estimation using vessel motion measurements

    DEFF Research Database (Denmark)

    H. Brodtkorb, Astrid; Nielsen, Ulrik D.; J. Sørensen, Asgeir

    2018-01-01

    In this paper, a computationally efficient online sea state estimation algorithm is proposed for estimation of the on-site sea state. The algorithm finds the wave spectrum estimate from motion measurements in heave, roll and pitch by iteratively solving a set of linear equations. The main vessel parameters and motion transfer functions are required as input. Apart from this the method is signal-based, with no assumptions on the wave spectrum shape, and as a result it is computationally efficient. The algorithm is implemented in a dynamic positioning (DP) control system, and tested through simulations...

  11. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    Science.gov (United States)

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  12. Uncertainty estimation of shape and roughness measurement

    NARCIS (Netherlands)

    Morel, M.A.A.

    2006-01-01

    One of the most common techniques to measure a surface or form is mechanical probing. Although used since the early 30s of the 20th century, a method to calculate a task specific uncertainty budget was not yet devised. Guidelines and statistical estimates are common in certain cases but an

  13. 49 CFR 375.409 - May household goods brokers provide estimates?

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false May household goods brokers provide estimates? 375... Estimating Charges § 375.409 May household goods brokers provide estimates? A household goods broker must not... there is a written agreement between the broker and you, the carrier, adopting the broker's estimate as...

  14. KMRR thermal power measurement error estimation

    International Nuclear Information System (INIS)

    Rhee, B.W.; Sim, B.S.; Lim, I.C.; Oh, S.K.

    1990-01-01

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method, and compared with those obtained by the other methods including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if the commercial RTDs are used to measure the coolant temperatures of the secondary cooling system and the error can be reduced below the requirement if the commercial RTDs are replaced by the precision RTDs. The possible range of the thermal power control operation has been identified to be from 100% to 20% of full power
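
    As a hedged illustration of the statistical Monte Carlo approach mentioned above, the sketch below propagates coolant temperature sensor errors into the thermal power Q = m_dot * cp * (T_out - T_in); the flow rate, temperatures, and RTD error magnitudes are placeholder assumptions, not KMRR design data.

```python
# Hedged Monte Carlo illustration: effect of RTD temperature errors on the
# thermal power estimate Q = m_dot * cp * dT. All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

m_dot, cp = 300.0, 4.18          # kg/s, kJ/(kg*K) -- illustrative values
t_in, t_out = 35.0, 45.0         # degC, nominal coolant temperatures

def power_error(sigma_rtd):
    """Relative 1-sigma error (%) in thermal power for a given RTD error."""
    t_in_meas = t_in + rng.normal(0, sigma_rtd, n)
    t_out_meas = t_out + rng.normal(0, sigma_rtd, n)
    q_true = m_dot * cp * (t_out - t_in)
    q_meas = m_dot * cp * (t_out_meas - t_in_meas)
    return 100.0 * np.std(q_meas - q_true) / q_true

print("commercial RTD (sigma = 0.3 K): ", round(power_error(0.3), 2), "%")
print("precision RTD  (sigma = 0.05 K):", round(power_error(0.05), 2), "%")
```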

  15. Can genetic estimators provide robust estimates of the effective number of breeders in small populations?

    Directory of Open Access Journals (Sweden)

    Marion Hoehn

    Full Text Available The effective population size (Ne) is proportional to the loss of genetic diversity and the rate of inbreeding, and its accurate estimation is crucial for the monitoring of small populations. Here, we integrate temporal studies of the gecko Oedura reticulata, to compare genetic and demographic estimators of Ne. Because geckos have overlapping generations, our goal was to demographically estimate NbI, the inbreeding effective number of breeders, and to calculate the NbI/Na ratio (Na = number of adults) for four populations. Demographically estimated NbI ranged from 1 to 65 individuals. The mean reduction in the effective number of breeders relative to census size (NbI/Na) was 0.1 to 1.1. We identified the variance in reproductive success as the most important variable contributing to reduction of this ratio. We used four methods to estimate the genetic-based inbreeding effective number of breeders NbI(gen) and the variance effective population size NeV(gen) from the genotype data. Two of these methods - a temporal moment-based (MBT) and a likelihood-based approach (TM3) - require at least two samples in time, while the other two were single-sample estimators - the linkage disequilibrium method with bias correction (LDNe) and the program ONeSAMP. The genetic-based estimates were fairly similar across methods and also similar to the demographic estimates, excluding those estimates in which upper confidence interval boundaries were uninformative. For example, LDNe and ONeSAMP estimates ranged from 14-55 and 24-48 individuals, respectively. However, temporal methods suffered from a large variation in confidence intervals and concerns about the prior information. We conclude that the single-sample estimators are an acceptable short-cut to estimate NbI for species such as geckos and will be of great importance for the monitoring of species in fragmented landscapes.

  16. Uncertainty estimation of ultrasonic thickness measurement

    International Nuclear Information System (INIS)

    Yassir Yassen, Abdul Razak Daud; Mohammad Pauzi Ismail; Abdul Aziz Jemain

    2009-01-01

    The most important factor to consider when selecting an ultrasonic thickness measurement technique is its reliability. Only when the uncertainty of a measurement result is known can it be judged whether the result is adequate for the intended purpose. The objective of this study is to model the ultrasonic thickness measurement function, to identify the input uncertainty components that contribute most, and to estimate the uncertainty of the ultrasonic thickness measurement results. We assumed that five error sources contribute significantly to the final error: calibration velocity, transit time, zero offset, measurement repeatability, and resolution. By applying the law of propagation of uncertainty to the model function, a combined uncertainty of the ultrasonic thickness measurement was obtained. In this study the model function of ultrasonic thickness measurement was derived. Using this model, the estimation of the uncertainty of the final result was found to be reliable. It was also found that the input uncertainty components contributing most are calibration velocity, transit time linearity, and zero offset. (author)
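
    The propagation-of-uncertainty step described above can be sketched as follows for the usual pulse-echo model d = v (t - t0) / 2; the input values and standard uncertainties are illustrative assumptions, not the study's data.

```python
# Minimal GUM-style sketch for pulse-echo thickness d = v * (t - t0) / 2,
# with the five uncertainty sources named in the abstract. Values are assumed.
import numpy as np

v, u_v = 5920.0, 15.0        # calibration velocity (m/s) and its uncertainty
t, u_t = 3.40e-6, 5.0e-9     # transit time (s) and its uncertainty
t0, u_t0 = 2.0e-8, 4.0e-9    # zero offset (s) and its uncertainty
u_rep = 4.0e-6               # repeatability, expressed in metres
u_res = 2.9e-6               # resolution contribution (rectangular), metres

d = v * (t - t0) / 2.0       # thickness estimate, metres

# Sensitivity coefficients from the model function.
c_v = (t - t0) / 2.0
c_t = v / 2.0
c_t0 = -v / 2.0

# Combined standard uncertainty: root sum of squares of the contributions.
u_d = np.sqrt((c_v * u_v) ** 2 + (c_t * u_t) ** 2 + (c_t0 * u_t0) ** 2
              + u_rep ** 2 + u_res ** 2)

print(f"thickness = {d * 1e3:.3f} mm, u = {u_d * 1e6:.1f} um (k=1), "
      f"U = {2 * u_d * 1e6:.1f} um (k=2)")
```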

  17. Providing low-budget estimations of carbon sequestration and greenhouse gas emissions in agricultural wetlands

    International Nuclear Information System (INIS)

    Lloyd, Colin R; Rebelo, Lisa-Maria; Max Finlayson, C

    2013-01-01

    The conversion of wetlands to agriculture through drainage and flooding, and the burning of wetland areas for agriculture have important implications for greenhouse gas (GHG) production and changing carbon stocks. However, the estimation of net GHG changes from mitigation practices in agricultural wetlands is complex compared to dryland crops. Agricultural wetlands have more complicated carbon and nitrogen cycles with both above- and below-ground processes and export of carbon via vertical and horizontal movement of water through the wetland. This letter reviews current research methodologies in estimating greenhouse gas production and provides guidance on the provision of robust estimates of carbon sequestration and greenhouse gas emissions in agricultural wetlands through the use of low cost reliable and sustainable measurement, modelling and remote sensing applications. The guidance is highly applicable to, and aimed at, wetlands such as those in the tropics and sub-tropics, where complex research infrastructure may not exist, or agricultural wetlands located in remote regions, where frequent visits by monitoring scientists prove difficult. In conclusion, the proposed measurement-modelling approach provides guidance on an affordable solution for mitigation and for investigating the consequences of wetland agricultural practice on GHG production, ecological resilience and possible changes to agricultural yields, variety choice and farming practice. (letter)

  18. Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Habte, A.; Sengupta, M.; Reda, I.; Andreas, A.; Konings, J.

    2014-11-01

    Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the International Bureau of Weights and Measures Guide to the Expression of Uncertainty (GUM). Standardized analysis based on these procedures ensures that the uncertainty quoted is well documented.

  19. Integrating field plots, lidar, and landsat time series to provide temporally consistent annual estimates of biomass from 1990 to present

    Science.gov (United States)

    Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan. Huang

    2015-01-01

    We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...

  20. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    Science.gov (United States)

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.

  1. MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.

    Science.gov (United States)

    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne

    2014-01-01

    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement

  2. Sex estimation from sternal measurements using multidetector computed tomography.

    Science.gov (United States)

    Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Bilgili, Mustafa Gokhan; Solmaz, Dilek; Erdil, Irem; Can, Ismail Ozgur

    2014-12-01

    We aimed to show the utility and reliability of sternal morphometric analysis for sex estimation. Sex estimation is a very important step in forensic identification, and skeletal surveys are the main methods in sex estimation studies. Morphometric analysis of the sternum may provide highly accurate data for sex discrimination. In this study, morphometric analysis of the sternum was evaluated on 1 mm chest computed tomography scans for sex estimation. Four hundred forty-three subjects (202 female, 241 male; mean age 44 ± 8.1 years, range 30-60 years) were included in the study. Manubrium length (ML), mesosternum length (MSL), sternebra 1 width (S1W), and sternebra 3 width (S3W) were measured, and the sternal index (SI) was calculated. Differences between sexes were evaluated by Student's t-test, and predictive factors of sex were determined by discriminant analysis and receiver operating characteristic (ROC) analysis. Male sternal measurements are significantly higher than those of females. In discriminant analysis, MSL has a high accuracy rate, 80.2% in females and 80.9% in males, as well as the best sensitivity (75.9%) and specificity (87.6%) values. Accuracy rates were above 80% in the three stepwise discriminant analyses for both sexes; stepwise 1 (ML, MSL, S1W, S3W) had the highest accuracy rate, 86.1% in females and 83.8% in males. Our study showed that morphometric computed tomography analysis of the sternum may provide important information for sex estimation.
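
    To illustrate the discriminant-analysis step, the sketch below trains a linear discriminant on synthetic sternal measurements (the group sizes mirror the abstract, but the measurement values are invented) and reports cross-validated accuracy, sensitivity, and specificity.

```python
# Illustrative sketch with synthetic data, not the study's measurements:
# linear discriminant analysis of sex from four sternal measurements.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n_f, n_m = 202, 241
# Synthetic ML, MSL, S1W, S3W values (mm); males assumed larger on average.
females = rng.normal([48, 88, 28, 26], [4, 7, 3, 3], size=(n_f, 4))
males = rng.normal([53, 98, 31, 29], [4, 7, 3, 3], size=(n_m, 4))
X = np.vstack([females, males])
y = np.array([0] * n_f + [1] * n_m)          # 0 = female, 1 = male

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=10)
accuracy = (pred == y).mean()
sensitivity = pred[y == 1].mean()            # males correctly classified
specificity = 1 - pred[y == 0].mean()        # females correctly classified
print(f"accuracy={accuracy:.2f} "
      f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```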

  3. Extrapolated HPGe efficiency estimates based on a single calibration measurement

    International Nuclear Information System (INIS)

    Winn, W.G.

    1994-01-01

    Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε0 of the base sample of volume V0. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V0, and vice versa. Extrapolation of high and low efficiency estimates εh and εL provides an average estimate of ε = 1/2[εh + εL] ± 1/2[εh - εL] (general), where the uncertainty Δε = 1/2[εh - εL] brackets the limits of the maximum possible error. Both εh and εL diverge from ε0 as V deviates from V0, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε.
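
    The bracketing average in the abstract reduces to a two-line calculation; a short sketch with illustrative efficiency values:

```python
# Bracketed efficiency estimate from extrapolated high/low values, as stated
# in the abstract: eps = (eps_h + eps_L)/2, max error (eps_h - eps_L)/2.
# The numerical inputs are illustrative only.
def bracketed_efficiency(eps_h: float, eps_l: float) -> tuple[float, float]:
    """Return (efficiency estimate, maximum possible error)."""
    eps = 0.5 * (eps_h + eps_l)
    d_eps = 0.5 * (eps_h - eps_l)
    return eps, d_eps

eps, d_eps = bracketed_efficiency(eps_h=0.0215, eps_l=0.0185)
print(f"efficiency = {eps:.4f} +/- {d_eps:.4f} (maximum possible error)")
```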

  4. Measuring physical inactivity: do current measures provide an accurate view of "sedentary" video game time?

    Science.gov (United States)

    Fullerton, Simon; Taylor, Anne W; Dal Grande, Eleonora; Berry, Narelle

    2014-01-01

    Measures of screen time are often used to assess sedentary behaviour. Participation in activity-based video games (exergames) can contribute to estimates of screen time, as current practices of measuring it do not consider the growing evidence that playing exergames can provide light to moderate levels of physical activity. This study aimed to determine what proportion of time spent playing video games was actually spent playing exergames. Data were collected via a cross-sectional telephone survey in South Australia. Participants aged 18 years and above (n = 2026) were asked about their video game habits, as well as demographic and socioeconomic factors. In cases where children were in the household, the video game habits of a randomly selected child were also questioned. Overall, 31.3% of adults and 79.9% of children spend at least some time playing video games. Of these, 24.1% of adults and 42.1% of children play exergames, with these types of games accounting for a third of all time that adults spend playing video games and nearly 20% of children's video game time. A substantial proportion of time that would usually be classified as "sedentary" may actually be spent participating in light to moderate physical activity.

  5. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  6. Climate change trade measures : estimating industry effects

    Science.gov (United States)

    2009-06-01

    Estimating the potential effects of domestic emissions pricing for industries in the United States is complex. If the United States were to regulate greenhouse gas emissions, production costs could rise for certain industries and could cause output, ...

  7. Poverty among Foster Children: Estimates Using the Supplemental Poverty Measure

    Science.gov (United States)

    Pac, Jessica; Waldfogel, Jane; Wimer, Christopher

    2017-01-01

    We use data from the Current Population Survey and the new Supplemental Poverty Measure (SPM) to provide estimates for poverty among foster children over the period 1992 to 2013. These are the first large-scale national estimates for foster children who are not included in official poverty statistics. Holding child and family demographics constant, foster children have a lower risk of poverty than other children. Analyzing income in detail suggests that foster care payments likely play an important role in reducing the risk of poverty in this group. In contrast, we find that children living with grandparents have a higher risk of poverty than other children, even after taking demographics into account. Our estimates suggest that this excess risk is likely linked to their lower likelihood of receiving foster care or other income supports. PMID:28659651

  8. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivate-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice for the optimization-based approaches compared with the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
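
    A schematic sketch of the optimization-based approach described above follows; the toy forward model is a placeholder for a real EIT solver, and the electrode layout, weights, and noise level are assumptions.

```python
# Schematic sketch: estimate a circular anomaly's (x, y) position by
# minimizing a weighted misfit between "measured" and modelled boundary data.
# The toy forward model stands in for a real EIT solver; all values assumed.
import numpy as np
from scipy.optimize import minimize

n_electrodes = 16
angles = 2 * np.pi * np.arange(n_electrodes) / n_electrodes
electrodes = np.column_stack([np.cos(angles), np.sin(angles)])  # unit tank

def forward(pos):
    """Toy forward model: boundary response decays with electrode distance."""
    d = np.linalg.norm(electrodes - pos, axis=1)
    return 1.0 / (0.1 + d)

rng = np.random.default_rng(2)
true_pos = np.array([0.30, -0.20])
weights = np.ones(n_electrodes)               # could down-weight noisy channels
measured = forward(true_pos) + rng.normal(0, 0.01, n_electrodes)

def cost(pos):
    r = measured - forward(pos)
    return np.sum(weights * r ** 2)           # weighted squared-error cost

# Derivative-free search (Nelder-Mead), as in the optimization approach above.
result = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
print("estimated position:", np.round(result.x, 3), "true:", true_pos)
```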

  9. Do group-specific equations provide the best estimates of stature?

    Science.gov (United States)

    Albanese, John; Osley, Stephanie E; Tuck, Andrew

    2016-04-01

    An estimate of stature can be used by a forensic anthropologist with the preliminary identification of an unknown individual when human skeletal remains are recovered. Fordisc is a computer application that can be used to estimate stature; like many other methods it requires the user to assign an unknown individual to a specific group defined by sex, race/ancestry, and century of birth before an equation is applied. The assumption is that a group-specific equation controls for group differences and should provide the best results most often. In this paper we assess the utility and benefits of using group-specific equations to estimate stature using Fordisc. Using the maximum length of the humerus and the maximum length of the femur from individuals with documented stature, we address the question: Do sex-, race/ancestry- and century-specific stature equations provide the best results when estimating stature? The data for our sample of 19th Century White males (n=28) were entered into Fordisc and stature was estimated using 22 different equation options for a total of 616 trials: 19th and 20th Century Black males, 19th and 20th Century Black females, 19th and 20th Century White females, 19th and 20th Century White males, 19th and 20th Century any, and 20th Century Hispanic males. The equations were assessed for utility in any one case (how many times the estimated range bracketed the documented stature) and in aggregate using 1-way ANOVA and other approaches. This group-specific equation that should have provided the best results was outperformed by several other equations for both the femur and humerus. These results suggest that group-specific equations do not provide better results for estimating stature while at the same time are more difficult to apply because an unknown must be allocated to a given group before stature can be estimated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Software project estimation the fundamentals for providing high quality information to decision makers

    CERN Document Server

    Abran, Alain

    2015-01-01

    Software projects are often late and over-budget and this leads to major problems for software customers. Clearly, there is a serious issue in estimating a realistic, software project budget. Furthermore, generic estimation models cannot be trusted to provide credible estimates for projects as complex as software projects. This book presents a number of examples using data collected over the years from various organizations building software. It also presents an overview of the non-for-profit organization, which collects data on software projects, the International Software Benchmarking Stan

  11. Estimation of measurement variance in the context of environment statistics

    Science.gov (United States)

    Maiti, Pulakesh

    2015-02-01

    The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics would be required to produce higher-quality statistical information. For this, timely, reliable and comparable data are needed. Lack of proper and uniform definitions and of unambiguous classifications poses serious problems in procuring qualitative data, and these cause measurement errors. We consider the problem of estimating measurement variance so that some measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling considered here is two-stage sampling.

  12. Estimation of Fuzzy Measures Using Covariance Matrices in Gaussian Mixtures

    Directory of Open Access Journals (Sweden)

    Nishchal K. Verma

    2012-01-01

    Full Text Available This paper presents a novel computational approach for estimating fuzzy measures directly from Gaussian mixtures model (GMM. The mixture components of GMM provide the membership functions for the input-output fuzzy sets. By treating consequent part as a function of fuzzy measures, we derived its coefficients from the covariance matrices found directly from GMM and the defuzzified output constructed from both the premise and consequent parts of the nonadditive fuzzy rules that takes the form of Choquet integral. The computational burden involved with the solution of λ-measure is minimized using Q-measure. The fuzzy model whose fuzzy measures were computed using covariance matrices found in GMM has been successfully applied on two benchmark problems and one real-time electric load data of Indian utility. The performance of the resulting model for many experimental studies including the above-mentioned application is found to be better and comparable to recent available fuzzy models. The main contribution of this paper is the estimation of fuzzy measures efficiently and directly from covariance matrices found in GMM, avoiding the computational burden greatly while learning them iteratively and solving polynomial equations of order of the number of input-output variables.

  13. Estimating snowpack density from Albedo measurement

    Science.gov (United States)

    James L. Smith; Howard G. Halverson

    1979-01-01

    Snow is a major source of water in Western United States. Data on snow depth and average snowpack density are used in mathematical models to predict water supply. In California, about 75 percent of the snow survey sites above 2750-meter elevation now used to collect data are in statutory wilderness areas. There is need for a method of estimating the water content of a...

  14. Nonlinear Estimation With Sparse Temporal Measurements

    Science.gov (United States)

    2016-09-01

    through an atmosphere while being monitored periodically by a single radar. Equation (2.1) details the states and parameters used in the model...parameter, gravitational acceleration, that is usually not considered in the literature is estimated, as shown in Equation (2.1), to increase the...the parameters, x3 and x4, is relatively larger than that of the angle and angular velocity due to the respective units and reflects the inherent

  15. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Science.gov (United States)

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework. PMID:28522983

  16. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation.

    Science.gov (United States)

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.

  17. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Directory of Open Access Journals (Sweden)

    Ji Chul Kim

    2017-05-01

    Full Text Available Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.

  18. Security measures effect over performance in service provider network

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... Abstract—network security is defined as a set of policies and actions taken by a ... These threats are linked with the following factors that are ... typically smaller than those in the service provider space. ... Service providers cannot manage to provide ... e the DB performance effect ... r the business needs [10].

  19. Modeling and estimation of measurement errors

    International Nuclear Information System (INIS)

    Neuilly, M.

    1998-01-01

    Any person in charge of taking measurements is aware of the inaccuracy of the results, however cautiously they may proceed. Sensitivity, accuracy and reproducibility define the significance of a result. The use of statistical methods is one of the important tools to improve the quality of measurement. The accuracy afforded by these methods revealed the small difference in the isotopic composition of uranium ore which led to the discovery of the Oklo fossil reactor. This book is dedicated to scientists and engineers interested in measurement, whatever their fields of investigation are. Experimental results are presented as random variables, and their laws of probability are approximated by the normal law, the Poisson law or Pearson distributions. The impact of one or more parameters on the total error can be evaluated by constructing factorial designs and by using variance analysis methods. This method is also used in intercomparison procedures between laboratories and to detect any abnormal shift in a series of measurements. (A.C.)

  20. Estimating soil water evaporation using radar measurements

    Science.gov (United States)

    Sadeghi, Ali M.; Scott, H. D.; Waite, W. P.; Asrar, G.

    1988-01-01

    Field studies were conducted to evaluate the application of radar reflectivity as compared with the shortwave reflectivity (albedo) used in the Idso-Jackson equation for the estimation of daily evaporation under overcast sky and subhumid climatic conditions. Soil water content, water potential, shortwave and radar reflectivity, and soil and air temperatures were monitored during three soil drying cycles. The data from each cycle were used to calculate daily evaporation from the Idso-Jackson equation and from two other standard methods, the modified Penman and plane of zero-flux. All three methods resulted in similar estimates of evaporation under clear sky conditions; however, under overcast sky conditions, evaporation fluxes computed from the Idso-Jackson equation were consistently lower than the other two methods. The shortwave albedo values in the Idso-Jackson equation were then replaced with radar reflectivities and a new set of total daily evaporation fluxes were calculated. This resulted in a significant improvement in computed soil evaporation fluxes from the Idso-Jackson equation, and a better agreement between the three methods under overcast sky conditions.

  1. Family and Provider/Teacher Relationship Quality: Director Measure

    Science.gov (United States)

    Administration for Children & Families, 2015

    2015-01-01

    The director measure is intended for use with program directors in center-based, family child care, and Head Start/Early Head Start settings for children from birth through five years old. This measure asks respondents general questions about the early childhood education environment, the children enrolled in the program, and how the program…

  2. Measurement Model Nonlinearity in Estimation of Dynamical Systems

    Science.gov (United States)

    Majji, Manoranjan; Junkins, J. L.; Turner, J. D.

    2012-06-01

    The role of nonlinearity of the measurement model and its interactions with the uncertainty of measurements and geometry of the problem is studied in this paper. An examination of the transformations of the probability density function in various coordinate systems is presented for several astrodynamics applications. Smooth and analytic nonlinear functions are considered for the studies on the exact transformation of uncertainty. Special emphasis is given to understanding the role of change of variables in the calculus of random variables. The transformation of probability density functions through mappings is shown to provide insight in to understanding the evolution of uncertainty in nonlinear systems. Examples are presented to highlight salient aspects of the discussion. A sequential orbit determination problem is analyzed, where the transformation formula provides useful insights for making the choice of coordinates for estimation of dynamic systems.
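
    A small numerical illustration (not from the paper) of how a probability density transforms through a nonlinear measurement mapping: Gaussian range/bearing samples mapped to Cartesian coordinates are no longer Gaussian, and the mean of the mapped samples is biased away from the mapping of the mean.

```python
# Monte Carlo illustration of uncertainty transformation through a nonlinear
# mapping (polar range/bearing to Cartesian). All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

r = rng.normal(1000.0, 50.0, n)          # range samples (m)
theta = rng.normal(np.pi / 4, 0.20, n)   # bearing samples (rad), large spread

x, y = r * np.cos(theta), r * np.sin(theta)   # nonlinear mapping

mapped_mean = (1000.0 * np.cos(np.pi / 4), 1000.0 * np.sin(np.pi / 4))
print("mean of mapped samples:", round(x.mean(), 1), round(y.mean(), 1))
print("mapping of the mean:   ", round(mapped_mean[0], 1), round(mapped_mean[1], 1))
print("sample correlation(x, y):", round(np.corrcoef(x, y)[0, 1], 3))
```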

  3. Electrical impedance spectroscopy measurements to estimate the ...

    Indian Academy of Sciences (India)

    Administrator

    The reviews of these studies were presented by Kahraman ... kind of solid or liquid material: ionic, semi-conducting, mixed electronic–ionic and .... the rock sample and its response was measured at room temperature. Figure 5 indicates the ...

  4. Estimates of economic burden of providing inpatient care in childhood rotavirus gastroenteritis from Malaysia.

    Science.gov (United States)

    Lee, Way Seah; Poo, Muhammad Izzuddin; Nagaraj, Shyamala

    2007-12-01

    To estimate the cost of an episode of inpatient care and the economic burden of hospitalisation for childhood rotavirus gastroenteritis (GE) in Malaysia. A 12-month prospective, hospital-based study on children less than 14 years of age with rotavirus GE, admitted to University of Malaya Medical Centre, Kuala Lumpur, was conducted in 2002. Data on human resource expenditure, costs of investigations, treatment and consumables were collected. Published estimates on rotavirus disease incidence in Malaysia were searched. Economic burden of hospital care for rotavirus GE in Malaysia was estimated by multiplying the cost of each episode of hospital admission for rotavirus GE with national rotavirus incidence in Malaysia. In 2002, the per capita health expenditure by Malaysian Government was US$71.47. Rotavirus was positive in 85 (22%) of the 393 patients with acute GE admitted during the study period. The median cost of providing inpatient care for an episode of rotavirus GE was US$211.91 (range US$68.50-880.60). The estimated average cases of children hospitalised for rotavirus GE in Malaysia (1999-2000) was 8571 annually. The financial burden of providing inpatient care for rotavirus GE in Malaysian children was estimated to be US$1.8 million (range US$0.6 million-7.5 million) annually. The cost of providing inpatient care for childhood rotavirus GE in Malaysia was estimated to be US$1.8 million annually. The financial burden of rotavirus disease would be higher if cost of outpatient visits, non-medical and societal costs are included.

  5. Measuring Perceptual (In) Congruence between Information Service Providers and Users

    Science.gov (United States)

    Boyce, Crystal

    2017-01-01

    Library quality is no longer evaluated solely on the value of its collections, as user perceptions of service quality play an increasingly important role in defining overall library value. This paper presents a retooling of the LibQUAL+ survey instrument, blending the gap measurement model with perceptual congruence model studies from information…

  6. Is it feasible to estimate radiosonde biases from interlaced measurements?

    Science.gov (United States)

    Kremser, Stefanie; Tradowsky, Jordis S.; Rust, Henning W.; Bodeker, Greg E.

    2018-05-01

    Upper-air measurements of essential climate variables (ECVs), such as temperature, are crucial for climate monitoring and climate change detection. Because of the internal variability of the climate system, many decades of measurements are typically required to robustly detect any trend in the climate data record. It is imperative for the records to be temporally homogeneous over many decades to confidently estimate any trend. Historically, records of upper-air measurements were primarily made for short-term weather forecasts and as such are seldom suitable for studying long-term climate change as they lack the required continuity and homogeneity. Recognizing this, the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) has been established to provide reference-quality measurements of climate variables, such as temperature, pressure, and humidity, together with well-characterized and traceable estimates of the measurement uncertainty. To ensure that GRUAN data products are suitable to detect climate change, a scientifically robust instrument replacement strategy must always be adopted whenever there is a change in instrumentation. By fully characterizing any systematic differences between the old and new measurement system a temporally homogeneous data series can be created. One strategy is to operate both the old and new instruments in tandem for some overlap period to characterize any inter-instrument biases. However, this strategy can be prohibitively expensive at measurement sites operated by national weather services or research institutes. An alternative strategy that has been proposed is to alternate between the old and new instruments, so-called interlacing, and then statistically derive the systematic biases between the two instruments. Here we investigate the feasibility of such an approach specifically for radiosondes, i.e. flying the old and new instruments on alternating days. Synthetic data sets are used to explore the

  7. Psychological impact of providing women with personalised 10-year breast cancer risk estimates.

    Science.gov (United States)

    French, David P; Southworth, Jake; Howell, Anthony; Harvie, Michelle; Stavrinos, Paula; Watterson, Donna; Sampson, Sarah; Evans, D Gareth; Donnelly, Louise S

    2018-05-08

    The Predicting Risk of Cancer at Screening (PROCAS) study estimated 10-year breast cancer risk for 53,596 women attending NHS Breast Screening Programme. The present study, nested within the PROCAS study, aimed to assess the psychological impact of receiving breast cancer risk estimates, based on: (a) the Tyrer-Cuzick (T-C) algorithm including breast density or (b) T-C including breast density plus single-nucleotide polymorphisms (SNPs), versus (c) comparison women awaiting results. A sample of 2138 women from the PROCAS study was stratified by testing groups: T-C only, T-C(+SNPs) and comparison women; and by 10-year risk estimates received: 'moderate' (5-7.99%), 'average' (2-4.99%) or 'below average' (<1.99%) risk. Postal questionnaires were returned by 765 (36%) women. Overall state anxiety and cancer worry were low, and similar for women in T-C only and T-C(+SNPs) groups. Women in both T-C only and T-C(+SNPs) groups showed lower-state anxiety but slightly higher cancer worry than comparison women awaiting results. Risk information had no consistent effects on intentions to change behaviour. Most women were satisfied with information provided. There was considerable variation in understanding. No major harms of providing women with 10-year breast cancer risk estimates were detected. Research to establish the feasibility of risk-stratified breast screening is warranted.

  8. Smile line assessment comparing quantitative measurement and visual estimation

    NARCIS (Netherlands)

    Geld, P. Van der; Oosterveld, P.; Schols, J.; Kuijpers-Jagtman, A.M.

    2011-01-01

    INTRODUCTION: Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation

  9. Individualized estimation of human core body temperature using noninvasive measurements.

    Science.gov (United States)

    Laxminarayan, Srinivas; Rakesh, Vineet; Oyama, Tatsuya; Kazman, Josh B; Yanovich, Ran; Ketko, Itay; Epstein, Yoram; Morrison, Shawnda; Reifman, Jaques

    2018-06-01

    A rising core body temperature (Tc) during strenuous physical activity is a leading indicator of heat-injury risk. Hence, a system that can estimate Tc in real time and provide early warning of an impending temperature rise may enable proactive interventions to reduce the risk of heat injuries. However, real-time field assessment of Tc requires impractical invasive technologies. To address this problem, we developed a mathematical model that describes the relationships between Tc and noninvasive measurements of an individual's physical activity, heart rate, and skin temperature, and two environmental variables (ambient temperature and relative humidity). A Kalman filter adapts the model parameters to each individual and provides real-time personalized Tc estimates. Using data from three distinct studies, comprising 166 subjects who performed treadmill and cycle ergometer tasks under different experimental conditions, we assessed model performance via the root mean squared error (RMSE). The individualized model yielded an overall average RMSE of 0.33 (SD = 0.18)°C, allowing us to reach the same conclusions in each study as those obtained using the Tc measurements. Furthermore, for 22 unique subjects whose Tc exceeded 38.5°C, a potential lower Tc limit of clinical relevance, the average RMSE decreased to 0.25 (SD = 0.20)°C. Importantly, these results remained robust in the presence of simulated real-world operational conditions, yielding no more than 16% worse RMSEs when measurements were missing (40%) or laden with added noise. Hence, the individualized model provides a practical means to develop an early warning system for reducing heat-injury risk. NEW & NOTEWORTHY A model that uses an individual's noninvasive measurements and environmental variables can continually "learn" the individual's heat-stress response by automatically adapting the model parameters on the fly to provide real-time individualized core body temperature estimates. This
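
    A minimal sketch of the Kalman-filter idea described above: core body temperature Tc is treated as a hidden state and heart rate as a noisy observation assumed (for illustration only) to be linear in Tc; the coefficients and noise parameters are placeholders, not the published model.

```python
# Minimal 1-D Kalman-filter sketch: state = Tc (degC), observation = heart
# rate (bpm). The linear observation model and noise variances are assumed
# placeholders, not the published individualized model.
import numpy as np

def estimate_tc(heart_rate, a=40.0, b=-1400.0, q=0.0005, r=25.0):
    """Assumed observation model: HR ~= a * Tc + b (illustrative coefficients).

    q: process noise variance per step, r: observation noise variance.
    """
    tc, p = 37.0, 0.25          # initial Tc estimate and its variance
    estimates = []
    for hr in heart_rate:
        p = p + q               # predict: Tc modelled as a slow random walk
        innovation = hr - (a * tc + b)   # update with the heart-rate reading
        s = a * p * a + r
        k = p * a / s
        tc = tc + k * innovation
        p = (1 - k * a) * p
        estimates.append(tc)
    return np.array(estimates)

# Example: a heart rate climbing during exertion drives the Tc estimate upward.
hr_series = np.linspace(90, 150, 60) + np.random.default_rng(4).normal(0, 3, 60)
print(estimate_tc(hr_series)[[0, 29, 59]].round(2))
```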

  10. Interlaboratory analytical performance studies; a way to estimate measurement uncertainty

    Directory of Open Access Journals (Sweden)

    Elżbieta Łysiak-Pastuszak

    2004-09-01

    Comparability of data collected within collaborative programmes became the key challenge of analytical chemistry in the 1990s, including monitoring of the marine environment. To obtain relevant and reliable data, the analytical process has to proceed under a well-established Quality Assurance (QA) system with external analytical proficiency tests as an inherent component. A programme called Quality Assurance in Marine Monitoring in Europe (QUASIMEME) was established in 1993 and evolved over the years as the major provider of QA proficiency tests for nutrients, trace metals and chlorinated organic compounds in marine environment studies. The article presents an evaluation of results obtained in QUASIMEME Laboratory Performance Studies by the monitoring laboratory of the Institute of Meteorology and Water Management (Gdynia, Poland) in exercises on nutrient determination in seawater. The measurement uncertainty estimated from routine internal quality control measurements and from results of analytical performance exercises is also presented in the paper.
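
    The abstract combines internal quality control data with proficiency-test results. One widely used "top-down" recipe for doing this (in the spirit of the Nordtest approach) is sketched below; the function name and the example numbers are hypothetical, and the laboratory's actual procedure may differ.

      import numpy as np

      def top_down_uncertainty(qc_sd_rel, pt_biases_rel, u_cref_rel, k=2.0):
          """Expanded relative measurement uncertainty (%) from within-lab
          reproducibility (control-chart SD) and proficiency-test biases."""
          u_rw = qc_sd_rel                                   # within-lab reproducibility
          rms_bias = np.sqrt(np.mean(np.square(pt_biases_rel)))
          u_bias = np.sqrt(rms_bias**2 + u_cref_rel**2)      # bias + assigned-value term
          return k * np.sqrt(u_rw**2 + u_bias**2)

      # Hypothetical example: 3.5 % QC scatter, biases of -4, 2 and 5 % in three
      # exercises, 1.5 % uncertainty of the assigned values
      print(top_down_uncertainty(3.5, [-4.0, 2.0, 5.0], 1.5))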

  11. Composite Measures of Health Care Provider Performance: A Description of Approaches

    Science.gov (United States)

    Shwartz, Michael; Restuccia, Joseph D; Rosen, Amy K

    2015-01-01

    Context Since the Institute of Medicine’s 2001 report Crossing the Quality Chasm, there has been a rapid proliferation of quality measures used in quality-monitoring, provider-profiling, and pay-for-performance (P4P) programs. Although individual performance measures are useful for identifying specific processes and outcomes for improvement and tracking progress, they do not easily provide an accessible overview of performance. Composite measures aggregate individual performance measures into a summary score. By reducing the amount of data that must be processed, they facilitate (1) benchmarking of an organization’s performance, encouraging quality improvement initiatives to match performance against high-performing organizations, and (2) profiling and P4P programs based on an organization’s overall performance. Methods We describe different approaches to creating composite measures, discuss their advantages and disadvantages, and provide examples of their use. Findings The major issues in creating composite measures are (1) whether to aggregate measures at the patient level through all-or-none approaches or the facility level, using one of the several possible weighting schemes; (2) when combining measures on different scales, how to rescale measures (using z scores, range percentages, ranks, or 5-star categorizations); and (3) whether to use shrinkage estimators, which increase precision by smoothing rates from smaller facilities but also decrease transparency. Conclusions Because provider rankings and rewards under P4P programs may be sensitive to both context and the data, careful analysis is warranted before deciding to implement a particular method. A better understanding of both when and where to use composite measures and the incentives created by composite measures are likely to be important areas of research as the use of composite measures grows. PMID:26626986
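
    As a concrete illustration of the facility-level aggregation discussed above, the sketch below rescales each individual measure to a z score across facilities and combines the measures with fixed weights. The weights, measure values, and function name are invented for illustration; shrinkage estimation is not applied.

      import numpy as np

      def composite_scores(rates, weights):
          """rates: (n_facilities, n_measures) performance rates. Each measure is
          rescaled to a z score across facilities, then combined with `weights`."""
          z = (rates - rates.mean(axis=0)) / rates.std(axis=0, ddof=1)
          w = np.asarray(weights, dtype=float)
          return z @ (w / w.sum())

      # Three hypothetical facilities, two measures weighted 70/30
      rates = np.array([[0.92, 0.80],
                        [0.88, 0.85],
                        [0.95, 0.78]])
      print(composite_scores(rates, [0.7, 0.3]))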

  12. Can administrative health utilisation data provide an accurate diabetes prevalence estimate for a geographical region?

    Science.gov (United States)

    Chan, Wing Cheuk; Papaconstantinou, Dean; Lee, Mildred; Telfer, Kendra; Jo, Emmanuel; Drury, Paul L; Tobias, Martin

    2018-05-01

    To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory results and to develop an improved algorithm for estimating diabetes prevalence at a population level. The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory results stored in the Auckland regional laboratory result repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive predictive value of each of the VDR algorithm rules individually and in combination. The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive predictive value of 76% and negative predictive value of 98%. The modified VDR algorithm has improved the positive predictive value by 6.1% and the specificity by 1.4% with modest reductions in sensitivity of 2.2% and negative predictive value of 0.3%. At an aggregated level the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe. The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more accurate diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long term condition register constructed from both laboratory results and administrative data. Copyright © 2018 Elsevier B.V. All rights reserved.
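
    The validation statistics quoted above follow from a 2x2 cross-tabulation of register status against the laboratory-based reference. A minimal sketch is given below; the counts in the example are hypothetical, not the VDR/TestSafe figures.

      def diagnostic_accuracy(tp, fp, fn, tn):
          """Sensitivity, specificity, PPV and NPV from a 2x2 table of a register
          classification against a reference standard."""
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }

      # Hypothetical counts: register-positive vs. reference-positive, etc.
      print(diagnostic_accuracy(tp=82_000, fp=26_000, fn=10_000, tn=1_000_000))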

  13. Ice thickness measurements and volume estimates for glaciers in Norway

    Science.gov (United States)

    Andreassen, Liss M.; Huss, Matthias; Melvold, Kjetil; Elvehøy, Hallgeir; Winsvold, Solveig H.

    2014-05-01

    Whereas glacier areas in many mountain regions around the world now are well surveyed using optical satellite sensors and available in digital inventories, measurements of ice thickness are sparse in comparison and a global dataset does not exist. Since the 1980s ice thickness measurements have been carried out by ground penetrating radar on many glaciers in Norway, often as part of contract work for hydropower companies with the aim to calculate hydrological divides of ice caps. Measurements have been conducted on numerous glaciers, covering the largest ice caps as well as a few smaller mountain glaciers. However, so far no ice volume estimate for Norway has been derived from these measurements. Here, we give an overview of ice thickness measurements in Norway, and use a distributed model to interpolate and extrapolate the data to provide an ice volume estimate of all glaciers in Norway. We also compare the results to various volume-area/thickness-scaling approaches using values from the literature as well as scaling constants we obtained from ice thickness measurements in Norway. Glacier outlines from a Landsat-derived inventory from 1999-2006 together with a national digital elevation model were used as input data for the ice volume calculations. The inventory covers all glaciers in mainland Norway and consists of 2534 glaciers (3143 glacier units) covering an area of 2692 km2 ± 81 km2. To calculate the ice thickness distribution of glaciers in Norway we used a distributed model which estimates surface mass balance distribution, calculates the volumetric balance flux and converts it into thickness using the flow law for ice. We calibrated this model with ice thickness data for Norway, mainly by adjusting the mass balance gradient. Model results generally agree well with the measured values, however, larger deviations were found for some glaciers. The total ice volume of Norway was estimated to be 275 km3 ± 30 km3. From the ice thickness data set we selected
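
    The volume-area scaling alternatives mentioned above take the form V = c * A**gamma. The sketch below uses generic constants of the kind often quoted in the literature for valley glaciers; they are not the constants calibrated against the Norwegian thickness data, and the example areas are invented.

      def scaled_volume_km3(area_km2, c=0.034, gamma=1.375):
          """Volume-area scaling V = c * A**gamma, with V in km3 and A in km2.
          c and gamma are generic literature-style values, not fitted to Norway."""
          return c * area_km2 ** gamma

      glacier_areas_km2 = [0.5, 3.2, 45.0, 210.0]     # hypothetical glacier areas
      total = sum(scaled_volume_km3(a) for a in glacier_areas_km2)
      print(f"Total scaled ice volume: {total:.1f} km3")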

  14. Effect of Smart Meter Measurements Data On Distribution State Estimation

    DEFF Research Database (Denmark)

    Pokhrel, Basanta Raj; Nainar, Karthikeyan; Bak-Jensen, Birgitte

    2018-01-01

    Smart distribution grids with renewable energy based generators and demand response resources (DRR) require accurate state estimators for real-time control. Distribution grid state estimators are normally based on accumulated smart meter measurements. However, an increase of measurements in the physical grid can enforce significant stress not only on the communication infrastructure but also on the control algorithms. This paper aims to propose a methodology to analyze the needed real-time smart meter data from low voltage distribution grids and their applicability in distribution state estimation...

  15. Measuring Provider Performance for Physicians Participating in the Merit-Based Incentive Payment System.

    Science.gov (United States)

    Squitieri, Lee; Chung, Kevin C

    2017-07-01

    In 2017, the Centers for Medicare and Medicaid Services began requiring all eligible providers to participate in the Quality Payment Program or face financial reimbursement penalty. The Quality Payment Program outlines two paths for provider participation: the Merit-Based Incentive Payment System and Advanced Alternative Payment Models. For the first performance period beginning in January of 2017, the Centers for Medicare and Medicaid Services estimates that approximately 83 to 90 percent of eligible providers will not qualify for participation in an Advanced Alternative Payment Model and therefore must participate in the Merit-Based Incentive Payment System program. The Merit-Based Incentive Payment System path replaces existing quality-reporting programs and adds several new measures to evaluate providers using four categories of data: (1) quality, (2) cost/resource use, (3) improvement activities, and (4) advancing care information. These categories will be combined to calculate a weighted composite score for each provider or provider group. Composite Merit-Based Incentive Payment System scores based on 2017 performance data will be used to adjust reimbursed payment in 2019. In this article, the authors provide relevant background for understanding value-based provider performance measurement. The authors also discuss Merit-Based Incentive Payment System reporting requirements and scoring methodology to provide plastic surgeons with the necessary information to critically evaluate their own practice capabilities in the context of current performance metrics under the Quality Payment Program.

  16. Bad data detection in two stage estimation using phasor measurements

    Science.gov (United States)

    Tarali, Aditya

    The ability of the Phasor Measurement Unit (PMU) to directly measure the system state has led to a steady increase in the use of PMUs in the past decade. However, in spite of their high accuracy and their ability to measure the states directly, they cannot completely replace the conventional measurement units due to high cost. Hence it is necessary for modern estimators to use both conventional and phasor measurements together. This thesis presents an alternative method to incorporate the new PMU measurements into the existing state estimator in a systematic manner such that no major modification is necessary to the existing algorithm. It is also shown that if PMUs are placed appropriately, the phasor measurements can be used to detect and identify the bad data associated with critical measurements by using this model, which cannot be detected by the conventional state estimation algorithm. The developed model is tested on the IEEE 14-, 30- and 118-bus systems under various conditions.
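
    The conventional residual-based check that the thesis builds on can be sketched as a linear(ized) weighted least-squares estimate followed by a chi-square test on the weighted residual sum J(x). The toy DC measurement model below is illustrative only, not one of the IEEE test systems.

      import numpy as np
      from scipy import stats

      def wls_with_bad_data_test(H, z, sigma, alpha=0.01):
          """Linear WLS state estimation with a chi-square bad-data test on J(x)."""
          W = np.diag(1.0 / sigma**2)
          G = H.T @ W @ H                          # gain matrix
          x_hat = np.linalg.solve(G, H.T @ W @ z)
          r = z - H @ x_hat                        # measurement residuals
          J = float(r @ W @ r)                     # weighted residual sum
          dof = H.shape[0] - H.shape[1]
          threshold = stats.chi2.ppf(1.0 - alpha, dof)
          return x_hat, J, J > threshold           # True => bad data suspected

      # Toy example: 4 measurements of 2 states, gross error on the last one
      H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
      sigma = np.array([0.01, 0.01, 0.02, 0.02])
      z = H @ np.array([0.1, -0.05]) + np.array([0.0, 0.0, 0.0, 0.3])
      print(wls_with_bad_data_test(H, z, sigma))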

  17. Estimating the development assistance for health provided to faith-based organizations, 1990-2013.

    Science.gov (United States)

    Haakenstad, Annie; Johnson, Elizabeth; Graves, Casey; Olivier, Jill; Duff, Jean; Dieleman, Joseph L

    2015-01-01

    Faith-based organizations (FBOs) have been active in the health sector for decades. Recently, the role of FBOs in global health has been of increased interest. However, little is known about the magnitude and trends in development assistance for health (DAH) channeled through these organizations. Data were collected from the 21 most recent editions of the Report of Voluntary Agencies. These reports provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three organizations. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of total DAH provided by private voluntary organizations reported in the VolAg. In 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund's contributions to NGOs. In 2011, the Gates Foundation's contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health.

  18. Estimating the development assistance for health provided to faith-based organizations, 1990-2013.

    Directory of Open Access Journals (Sweden)

    Annie Haakenstad

    Faith-based organizations (FBOs) have been active in the health sector for decades. Recently, the role of FBOs in global health has been of increased interest. However, little is known about the magnitude and trends in development assistance for health (DAH) channeled through these organizations. Data were collected from the 21 most recent editions of the Report of Voluntary Agencies. These reports provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three organizations. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of total DAH provided by private voluntary organizations reported in the VolAg. In 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund's contributions to NGOs. In 2011, the Gates Foundation's contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health.

  19. Estimating the Development Assistance for Health Provided to Faith-Based Organizations, 1990–2013

    Science.gov (United States)

    Haakenstad, Annie; Johnson, Elizabeth; Graves, Casey; Olivier, Jill; Duff, Jean; Dieleman, Joseph L.

    2015-01-01

    Background Faith-based organizations (FBOs) have been active in the health sector for decades. Recently, the role of FBOs in global health has been of increased interest. However, little is known about the magnitude and trends in development assistance for health (DAH) channeled through these organizations. Material and Methods Data were collected from the 21 most recent editions of the Report of Voluntary Agencies. These reports provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three organizations. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. Results In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of total DAH provided by private voluntary organizations reported in the VolAg. In 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund’s contributions to NGOs. In 2011, the Gates Foundation’s contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Conclusion Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health. PMID:26042731

  20. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

    Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value yielding a coefficient of variation squared (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr²) for comparison with our estimate of noise-free or 'true' heterogeneity (CVt²). We found that CVt² was only 5.4% higher than CVr². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using 13NN-saline injection. The mean CVt² was 0.10 (range: 0.03-0.30), while the mean CV² including noise was 0.24 (range: 0.10-0.59). CVt² was on average 41.5% of the CV² measured including noise (range: 17.8-71.2%). The reproducibility of CVt² was evaluated using three repeated PET scans from five subjects. Individual CVt² were within 16% of each subject's mean and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt² in PET scans, and may be useful for similar statistical problems in experimental data.
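
    The linear relationship described above can be used directly: regressing the measured CV² of images formed from different numbers of registered events against 1/n gives the noise-free CVt² as the intercept. The sketch and the synthetic numbers below are illustrative only.

      import numpy as np

      def noise_free_cv2(n_events, cv2_measured):
          """Fit CV^2 = CVt^2 + k/n and return the intercept CVt^2."""
          slope, intercept = np.polyfit(1.0 / np.asarray(n_events, float),
                                        np.asarray(cv2_measured, float), 1)
          return intercept

      # Synthetic example: true heterogeneity 0.10 plus noise scaling as 1/n
      n = np.array([1e5, 2e5, 4e5, 8e5])
      cv2 = 0.10 + 1.2e4 / n
      print(noise_free_cv2(n, cv2))    # ~0.10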

  1. Subjective Quality Measurement of Speech Its Evaluation, Estimation and Applications

    CERN Document Server

    Kondo, Kazuhiro

    2012-01-01

    It is becoming crucial to accurately estimate and monitor speech quality in various ambient environments to guarantee high quality speech communication. This practical hands-on book shows speech intelligibility measurement methods so that the readers can start measuring or estimating speech intelligibility of their own system. The book also introduces subjective and objective speech quality measures, and describes in detail speech intelligibility measurement methods. It introduces a diagnostic rhyme test which uses rhyming word-pairs, and includes: An investigation into the effect of word familiarity on speech intelligibility. Speech intelligibility measurement of localized speech in virtual 3-D acoustic space using the rhyme test. Estimation of speech intelligibility using objective measures, including the ITU standard PESQ measures, and automatic speech recognizers.

  2. Uncertainty estimation with a small number of measurements, part II: a redefinition of uncertainty and an estimator method

    Science.gov (United States)

    Huang, Hening

    2018-01-01

    This paper is the second (Part II) in a series of two papers (Part I and Part II). Part I has quantitatively discussed the fundamental limitations of the t-interval method for uncertainty estimation with a small number of measurements. This paper (Part II) reveals that the t-interval is an ‘exact’ answer to a wrong question; it is actually misused in uncertainty estimation. This paper proposes a redefinition of uncertainty, based on the classical theory of errors and the theory of point estimation, and a modification of the conventional approach to estimating measurement uncertainty. It also presents an asymptotic procedure for estimating the z-interval. The proposed modification is to replace the t-based uncertainty with an uncertainty estimator (mean- or median-unbiased). The uncertainty estimator method is an approximate answer to the right question to uncertainty estimation. The modified approach provides realistic estimates of uncertainty, regardless of whether the population standard deviation is known or unknown, or if the sample size is small or large. As an application example of the modified approach, this paper presents a resolution to the Du-Yang paradox (i.e. Paradox 2), one of the three paradoxes caused by the misuse of the t-interval in uncertainty estimation.

  3. Estimation of incidences of infectious diseases based on antibody measurements

    DEFF Research Database (Denmark)

    Simonsen, J; Mølbak, K; Falkenhorst, G

    2009-01-01

    bacterial infections. This study presents a Bayesian approach for obtaining incidence estimates by use of measurements of serum antibodies against Salmonella from a cross-sectional study. By comparing these measurements with antibody measurements from a follow-up study of infected individuals...

  4. Smile line assessment comparing quantitative measurement and visual estimation.

    Science.gov (United States)

    Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie

    2011-02-01

    Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
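
    The agreement statistics reported above are kappa values. A minimal implementation of (unweighted) Cohen's kappa for two raters is sketched below; the category labels in the example are invented.

      from collections import Counter

      def cohen_kappa(rater_a, rater_b):
          """Unweighted Cohen's kappa for two raters scoring the same items."""
          n = len(rater_a)
          observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
          pa, pb = Counter(rater_a), Counter(rater_b)
          expected = sum((pa[c] / n) * (pb[c] / n) for c in set(pa) | set(pb))
          return (observed - expected) / (1.0 - expected)

      a = ["low", "average", "high", "average", "high", "low"]
      b = ["low", "average", "high", "high",    "high", "low"]
      print(round(cohen_kappa(a, b), 2))     # 0.75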

  5. Indirect Estimation of Selected Measures of Fertility and Marital ...

    African Journals Online (AJOL)

    DLHS6

    2018-01-09

    Uses marital status distribution data of India, especially from the 2011 census, in deriving fertility measures indirectly, including indirect estimates of the total fertility rate.

  6. Estimates of Leaf Relative Water Content from Optical Polarization Measurements

    Science.gov (United States)

    Dahlgren, R. P.; Vanderbilt, V. C.; Daughtry, C. S. T.

    2017-12-01

    Remotely sensing the water status of plant canopies remains a long term goal of remote sensing research. Existing approaches to remotely sensing canopy water status, such as the Crop Water Stress Index (CWSI) and the Equivalent Water Thickness (EWT), have limitations. The CWSI, based upon remotely sensing canopy radiant temperature in the thermal infrared spectral region, does not work well in humid regions, requires estimates of the vapor pressure deficit near the canopy during the remote sensing over-flight and, once stomata close, provides little information regarding the canopy water status. The EWT is based upon the physics of water-light interaction in the 900-2000nm spectral region, not plant physiology. Our goal, development of a remote sensing technique for estimating plant water status based upon measurements in the VIS/NIR spectral region, would potentially provide remote sensing access to plant dehydration physiology - to the cellular photochemistry and structural changes associated with water deficits in leaves. In this research, we used optical, crossed polarization filters to measure the VIS/NIR light reflected from the leaf interior, R, as well as the leaf transmittance, T, for 78 corn (Zea mays) and soybean (Glycine max) leaves having relative water contents (RWC) between 0.60 and 0.98. Our results show that as RWC decreases R increases while T decreases. Our results tie R and T changes in the VIS/NIR to leaf physiological changes - linking the light scattered out of the drying leaf interior to its relative water content and to changes in leaf cellular structure and pigments. Our results suggest remotely sensing the physiological water status of a single leaf - and perhaps of a plant canopy - might be possible in the future.

  7. Physical Activity in Vietnam: Estimates and Measurement Issues.

    Science.gov (United States)

    Bui, Tan Van; Blizzard, Christopher Leigh; Luong, Khue Ngoc; Truong, Ngoc Le Van; Tran, Bao Quoc; Otahal, Petr; Srikanth, Velandai; Nelson, Mark Raymond; Au, Thuy Bich; Ha, Son Thai; Phung, Hai Ngoc; Tran, Mai Hoang; Callisaya, Michele; Gall, Seana

    2015-01-01

    Our aims were to provide the first national estimates of physical activity (PA) for Vietnam, and to investigate issues affecting their accuracy. Measurements were made using the Global Physical Activity Questionnaire (GPAQ) on a nationally-representative sample of 14706 participants (46.5% males, response 64.1%) aged 25-64 years selected by multi-stage stratified cluster sampling. Approximately 20% of Vietnamese people had no measurable PA during a typical week, but 72.9% (men) and 69.1% (women) met WHO recommendations for PA by adults for their age. On average, 52.0 (men) and 28.0 (women) Metabolic Equivalent Task (MET)-hours/week (largely from work activities) were reported. Work and total PA were higher in rural areas and varied by season. Less than 2% of respondents provided incomplete information, but an additional one-in-six provided unrealistically high values of PA. Those responsible for reporting errors included persons from rural areas and all those with unstable work patterns. Box-Cox transformation (with an appropriate constant added) was the most successful method of reducing the influence of large values, but energy-scaled values were most strongly associated with pathophysiological outcomes. Around seven-in-ten Vietnamese people aged 25-64 years met WHO recommendations for total PA, which was mainly from work activities and higher in rural areas. Nearly all respondents were able to report their activity using the GPAQ, but with some exaggerated values and seasonal variation in reporting. Data transformation provided plausible summary values, but energy-scaling fared best in association analyses.

  8. Physical Activity in Vietnam: Estimates and Measurement Issues.

    Directory of Open Access Journals (Sweden)

    Tan Van Bui

    Our aims were to provide the first national estimates of physical activity (PA) for Vietnam, and to investigate issues affecting their accuracy. Measurements were made using the Global Physical Activity Questionnaire (GPAQ) on a nationally-representative sample of 14706 participants (46.5% males, response 64.1%) aged 25-64 years selected by multi-stage stratified cluster sampling. Approximately 20% of Vietnamese people had no measurable PA during a typical week, but 72.9% (men) and 69.1% (women) met WHO recommendations for PA by adults for their age. On average, 52.0 (men) and 28.0 (women) Metabolic Equivalent Task (MET)-hours/week (largely from work activities) were reported. Work and total PA were higher in rural areas and varied by season. Less than 2% of respondents provided incomplete information, but an additional one-in-six provided unrealistically high values of PA. Those responsible for reporting errors included persons from rural areas and all those with unstable work patterns. Box-Cox transformation (with an appropriate constant added) was the most successful method of reducing the influence of large values, but energy-scaled values were most strongly associated with pathophysiological outcomes. Around seven-in-ten Vietnamese people aged 25-64 years met WHO recommendations for total PA, which was mainly from work activities and higher in rural areas. Nearly all respondents were able to report their activity using the GPAQ, but with some exaggerated values and seasonal variation in reporting. Data transformation provided plausible summary values, but energy-scaling fared best in association analyses.

  9. Optimal Measurements for Simultaneous Quantum Estimation of Multiple Phases.

    Science.gov (United States)

    Pezzè, Luca; Ciampini, Mario A; Spagnolo, Nicolò; Humphreys, Peter C; Datta, Animesh; Walmsley, Ian A; Barbieri, Marco; Sciarrino, Fabio; Smerzi, Augusto

    2017-09-29

    A quantum theory of multiphase estimation is crucial for quantum-enhanced sensing and imaging and may link quantum metrology to more complex quantum computation and communication protocols. In this Letter, we tackle one of the key difficulties of multiphase estimation: obtaining a measurement which saturates the fundamental sensitivity bounds. We derive necessary and sufficient conditions for projective measurements acting on pure states to saturate the ultimate theoretical bound on precision given by the quantum Fisher information matrix. We apply our theory to the specific example of interferometric phase estimation using photon number measurements, a convenient choice in the laboratory. Our results thus introduce concepts and methods relevant to the future theoretical and experimental development of multiparameter estimation.

  10. Optimal Measurements for Simultaneous Quantum Estimation of Multiple Phases

    Science.gov (United States)

    Pezzè, Luca; Ciampini, Mario A.; Spagnolo, Nicolò; Humphreys, Peter C.; Datta, Animesh; Walmsley, Ian A.; Barbieri, Marco; Sciarrino, Fabio; Smerzi, Augusto

    2017-09-01

    A quantum theory of multiphase estimation is crucial for quantum-enhanced sensing and imaging and may link quantum metrology to more complex quantum computation and communication protocols. In this Letter, we tackle one of the key difficulties of multiphase estimation: obtaining a measurement which saturates the fundamental sensitivity bounds. We derive necessary and sufficient conditions for projective measurements acting on pure states to saturate the ultimate theoretical bound on precision given by the quantum Fisher information matrix. We apply our theory to the specific example of interferometric phase estimation using photon number measurements, a convenient choice in the laboratory. Our results thus introduce concepts and methods relevant to the future theoretical and experimental development of multiparameter estimation.

  11. The provider perception inventory: psychometrics of a scale designed to measure provider stigma about HIV, substance abuse, and MSM behavior.

    Science.gov (United States)

    Windsor, Liliane C; Benoit, Ellen; Ream, Geoffrey L; Forenza, Brad

    2013-01-01

    Nongay identified men who have sex with men and women (NGI MSMW) and who use alcohol and other drugs are a vulnerable, understudied, and undertreated population. Little is known about the stigma faced by this population or about the way that health service providers view and serve these stigmatized clients. The provider perception inventory (PPI) is a 39-item scale that measures health services providers' stigma about HIV/AIDS, substance use, and MSM behavior. The PPI is unique in that it was developed to include service provider stigma targeted at NGI MSMW individuals. PPI was developed through a mixed methods approach. Items were developed based on existing measures and findings from focus groups with 18 HIV and substance abuse treatment providers. Exploratory factor analysis using data from 212 health service providers yielded a two dimensional scale: (1) individual attitudes (19 items) and (2) agency environment (11 items). Structural equation modeling analysis supported the scale's predictive validity (N=190 sufficiently complete cases). Overall findings indicate initial support for the psychometrics of the PPI as a measure of service provider stigma pertaining to the intersection of HIV/AIDS, substance use, and MSM behavior. Limitations and implications to future research are discussed.

  12. The estimation of the measurement results with using statistical methods

    International Nuclear Information System (INIS)

    Velychko, O. (State Enterprise Ukrmetrteststandard, 4, Metrologichna Str., 03680, Kyiv, Ukraine); Gordiyenko, T. (State Scientific Institution UkrNDIspirtbioprod, 3, Babushkina Lane, 03190, Kyiv, Ukraine)

    2015-01-01

    A number of international standards and guides describe various statistical methods that are applied for the management, control and improvement of processes, with the purpose of analysing technical measurement results. An analysis of these international standards and guides on statistical methods for the estimation of measurement results, and of their recommendations for application in laboratories, is described. To carry out this analysis, cause-and-effect (Ishikawa) diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.

  13. The estimation of the measurement results with using statistical methods

    Science.gov (United States)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

    A number of international standards and guides describe various statistical methods that are applied for the management, control and improvement of processes, with the purpose of analysing technical measurement results. An analysis of these international standards and guides on statistical methods for the estimation of measurement results, and of their recommendations for application in laboratories, is described. To carry out this analysis, cause-and-effect (Ishikawa) diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.

  14. A New Heteroskedastic Consistent Covariance Matrix Estimator using Deviance Measure

    Directory of Open Access Journals (Sweden)

    Nuzhat Aftab

    2016-06-01

    In this article we propose a new heteroskedastic consistent covariance matrix estimator, HC6, based on a deviance measure. We have studied the finite-sample behavior of the new test and compared it with other estimators of this kind (HC1, HC3 and HC4m), which are used in the presence of leverage observations. A simulation study is conducted to study the effect of various levels of heteroskedasticity on the size and power of the quasi-t test with HC estimators. Results show that the test statistic based on our new suggested estimator has a better asymptotic approximation and less size distortion than other estimators for small sample sizes when a high level of heteroskedasticity is present in the data.
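
    For orientation, the classical members of the HC family referenced above can be written compactly as sandwich estimators; the sketch below implements HC1 and HC3 (the proposed deviance-based HC6 is not reproduced here).

      import numpy as np

      def hc_cov(X, resid, kind="HC3"):
          """Heteroskedasticity-consistent covariance of OLS coefficients.
          X: (n, k) design matrix (with intercept); resid: OLS residuals."""
          n, k = X.shape
          XtX_inv = np.linalg.inv(X.T @ X)
          h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)     # leverage values h_ii
          if kind == "HC1":
              omega = resid**2 * n / (n - k)
          elif kind == "HC3":
              omega = resid**2 / (1.0 - h) ** 2
          else:
              raise ValueError("only HC1 and HC3 are sketched here")
          meat = X.T @ (X * omega[:, None])
          return XtX_inv @ meat @ XtX_inv

      # Quasi-t statistics would then be beta_hat / np.sqrt(np.diag(hc_cov(X, resid)))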

  15. Stature estimation using the knee height measurement amongst Brazilian elderly

    OpenAIRE

    Siqueira Fogal, Aline; Franceschini, Sylvia do Carmo Castro; Eloiza Priore, Silvia; Cotta, Rosângela Minardi M.; Queiroz Ribeiro, Andreia

    2015-01-01

    Introduction: Stature is an important variable in several indices of nutritional status that are applicable to elderly persons. However, stature is difficult or impossible to measure in the elderly because they are often unable to maintain the standing position. An alternative is to estimate height from knee height measurements. Aims: This study aimed to evaluate the accuracy of the formula proposed by Chumlea et al. (1985), based on the knee height of a Caucasian population, to estimate ...

  16. Fixed-flexion radiography of the knee provides reproducible joint space width measurements in osteoarthritis

    International Nuclear Information System (INIS)

    Kothari, Manish; Sieffert, Martine; Block, Jon E.; Peterfy, Charles G.; Guermazi, Ali; Ingersleben, Gabriele von; Miaux, Yves; Stevens, Randall

    2004-01-01

    The validity of a non-fluoroscopic fixed-flexion radiographic acquisition and analysis protocol for measurement of joint space width (JSW) in knee osteoarthritis is determined. A cross-sectional study of 165 patients with documented knee osteoarthritis participating in a multicenter, prospective study of chondroprotective agents was performed. All patients had posteroanterior, weight-bearing, fixed-flexion radiography with 10° caudal beam angulation. A specially designed frame (SynaFlexer) was used to standardize the positioning. Minimum medial and lateral JSW were measured manually and twice by an automated analysis system to determine inter-technique and intra-reader concordance and reliability. A random subsample of 30 patients had repeat knee radiographs 2 weeks apart to estimate short-term reproducibility using automated analysis. Concordance between manual and automated medial JSW measurements was high (ICC=0.90); lateral compartment measurements showed somewhat less concordance (ICC=0.72). There was excellent concordance between repeated automated JSW measurements performed 6 months apart for the medial (ICC=0.94) and lateral (ICC=0.86) compartments. Short-term reproducibility for the subsample of 30 cases with repeat acquisitions demonstrated an average SD of 0.14 mm for medial JSW (CV=4.3%) and 0.23 mm for lateral JSW (CV=4.0%). Fixed-flexion radiography of the knee using a positioning device provides consistent, reliable and reproducible measurement of minimum JSW in knee osteoarthritis without the need for concurrent fluoroscopic guidance. (orig.)

  17. Impacts of Different Types of Measurements on Estimating Unsaturated Flow Parameters

    Science.gov (United States)

    Shi, L.

    2015-12-01

    This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
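
    The assimilation step at the heart of the method above is the ensemble Kalman filter update. A stochastic-EnKF sketch for parameter estimation is given below; `simulate` stands for a user-supplied forward model mapping a parameter vector to predicted measurements and is an assumed placeholder, as are the noise settings.

      import numpy as np

      def enkf_update(ensemble, observations, obs_error_sd, simulate):
          """Stochastic EnKF update of a parameter ensemble.
          ensemble: (n_ens, n_param); observations: (n_obs,);
          simulate: callable mapping one parameter vector to predicted obs."""
          n_ens = ensemble.shape[0]
          predictions = np.array([simulate(m) for m in ensemble])   # (n_ens, n_obs)
          A = ensemble - ensemble.mean(axis=0)
          Y = predictions - predictions.mean(axis=0)
          R = np.eye(len(observations)) * obs_error_sd**2
          C_my = A.T @ Y / (n_ens - 1)                  # parameter-observation covariance
          C_yy = Y.T @ Y / (n_ens - 1) + R              # observation covariance
          K = C_my @ np.linalg.inv(C_yy)                # Kalman gain
          perturbed = observations + np.random.normal(0.0, obs_error_sd,
                                                      size=predictions.shape)
          return ensemble + (perturbed - predictions) @ K.T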

  18. West African donkey's liveweight estimation using body measurements

    Directory of Open Access Journals (Sweden)

    Pierre Claver Nininahazwe

    2017-10-01

    Aim: The objective of this study was to determine a formula for estimating the liveweight of West African donkeys. Materials and Methods: Liveweight and a total of 6 body measurements were carried out on 1352 donkeys from Burkina Faso, Mali, Niger, and Senegal. The correlations between liveweight and body measurements were determined, and the body measurements most correlated with liveweight were used to establish regression lines. Results: The average weight of a West African donkey was 126.0±17.1 kg, with an average height at the withers of 99.5±3.67 cm; its body length was 104.4±6.53 cm, and its heart girth (HG) was 104.4±6.53 cm. After analyzing the various regression lines and correlations, it was found that the HG could best estimate the liveweight of West African donkeys by a simple linear regression method. Indeed, the liveweight (LW) showed a better correlation with the HG (R²=0.81). The following formulas (Equations 1 and 2) could be used to estimate the LW of West African donkeys. Equation 1: Estimated LW (kg) = 2.55 x HG (cm) - 153.49; Equation 2: Estimated LW (kg) = [HG (cm)]^2.68 / 2312.44. Conclusion: The above formulas could be used to manufacture a weighing tape to be utilized by veterinary clinicians and farmers to estimate a donkey's weight in view of medication and load adjustment.

  19. Study on Posture Estimation Using Delayed Measurements for Mobile Robots

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    When associating data from various sensors to estimate the posture of mobile robots, a crucial problem to be solved is that there may be some delayed measurements. Furthermore, the general multi-sensor data fusion algorithm is a Kalman filter. In order to handle the problem concerning delayed measurements, this paper investigates a Kalman filter modified to account for the delays. Based on the interpolating measurement, a fusion system is applied to estimate the posture of a mobile robot, which fuses the data from the encoder and a laser global positioning system using the extended Kalman filter algorithm. Finally, a posture estimation experiment on the mobile robot is given, whose results verify the feasibility and efficiency of the algorithm.
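
    One simple way to handle a measurement with a known delay, in the spirit of the approach described above, is to buffer past filter steps and replay them once the late measurement is slotted into its true time step. The sketch below does this for a generic linear Kalman filter; it is not the paper's interpolation-based formulation, and the matrices F, H, Q, R are assumed to be supplied by the user as NumPy arrays.

      import numpy as np

      class DelayTolerantKF:
          """Linear Kalman filter that stores (x, P, z) per step so a delayed
          measurement can be fused at its true time and the filter replayed."""

          def __init__(self, F, H, Q, R, x0, P0):
              self.F, self.H, self.Q, self.R = F, H, Q, R
              self.history = [(x0.copy(), P0.copy(), None)]     # step 0: no measurement

          def _step(self, x, P, z):
              x, P = self.F @ x, self.F @ P @ self.F.T + self.Q          # predict
              if z is not None:                                          # update
                  S = self.H @ P @ self.H.T + self.R
                  K = P @ self.H.T @ np.linalg.inv(S)
                  x = x + K @ (z - self.H @ x)
                  P = (np.eye(len(x)) - K @ self.H) @ P
              return x, P

          def advance(self, z=None):
              x, P, _ = self.history[-1]
              x, P = self._step(x, P, z)
              self.history.append((x, P, z))
              return x

          def fuse_delayed(self, z, lag):
              """Fuse a measurement that belongs `lag` steps in the past (assumed to
              be a step that had no measurement), then replay to the present."""
              idx = len(self.history) - 1 - lag
              x_prev, P_prev, _ = self.history[idx - 1]
              self.history[idx] = (*self._step(x_prev, P_prev, z), z)
              for k in range(idx + 1, len(self.history)):
                  x_prev, P_prev, _ = self.history[k - 1]
                  z_k = self.history[k][2]
                  self.history[k] = (*self._step(x_prev, P_prev, z_k), z_k)
              return self.history[-1][0]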

  20. Estimation of stature using lower limb measurements in Sudanese Arabs.

    Science.gov (United States)

    Ahmed, Altayeb Abdalla

    2013-07-01

    The estimation of stature from body parts is one of the most vital parts of personal identification in medico-legal autopsies, especially when mutilated and amputated limbs or body parts are found. The aim of this study was to assess the reliability and accuracy of using lower limb measurements for stature estimation. The stature, tibial length, bimalleolar breadth, foot length and foot breadth of 160 right-handed Sudanese Arab subjects, 80 men and 80 women (25-30 years old), were measured. The reliability of measurement acquisition was tested prior to the primary data collection. The data were analysed using basic univariate analysis and linear and multiple regression analyses. The results showed acceptable standards of measurement errors and reliability. Sex differences were significant for all of the measurements. There was a statistically significant positive correlation between lower-limb dimensions and stature in Sudanese Arabs. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  1. Triangular and Trapezoidal Fuzzy State Estimation with Uncertainty on Measurements

    Directory of Open Access Journals (Sweden)

    Mohammad Sadeghi Sarcheshmah

    2012-01-01

    In this paper, a new method for uncertainty analysis in fuzzy state estimation is proposed. The uncertainty is expressed in the measurements. Uncertainties in measurements are modelled with different fuzzy membership functions (triangular and trapezoidal). To find the fuzzy distribution of any state variable, the problem is formulated as a constrained linear programming (LP) optimization. The viability of the proposed method is verified against the results obtained from the weighted least squares (WLS) and the fuzzy state estimation (FSE) in the 6-bus system and in the IEEE 14- and 30-bus systems.

  2. Estimation of the contribution of private providers in tuberculosis case notification and treatment outcome in Pakistan.

    Science.gov (United States)

    Chughtai, A A; Qadeer, E; Khan, W; Hadi, H; Memon, I A

    2013-03-01

    To improve involvement of the private sector in the national tuberculosis (TB) programme in Pakistan various public-private mix projects were set up between 2004 and 2009. A retrospective analysis of data was made to study 6 different public-private mix models for TB control in Pakistan and estimate the contribution of the various private providers to TB case notification and treatment outcome. The number of TB cases notified through the private sector increased significantly from 77 cases in 2004 to 37,656 in 2009. Among the models, the nongovernmental organization model made the greatest contribution to case notification (58.3%), followed by the hospital-based model (18.9%). Treatment success was highest for the district-led model (94.1%) and lowest for the hospital-based model (74.2%). The private sector made an important contribution to the national data through the various public-private mix projects. Issues of sustainability and the lack of treatment supporters are discussed as reasons for lack of success of some projects.

  3. Global Precipitation Measurement (GPM) Core Observatory Falling Snow Estimates

    Science.gov (United States)

    Skofronick Jackson, G.; Kulie, M.; Milani, L.; Munchak, S. J.; Wood, N.; Levizzani, V.

    2017-12-01

    Retrievals of falling snow from space represent an important data set for understanding and linking the Earth's atmospheric, hydrological, and energy cycles. Estimates of falling snow must be captured to obtain the true global precipitation water cycle, snowfall accumulations are required for hydrological studies, and without knowledge of the frozen particles in clouds one cannot adequately understand the energy and radiation budgets. This work focuses on comparing the first stable falling snow retrieval products (released May 2017) for the Global Precipitation Measurement (GPM) Core Observatory (GPM-CO), which was launched February 2014, and carries both an active dual frequency (Ku- and Ka-band) precipitation radar (DPR) and a passive microwave radiometer (GPM Microwave Imager-GMI). Five separate GPM-CO falling snow retrieval algorithm products are analyzed including those from DPR Matched (Ka+Ku) Scan, DPR Normal Scan (Ku), DPR High Sensitivity Scan (Ka), combined DPR+GMI, and GMI. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new, the different on-orbit instruments don't capture all snow rates equally, and retrieval algorithms differ. Thus a detailed comparison among the GPM-CO products elucidates advantages and disadvantages of the retrievals. GPM and CloudSat global snowfall evaluation exercises are natural investigative pathways to explore, but caution must be undertaken when analyzing these datasets for comparative purposes. This work includes outlining the challenges associated with comparing GPM-CO to CloudSat satellite snow estimates due to the different sampling, algorithms, and instrument capabilities. We will highlight some factors and assumptions that can be altered or statistically normalized and applied in an effort to make comparisons between GPM and CloudSat global satellite falling snow products as equitable as possible.

  4. The estimation of differential counting measurements of positive quantities with relatively large statistical errors

    International Nuclear Information System (INIS)

    Vincent, C.H.

    1982-01-01

    Bayes' principle is applied to the differential counting measurement of a positive quantity in which the statistical errors are not necessarily small in relation to the true value of the quantity. The methods of estimation derived are found to give consistent results and to avoid the anomalous negative estimates sometimes obtained by conventional methods. One of the methods given provides a simple means of deriving the required estimates from conventionally presented results and appears to have wide potential applications. Both methods provide the actual posterior probability distribution of the quantity to be measured. A particularly important potential application is the correction of counts on low radioactivity samples for background. (orig.)
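
    A minimal numerical sketch of the Bayesian idea is given below: with flat priors restricted to non-negative rates, the posterior for the source (net) counting rate cannot go negative even when the gross count falls below background. The grid-based marginalization is purely illustrative and is not the closed-form estimators derived in the paper.

      import numpy as np
      from scipy import stats

      def posterior_source_rate(n_gross, t_gross, n_bkg, t_bkg, n_grid=400):
          """Posterior p(s | data) for a non-negative source rate s (counts/s),
          with the background rate b marginalized numerically."""
          s_grid = np.linspace(0.0, 5.0 * (n_gross + 1) / t_gross, n_grid)
          b_grid = np.linspace(1e-9, 5.0 * (n_bkg + 1) / t_bkg, n_grid)
          S, B = np.meshgrid(s_grid, b_grid, indexing="ij")
          loglike = (stats.poisson.logpmf(n_gross, (S + B) * t_gross)
                     + stats.poisson.logpmf(n_bkg, B * t_bkg))
          joint = np.exp(loglike - loglike.max())
          post = joint.sum(axis=1)                       # marginalize over background
          ds = s_grid[1] - s_grid[0]
          post /= post.sum() * ds                        # normalize on the s grid
          return s_grid, post

      # Example: 8 gross counts in 100 s against 12 background counts in 100 s
      s, p = posterior_source_rate(8, 100.0, 12, 100.0)
      print("posterior mean source rate:", (s * p).sum() * (s[1] - s[0]))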

  5. Estimation of the measurement error of eccentrically installed orifice plates

    Energy Technology Data Exchange (ETDEWEB)

    Barton, Neil; Hodgkinson, Edwin; Reader-Harris, Michael

    2005-07-01

    The presentation discusses methods for simulation and estimation of flow measurement errors. The main conclusions are: Computational Fluid Dynamics (CFD) simulation methods and published test measurements have been used to estimate the error of a metering system over a period when its orifice plates were eccentric and when leaking O-rings allowed some gas to bypass the meter. It was found that plate eccentricity effects would result in errors of between -2% and -3% for individual meters. Validation against test data suggests that these estimates of error should be within 1% of the actual error, but it is unclear whether the simulations over-estimate or under-estimate the error. Simulations were also run to assess how leakage at the periphery affects the metering error. Various alternative leakage scenarios were modelled and it was found that the leakage rate has an effect on the error, but that the leakage distribution does not. Correction factors, based on the CFD results, were then used to predict the system's mis-measurement over a three-year period (tk)

  6. Load estimation from planar PIV measurement in vortex dominated flows

    Science.gov (United States)

    McClure, Jeffrey; Yarusevych, Serhiy

    2017-11-01

    Control volume-based loading estimates are employed on experimental and synthetic numerical planar Particle Image Velocimetry (PIV) data of a stationary cylinder and a cylinder undergoing one degree-of-freedom (1DOF) Vortex Induced Vibration (VIV). The results reveal the necessity of including out of plane terms, identified from a general formulation of the control volume momentum balance, when evaluating loads from planar measurements in three-dimensional flows. Reynolds stresses from out of plane fluctuations are shown to be significant for both instantaneous and mean force estimates when the control volume encompasses vortex dominated regions. For planar measurement, invoking a divergence-free assumption allows accurate estimation of half the identified terms. Towards evaluating the fidelity of PIV-based loading estimates for obtaining the forcing function unobtrusively in VIV experiments, the accuracy of the control volume-based loading methodology is evaluated using the numerical data with synthetically generated experimental PIV error, and a comparison is made between experimental PIV-based estimates and simultaneous force balance measurements.

  7. Ocean subsurface particulate backscatter estimation from CALIPSO spaceborne lidar measurements

    Science.gov (United States)

    Chen, Peng; Pan, Delu; Wang, Tianyu; Mao, Zhihua

    2017-10-01

    A method for ocean subsurface particulate backscatter estimation from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite was demonstrated. The effects of the CALIOP receiver's transient response on the attenuated backscatter profile were first removed. The two-way transmittance of the overlying atmosphere was then estimated as the ratio of the measured ocean surface attenuated backscatter to the theoretical value computed from wind driven wave slope variance. Finally, particulate backscatter was estimated from the depolarization ratio as the ratio of the column-integrated cross-polarized and co-polarized channels. Statistical results show that the derived particulate backscatter by the method based on CALIOP data agree reasonably well with chlorophyll-a concentration using MODIS data. It indicates a potential use of space-borne lidar to estimate global primary productivity and particulate carbon stock.

  8. Estimation of waves and ship responses using onboard measurements

    DEFF Research Database (Denmark)

    Montazeri, Najmeh

    This thesis focuses on estimation of waves and ship responses using ship-board measurements. This is useful for development of operational safety and performance efficiency in connection with the broader concept of onboard decision support systems. Estimation of sea state is studied using a set of measured ship responses, a parametric description of directional wave spectra (a generalised JONSWAP model) and the transfer functions of the ship responses. The difference between the spectral moments of the measured ship responses and the corresponding theoretically calculated moments formulates a cost function ... information. The model is tested on simulated data based on known unimodal and bimodal wave scenarios. The wave parameters in the output are then compared with the true wave parameters. In addition to the numerical experiments, two sets of full-scale measurements from container ships are analysed. Herein

  9. Psychometrics of an original measure of barriers to providing family planning information: Implications for social service providers.

    Science.gov (United States)

    Bell, Melissa M; Newhill, Christina E

    2017-07-01

    Social service professionals can face challenges in the course of providing family planning information to their clients. This article reports findings from a study that developed an original 27-item measure, the Reproductive Counseling Obstacle Scale (RCOS) designed to measure such obstacles based conceptually on Bandura's social cognitive theory (1986). We examine the reliability and factor structure of the RCOS using a sample of licensed social workers (N = 197). A 20-item revised version of the RCOS was derived using principal component factor analysis. Results indicate that barriers to discussing family planning, as measured by the RCOS, appear to be best represented by a two-factor solution, reflecting self-efficacy/interest and perceived professional obligation/moral concerns. Implications for practice and future research are discussed.

  10. Center of mass movement estimation using an ambulatory measurement system

    NARCIS (Netherlands)

    Schepers, H. Martin; Veltink, Petrus H.

    2007-01-01

    Center of Mass (CoM) displacement, an important variable to characterize human walking, was estimated in this study using an ambulatory measurement system. The ambulatory system was compared to an optical reference system. Root-mean-square differences between the magnitudes of the CoM appeared to be

  11. Measuring, calculating and estimating PEP's parasitic mode loss parameters

    International Nuclear Information System (INIS)

    Weaver, J.N.

    1981-01-01

    This note discusses various ways the parasitic mode losses from a bunched beam to a vacuum chamber can be measured, calculated or estimated. A listing of the parameter, k, for the various PEP ring components is included. A number of formulas for calculating multiple and single pass losses are discussed and evaluated for several cases. 25 refs., 1 fig., 1 tab

  12. Methane Emission Estimates from Landfills Obtained with Dynamic Plume Measurements

    International Nuclear Information System (INIS)

    Hensen, A.; Scharff, H.

    2001-01-01

    Methane emissions from 3 different landfills in the Netherlands were estimated using a mobile Tuneable Diode Laser (TDL) system. The methane concentration in the cross-section of the plume is measured downwind of the source on a transect perpendicular to the wind direction. A Gaussian plume model was used to simulate the concentration levels at the transect, and the emission from the source is calculated from the measured and modelled concentration levels. Calibration of the plume dispersion model is done using a tracer (N2O) that is released from the landfill and measured simultaneously with the TDL system. The emission estimates ranged from 3.6 to 16 m³ ha⁻¹ hr⁻¹ for the different sites. The emission levels were compared to emission estimates based on landfill gas production models. This comparison suggests oxidation rates of up to 50% in spring and negligible oxidation in November. At one of the three sites, measurements were performed in campaigns in 3 consecutive years. Comparison of the emission levels in the first and second year showed a reduction of the methane emission of about 50% due to the implementation of a gas extraction system. From the second to the third year, emissions increased by a factor of 4 due to new landfilling. Furthermore, measurements were performed in winter when oxidation efficiency was reduced. This paper describes the measurement technique used, and discusses the results of the experimental sessions that were performed
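    A minimal sketch of the two scaling steps described above (matching a unit-emission plume simulation to the measured transect, and the tracer cross-check); all function names and numbers are illustrative assumptions, not values from the campaigns.

    ```python
    import numpy as np

    def emission_from_plume(c_measured, c_modelled_unit_q):
        """Scale the source strength so the modelled plume transect matches the measured
        one: Q = (integrated measured excess) / (integrated modelled excess per unit Q)."""
        return float(np.sum(c_measured) / np.sum(c_modelled_unit_q))

    def emission_from_tracer(ch4_excess, tracer_excess, q_tracer):
        """Tracer-ratio check: CH4 emission from the known tracer release rate scaled by
        the ratio of plume-integrated excess mixing ratios (assumes identical dispersion
        of both gases and consistent units)."""
        return q_tracer * float(np.sum(ch4_excess) / np.sum(tracer_excess))
    ```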

  13. Demonstrating Heisenberg-limited unambiguous phase estimation without adaptive measurements

    International Nuclear Information System (INIS)

    Higgins, B L; Wiseman, H M; Pryde, G J; Berry, D W; Bartlett, S D; Mitchell, M W

    2009-01-01

    We derive, and experimentally demonstrate, an interferometric scheme for unambiguous phase estimation with precision scaling at the Heisenberg limit that does not require adaptive measurements. That is, with no prior knowledge of the phase, we can obtain an estimate of the phase with a standard deviation that is only a small constant factor larger than the minimum physically allowed value. Our scheme resolves the phase ambiguity that exists when multiple passes through a phase shift, or NOON states, are used to obtain improved phase resolution. Like a recently introduced adaptive technique (Higgins et al 2007 Nature 450 393), our experiment uses multiple applications of the phase shift on single photons. By not requiring adaptive measurements, but rather using a predetermined measurement sequence, the present scheme is both conceptually simpler and significantly easier to implement. Additionally, we demonstrate a simplified adaptive scheme that also surpasses the standard quantum limit for single passes.

  14. Estimating Wet Bulb Globe Temperature Using Standard Meteorological Measurements

    International Nuclear Information System (INIS)

    Hunter, C.H.

    1999-01-01

    The heat stress management program at the Department of Energy's Savannah River Site (SRS) requires implementation of protective controls on outdoor work based on observed values of wet bulb globe temperature (WBGT). To ensure continued compliance with heat stress program requirements, a computer algorithm was developed which calculates an estimate of WBGT using standard meteorological measurements. In addition, scripts were developed to generate a calculation every 15 minutes and post the results to an Intranet web site
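    The abstract does not give the algorithm itself; below is a minimal sketch of the conventional outdoor WBGT weighting (ISO 7243 form), with the natural wet-bulb and globe temperatures treated as inputs that, in practice, would themselves be estimated from the standard meteorological measurements.

    ```python
    def wbgt_outdoor(t_nwb, t_globe, t_dryb):
        """Outdoor wet bulb globe temperature (deg C) from natural wet-bulb, globe and
        dry-bulb temperatures, using the conventional 0.7/0.2/0.1 weighting."""
        return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_dryb

    # Illustrative values only
    print(wbgt_outdoor(t_nwb=24.0, t_globe=40.0, t_dryb=32.0))  # -> 28.0
    ```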

  15. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Measurement and Estimation of Riverbed Scour in a Mountain River

    Science.gov (United States)

    Song, L. A.; Chan, H. C.; Chen, B. A.

    2016-12-01

    Mountain rivers in Taiwan are steep, with rapid flows. After a structure is installed in a mountain river, scour usually occurs around it because of the high energy gradient. Excessive scouring has been reported as one of the main causes of failure of river structures. Flood-related scouring disasters can be reduced if the riverbed variation can be properly evaluated from the flow conditions. This study measures riverbed scour using an improved "float-out device". Scouring and hydrodynamic data were collected simultaneously in the Mei River, Nantou County, in central Taiwan. Semi-empirical models proposed by previous researchers were used to estimate the scour depths from the measured flow characteristics, and the differences between the measured and estimated scour depths were discussed. Attempts were then made to improve the estimates by developing a semi-empirical model that predicts riverbed scour from the local field data. The aim is to set up a warning system for river structure safety based on the flow conditions. Keywords: scour, model, float-out device

  17. Estimation of atomic interaction parameters by quantum measurements

    DEFF Research Database (Denmark)

    Kiilerich, Alexander Holm; Mølmer, Klaus

    Quantum systems, ranging from atomic systems to field modes and mechanical devices are useful precision probes for a variety of physical properties and phenomena. Measurements by which we extract information about the evolution of single quantum systems yield random results and cause a back actio...... strategies, we address the Fisher information and the Cramér-Rao sensitivity bound. We investigate monitoring by photon counting, homodyne detection and frequent projective measurements respectively, and exemplify by Rabi frequency estimation in a driven two-level system....

  18. Estimation of piping temperature fluctuations based on external strain measurements

    International Nuclear Information System (INIS)

    Morilhat, P.; Maye, J.P.

    1993-01-01

    Because it is difficult to carry out measurements at the inner side of nuclear reactor piping subjected to thermal transients, temperature and stress variations in the pipe walls are estimated by means of external thermocouples and strain gauges. This inverse problem is solved by spectral analysis. Since the wall harmonic transfer function (response to a harmonic load) is known, the inner-side signal is obtained by convolution of the inverse transfer function of the system with the strain measurement; this enables detection of internal temperature fluctuations in a frequency range beyond the scope of the thermocouples. (authors). 5 figs., 3 refs

  19. Estimating the measurement uncertainty in forensic blood alcohol analysis.

    Science.gov (United States)

    Gullberg, Rod G

    2012-04-01

    For many reasons, forensic toxicologists are being asked to determine and report their measurement uncertainty in blood alcohol analysis. While understood conceptually, the elements and computations involved in determining measurement uncertainty are generally foreign to most forensic toxicologists. Several established and well-documented methods are available to determine and report the uncertainty in blood alcohol measurement. A straightforward bottom-up approach is presented that includes: (1) specifying the measurand, (2) identifying the major components of uncertainty, (3) quantifying the components, (4) statistically combining the components and (5) reporting the results. A hypothetical example is presented that employs reasonable estimates for forensic blood alcohol analysis assuming headspace gas chromatography. These computations are easily employed in spreadsheet programs as well. Determining and reporting measurement uncertainty is an important element in establishing fitness-for-purpose. Indeed, the demand for such computations and information from the forensic toxicologist will continue to increase.
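    As an illustration of steps (3)-(5), a minimal sketch of combining independent standard-uncertainty components in quadrature and expanding with a coverage factor is given below; the component values are placeholders, not the values from the article.

    ```python
    import math

    def combined_standard_uncertainty(components):
        """Root-sum-of-squares combination of independent standard uncertainties."""
        return math.sqrt(sum(u ** 2 for u in components))

    # Placeholder components (g/100 mL): calibrator, method repeatability, bias correction
    u_c = combined_standard_uncertainty([0.0015, 0.0020, 0.0010])
    U = 2.0 * u_c  # expanded uncertainty, coverage factor k = 2 (approx. 95 % level)
    print(f"result: 0.085 +/- {U:.4f} g/100 mL (k = 2)")
    ```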

  20. Digital photography provides a fast, reliable, and noninvasive method to estimate anthocyanin pigment concentration in reproductive and vegetative plant tissues.

    Science.gov (United States)

    Del Valle, José C; Gallardo-López, Antonio; Buide, Mª Luisa; Whittall, Justen B; Narbona, Eduardo

    2018-03-01

    Anthocyanin pigments have become a model trait for evolutionary ecology as they often provide adaptive benefits for plants. Anthocyanins have been traditionally quantified biochemically or more recently using spectral reflectance. However, both methods require destructive sampling and can be labor intensive and challenging with small samples. Recent advances in digital photography and image processing make it the method of choice for measuring color in the wild. Here, we use digital images as a quick, noninvasive method to estimate relative anthocyanin concentrations in species exhibiting color variation. Using a consumer-level digital camera and a free image processing toolbox, we extracted RGB values from digital images to generate color indices. We tested petals, stems, pedicels, and calyces of six species, which contain different types of anthocyanin pigments and exhibit different pigmentation patterns. Color indices were assessed by their correlation to biochemically determined anthocyanin concentrations. For comparison, we also calculated color indices from spectral reflectance and tested the correlation with anthocyanin concentration. Indices perform differently depending on the nature of the color variation. For both digital images and spectral reflectance, the most accurate estimates of anthocyanin concentration emerge from anthocyanin content-chroma ratio, anthocyanin content-chroma basic, and strength of green indices. Color indices derived from both digital images and spectral reflectance strongly correlate with biochemically determined anthocyanin concentration; however, the estimates from digital images performed better than spectral reflectance in terms of r² and normalized root-mean-square error. This was particularly noticeable in a species with striped petals, but in the case of striped calyces, both methods showed a comparable relationship with anthocyanin concentration. Using digital images brings new opportunities to accurately quantify the

  1. Discharge estimation combining flow routing and occasional measurements of velocity

    Directory of Open Access Journals (Sweden)

    G. Corato

    2011-09-01

    Full Text Available A new procedure is proposed for estimating river discharge hydrographs during flood events, using only water level data at a single gauged site, together with 1-D shallow water modelling and occasional maximum surface flow velocity measurements. A one-dimensional diffusive hydraulic model is used for routing the recorded stage hydrograph in the channel reach, considering a zero-diffusion downstream boundary condition. Based on synthetic tests concerning a broad prismatic channel, the "suitable" reach length is chosen in order to minimize the effect of the approximated downstream boundary condition on the estimation of the upstream discharge hydrograph. The Manning's roughness coefficient is calibrated by using occasional instantaneous surface velocity measurements during the rising limb of the flood, which are used to estimate instantaneous discharges by adopting, in the flow area, a two-dimensional velocity distribution model. Several historical events recorded at three gauged sites along the upper Tiber River, where reliable rating curves are available, have been used for the validation. The outcomes of the analysis can be summarized as follows: (1) the criterion adopted for selecting the "suitable" channel length based on synthetic test studies has proved to be reliable for field applications at the three gauged sites. Indeed, for each event a downstream reach length of no more than 500 m is found to be sufficient for good performance of the hydraulic model, thereby enabling a drastic reduction of river cross-section data; (2) the procedure for Manning's roughness coefficient calibration allowed for high performance in discharge estimation considering only the observed water levels and occasional measurements of maximum surface flow velocity during the rising limb of the flood. Indeed, errors in the peak discharge magnitude, for the optimal calibration, were found not to exceed 5% for all events observed at the three investigated gauged sections, while the
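    A hedged sketch of the final flux step implied above: converting an occasional maximum surface velocity into a mean velocity through an assumed velocity-distribution coefficient and multiplying by the flow area. The coefficient value is an illustrative assumption; it stands in for the two-dimensional velocity distribution model actually used in the paper.

    ```python
    def discharge_from_surface_velocity(u_surface_max, area, phi=0.85):
        """Instantaneous discharge estimate: mean velocity taken as phi times the maximum
        surface velocity (phi is an assumed velocity-distribution coefficient), multiplied
        by the measured cross-sectional flow area."""
        return phi * u_surface_max * area

    # Illustrative numbers only (m/s, m^2) -> discharge in m^3/s
    print(discharge_from_surface_velocity(u_surface_max=2.4, area=150.0))
    ```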

  2. Measuring Physical Inactivity: Do Current Measures Provide an Accurate View of “Sedentary” Video Game Time?

    Directory of Open Access Journals (Sweden)

    Simon Fullerton

    2014-01-01

    Full Text Available Background. Measures of screen time are often used to assess sedentary behaviour. Participation in activity-based video games (exergames) can contribute to estimates of screen time, as current practices of measuring it do not consider the growing evidence that playing exergames can provide light to moderate levels of physical activity. This study aimed to determine what proportion of time spent playing video games was actually spent playing exergames. Methods. Data were collected via a cross-sectional telephone survey in South Australia. Participants aged 18 years and above (n = 2026) were asked about their video game habits, as well as demographic and socioeconomic factors. In cases where children were in the household, the video game habits of a randomly selected child were also questioned. Results. Overall, 31.3% of adults and 79.9% of children spend at least some time playing video games. Of these, 24.1% of adults and 42.1% of children play exergames, with these types of games accounting for a third of all time that adults spend playing video games and nearly 20% of children’s video game time. Conclusions. A substantial proportion of time that would usually be classified as “sedentary” may actually be spent participating in light to moderate physical activity.

  3. Greenhouse gases regional fluxes estimated from atmospheric measurements

    International Nuclear Information System (INIS)

    Messager, C.

    2007-07-01

    build up a new system to continuously measure CO2 (or CO), CH4, N2O and SF6 mixing ratios. It is based on a commercial gas chromatograph (Agilent 6890N) which has been modified to reach better precision. Reproducibility computed with a target gas on a 24-hour time step gives: 0.06 ppm for CO2, 1.4 ppb for CO, 0.7 ppb for CH4, 0.2 ppb for N2O and 0.05 ppt for SF6. The instrument runs fully automated; an air sample analysis takes about 5 minutes. In July 2006, I installed instrumentation on a tall telecommunication tower (200 m) situated near the Orleans forest in Trainou, to continuously monitor greenhouse gases (CO2, CH4, N2O, SF6), atmospheric tracers (CO, radon-222) and meteorological parameters. Intake lines were installed at 3 levels (50, 100 and 180 m) and allow us to sample air masses along the vertical. Continuous measurement started in January 2007. I used Mace Head (Ireland) and Gif-sur-Yvette continuous measurements to estimate major greenhouse gas emission fluxes at the regional scale. To make the link between atmospheric measurements and surface fluxes, we need to quantify dilution due to atmospheric transport. I used radon-222 as a tracer (radon tracer method) and planetary boundary layer height estimates from the ECMWF model (boundary layer budget method) to parameterize atmospheric transport. In both cases I compared results to available emission inventories. (author)
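    A minimal sketch of the radon-tracer scaling mentioned above; the radon surface flux and the enhancement values are illustrative assumptions, not data from the thesis.

    ```python
    def radon_tracer_flux(delta_gas, delta_radon, radon_surface_flux):
        """Radon-tracer method: surface flux of a gas estimated as the (assumed known)
        radon-222 surface flux scaled by the ratio of the observed enhancements of the
        gas and of radon over the same accumulation period (consistent units assumed)."""
        return radon_surface_flux * delta_gas / delta_radon

    # Illustrative numbers only: gas and radon night-time enhancements and a radon flux
    print(radon_tracer_flux(delta_gas=50.0, delta_radon=2.0, radon_surface_flux=1.0))
    ```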

  4. Influence of measurement errors and estimated parameters on combustion diagnosis

    International Nuclear Information System (INIS)

    Payri, F.; Molina, S.; Martin, J.; Armas, O.

    2006-01-01

    Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a combustion diagnosis model for direct injection diesel engines has been studied. This procedure made it possible to establish the relative importance of these parameters and to set limits to the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors

  5. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    Energy Technology Data Exchange (ETDEWEB)

    Bruschewski, Martin; Schiffer, Heinz-Peter [Technische Universitaet Darmstadt, Institute of Gas Turbines and Aerospace Propulsion, Darmstadt (Germany); Freudenhammer, Daniel [Technische Universitaet Darmstadt, Institute of Fluid Mechanics and Aerodynamics, Center of Smart Interfaces, Darmstadt (Germany); Buchenberg, Waltraud B. [University Medical Center Freiburg, Medical Physics, Department of Radiology, Freiburg (Germany); Grundmann, Sven [University of Rostock, Institute of Fluid Mechanics, Rostock (Germany)

    2016-05-15

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75% is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented. (orig.)

  6. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    Science.gov (United States)

    Bruschewski, Martin; Freudenhammer, Daniel; Buchenberg, Waltraud B.; Schiffer, Heinz-Peter; Grundmann, Sven

    2016-05-01

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75 % is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented.

  7. Real-time measurements and their effects on state estimation of distribution power system

    DEFF Research Database (Denmark)

    Han, Xue; You, Shi; Thordarson, Fannar

    2013-01-01

    This paper aims at analyzing the potential value of using different real-time metering and measuring instruments applied in the low voltage distribution networks for state estimation. An algorithm is presented to evaluate different combinations of metering data using a tailored state estimator. … It is followed by a case study based on the proposed algorithm. A real distribution grid feeder with different types of meters installed either in the cabinets or at the customer side is selected for simulation and analysis. Standard load templates are used to initiate the state estimation. The deviations between the estimated values (voltage and injected power) and the measurements are applied to evaluate the accuracy of the estimated grid states. Eventually, some suggestions are provided for the distribution grid operators on placing the real-time meters in the distribution grid.

  8. Resting Energy Expenditure in Anorexia Nervosa: Measured versus Estimated

    Directory of Open Access Journals (Sweden)

    Marwan El Ghoch

    2012-01-01

    Full Text Available Introduction. The aim of this study was to compare the resting energy expenditure (REE) measured by the Douglas bag method with the REE estimated with the FitMate method, the Harris-Benedict equation, and the Müller et al. equation for individuals with BMI < 18.5 kg/m², in a group of severely underweight patients with anorexia nervosa (AN). Methods. Fifteen subjects with AN participated in the study. The Douglas bag method and the FitMate method were used to measure REE, and dual-energy X-ray absorptiometry was used to assess body composition after one day of refeeding. Results. The FitMate method and the Müller et al. equation gave an accurate REE estimation, while the Harris-Benedict equation overestimated the REE when compared with the Douglas bag method. Conclusion. The data support the use of the FitMate method and the Müller et al. equation, but not the Harris-Benedict equation, to estimate REE in AN patients after short-term refeeding.
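    For reference, a sketch of the original Harris-Benedict equations mentioned above, with coefficients as they are commonly rounded; the FitMate and Müller et al. estimates are not reproduced here, and the example values are illustrative only.

    ```python
    def harris_benedict_ree(sex, weight_kg, height_cm, age_yr):
        """Resting energy expenditure (kcal/day) from the original Harris-Benedict equations."""
        if sex == "female":
            return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.775 * age_yr

    # Example: severely underweight adult woman (illustrative values)
    print(round(harris_benedict_ree("female", weight_kg=38.0, height_cm=165.0, age_yr=25.0)))
    ```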

  9. A torque-measuring micromotor provides operator independent measurements marking four different density areas in maxillae.

    Science.gov (United States)

    Di Stefano, Danilo Alessio; Arosio, Paolo; Piattelli, Adriano; Perrotti, Vittoria; Iezzi, Giovanna

    2015-02-01

    Bone density at the implant placement site is a key factor in obtaining the primary stability of the fixture, which, in turn, is a prognostic factor for osseointegration and long-term success of an implant-supported rehabilitation. Recently, an implant motor with a bone density measurement probe has been introduced. The aim of the present study was to test the objectivity of the bone densities registered by the implant motor regardless of the operator performing the measurements. A total of 3704 bone density measurements, performed by means of the implant motor, were registered by 39 operators at different implant sites during routine activity. Bone density measurements were grouped according to their distribution across the jaws. Specifically, four different areas were distinguished: a pre-antral (between teeth from first right maxillary premolar to first left maxillary premolar) and a sub-antral (more distally) zone in the maxilla, and an interforaminal (between and including teeth from first left mandibular premolar to first right mandibular premolar) and a retroforaminal (more distally) zone in the mandible. A statistical comparison was performed to check the inter-operator variability of the collected data. The device produced consistent and operator-independent bone density values at each tooth position, showing that the bone-density measurement is reliable. The implant motor proved to be a helpful tool for properly planning implant placement and loading, irrespective of the operator using it.

  10. Using 210Pb measurements to estimate sedimentation rates on river floodplains

    International Nuclear Information System (INIS)

    Du, P.; Walling, D.E.

    2012-01-01

    Growing interest in the dynamics of floodplain evolution and the important role of overbank sedimentation on river floodplains as a sediment sink has focused attention on the need to document contemporary and recent rates of overbank sedimentation. The potential for using the fallout radionuclides ¹³⁷Cs and excess ²¹⁰Pb to estimate medium-term (10–10² years) sedimentation rates on river floodplains has attracted increasing attention. Most studies that have successfully used fallout radionuclides for this purpose have focused on the use of ¹³⁷Cs. However, the use of excess ²¹⁰Pb potentially offers a number of advantages over ¹³⁷Cs measurements. Most existing investigations that have used excess ²¹⁰Pb measurements to document sedimentation rates have, however, focused on lakes rather than floodplains and the transfer of the approach, and particularly the models used to estimate the sedimentation rate, to river floodplains involves a number of uncertainties, which require further attention. This contribution reports the results of an investigation of overbank sedimentation rates on the floodplains of several UK rivers. Sediment cores were collected from seven floodplain sites representative of different environmental conditions and located in different areas of England and Wales. Measurements of excess ²¹⁰Pb and ¹³⁷Cs were made on these cores. The ²¹⁰Pb measurements have been used to estimate sedimentation rates and the results obtained by using different models have been compared. The ¹³⁷Cs measurements have also been used to provide an essentially independent time marker for validation purposes. In using the ²¹⁰Pb measurements, particular attention was directed to the problem of obtaining reliable estimates of the supported and excess or unsupported components of the total ²¹⁰Pb activity of sediment samples. Although there was a reasonable degree of consistency between the estimates of sedimentation rate provided by the ¹³⁷Cs and excess ²¹⁰Pb
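    A minimal sketch of the simplest (constant-initial-concentration, CIC-type) estimate implied above: excess 210Pb decays exponentially with depth at a constant sedimentation rate, so a log-linear fit of unsupported activity against depth yields the rate. The CRS and other models compared in the study are not reproduced, and the profile below is synthetic.

    ```python
    import numpy as np

    LAMBDA_PB210 = 0.03114  # decay constant of 210Pb (1/yr), half-life ~22.3 yr

    def cic_sedimentation_rate(depth_cm, excess_pb210):
        """Fit ln(excess 210Pb) vs depth; slope = -lambda / r, so r = -lambda / slope (cm/yr)."""
        slope, _ = np.polyfit(depth_cm, np.log(excess_pb210), 1)
        return -LAMBDA_PB210 / slope

    # Synthetic profile corresponding to ~0.5 cm/yr (illustrative only)
    depth = np.array([2.0, 6.0, 10.0, 14.0, 18.0])
    excess = 120.0 * np.exp(-LAMBDA_PB210 * depth / 0.5)
    print(cic_sedimentation_rate(depth, excess))  # ~0.5
    ```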

  11. Training anesthesiology residents in providing anesthesia for awake craniotomy: learning curves and estimate of needed case load.

    Science.gov (United States)

    Bilotta, Federico; Titi, Luca; Lanni, Fabiana; Stazi, Elisabetta; Rosa, Giovanni

    2013-08-01

    To measure the learning curves of residents in anesthesiology in providing anesthesia for awake craniotomy, and to estimate the case load needed to achieve a "good-excellent" level of competence. Prospective study. Operating room of a university hospital. 7 volunteer residents in anesthesiology. Residents underwent a dedicated training program of clinical characteristics of anesthesia for awake craniotomy. The program was divided into three tasks: local anesthesia, sedation-analgesia, and intraoperative hemodynamic management. The learning curve for each resident for each task was recorded over 10 procedures. Quantitative assessment of the individual's ability was based on the resident's self-assessment score and the attending anesthesiologist's judgment, and rated by modified 12 mm Likert scale, reported ability score visual analog scale (VAS). This ability VAS score ranged from 1 to 12 (ie, very poor, mild, moderate, sufficient, good, excellent). The number of requests for advice also was recorded (ie, resident requests for practical help and theoretical notions to accomplish the procedures). Each task had a specific learning rate; the number of procedures necessary to achieve "good-excellent" ability with confidence, as determined by the recorded results, were 10 procedures for local anesthesia, 15 to 25 procedures for sedation-analgesia, and 20 to 30 procedures for intraoperative hemodynamic management. Awake craniotomy is an approach used increasingly in neuroanesthesia. A dedicated training program based on learning specific tasks and building confidence with essential features provides "good-excellent" ability. © 2013 Elsevier Inc. All rights reserved.

  12. Attitude Estimation of Skis in Ski Jumping Using Low-Cost Inertial Measurement Units

    Directory of Open Access Journals (Sweden)

    Xiang Fang

    2018-02-01

    Full Text Available This paper presents an approach to estimate the attitude of skis for an entire ski jump using wearable, MEMS-based, low-cost Inertial Measurement Units (IMUs). First of all, a kinematic attitude model based on rigid-body dynamics and a sensor error model considering bias and scale factor error are established. Then, an extended Rauch-Tung-Striebel (RTS) smoother is used to combine measurement data provided by both gyroscope and magnetometer to achieve an attitude estimation. Moreover, parameters for the bias and scale factor error in the sensor error model and the initial attitude are determined via a maximum-likelihood principle based parameter estimation algorithm. By implementing this approach, an attitude estimation of skis is achieved without further sensor calibration. Finally, results based on both the simulated reference data and the real experimental measurement data are presented, which proves the practicability and the validity of the proposed approach.

  13. Do wavelet filters provide more accurate estimates of reverberation times at low frequencies?

    DEFF Research Database (Denmark)

    Sobreira Seoane, Manuel A.; Pérez Cabo, David; Agerkvist, Finn T.

    2016-01-01

    It has been amply demonstrated in the literature that it is not possible to measure acoustic decays without significant errors for low BT values (narrow filters and/or low reverberation times). Recently, it has been shown how the main source of distortion in the time envelope of the acoustic decay ...

  14. An extended set-value observer for position estimation using single range measurements

    DEFF Research Database (Denmark)

    Marcal, Jose; Jouffroy, Jerome; Fossen, Thor I.

    The ability of estimating the position of an underwater vehicle from single range measurements is important in applications where one transducer marks an important geographical point, when there is a limitation in the size or cost of the vehicle, or when there is a failure in a system of transponders. The knowledge of the bearing of the vehicle and the range measurements from a single location can provide a solution which is sensitive to the trajectory that the vehicle is following, since there is no complete constraint on the position estimate with a single beacon. In this paper, the observability of the system is briefly discussed and an extended set-valued observer is presented, with some discussion about the effect of the measurement noise on the final solution. This observer estimates bounds on the errors assuming that the exogenous signals are bounded, providing a safe region …

  15. Application of a virtual coordinate measuring machine for measurement uncertainty estimation of aspherical lens parameters

    International Nuclear Information System (INIS)

    Küng, Alain; Meli, Felix; Nicolet, Anaïs; Thalmann, Rudolf

    2014-01-01

    Tactile ultra-precise coordinate measuring machines (CMMs) are very attractive for accurately measuring optical components with high slopes, such as aspheres. The METAS µ-CMM, which exhibits a single point measurement repeatability of a few nanometres, is routinely used for measurement services of microparts, including optical lenses. However, estimating the measurement uncertainty is very demanding. Because of the many combined influencing factors, an analytic determination of the uncertainty of parameters that are obtained by numerical fitting of the measured surface points is almost impossible. The application of numerical simulation (Monte Carlo methods) using a parametric fitting algorithm coupled with a virtual CMM based on a realistic model of the machine errors offers an ideal solution to this complex problem: to each measurement data point, a simulated measurement variation calculated from the numerical model of the METAS µ-CMM is added. Repeated several hundred times, these virtual measurements deliver the statistical data for calculating the probability density function, and thus the measurement uncertainty for each parameter. Additionally, the eventual cross-correlation between parameters can be analyzed. This method can be applied for the calibration and uncertainty estimation of any parameter of the equation representing a geometric element. In this article, we present the numerical simulation model of the METAS µ-CMM and the application of a Monte Carlo method for the uncertainty estimation of measured asphere parameters. (paper)

  16. RNM and CRITER projects: providing access to environmental radioactivity measurements during crisis and in peacetime

    Energy Technology Data Exchange (ETDEWEB)

    Leprieur, F.; Couvez, C.; Manificat, G. [Institut de radioprotection et de surete nucleaire (France)

    2014-07-01

    The multiplicity of actors and sources of information makes it difficult to centralize environmental radioactivity measurements and to provide access to them for experts and policy makers, as well as for the general public. In the event of a radiological accident, many additional measurements will also be carried out in the field by those involved in crisis management. To address this problem, two projects were launched by IRSN with the aim of developing tools to centralize information on environmental radioactivity in normal situations (RNM project: National network of radioactive measurements) and during radiological crises (CRITER project: Crisis and field). The RNM's mission is to contribute to the estimation of doses from ionizing radiation to which people are exposed and to inform the public. In order to achieve this goal, this network collects and makes available to the public the results of measurements of environmental radioactivity obtained in a normal situation by the French stakeholders. More than 18,000 measurements are transmitted each month by all producers to the RNM. After more than 4 years of operation, the database contains nearly 1,200,000 results. The opening in 2010 of the public web site (www.mesure-radioactivite.fr) was also a major step forward toward transparency and information. In the case of a radiological emergency, IRSN's mission is to centralize and process at the national level, in a database, all the results of measurements or analyses by all stakeholders throughout the crisis, in order to precisely determine the radiological situation of the environment, before, during and after the event. The CRITER project therefore involves the collection of all possible data from all potential sources, their transmission and organization, and the publication of the measurements in crisis or post-accident situations. The emergency nature of the situation requires transmission of data in near real time, facilitated by the development of automatic sensors. For

  17. Accuracy of the visual estimation method as a predictor of food intake in Alzheimer's patients provided with different types of food.

    Science.gov (United States)

    Amano, Nobuko; Nakamura, Tomiyo

    2018-02-01

    The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake through estimation of plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under the routine conditions of a hospital setting. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease. The patients were provided different therapeutic diets presented in various food types. This study was performed between February and April 2013, and 82 patients with Alzheimer's disease were included. Plate waste was evaluated for the 3 main daily meals, for a total of 21 days (7 consecutive days during each of the 3 months), yielding a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between both methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered for analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement consisted of 3.8% underestimation and 17.8% overestimation. Cronbach's α (0.60, P …) for the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. The successful implementation and use of the method depends upon adequate training and motivation of the nurses and care staff involved. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.

  18. Estimation of road profile variability from measured vehicle responses

    Science.gov (United States)

    Fauriat, W.; Mattrand, C.; Gayton, N.; Beakou, A.; Cembrzynski, T.

    2016-05-01

    When assessing the statistical variability of fatigue loads acting throughout the life of a vehicle, the question of the variability of road roughness naturally arises, as both quantities are strongly related. For car manufacturers, gathering information on the environment in which vehicles evolve is a long and costly but necessary process to adapt their products to durability requirements. In the present paper, a data processing algorithm is proposed in order to estimate the road profiles covered by a given vehicle, from the dynamic responses measured on this vehicle. The algorithm based on Kalman filtering theory aims at solving a so-called inverse problem, in a stochastic framework. It is validated using experimental data obtained from simulations and real measurements. The proposed method is subsequently applied to extract valuable statistical information on road roughness from an existing load characterisation campaign carried out by Renault within one of its markets.
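    The paper's algorithm is not reproduced here; the sketch below only illustrates the underlying idea under simplifying assumptions: a 1-DOF body (mass-spring-damper) excited by the road, with the road displacement appended to the state as a random walk and recovered by a Kalman filter from measured body acceleration. All parameter values are illustrative.

    ```python
    import numpy as np

    m, k, c, dt = 300.0, 3.0e4, 1.2e3, 0.001  # assumed body mass, stiffness, damping, step

    # Continuous-time dynamics of state x = [body disp, body vel, road disp]
    A = np.array([[0.0, 1.0, 0.0],
                  [-k / m, -c / m, k / m],
                  [0.0, 0.0, 0.0]])
    F = np.eye(3) + A * dt                      # simple Euler discretisation
    H = np.array([[-k / m, -c / m, k / m]])     # measured output: body acceleration
    Q = np.diag([1e-9, 1e-7, 1e-5])             # process noise (road modelled as random walk)
    R = np.array([[0.05]])                      # acceleration measurement noise variance

    def kalman_road_estimate(accel_meas):
        """Return the estimated road displacement sequence from acceleration samples."""
        x, P = np.zeros(3), np.eye(3) * 1e-3
        road = []
        for y in accel_meas:
            # predict
            x = F @ x
            P = F @ P @ F.T + Q
            # update with the acceleration measurement
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ (np.atleast_1d(y) - H @ x)).ravel()
            P = (np.eye(3) - K @ H) @ P
            road.append(x[2])
        return np.array(road)
    ```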

  19. Neutron H*(10) estimation and measurements around 18MV linac.

    Science.gov (United States)

    Cerón Ramírez, Pablo Víctor; Díaz Góngora, José Antonio Irán; Paredes Gutiérrez, Lydia Concepción; Rivera Montalvo, Teodoro; Vega Carrillo, Héctor René

    2016-11-01

    Thermoluminescent dosimetry, analytical techniques and Monte Carlo calculations were used to estimate the neutron radiation dose in a treatment room with an 18 MV linear electron accelerator. Measurements were carried out with neutron ambient dose monitors consisting of pairs of thermoluminescent dosimeters, TLD 600 (⁶LiF:Mg,Ti) and TLD 700 (⁷LiF:Mg,Ti), placed inside paraffin spheres. The measurements allowed the use of the NCRP 151 equations, which are useful for finding the relevant dosimetric quantities. In addition, photoneutrons produced by the linac head were calculated with the MCNPX code, taking into account the geometry and composition of the principal parts of the linac head. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Estimating the cost of skin cancer detection by dermatology providers in a large health care system.

    Science.gov (United States)

    Matsumoto, Martha; Secrest, Aaron; Anderson, Alyce; Saul, Melissa I; Ho, Jonhan; Kirkwood, John M; Ferris, Laura K

    2018-04-01

    Data on the cost and efficiency of skin cancer detection through total body skin examination are scarce. To determine the number needed to screen (NNS) and biopsy (NNB) and cost per skin cancer diagnosed in a large dermatology practice in patients undergoing total body skin examination. This is a retrospective observational study. During 2011-2015, a total of 20,270 patients underwent 33,647 visits for total body skin examination; 9956 lesion biopsies were performed yielding 2763 skin cancers, including 155 melanomas. The NNS to detect 1 skin cancer was 12.2 (95% confidence interval [CI] 11.7-12.6) and 1 melanoma was 215 (95% CI 185-252). The NNB to detect 1 skin cancer was 3.0 (95% CI 2.9-3.1) and 1 melanoma was 27.8 (95% CI 23.3-33.3). In a multivariable model for NNS, age and personal history of melanoma were significant factors. Age switched from a protective factor to a risk factor at 51 years of age. The estimated cost per melanoma detected was $32,594 (95% CI $27,326-$37,475). Data are from a single health care system and based on physician coding. Melanoma detection through total body skin examination is most efficient in patients ≥50 years of age and those with a personal history of melanoma. Our findings will be helpful in modeling the cost effectiveness of melanoma screening by dermatologists. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  1. Estimation of Apollo Lunar Dust Transport using Optical Extinction Measurements

    Science.gov (United States)

    Lane, John E.; Metzger, Philip T.

    2015-04-01

    A technique to estimate mass erosion rate of surface soil during landing of the Apollo Lunar Module (LM) and total mass ejected due to the rocket plume interaction is proposed and tested. The erosion rate is proportional to the product of the second moment of the lofted particle size distribution N(D), and third moment of the normalized soil size distribution S(D), divided by the integral of S(D)·D²/v(D), where D is particle diameter and v(D) is the vertical component of particle velocity. The second moment of N(D) is estimated by optical extinction analysis of the Apollo cockpit video. Because of the similarity between mass erosion rate of soil as measured by optical extinction and rainfall rate as measured by radar reflectivity, traditional NWS radar/rainfall correlation methodology can be applied to the lunar soil case where various S(D) models are assumed corresponding to specific lunar sites.
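    Written out, the proportionality described above reads (with \(M_n[\cdot]\) denoting the n-th moment of a size distribution):

    $$
    \dot m \;\propto\; \frac{M_2[N(D)]\; M_3[S(D)]}{\displaystyle\int \frac{S(D)\,D^{2}}{v(D)}\,\mathrm{d}D},
    $$

    where D is the particle diameter and v(D) the vertical component of particle velocity; the second moment of N(D) is the quantity obtained from the optical-extinction analysis of the cockpit video.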

  2. Estimating Jupiter’s Gravity Field Using Juno Measurements, Trajectory Estimation Analysis, and a Flow Model Optimization

    International Nuclear Information System (INIS)

    Galanti, Eli; Kaspi, Yohai; Durante, Daniele; Finocchiaro, Stefano; Iess, Luciano

    2017-01-01

    The upcoming Juno spacecraft measurements have the potential of improving our knowledge of Jupiter’s gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially regarding the Jovian flow structure and its depth, which can influence the measured gravity field. In this study we propose a new iterative method for the estimation of the Jupiter gravity field, using a simulated Juno trajectory, a trajectory estimation model, and an adjoint-based inverse model for the flow dynamics. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that this method can fit some of the gravitational harmonics better to the “measured” harmonics, mainly because of the added information from the dynamical model, which includes the flow structure. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity harmonics estimated from the Juno and Cassini radio science experiments.

  3. Estimating Jupiter’s Gravity Field Using Juno Measurements, Trajectory Estimation Analysis, and a Flow Model Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Galanti, Eli; Kaspi, Yohai [Department of Earth and Planetary Sciences, Weizmann Institute of Science, Rehovot (Israel); Durante, Daniele; Finocchiaro, Stefano; Iess, Luciano, E-mail: eli.galanti@weizmann.ac.il [Dipartimento di Ingegneria Meccanica e Aerospaziale, Sapienza Universita di Roma, Rome (Italy)

    2017-07-01

    The upcoming Juno spacecraft measurements have the potential of improving our knowledge of Jupiter’s gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially regarding the Jovian flow structure and its depth, which can influence the measured gravity field. In this study we propose a new iterative method for the estimation of the Jupiter gravity field, using a simulated Juno trajectory, a trajectory estimation model, and an adjoint-based inverse model for the flow dynamics. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that this method can fit some of the gravitational harmonics better to the “measured” harmonics, mainly because of the added information from the dynamical model, which includes the flow structure. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity harmonics estimated from the Juno and Cassini radio science experiments.

  4. Permeability estimation from NMR diffusion measurements in reservoir rocks.

    Science.gov (United States)

    Balzarini, M; Brancolini, A; Gossenberg, P

    1998-01-01

    It is well known that in restricted geometries, such as in porous media, the apparent diffusion coefficient (D) of the fluid depends on the observation time. From the time dependence of D, interesting information can be derived to characterise geometrical features of the porous media that are relevant in oil industry applications. In particular, the permeability can be related to the surface-to-volume ratio (S/V), estimated from the short-time behaviour of D(t), and to the connectivity of the pore space, which is probed by the long-time behaviour of D(t). The stimulated spin-echo pulse sequence, with pulsed magnetic field gradients, has been used to measure the diffusion coefficients on various homogeneous and heterogeneous sandstone samples. It is shown that the petrophysical parameters obtained by our measurements are in good agreement with those yielded by conventional laboratory techniques (gas permeability and electrical conductivity). Although the diffusion time is limited by T1, eventually preventing an observation of the real asymptotic behaviour, and the surface-to-volume ratio measured by nuclear magnetic resonance is different from the value obtained by BET because of the different length scales probed, the measurement remains reliable and requires little time.
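    The short-time behaviour referred to above is commonly analysed with the Mitra expansion, quoted here as the standard result and not necessarily in the exact form used by the authors:

    $$
    \frac{D(t)}{D_0} \;\approx\; 1 \;-\; \frac{4}{9\sqrt{\pi}}\,\frac{S}{V}\,\sqrt{D_0 t} \;+\; \mathcal{O}(D_0 t),
    $$

    so a fit of the early-time apparent diffusion coefficient against \(\sqrt{t}\) yields the surface-to-volume ratio \(S/V\), which is then related to permeability.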

  5. Estimation of Penetrated Bone Layers During Craniotomy via Bioimpedance Measurement.

    Science.gov (United States)

    Teichmann, Daniel; Rohe, Lucas; Niesche, Annegret; Mueller, Meiko; Radermacher, Klaus; Leonhardt, Steffen

    2017-04-01

    Craniotomy is the removal of a bone flap from the skull and is a first step in many neurosurgical interventions. During craniotomy, an efficient cut of the bone without injuring adjoining soft tissues is very critical. The aim of this study is to investigate the feasibility of estimating the currently penetrated cranial bone layer by means of bioimpedance measurement. A finite-element model was developed and a simulation study conducted. Simulations were performed at different positions along an elliptical cutting path and at three different operation areas. Finally, the validity of the simulation was demonstrated by an ex vivo experiment based on use of a bovine shoulder blade bone and a commercially available impedance meter. The curve of the absolute impedance and phase exhibits characteristic changes at the transition from one bone layer to the next, which can be used to determine the bone layer last penetrated by the cutting tool. The bipolar electrode configuration is superior to the monopolar measurement. A horizontal electrode arrangement at the tip of the cutting tool produces the best results. This study successfully demonstrates the feasibility of detecting the transition between cranial bone layers during craniotomy by bioimpedance measurements using electrodes located on the cutting tool. Based on the results of this study, bioimpedance measurement seems to be a promising option for intraoperative ad hoc information about the currently penetrated bone layer and could contribute to patient safety during neurosurgery.

  6. Estimated Nutritive Value of Low-Price Model Lunch Sets Provided to Garment Workers in Cambodia

    Directory of Open Access Journals (Sweden)

    Jan Makurat

    2017-07-01

    Full Text Available Background: The establishment of staff canteens is expected to improve the nutritional situation of Cambodian garment workers. The objective of this study is to assess the nutritive value of low-price model lunch sets provided at a garment factory in Phnom Penh, Cambodia. Methods: Exemplary lunch sets were served to female workers through a temporary canteen at a garment factory in Phnom Penh. Dish samples were collected repeatedly to examine mean serving sizes of individual ingredients. Food composition tables and NutriSurvey software were used to assess mean amounts and contributions to recommended dietary allowances (RDAs) or adequate intake of energy, macronutrients, dietary fiber, vitamin C (VitC), iron, vitamin A (VitA), folate and vitamin B12 (VitB12). Results: On average, lunch sets provided roughly one third of the RDA or adequate intake of energy, carbohydrates, fat and dietary fiber. Contribution to the RDA of protein was high (46% RDA). The sets contained a high mean share of VitC (159% RDA), VitA (66% RDA), and folate (44% RDA), but were low in VitB12 (29% RDA) and iron (20% RDA). Conclusions: Overall, lunches satisfied recommendations of caloric content and macronutrient composition. Sets on average contained a beneficial amount of VitC, VitA and folate. Adjustments are needed for a higher iron content. Alternative iron-rich foods are expected to be better suited, compared to increasing portions of costly meat/fish components. Lunch provision at Cambodian garment factories holds the potential to improve food security of workers, approximately at costs of <1 USD/person/day at large scale. Data on quantitative total dietary intake as well as physical activity among workers are needed to further optimize the concept of staff canteens.

  7. A brute-force spectral approach for wave estimation using measured vessel motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik D.; Brodtkorb, Astrid H.; Sørensen, Asgeir J.

    2018-01-01

    The article introduces a spectral procedure for sea state estimation based on measurements of motion responses of a ship in a short-crested seaway. The procedure relies fundamentally on the wave buoy analogy, but the wave spectrum estimate is obtained in a direct, brute-force approach, and the procedure is simple in its mathematical formulation. The actual formulation extends another recent work by including vessel advance speed and short-crested seas. Due to its simplicity, the procedure is computationally efficient, providing wave spectrum estimates in the order of a few seconds, and the estimation procedure will therefore be appealing to applications related to real-time, onboard control and decision support systems for safe and efficient marine operations. The procedure's performance is evaluated by use of numerical simulation of motion measurements, and it is shown that accurate wave …

  8. Propagation of measurement accuracy to biomass soft-sensor estimation and control quality.

    Science.gov (United States)

    Steinwandter, Valentin; Zahel, Thomas; Sagmeister, Patrick; Herwig, Christoph

    2017-01-01

    In biopharmaceutical process development and manufacturing, the online measurement of biomass and derived specific turnover rates is a central task to physiologically monitor and control the process. However, hard-type sensors such as dielectric spectroscopy, broth fluorescence, or permittivity measurement harbor various disadvantages. Therefore, soft-sensors, which use measurements of the off-gas stream and substrate feed to reconcile turnover rates and provide an online estimate of the biomass formation, are smart alternatives. For the reconciliation procedure, mass and energy balances are used together with accuracy estimations of measured conversion rates, which have so far been chosen arbitrarily and kept static over the entire process. In this contribution, we present a novel strategy within the soft-sensor framework (named adaptive soft-sensor) to propagate uncertainties from measurements to conversion rates and demonstrate the benefits: for industrially relevant conditions, the error of the resulting estimated biomass formation rate and specific substrate consumption rate could thereby be decreased by 43% and 64%, respectively, compared to traditional soft-sensor approaches. Moreover, we present a generic workflow to determine the required raw signal accuracy to obtain predefined accuracies of soft-sensor estimations. Thereby, appropriate measurement devices and maintenance intervals can be selected. Furthermore, using this workflow, we demonstrate that the estimation accuracy of the soft-sensor can be additionally and substantially increased.
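    A minimal sketch of first-order (Gaussian) error propagation from raw signals to a derived rate, of the kind used to weight such a reconciliation; the rate expression (a carbon evolution rate from gas flow and CO2 fractions) and the accuracy figures are illustrative assumptions, not the study's model.

    ```python
    import numpy as np

    def propagate_uncertainty(f, x, u_x, eps=1e-6):
        """First-order propagation: u_f^2 = sum_i (df/dx_i * u_xi)^2 for independent inputs,
        with the partial derivatives approximated by central finite differences."""
        x = np.asarray(x, dtype=float)
        grad = np.zeros_like(x)
        for i in range(x.size):
            dx = np.zeros_like(x)
            dx[i] = eps * max(abs(x[i]), 1.0)
            grad[i] = (f(x + dx) - f(x - dx)) / (2 * dx[i])
        return float(np.sqrt(np.sum((grad * np.asarray(u_x)) ** 2)))

    # Illustrative rate: carbon evolution rate ~ gas flow * (CO2 fraction out - CO2 fraction in)
    cer = lambda p: p[0] * (p[1] - p[2])
    u = propagate_uncertainty(cer, x=[60.0, 0.030, 0.0004], u_x=[0.6, 0.0003, 0.00001])
    print(u)
    ```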

  9. Pollutant Flux Estimation in an Estuary Comparison between Model and Field Measurements

    Directory of Open Access Journals (Sweden)

    Yen-Chang Chen

    2014-08-01

    Full Text Available This study proposes a framework for estimating pollutant flux in an estuary using an efficient measurement method. A gauging station network in the Danshui River estuary is established to measure water quality and discharge data based on this method. A boat mounted with an acoustic Doppler profiler (ADP) traverses the river along a preselected path that is normal to the streamflow to measure the velocities, water depths and water quality for calculating pollutant flux. To characterize the estuary and to provide the basis for the pollutant flux estimation model, data covering complete tidal cycles are collected. The discharge estimation model applies the maximum velocity and water level to estimate mean velocity and cross-sectional area, respectively. Thus, the pollutant flux of the estuary can be easily computed as the product of the mean velocity, cross-sectional area and pollutant concentration. The good agreement between the observed and estimated pollutant flux of the Danshui River estuary shows that the pollutant fluxes measured by the conventional and the efficient methods are not fundamentally different. The proposed method is cost-effective and reliable. It can be used to estimate pollutant flux in an estuary accurately and efficiently.
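    The flux computation described above reduces to a product of three measured or estimated quantities; a minimal sketch with illustrative numbers:

    ```python
    def pollutant_flux(mean_velocity, cross_section_area, concentration):
        """Pollutant flux through the section: mean velocity (m/s) x area (m^2) x
        concentration (g/m^3), giving g/s."""
        return mean_velocity * cross_section_area * concentration

    # Illustrative numbers only
    print(pollutant_flux(mean_velocity=0.8, cross_section_area=450.0, concentration=0.012))
    ```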

  10. Lake Evaporation in a Hyper-Arid Environment, Northwest of China—Measurement and Estimation

    OpenAIRE

    Xiao Liu; Jingjie Yu; Ping Wang; Yichi Zhang; Chaoyang Du

    2016-01-01

    Lake evaporation is a critical component of the hydrological cycle. Quantifying lake evaporation in hyper-arid regions by measurement and estimation can both provide reliable potential evaporation (ET0) reference and promote a deeper understanding of the regional hydrological process and its response towards changing climate. We placed a floating E601 evaporation pan on East Juyan Lake, which is representative of arid regions’ terminal lakes, to measure daily evaporation and conducted simulta...

  11. Sound Power Estimation by Laser Doppler Vibration Measurement Techniques

    Directory of Open Access Journals (Sweden)

    G.M. Revel

    1998-01-01

    Full Text Available The aim of this paper is to propose simple and quick methods for the determination of the sound power emitted by a vibrating surface, by using non-contact vibration measurement techniques. In order to calculate the acoustic power by vibration data processing, two different approaches are presented. The first is based on the method proposed in the Standard ISO/TR 7849, while the second is based on the superposition theorem. A laser-Doppler scanning vibrometer has been employed for vibration measurements. Laser techniques open up new possibilities in this field because of their high spatial resolution and their non-intrusivity. The technique has been applied here to estimate the acoustic power emitted by a loudspeaker diaphragm. Results have been compared with those from a commercial Boundary Element Method (BEM) software and experimentally validated by acoustic intensity measurements. Predicted and experimental results seem to be in agreement (differences lower than 1 dB), thus showing that the proposed techniques can be employed as rapid solutions for many practical and industrial applications. Uncertainty sources are addressed and their effect is discussed.
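
    For context, the ISO/TR 7849-style route from surface vibration to sound power uses the space-averaged squared normal velocity, W = rho * c * S * sigma * <v^2>. The sketch below assumes a radiation efficiency of one and invented vibrometer readings; it is a simplified illustration, not the paper's exact processing.

        import numpy as np

        def sound_power_from_velocity(v_rms_points, area, rho=1.21, c=343.0, sigma=1.0):
            # Radiated sound power from surface-averaged normal velocity:
            #   W = rho * c * S * sigma * <v^2>
            # v_rms_points : RMS normal velocities from the vibrometer scan [m/s]
            # area         : radiating surface area [m^2]
            # sigma        : radiation efficiency (assumed 1 here; in practice
            #                it is frequency- and mode-dependent).
            v2_mean = np.mean(np.asarray(v_rms_points) ** 2)
            W = rho * c * area * sigma * v2_mean
            Lw = 10.0 * np.log10(W / 1e-12)   # sound power level re 1 pW
            return W, Lw

        # Hypothetical scan of a small loudspeaker diaphragm (values in m/s).
        print(sound_power_from_velocity([2e-3, 1.5e-3, 2.4e-3, 1.8e-3], area=0.02))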

  12. Using ²¹⁰Pb measurements to estimate sedimentation rates on river floodplains.

    Science.gov (United States)

    Du, P; Walling, D E

    2012-01-01

    Growing interest in the dynamics of floodplain evolution and the important role of overbank sedimentation on river floodplains as a sediment sink has focused attention on the need to document contemporary and recent rates of overbank sedimentation. The potential for using the fallout radionuclides ¹³⁷Cs and excess ²¹⁰Pb to estimate medium-term (10-10² years) sedimentation rates on river floodplains has attracted increasing attention. Most studies that have successfully used fallout radionuclides for this purpose have focused on the use of ¹³⁷Cs. However, the use of excess ²¹⁰Pb potentially offers a number of advantages over ¹³⁷Cs measurements. Most existing investigations that have used excess ²¹⁰Pb measurements to document sedimentation rates have, however, focused on lakes rather than floodplains and the transfer of the approach, and particularly the models used to estimate the sedimentation rate, to river floodplains involves a number of uncertainties, which require further attention. This contribution reports the results of an investigation of overbank sedimentation rates on the floodplains of several UK rivers. Sediment cores were collected from seven floodplain sites representative of different environmental conditions and located in different areas of England and Wales. Measurements of excess ²¹⁰Pb and ¹³⁷Cs were made on these cores. The ²¹⁰Pb measurements have been used to estimate sedimentation rates and the results obtained by using different models have been compared. The ¹³⁷Cs measurements have also been used to provide an essentially independent time marker for validation purposes. In using the ²¹⁰Pb measurements, particular attention was directed to the problem of obtaining reliable estimates of the supported and excess or unsupported components of the total ²¹⁰Pb activity of sediment samples. Although there was a reasonable degree of consistency between the estimates of sedimentation rate provided by
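
    One of the simpler ²¹⁰Pb models that can be applied to such cores is the constant flux/constant sedimentation (CF:CS) model, in which unsupported ²¹⁰Pb declines exponentially with depth and the sedimentation rate follows from the regression slope. The sketch below is a generic illustration with invented core data; it is not necessarily one of the specific models compared in the study.

        import numpy as np

        PB210_LAMBDA = np.log(2) / 22.3   # 210Pb decay constant, 1/yr

        def cfcs_sedimentation_rate(depth_cm, excess_pb210):
            # CF:CS model: ln(C_z) = ln(C_0) - (lambda / s) * z, so the
            # sedimentation rate s (cm/yr) follows from the slope of the
            # regression of ln(excess 210Pb) against depth.
            slope, intercept = np.polyfit(np.asarray(depth_cm),
                                          np.log(np.asarray(excess_pb210)), 1)
            return -PB210_LAMBDA / slope

        # Hypothetical core data: depths (cm) and unsupported 210Pb (Bq/kg).
        depths = [2, 6, 10, 14, 18]
        excess = [120, 85, 62, 44, 31]
        print(cfcs_sedimentation_rate(depths, excess))   # roughly 0.37 cm/yr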

  13. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)

  14. Measures to reduce car-fleet consumption - Estimation of effects

    International Nuclear Information System (INIS)

    Iten, R.; Hammer, S.; Keller, M.; Schmidt, N.; Sammer, K.; Wuestenhagen, R.

    2005-09-01

    This comprehensive report for the Swiss Federal Office of Energy (SFOE) takes a look at the results of a study that estimated the effects of measures to be taken to reduce the fuel consumption of vehicle fleets as part of the SwissEnergy programme. The research aimed to estimate the effects of the Energy Label on energy consumption and to assess the results to be expected from the introduction of a bonus-malus system. Questions reviewed include the effect of fuel consumption data on decisions concerning which vehicle to purchase, the effects of the Energy Label on consumption, the awareness of other appropriate information sources, the possible effects of a bonus-malus system and how the effectiveness of the Energy Label could be improved. The answers and results obtained are reviewed and commented on. Finally, an overall appraisal of the situation is presented and recommendations for increasing the effectiveness of the Energy Label are made.

  15. Estimation of diffuse from measured global solar radiation

    International Nuclear Information System (INIS)

    Moriarty, W.W.

    1991-01-01

    A data set of quality controlled radiation observations from stations scattered throughout Australia was formed and further screened to remove residual doubtful observations. It was then divided into groups by solar elevation, and used to find average relationships for each elevation group between relative global radiation (clearness index - the measured global radiation expressed as a proportion of the radiation on a horizontal surface at the top of the atmosphere) and relative diffuse radiation. Clear-cut relationships were found, which were then fitted by polynomial expressions giving the relative diffuse radiation as a function of relative global radiation and solar elevation. When these expressions were used to estimate the diffuse radiation from the global, the results had a slightly smaller spread of errors than those from an earlier technique given by Spencer. It was found that the errors were related to cloud amount, and further relationships were developed giving the errors as functions of global radiation, solar elevation, and the fraction of sky obscured by high cloud and by opaque (low and middle level) cloud. When these relationships were used to adjust the first estimates of diffuse radiation, there was a considerable reduction in the number of large errors
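
    As a generic illustration of this class of relationship (diffuse fraction as a function of the clearness index), the widely used Erbs et al. correlation is sketched below. It is shown only as an example; the polynomial expressions fitted in the study additionally depend on solar elevation and, in the adjustment step, on cloud fractions.

        def erbs_diffuse_fraction(kt):
            # Diffuse fraction kd = I_diffuse / I_global as a function of the
            # clearness index kt (Erbs et al. correlation, given only as a
            # well-known example of this kind of relationship; it is NOT the
            # relationship derived in the study above).
            if kt <= 0.22:
                return 1.0 - 0.09 * kt
            if kt <= 0.80:
                return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                        - 16.638 * kt**3 + 12.336 * kt**4)
            return 0.165

        global_irr = 650.0   # measured global radiation, W/m^2 (hypothetical)
        kt = 0.55            # clearness index for that observation
        print(erbs_diffuse_fraction(kt) * global_irr)   # estimated diffuse, W/m^2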

  16. Mathematical modeling for corrosion environment estimation based on concrete resistivity measurement directly above reinforcement

    International Nuclear Information System (INIS)

    Lim, Young-Chul; Lee, Han-Seung; Noguchi, Takafumi

    2009-01-01

    This study aims to formulate a resistivity model whereby the concrete resistivity expressing the environment of the steel reinforcement can be directly estimated and evaluated based on measurement immediately above the reinforcement, as a method of evaluating corrosion deterioration in reinforced concrete structures. It also aims to provide a theoretical ground for the feasibility of durability evaluation by electric non-destructive techniques with no need for chipping of the cover concrete. The Resistivity Estimation Model (REM), which is a mathematical model using the mirror method, combines conventional four-electrode measurement of resistivity with geometric parameters including cover depth, bar diameter, and electrode intervals. The model was verified by comparing resistivity estimates obtained with the model directly above reinforcement against resistivity measurements taken at areas unaffected by reinforcement. The two sets of results were strongly correlated, supporting the validity of the model. It is expected to be applicable to laboratory studies and field diagnosis of reinforcement corrosion. (author)
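
    The starting point of the four-electrode measurement combined in the model is the standard Wenner conversion from measured voltage and current to apparent resistivity. The sketch below shows only that basic step with invented readings; the REM correction for cover depth, bar diameter and electrode interval is not reproduced here.

        import math

        def wenner_apparent_resistivity(voltage, current, spacing_m):
            # Apparent resistivity from a Wenner four-electrode measurement:
            #   rho = 2 * pi * a * V / I, with equal electrode spacing a.
            return 2.0 * math.pi * spacing_m * voltage / current

        # Hypothetical reading taken directly above a reinforcing bar.
        print(wenner_apparent_resistivity(voltage=0.035, current=2.5e-4, spacing_m=0.05))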

  17. Covariance-Based Estimation from Multisensor Delayed Measurements with Random Parameter Matrices and Correlated Noises

    Directory of Open Access Journals (Sweden)

    R. Caballero-Águila

    2014-01-01

    Full Text Available The optimal least-squares linear estimation problem is addressed for a class of discrete-time multisensor linear stochastic systems subject to randomly delayed measurements with different delay rates. For each sensor, a different binary sequence is used to model the delay process. The measured outputs are perturbed by both random parameter matrices and one-step autocorrelated and cross-correlated noises. Using an innovation approach, computationally simple recursive algorithms are obtained for the prediction, filtering, and smoothing problems, without requiring full knowledge of the state-space model generating the signal process, but only the information provided by the delay probabilities and the mean and covariance functions of the processes (signal, random parameter matrices, and noises) involved in the observation model. The accuracy of the estimators is measured by their error covariance matrices, which allow us to analyze the estimator performance in a numerical simulation example that illustrates the feasibility of the proposed algorithms.

  18. Attitude and gyro bias estimation by the rotation of an inertial measurement unit

    International Nuclear Information System (INIS)

    Wu, Zheming; Sun, Zhenguo; Zhang, Wenzeng; Chen, Qiang

    2015-01-01

    In navigation applications, the presence of an unknown bias in the measurement of rate gyros is a key performance-limiting factor. In order to estimate the gyro bias and improve the accuracy of attitude measurement, we propose a new method which uses the rotation of an inertial measurement unit, independently of the rigid-body motion. By actively changing the orientation of the inertial measurement unit (IMU), the proposed method generates sufficient relations between the gyro bias and the tilt angle (roll and pitch) error via rigid-body dynamics, and the gyro bias, including the bias that causes the heading error, can be estimated and compensated. Rotating the inertial measurement unit makes the gravity vector measured by the IMU change continuously in the body-fixed frame. By theoretically analyzing the mathematical model, the convergence of the attitude and gyro bias to the true values is proven. The proposed method provides a good attitude estimation using only measurements from an IMU, when other sensors such as magnetometers and GPS are unreliable. The performance of the proposed method is illustrated under realistic robotic motions and the results demonstrate an improvement in the accuracy of the attitude estimation. (paper)

  19. Real-Time Aerodynamic Parameter Estimation without Air Flow Angle Measurements

    Science.gov (United States)

    Morelli, Eugene A.

    2010-01-01

    A technique for estimating aerodynamic parameters in real time from flight data without air flow angle measurements is described and demonstrated. The method is applied to simulated F-16 data, and to flight data from a subscale jet transport aircraft. Modeling results obtained with the new approach using flight data without air flow angle measurements were compared to modeling results computed conventionally using flight data that included air flow angle measurements. Comparisons demonstrated that the new technique can provide accurate aerodynamic modeling results without air flow angle measurements, which are often difficult and expensive to obtain. Implications for efficient flight testing and flight safety are discussed.

  20. Auroral Electrojet Index Designed to Provide a Global Measure, Hourly Intervals, of Auroral Zone Magnetic Activity

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Auroral Electrojet (AE) index is designed to provide a global quantitative measure of auroral zone magnetic activity produced by enhanced ionospheric currents...

  1. Contemporary group estimates adjusted for climatic effects provide a finer definition of the unknown environmental challenges experienced by growing pigs.

    Science.gov (United States)

    Guy, S Z Y; Li, L; Thomson, P C; Hermesch, S

    2017-12-01

    Environmental descriptors derived from mean performances of contemporary groups (CGs) are assumed to capture any known and unknown environmental challenges. The objective of this paper was to obtain a finer definition of the unknown challenges, by adjusting CG estimates for the known climatic effects of monthly maximum air temperature (MaxT), minimum air temperature (MinT) and monthly rainfall (Rain). As the unknown component could include infection challenges, these refined descriptors may help to better model varying responses of sire progeny to environmental infection challenges for the definition of disease resilience. Data were recorded from 1999 to 2013 at a piggery in south-east Queensland, Australia (n = 31,230). Firstly, CG estimates of average daily gain (ADG) and backfat (BF) were adjusted for MaxT, MinT and Rain, which were fitted as splines. In the models used to derive CG estimates for ADG, MaxT and MinT were significant variables. The models that contained these significant climatic variables had CG estimates with a lower variance compared to models without significant climatic variables. Variance component estimates were similar across all models, suggesting that these significant climatic variables accounted for some known environmental variation captured in CG estimates. No climatic variables were significant in the models used to derive the CG estimates for BF. These CG estimates were used to categorize environments. There was no observable sire by environment interaction (Sire×E) for ADG when using the environmental descriptors based on CG estimates on BF. For the environmental descriptors based on CG estimates of ADG, there was significant Sire×E only when MinT was included in the model (p = .01). Therefore, this new definition of the environment, preadjusted by MinT, increased the ability to detect Sire×E. While the unknown challenges captured in refined CG estimates need verification for infection challenges, this may provide a

  2. The Unscented Kalman Filter estimates the plasma insulin from glucose measurement.

    Science.gov (United States)

    Eberle, Claudia; Ament, Christoph

    2011-01-01

    Understanding the simultaneous interaction within the glucose and insulin homeostasis in real time is very important for clinical treatment as well as for research issues. Until now, only plasma glucose concentrations can be measured in real time. To support secure, effective and rapid treatment, e.g. of diabetes, a real-time estimate of plasma insulin would be of great value. A novel approach using an Unscented Kalman Filter that provides an estimate of the current plasma insulin concentration is presented, which operates on the measurement of the plasma glucose and Bergman's Minimal Model of the glucose insulin homeostasis. We can prove that process observability is obtained in this case. Hence, a successful estimator design is possible. Since the process is nonlinear we have to consider estimates that are not normally distributed. The symmetric Unscented Kalman Filter (UKF) will perform best compared to other estimator approaches such as the Extended Kalman Filter (EKF), the simplex Unscented Kalman Filter, and the Particle Filter (PF). The symmetric UKF algorithm is applied to the plasma insulin estimation. It shows better results compared to the direct (open loop) estimation that uses a model of the insulin subsystem. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
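
    A compact sketch of the general idea (an unscented Kalman filter running Bergman's minimal model and updating on glucose measurements) is given below. It assumes the filterpy library for the UKF machinery, Euler discretisation, and illustrative parameter, noise and initial values; none of these are the settings used in the paper.

        import numpy as np
        from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

        # Illustrative parameters of Bergman's minimal model (per-minute units).
        p1, p2, p3, n = 0.028, 0.025, 1.3e-5, 0.09
        Gb, Ib = 90.0, 7.0          # basal glucose (mg/dL) and insulin (mU/L)
        dt = 1.0                    # minutes between glucose samples

        def fx(x, dt):
            # Euler-discretised minimal model; state x = [G, X, I].
            G, X, I = x
            dG = -(p1 + X) * G + p1 * Gb
            dX = -p2 * X + p3 * (I - Ib)
            dI = -n * (I - Ib)      # simplified insulin kinetics (illustrative)
            return np.array([G + dt * dG, X + dt * dX, I + dt * dI])

        def hx(x):
            return np.array([x[0]])  # only plasma glucose is measured

        points = MerweScaledSigmaPoints(n=3, alpha=1e-3, beta=2.0, kappa=0.0)
        ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
        ukf.x = np.array([180.0, 0.0, 60.0])          # initial guess after a glucose bolus
        ukf.P = np.diag([100.0, 1e-6, 400.0])
        ukf.Q = np.diag([1.0, 1e-10, 4.0])
        ukf.R = np.array([[4.0]])                     # glucose sensor noise variance

        for z in [178.0, 171.0, 165.0, 158.0]:        # hypothetical glucose readings
            ukf.predict()
            ukf.update(np.array([z]))
            print("estimated plasma insulin:", ukf.x[2])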

  3. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    Science.gov (United States)

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest to both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO2 industrial point source, located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventorial data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, markedly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
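
    The aircraft mass-balance idea can be illustrated by integrating the wind-normal flux of the CO2 enhancement over a crosswind transect, with the vertical extent collapsed into an assumed plume depth. The geometry, constants and numbers below are hypothetical and far simpler than the actual airborne implementation.

        import numpy as np

        def mass_balance_emission(conc_ppm, background_ppm, u_perp, dy, plume_depth,
                                  air_density_mol_m3=41.6, molar_mass_co2=0.044):
            # Point-source emission (kg/s) from a single crosswind transect:
            #   Q = sum over the transect of (C - C_bg) * u_perp * dy * dz,
            # with the vertical integral collapsed into one assumed plume depth.
            enhancement = (np.asarray(conc_ppm) - background_ppm) * 1e-6  # mole fraction
            mol_per_m3 = enhancement * air_density_mol_m3                 # mol CO2 / m^3
            flux_per_m = mol_per_m3 * np.asarray(u_perp) * plume_depth    # mol/s per metre
            return np.sum(flux_per_m) * dy * molar_mass_co2               # kg/s

        # Hypothetical 10-point transect downwind of the stack.
        conc = [402, 405, 412, 425, 431, 428, 418, 409, 404, 402]   # ppm
        wind = [5.0] * 10                                            # m/s normal to transect
        print(mass_balance_emission(conc, 401.5, wind, dy=200.0, plume_depth=300.0))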

  4. Improved measurements of RNA structure conservation with generalized centroid estimators

    Directory of Open Access Journals (Sweden)

    Yohei eOkada

    2011-08-01

    Full Text Available Identification of non-protein-coding RNAs (ncRNAs) in genomes is a crucial task for not only molecular cell biology but also bioinformatics. Secondary structures of ncRNAs are employed as a key feature of ncRNA analysis since biological functions of ncRNAs are deeply related to their secondary structures. Although the minimum free energy (MFE) structure of an RNA sequence is regarded as the most stable structure, MFE alone could not be an appropriate measure for identifying ncRNAs since the free energy is heavily biased by the nucleotide composition. Therefore, instead of MFE itself, several alternative measures for identifying ncRNAs have been proposed, such as the structure conservation index (SCI) and the base pair distance (BPD), both of which employ MFE structures. However, these measurements are unfortunately not suitable for identifying ncRNAs in some cases, including the genome-wide search, and incur a high false discovery rate. In this study, we propose improved measurements based on SCI and BPD, applying generalized centroid estimators to incorporate robustness against low quality multiple alignments. Our experiments show that our proposed methods achieve higher accuracy than the original SCI and BPD for not only human-curated structural alignments but also low quality alignments produced by CLUSTAL W. Furthermore, the centroid-based SCI on CLUSTAL W alignments is more accurate than or comparable with that of the original SCI on structural alignments generated with RAF, a high quality structural aligner, for which two-fold more expensive computational time is required on average. We conclude that our methods are more suitable for genome-wide alignments, which are of low quality from the point of view of secondary structures, than the original SCI and BPD.

  5. Boundary layer height estimation by sodar and sonic anemometer measurements

    International Nuclear Information System (INIS)

    Contini, D; Cava, D; Martano, P; Donateo, A; Grasso, F M

    2008-01-01

    In this paper an analysis of different methods for the calculation of the boundary layer height (BLH) using sodar and ultrasonic anemometer measurements is presented. All the methods used are based on single-point surface measurements. In particular, the automatic spectral routine developed for the Remtech sodar is compared with the results obtained with the parameterization of the vertical velocity variance, with the calculation of a prognostic model and with a parameterization based on horizontal velocity spectra. Results indicate that in unstable conditions the different methods provide similar patterns, with relatively low BLH, even if the parameterization of the vertical velocity variance is affected by a large scatter that limits its efficiency in evaluating the BLH. In stable nocturnal conditions the performance of the Remtech routine is poorer than in unstable conditions. The spectral method, applied to sodar or sonic anemometer data, seems to be the most promising for developing an efficient routine for BLH determination

  6. Infrared image simulation for estimating the effectiveness of camouflage measures

    Energy Technology Data Exchange (ETDEWEB)

    Jung, J.S. [Seoul National University Graduate School, Seoul (Korea); Kauh, S.K. [Seoul National University, Seoul (Korea); Yoo, H.S. [Soong Sil University, Seoul (Korea)

    1999-08-01

    Camouflage measures for military purposes exploit the apparent temperature difference between target and background, so it is essential to develop a thermal analysis program for apparent temperature prediction and to apply camouflage measures to real military targets. In this study, a thermal analysis program including conduction, convection and radiation is developed and the validity of the radiation heat transfer terms is examined. The results show that longwave radiation along with solar radiation should be included in order to predict the apparent temperature as well as the physical temperature precisely. Longwave emissivity variation as an effective surface treatment, such as painting with a less emissive material or camouflage clothing, may provide a temperature similarity or a spatial similarity, resulting in effective camouflage. (author). 12 refs., 15 figs., 2 tabs.

  7. Optimized Estimation of Surface Layer Characteristics from Profiling Measurements

    Directory of Open Access Journals (Sweden)

    Doreene Kang

    2016-01-01

    Full Text Available New sampling techniques such as tethered-balloon-based measurements or small unmanned aerial vehicles are capable of providing multiple profiles of the Marine Atmospheric Surface Layer (MASL in a short time period. It is desirable to obtain surface fluxes from these measurements, especially when direct flux measurements are difficult to obtain. The profiling data is different from the traditional mean profiles obtained at two or more fixed levels in the surface layer from which surface fluxes of momentum, sensible heat, and latent heat are derived based on Monin-Obukhov Similarity Theory (MOST. This research develops an improved method to derive surface fluxes and the corresponding MASL mean profiles of wind, temperature, and humidity with a least-squares optimization method using the profiling measurements. This approach allows the use of all available independent data. We use a weighted cost function based on the framework of MOST with the cost being optimized using a quasi-Newton method. This approach was applied to seven sets of data collected from the Monterey Bay. The derived fluxes and mean profiles show reasonable results. An empirical bias analysis is conducted using 1000 synthetic datasets to evaluate the robustness of the method.
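
    A much-simplified version of the least-squares idea is fitting the neutral logarithmic wind profile to profiling data to recover the friction velocity and roughness length, as sketched below. The study's cost function also weights temperature and humidity profiles and includes stability corrections, which are omitted here; the profile data are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        KAPPA = 0.4   # von Karman constant

        def log_wind_profile(z, u_star, z0):
            # Neutral surface-layer wind profile: u(z) = (u*/kappa) ln(z/z0).
            return (u_star / KAPPA) * np.log(z / z0)

        # Hypothetical balloon/UAV profile: heights (m) and mean wind speed (m/s).
        z = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
        u = np.array([4.1, 4.9, 5.5, 6.1, 6.7])

        (u_star, z0), _ = curve_fit(log_wind_profile, z, u, p0=[0.3, 0.05])
        print(f"u* = {u_star:.2f} m/s, z0 = {z0:.3f} m")
        print("kinematic momentum flux tau/rho =", -u_star**2)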

  8. Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.

    Science.gov (United States)

    Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja

    2015-06-01

    Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimate measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated measurement uncertainty of the concentration of glucose according to CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach. We identified the sources of uncertainty and made an uncertainty budget and assessed the measurement functions. We determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. Then, we estimated and corrected systematic bias using certified reference material of glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up approach and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches were approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
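
    The top-down combination can be illustrated by pooling the within-laboratory reproducibility from IQC data with the uncertainty of the bias estimated against the CRM, then applying a coverage factor of 2. The sketch below follows a common (Nordtest-style) formulation with hypothetical numbers; it is not necessarily the exact formula used in the study.

        import math

        def top_down_expanded_uncertainty(iqc_sd, bias, u_crm, k=2.0):
            # u_Rw   - within-lab reproducibility, taken as the long-term IQC SD
            # u_bias - uncertainty of the bias versus the CRM, combining the
            #          observed bias and the CRM's certified standard uncertainty
            # U      - expanded uncertainty with coverage factor k (~95 %)
            u_rw = iqc_sd
            u_bias = math.sqrt(bias**2 + u_crm**2)
            u_c = math.sqrt(u_rw**2 + u_bias**2)
            return k * u_c

        # Hypothetical glucose IQC data at ~5.6 mmol/L (all values in mmol/L).
        print(top_down_expanded_uncertainty(iqc_sd=0.07, bias=0.03, u_crm=0.04))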

  9. Quantitative measures of walking and strength provide insight into brain corticospinal tract pathology in multiple sclerosis

    Directory of Open Access Journals (Sweden)

    Nora E Fritz

    2017-01-01

    Quantitative measures of strength and walking are associated with brain corticospinal tract pathology. The addition of these quantitative measures to basic clinical information explains more of the variance in corticospinal tract fractional anisotropy and magnetization transfer ratio than the basic clinical information alone. Outcome measurement for multiple sclerosis clinical trials has been notoriously challenging; the use of quantitative measures of strength and walking along with tract-specific imaging methods may improve our ability to monitor disease change over time, with intervention, and provide needed guidelines for developing more effective targeted rehabilitation strategies.

  10. Direct Measurement of Tree Height Provides Different Results on the Assessment of LiDAR Accuracy

    Directory of Open Access Journals (Sweden)

    Emanuele Sibona

    2016-12-01

    Full Text Available In this study, airborne laser scanning-based and traditional field-based survey methods for tree height estimation are assessed by using one hundred felled trees as a reference dataset. Comparisons between remote sensing and field-based methods were applied to four circular permanent plots located in the western Italian Alps and established within the Alpine Space project NewFor. Remote sensing (Airborne Laser Scanning, ALS), traditional field-based (indirect measurement, IND), and direct measurement of felled trees (DIR) methods were compared by using summary statistics, linear regression models, and variation partitioning. Our results show that tree height estimates by Airborne Laser Scanning (ALS) approximated the real heights (DIR) of felled trees. Considering the species separately, Larix decidua was the species that showed the smallest mean absolute difference (0.95 m) between remote sensing (ALS) and direct field (DIR) data, followed by Picea abies and Pinus sylvestris (1.13 m and 1.04 m, respectively). Our results cannot be generalized to ALS surveys with low pulse density (<5/m²) and with view angles far from zero (nadir). We observed that tree height estimation by laser scanner is closer to actual tree heights (DIR) than the traditional field-based survey, and this was particularly valid for tall trees with conical-shaped crowns.

  11. GUM approach to uncertainty estimations for online 220Rn concentration measurements using Lucas scintillation cell

    International Nuclear Information System (INIS)

    Sathyabama, N.

    2014-01-01

    It is now widely recognized that, when all of the known or suspected components of errors have been evaluated and corrected, there still remains an uncertainty, that is, a doubt about how well the result of the measurement represents the value of the quantity being measured. Evaluation of measurement data - Guide to the expression of Uncertainty in Measurement (GUM) is a guidance document, the purpose of which is to promote full information on how uncertainty statements are arrived at and to provide a basis for the international comparison of measurement results. In this paper, uncertainty estimations following GUM guidelines have been made for the measured values of online thoron concentrations using Lucas scintillation cell to prove that the correction for disequilibrium between 220Rn and 216Po is significant in online 220Rn measurements

  12. Seasonal estimates of riparian evapotranspiration using remote and in situ measurements

    Science.gov (United States)

    Goodrich, D.C.; Scott, R.; Qi, J.; Goff, B.; Unkrich, C.L.; Moran, M.S.; Williams, D.; Schaeffer, S.; Snyder, K.; MacNish, R.; Maddock, T.; Pool, D.; Chehbouni, A.; Cooper, D.I.; Eichinger, W.E.; Shuttleworth, W.J.; Kerr, Y.; Marsett, R.; Ni, W.

    2000-01-01

    In many semi-arid basins during extended periods when surface snowmelt or storm runoff is absent, groundwater constitutes the primary water source for human habitation, agriculture and riparian ecosystems. Utilizing regional groundwater models in the management of these water resources requires accurate estimates of basin boundary conditions. A critical groundwater boundary condition that is closely coupled to atmospheric processes and is typically known with little certainty is seasonal riparian evapotranspiration (ET). This quantity can often be a significant factor in the basin water balance in semi-arid regions yet is very difficult to estimate over a large area. Better understanding and quantification of seasonal, large-area riparian ET is a primary objective of the Semi-Arid Land-Surface-Atmosphere (SALSA) Program. To address this objective, a series of interdisciplinary experimental campaigns were conducted in 1997 in the San Pedro Basin in southeastern Arizona. The riparian system in this basin is primarily made up of three vegetation communities: mesquite (Prosopis velutina), sacaton grasses (Sporobolus wrightii), and a cottonwood (Populus fremontii)/willow (Salix goodingii) forest gallery. Micrometeorological measurement techniques were used to estimate ET from the mesquite and grasses. These techniques could not be utilized to estimate fluxes from the cottonwood/willow (C/W) forest gallery due to the height (20-30 m) and non-uniform linear nature of the forest gallery. Short-term (2-4 days) sap flux measurements were made to estimate canopy transpiration over several periods of the riparian growing season. Simultaneous remote sensing measurements were used to spatially extrapolate tree and stand measurements. Scaled C/W stand level sap flux estimates were utilized to calibrate a Penman-Monteith model to enable temporal extrapolation between synoptic measurement periods. With this model and set of measurements, seasonal riparian vegetation water use

  13. A systematic review of the extent and measurement of healthcare provider racism.

    Science.gov (United States)

    Paradies, Yin; Truong, Mandy; Priest, Naomi

    2014-02-01

    Although considered a key driver of racial disparities in healthcare, relatively little is known about the extent of interpersonal racism perpetrated by healthcare providers, nor is there a good understanding of how best to measure such racism. This paper reviews worldwide evidence (from 1995 onwards) for racism among healthcare providers; as well as comparing existing measurement approaches to emerging best practice, it focuses on the assessment of interpersonal racism, rather than internalized or systemic/institutional racism. The following databases and electronic journal collections were searched for articles published between 1995 and 2012: Medline, CINAHL, PsycInfo, Sociological Abstracts. Included studies were published empirical studies of any design measuring and/or reporting on healthcare provider racism in the English language. Data on study design and objectives; method of measurement, constructs measured, type of tool; study population and healthcare setting; country and language of study; and study outcomes were extracted from each study. The 37 studies included in this review were almost solely conducted in the U.S. and with physicians. Statistically significant evidence of racist beliefs, emotions or practices among healthcare providers in relation to minority groups was evident in 26 of these studies. Although a number of measurement approaches were utilized, a limited range of constructs was assessed. Despite burgeoning interest in racism as a contributor to racial disparities in healthcare, we still know little about the extent of healthcare provider racism or how best to measure it. Studies using more sophisticated approaches to assess healthcare provider racism are required to inform interventions aimed at reducing racial disparities in health.

  14. Measurement and estimation of dew point for SNG. [Comparison of calculated and measured values

    Energy Technology Data Exchange (ETDEWEB)

    Furuyama, Y.

    1974-08-01

    Toho Gas measured and estimated SNG dew points in high-pressure deliveries by calculating the theoretical values by the high-pressure gas-liquid equilibrium theory using the pressure-extrapolation method to reach K = 1, and the BWR method to estimate fugacity, then verifying these values experimentally. The experimental values were measured at 161.7 to 367.5 psi using the conventional static and circulation methods, in addition to a newly developed method consisting of circulating a known composition of gas mixtures, partially freezing them, and monitoring the dew point by observing the droplets on a mirror cooled by blowing liquid nitrogen. Good agreement was found between the calculated and the experimental values.

  15. Influence of the statistical distribution of bioassay measurement errors on the intake estimation

    International Nuclear Information System (INIS)

    Lee, T. Y; Kim, J. K

    2006-01-01

    The purpose of this study is to provide the guidance necessary for selecting an error distribution, by analyzing the influence of the statistical distribution assumed for bioassay measurement errors on the intake estimation. For this purpose, intakes were estimated using the maximum likelihood method for cases where the error distribution is normal or lognormal, and the estimated intakes for the two distributions were compared. According to the results of this study, in cases where measurement results for lung retention are somewhat greater than the limit of detection, it appeared that the distribution type has negligible influence on the results. In contrast, for measurement results for the daily excretion rate, the results obtained under the assumption of a lognormal distribution were 10% higher than those obtained under the assumption of a normal distribution. In view of these facts, where the uncertainty component is governed by counting statistics, it is considered that the distribution type has no influence on intake estimation, whereas where other components are predominant, it is concluded that it is clearly desirable to estimate the intake assuming a lognormal distribution

  16. Measurements of the solar UVR protection provided by shade structures in New Zealand primary schools.

    Science.gov (United States)

    Gies, Peter; Mackay, Christina

    2004-01-01

    To reduce ultraviolet radiation (UVR) exposure during childhood, shade structures are being erected in primary schools to provide areas where children can more safely undertake outdoor activities. This study to evaluate the effectiveness of existing and purpose built shade structures in providing solar UVR protection was carried out on 29 such structures in 10 schools in New Zealand. Measurements of the direct and scattered solar UVR doses within the central region of the shade structures were made during the school lunch break period using UVR-sensitive polysulfone film badges. These measurements indicate that many of the structures had UVR protection factors (PF) of 4-8, which was sufficient to provide protection during the school lunch hour. However, of the 29 structures examined, only six would meet the suggested requirements of UVR PF greater than 15 required to provide all-day protection.

  17. Estimating shortwave solar radiation using net radiation and meteorological measurements

    Science.gov (United States)

    Shortwave radiation has a wide variety of uses in land-atmosphere interactions research. Actual evapotranspiration estimation that involves stomatal conductance models like Jarvis and Ball-Berry requires shortwave radiation to estimate photon flux density. However, in most weather stations, shortwave...

  18. Inclusion estimation from a single electrostatic boundary measurement

    DEFF Research Database (Denmark)

    Karamehmedovic, Mirza; Knudsen, Kim

    2013-01-01

    We present a numerical method for the detection and estimation of perfectly conducting inclusions in conducting homogeneous host media. The estimation is based on the evaluation of an indicator function that depends on a single pair of Cauchy data (electric potential and current) given at the...

  19. Measuring self-control problems: a structural estimation

    NARCIS (Netherlands)

    Bucciol, A.

    2012-01-01

    We adopt a two-stage Method of Simulated Moments to estimate the preference parameters in a life-cycle consumption-saving model augmented with temptation disutility. Our approach estimates the parameters from the comparison between simulated moments with empirical moments observed in the US Survey

  20. Measuring Provider Attitudes Toward Evidence-Based Practice: Consideration of Organizational Context and Individual Differences

    OpenAIRE

    Aarons, Gregory A.

    2005-01-01

    Mental health provider attitudes toward adoption of innovation in general, and toward evidence-based practice (EBP) in particular, are important in considering how best to disseminate and implement EBPs. This article first explores the role of attitudes in acceptance of innovation and proposes a model of organizational and individual factors that may affect or be affected by attitudes toward adoption of EBP. Next, a recently developed measure of mental health provider attitudes toward adoptio...

  1. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Directory of Open Access Journals (Sweden)

    Tweya Hannock

    2012-07-01

    Full Text Available Abstract Background Routine monitoring of patients on antiretroviral therapy (ART is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records as compared to estimates of retention based on standardized paper- or electronic based cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter with intent to estimate the number of patients retained on ART. Information for time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among

  2. Simultaneous spacecraft orbit estimation and control based on GPS measurements via extended Kalman filter

    Directory of Open Access Journals (Sweden)

    Tamer Mekky Ahmed Habib

    2013-06-01

    Full Text Available The primary aim of this work is to provide simultaneous spacecraft orbit estimation and control based on global positioning system (GPS) measurements suitable for application to the forthcoming Egyptian remote sensing satellites. Disturbance resulting from the earth's oblateness up to the fourth order (i.e., J4) is considered. In addition, aerodynamic drag and random disturbance effects are taken into consideration.

  3. Response distortion in personality measurement: born to deceive, yet capable of providing valid self-assessments?

    Directory of Open Access Journals (Sweden)

    STEPHAN DILCHERT

    2006-09-01

    Full Text Available This introductory article to the special issue of Psychology Science devoted to the subject of “Considering Response Distortion in Personality Measurement for Industrial, Work and Organizational Psychology Research and Practice” presents an overview of the issues of response distortion in personality measurement. It also provides a summary of the other articles published as part of this special issue addressing social desirability, impression management, self-presentation, response distortion, and faking in personality measurement in industrial, work, and organizational settings.

  4. [Measurement and estimation methods and research progress of snow evaporation in forests].

    Science.gov (United States)

    Li, Hui-Dong; Guan, De-Xin; Jin, Chang-Jie; Wang, An-Zhi; Yuan, Feng-Hui; Wu, Jia-Bing

    2013-12-01

    Accurate measurement and estimation of snow evaporation (sublimation) in forests is one of the important issues for understanding the snow surface energy and water balance, and it is also an essential part of regional hydrological and climate models. This paper summarized the measurement and estimation methods of snow evaporation in forests and made a comprehensive evaluation of their applicability, covering mass-balance methods (snow water equivalent method, comparative measurements of snowfall and through-snowfall, snow evaporation pan, lysimeter, weighing of cut trees, weighing of interception on the crown, and the gamma-ray attenuation technique) and micrometeorological methods (Bowen-ratio energy-balance method, Penman combination equation, aerodynamic method, surface temperature technique and eddy covariance method). The paper also reviewed the progress of research on snow evaporation in different forests and its influential factors. Finally, in view of the deficiencies of past research, an outlook for snow evaporation research in forests was presented, hoping to provide a reference for related research in the future.

  5. Scaling measurements of metabolism in stream ecosystems: challenges and approaches to estimating reaeration

    Science.gov (United States)

    Bowden, W. B.; Parker, S.; Song, C.

    2016-12-01

    Stream ecologists have used various formulations of an oxygen budget approach as a surrogate to measure "whole-stream metabolism" (WSM) of carbon in rivers and streams. Improvements in sensor technologies that provide reliable, high-frequency measurements of dissolved oxygen concentrations in adverse field conditions have made it much easier to acquire the basic data needed to estimate WSM in remote locations over long periods (weeks to months). However, accurate estimates of WSM require reliable measurements or estimates of the reaeration coefficient (k). Small errors in estimates of k can lead to large errors in estimates of gross ecosystem production and ecosystem respiration and thus the magnitude of the biological flux of CO2 to or from streams. This is an especially challenging problem in unproductive, oligotrophic streams. Unfortunately, current methods to measure reaeration directly (gas evasion) are expensive, labor-intensive, and time-consuming. As a consequence, there is a substantial mismatch between the time steps at which we can measure reaeration versus most of the other variables required to calculate WSM. As a part of the NSF Arctic Long-Term Ecological Research Project we have refined methods to measure WSM in Arctic streams and found a good relationship between measured k values and those calculated by the Energy Dissipation Model (EDM). Other researchers have also noted that this equation works well for both low- and high-order streams. The EDM is dependent on stream slope (relatively constant) and velocity (which is related to discharge or stage). These variables are easy to measure and can be used to estimate k at high frequency (minutes) over large areas (river networks). As a key part of the NSF MacroSystems Biology SCALER project we calculated WSM for multiple reaches in nested stream networks in six biomes across the United States and Australia. We calculated k by EDM and fitted k via a Bayesian model for WSM. The relationships between
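
    The energy dissipation model mentioned above relates k to the product of water-surface slope and mean velocity, so high-frequency stage and velocity data can keep k updated continuously. The sketch below uses placeholder calibration constants that are explicitly not the values fitted in this work.

        def reaeration_k_edm(slope, velocity_m_s, c1=28000.0, c2=1.0):
            # EDM-style estimate of the reaeration rate: k is taken as
            # proportional to the energy dissipation rate, i.e. the product of
            # water-surface slope and mean velocity.  c1 and c2 are placeholder
            # calibration constants (per-day units), NOT values from the study.
            return c1 * (slope * velocity_m_s) ** c2

        # High-frequency stage/velocity data make k easy to update continuously.
        print(reaeration_k_edm(slope=0.004, velocity_m_s=0.35))   # 1/day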

  6. Implementation of a metrology programme to provide traceability for radionuclides activity measurements in the CNEN Radiopharmaceuticals Producers Centers

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Erica A.L. de; Braghirolli, Ana M.S.; Tauhata, Luiz; Gomes, Regio S.; Silva, Carlos J., E-mail: erica@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Delgado, Jose U.; Oliveira, Antonio E.; Iwahara, Akira, E-mail: ealima@ird.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    The commercialization and use of radiopharmaceuticals in Brazil are regulated by the Agencia Nacional de Vigilancia Sanitaria (ANVISA), which requires Good Manufacturing Practices (GMP) certification for Radiopharmaceuticals Producer Centers. Quality Assurance Programs should implement the GMP standards to ensure that radiopharmaceuticals meet the quality requirements needed to prove their efficacy. Several aspects should be controlled within the Quality Assurance Programs, one of them being the traceability of radionuclide activity measurements in radiopharmaceutical doses. The quality assurance of activity measurements is fundamental to maintaining both the efficiency of nuclear medicine procedures and the safety of patients and occupationally exposed individuals. The radiation doses received by patients during nuclear medicine procedures are estimated according to the administered radiopharmaceutical activity. Therefore it is very important that both the activity measurements performed in radiopharmaceuticals producer centers (RPC) and the measurements performed in nuclear medicine services are traceable to national standards. This paper aims to present an implementation program to provide traceability to radionuclide activity measurements performed with the dose calibrators (well-type ionization chambers) used in Radiopharmaceuticals Producer Centers placed in different states in Brazil. The proposed program is based on the principles of the GMP and ISO 17025 standards. According to dose calibrator performance, the RPC will be able to provide consistent, safe and effective radioactivity measurements to the nuclear medicine services. (author)

  7. Performance of the measures of processes of care for adults and service providers in rehabilitation settings.

    Science.gov (United States)

    Bamm, Elena L; Rosenbaum, Peter; Wilkins, Seanne; Stratford, Paul

    2015-01-01

    In recent years, client-centered care has been embraced as a new philosophy of care by many organizations around the world. Clinicians and researchers have identified the need for valid and reliable outcome measures that are easy to use to evaluate success of implementation of new concepts. The current study was developed to complete adaptation and field testing of the companion patient-reported measures of processes of care for adults (MPOC-A) and the service provider self-reflection measure of processes of care for service providers working with adult clients (MPOC-SP(A)). A validation study. In-patient rehabilitation facilities. MPOC-A and measure of processes of care for service providers working with adult clients (MPOC-SP(A)). Three hundred and eighty-four health care providers, 61 patients, and 16 family members completed the questionnaires. Good to excellent internal consistency (0.71-0.88 for health care professionals, 0.82-0.90 for patients, and 0.87-0.94 for family members), as well as moderate to good correlations between domains (0.40-0.78 for health care professionals and 0.52-0.84 for clients) supported internal reliability of the tools. Exploratory factor analysis of the MPOC-SP(A) responses supported the multidimensionality of the questionnaire. MPOC-A and MPOC-SP(A) are valid and reliable tools to assess patient and service-provider accounts, respectively, of the extent to which they experience, or are able to provide, client-centered service. Research should now be undertaken to explore in more detail the relationships between client experience and provider reports of their own behavior.

  8. Visual estimation versus gravimetric measurement of postpartum blood loss: a prospective cohort study.

    Science.gov (United States)

    Al Kadri, Hanan M F; Al Anazi, Bedayah K; Tamim, Hani M

    2011-06-01

    One of the major problems in international literature is how to measure postpartum blood loss with accuracy. We aimed in this research to assess the accuracy of visual estimation of postpartum blood loss (by each of two main health-care providers) compared with the gravimetric calculation method. We carried out a prospective cohort study at King Abdulaziz Medical City, Riyadh, Saudi Arabia between 1 November 2009 and 31 December 2009. All women who were admitted to labor and delivery suite and delivered vaginally were included in the study. Postpartum blood loss was visually estimated by the attending physician and obstetrics nurse and then objectively calculated by a gravimetric machine. Comparison between the three methods of blood loss calculation was carried out. A total of 150 patients were included in this study. There was a significant difference between the gravimetric calculated blood loss and both health-care providers' estimation with a tendency to underestimate the loss by about 30%. The background and seniority of the assessing health-care provider did not affect the accuracy of the estimation. The corrected incidence of postpartum hemorrhage in Saudi Arabia was found to be 1.47%. Health-care providers tend to underestimate the volume of postpartum blood loss by about 30%. Training and continuous auditing of the diagnosis of postpartum hemorrhage is needed to avoid missing cases and thus preventing associated morbidity and mortality.

  9. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Full Text Available Estimates of poverty and inequality are often based on application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and allows variance estimates of the sensitivity-analysis results to be derived. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both variability of equivalence scales and sampling variance leads to confidence intervals which are wide.

  10. Comparison of two different methods for the uncertainty estimation of circle diameter measurements using an optical coordinate measuring machine

    DEFF Research Database (Denmark)

    Morace, Renata Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2005-01-01

    This paper deals with the uncertainty estimation of measurements performed on optical coordinate measuring machines (CMMs). Two different methods were used to assess the uncertainty of circle diameter measurements using an optical CMM: the sensitivity analysis developing an uncertainty budget...

  11. Time Domain Frequency Stability Estimation Based On FFT Measurements

    National Research Council Canada - National Science Library

    Chang, P

    2004-01-01

    .... In this paper, the biases of the Fast Fourier transform (FFT) spectral estimate with Hanning window are checked and the resulting unbiased spectral densities are used to calculate the Allan variance...
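
    For context, the time-domain quantity being targeted is the Allan variance; a minimal non-overlapping implementation computed directly from fractional-frequency samples is sketched below. The paper's contribution, deriving it from a Hanning-windowed FFT spectral density with bias corrections, is not reproduced here.

        import numpy as np

        def allan_deviation(y, tau0, m):
            # Non-overlapping Allan deviation at averaging time tau = m*tau0,
            # from fractional-frequency samples y taken every tau0 seconds:
            #   sigma_y^2(tau) = 1/(2(M-1)) * sum (ybar_{k+1} - ybar_k)^2,
            # where ybar are consecutive m-sample averages.
            y = np.asarray(y, dtype=float)
            M = len(y) // m
            ybar = y[:M * m].reshape(M, m).mean(axis=1)
            avar = 0.5 * np.mean(np.diff(ybar) ** 2)
            return m * tau0, np.sqrt(avar)

        rng = np.random.default_rng(0)
        y = rng.normal(0.0, 1e-11, 100000)          # white-FM-like frequency noise
        for m in (1, 10, 100):
            print(allan_deviation(y, tau0=1.0, m=m))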

  12. Codon Deviation Coefficient: A novel measure for estimating codon usage bias and its statistical significance

    KAUST Repository

    Zhang, Zhang

    2012-03-22

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis.Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts the bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance.Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. 2012 Zhang et al; licensee BioMed Central Ltd.

  13. Codon Deviation Coefficient: a novel measure for estimating codon usage bias and its statistical significance

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2012-03-01

    Full Text Available Abstract Background Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.

  14. Logistic quantile regression provides improved estimates for bounded avian counts: A case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of
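
    The jitter–logit–quantile-regression procedure described above can be sketched in a few lines. The following Python fragment is a minimal illustration under assumed bounds of 0 and 4 for jittered counts of 0–3 and a synthetic covariate; it uses statsmodels' QuantReg for the linear quantile fit and is not the authors' published implementation.

      import numpy as np
      import statsmodels.api as sm

      def logistic_quantile_fit(counts, X, q=0.5, lower=0.0, upper=4.0, n_reps=100, seed=0):
          # Estimate the q-th conditional quantile of a bounded count (e.g. 0-3 fledglings):
          # 1) jitter counts to a continuous variable, 2) logit-transform between the bounds,
          # 3) fit conventional linear quantile regression, 4) repeat and average the estimates.
          rng = np.random.default_rng(seed)
          Xc = sm.add_constant(np.asarray(X, dtype=float))
          coefs = []
          for _ in range(n_reps):
              y = counts + rng.uniform(0.001, 0.999, size=len(counts))  # jittered counts in (0, 4)
              z = np.log((y - lower) / (upper - y))                     # bounding logit transform
              coefs.append(sm.QuantReg(z, Xc).fit(q=q).params)
          beta = np.mean(coefs, axis=0)
          # Back-transform fitted quantiles to the original scale; quantiles are
          # equivariant to monotonic transformations, so this is valid.
          zhat = Xc @ beta
          yhat = (lower + upper * np.exp(zhat)) / (1.0 + np.exp(zhat))
          return beta, np.floor(yhat).astype(int)                       # discrete 0-3 quantile estimates

      # Toy usage with one synthetic covariate (illustrative data only):
      rng = np.random.default_rng(1)
      x = rng.normal(size=200)
      counts = np.clip(np.round(1.5 + 0.6 * x + rng.normal(0.0, 0.7, 200)), 0, 3)
      beta_hat, q50 = logistic_quantile_fit(counts, x.reshape(-1, 1), q=0.5)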

  15. Factors Affecting Estimated Fetal Weight Measured by Ultrasound

    Directory of Open Access Journals (Sweden)

    Hasan Energin

    2016-06-01

    Full Text Available Objective: In this study, we aimed to evaluate the factors that affect the accuracy of estimated fetal weight on ultrasound. Methods: This study was conducted in a tertiary hospital antenatal outpatient clinic and perinatology inpatient clinic between June 2011 and January 2012. The data were obtained from 165 pregnant women. Inclusion criteria were: no additional diseases, and giving birth within 48 hours after ultrasound. The same physician performed all ultrasound examinations. Age, height, weight, obstetric history and obstetric follow-up findings were recorded. Results: Fetal gender, fetal presentation, presence of meconium in amniotic fluid, and maternal parity did not significantly affect the accuracy of fetal weight estimation by ultrasound. The mean difference between estimated fetal weight and birth weight was 104.48±84 g in nulliparas and 94.2±81 g in multiparas (p=0.44); the mean difference was 98.22±79 g in male babies and 98.15±86 g in female babies (p=0.99). The mean difference between estimated fetal weight and birth weight was 96.92±81 g in babies with cephalic presentation and 110.9±90 g in babies with breech presentation (p=0.53); this difference was 95.36±79 g in babies with meconium-stained amniotic fluid and 98.82±83 g in babies with clear amniotic fluid (p=0.83). Conclusion: Fetal weight estimation is one of the key points in the obstetrician's intrapartum management, and it is important to estimate fetal weight accurately. In our study, consistent with the literature, we observed that fetal gender, presence of meconium in the amniotic fluid, fetal presentation, and maternal parity do not significantly affect the accuracy of fetal weight estimation by ultrasound.

  16. Kinetic parameter estimation from SPECT cone-beam projection measurements

    International Nuclear Information System (INIS)

    Huesman, Ronald H.; Reutter, Bryan W.; Zeng, G. Larry; Gullberg, Grant T.

    1998-01-01

    Kinetic parameters are commonly estimated from dynamically acquired nuclear medicine data by first reconstructing a dynamic sequence of images and subsequently fitting the parameters to time-activity curves generated from regions of interest overlaid upon the image sequence. Biased estimates can result from images reconstructed using inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system. If the SPECT data are acquired using cone-beam collimators wherein the gantry rotates so that the focal point of the collimators always remains in a plane, additional biases can arise from images reconstructed using insufficient, as well as truncated, projection samples. To overcome these problems we have investigated the estimation of kinetic parameters directly from SPECT cone-beam projection data by modelling the data acquisition process. To accomplish this it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated chest image volume, kinetic parameters were estimated for simple one-compartment models for four myocardial regions of interest. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated cone-beam data had biases ranging between 3-26% and 0-28%, respectively. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Statistical uncertainties of parameter estimates for 10 000 000 events ranged between 0.2-9% for the uptake parameters and between 0.3-6% for the washout parameters. (author)
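
    As a point of reference for the compartmental modelling described above, the fragment below fits a simple one-compartment model (uptake and washout rate constants) to a region time-activity curve by nonlinear least squares. It is only an illustrative sketch of the conventional image-domain route, not the authors' direct projection-domain estimator; the blood input function, rate constants, and noise level are assumptions for the example.

      import numpy as np
      from scipy.optimize import curve_fit

      # Illustrative sampling times (min) and an assumed blood input function.
      t = np.linspace(0.0, 60.0, 61)
      blood = 100.0 * np.exp(-0.15 * t)          # hypothetical input curve

      def one_compartment(t, k_uptake, k_washout):
          # Tissue activity for dC/dt = k_uptake*blood(t) - k_washout*C(t),
          # evaluated by discrete convolution of the input with exp(-k_washout*t).
          dt = t[1] - t[0]
          kernel = np.exp(-k_washout * t)
          return k_uptake * np.convolve(blood, kernel)[: len(t)] * dt

      # Simulate a noisy regional time-activity curve and refit the parameters.
      rng = np.random.default_rng(0)
      true_k1, true_k2 = 0.6, 0.05
      tac = one_compartment(t, true_k1, true_k2) + rng.normal(0.0, 5.0, len(t))
      (k1_hat, k2_hat), cov = curve_fit(one_compartment, t, tac, p0=(0.3, 0.02))
      print(k1_hat, k2_hat, np.sqrt(np.diag(cov)))   # estimates and their uncertainties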

  17. Measuring Critical Care Providers' Attitudes About Controlled Donation After Circulatory Death.

    Science.gov (United States)

    Rodrigue, James R; Luskin, Richard; Nelson, Helen; Glazier, Alexandra; Henderson, Galen V; Delmonico, Francis L

    2018-06-01

    Unfavorable attitudes and insufficient knowledge about donation after cardiac death among critical care providers can have important consequences for the appropriate identification of potential donors, consistent implementation of donation after cardiac death policies, and relative strength of support for this type of donation. The lack of reliable and valid assessment measures has hampered research to capture providers' attitudes. Design and Research Aims: Using stakeholder engagement and an iterative process, we developed a questionnaire to measure attitudes toward donation after cardiac death in critical care providers (n = 112) and examined its psychometric properties. Exploratory factor analysis, internal consistency, and validity analyses were conducted to examine the measure. A 34-item questionnaire consisting of 4 factors (Personal Comfort, Process Satisfaction, Family Comfort, and System Trust) provided the most parsimonious fit. Internal consistency was acceptable for each of the subscales and the total questionnaire (Cronbach α > .70). A strong association between more favorable attitudes overall and knowledge (r = .43) supports the validity of the measure for assessing critical care providers' attitudes about donation after cardiac death.

  18. Marginalized particle filter for spacecraft attitude estimation from vector measurements

    Institute of Scientific and Technical Information of China (English)

    Yaqiu LIU; Xueyuan JIANG; Guangfu MA

    2007-01-01

    An algorithm based on the marginalized particle filter (MPF) is given in detail in this paper to solve the spacecraft attitude estimation problem: attitude and gyro bias estimation using the biased gyro and vector observations. In this algorithm, by marginalizing out the state appearing linearly in the spacecraft model, a Kalman filter is associated with each particle in order to reduce the size of the state space and the computational burden. The distribution of the attitude vector is approximated by a set of particles and estimated using the particle filter, while the estimate of the gyro bias is obtained for each of the attitude particles by applying the Kalman filter. The efficiency of this modified MPF estimator is verified through numerical simulation of a fully actuated rigid body. For comparison, the unscented Kalman filter (UKF) is also used to gauge the performance of the MPF. The results presented in this paper clearly demonstrate that the MPF is superior to the UKF in coping with the nonlinear model.

  19. Kinetic parameter estimation from attenuated SPECT projection measurements

    International Nuclear Information System (INIS)

    Reutter, B.W.; Gullberg, G.T.

    1998-01-01

    Conventional analysis of dynamically acquired nuclear medicine data involves fitting kinetic models to time-activity curves generated from regions of interest defined on a temporal sequence of reconstructed images. However, images reconstructed from the inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system can contain artifacts that lead to biases in the estimated kinetic parameters. To overcome this problem the authors investigated the estimation of kinetic parameters directly from projection data by modeling the data acquisition process. To accomplish this it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated transverse slice, kinetic parameters were estimated for simple one compartment models for three myocardial regions of interest, as well as for the liver. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated data had biases ranging between 1--63%. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Predicted uncertainties (standard deviations) of the parameters obtained for 500,000 detected events ranged between 2--31% for the myocardial uptake parameters and 2--23% for the myocardial washout parameters

  20. Measuring HIV-related stigma among healthcare providers: a systematic review.

    Science.gov (United States)

    Alexandra Marshall, S; Brewington, Krista M; Kathryn Allison, M; Haynes, Tiffany F; Zaller, Nickolas D

    2017-11-01

    In the United States, HIV-related stigma in the healthcare setting is known to affect the utilization of prevention and treatment services. Multiple HIV/AIDS stigma scales have been developed to assess the attitudes and behaviors of the general population in the U.S. towards people living with HIV/AIDS, but fewer scales have been developed to assess HIV-related stigma among healthcare providers. This systematic review aimed to identify and evaluate the measurement tools used to assess HIV stigma among healthcare providers in the U.S. The five studies selected quantitatively assessed the perceived HIV stigma among healthcare providers from the patient or provider perspective, included HIV stigma as a primary outcome, and were conducted in the U.S. These five studies used adapted forms of four HIV stigma scales. No standardized measure was identified. Assessment of HIV stigma among providers is valuable to better understand how this phenomenon may impact health outcomes and to inform interventions aiming to improve healthcare delivery and utilization.

  1. Estimating spacecraft attitude based on in-orbit sensor measurements

    DEFF Research Database (Denmark)

    Jakobsen, Britt; Lyn-Knudsen, Kevin; Mølgaard, Mathias

    2014-01-01

    of 2014/15. To better evaluate the performance of the payload, it is desirable to couple the payload data with the satellite's orientation. With AAUSAT3 already in orbit it is possible to collect data directly from space in order to evaluate the performance of the attitude estimation. An extended Kalman filter (EKF) is used for quaternion-based attitude estimation. A Simulink simulation environment developed for AAUSAT3, containing a "truth model" of the satellite and the orbit environment, is used to test the performance. The performance is tested using different sensor noise parameters obtained both from a controlled environment on Earth and in orbit. By using sensor noise parameters obtained on Earth as the expected parameters in the attitude estimation, and simulating the environment using the sensor noise parameters from space, it is possible to assess whether the EKF can be designed...

  2. Annual sediment flux estimates in a tidal strait using surrogate measurements

    Science.gov (United States)

    Ganju, N.K.; Schoellhamer, D.H.

    2006-01-01

    Annual suspended-sediment flux estimates through Carquinez Strait (the seaward boundary of Suisun Bay, California) are provided based on surrogate measurements for advective, dispersive, and Stokes drift flux. The surrogates are landward watershed discharge, suspended-sediment concentration at one location in the Strait, and the longitudinal salinity gradient. The first two surrogates substitute for tidally averaged discharge and velocity-weighted suspended-sediment concentration in the Strait, thereby providing advective flux estimates, while Stokes drift is estimated with suspended-sediment concentration alone. Dispersive flux is estimated using the product of longitudinal salinity gradient and the root-mean-square value of velocity-weighted suspended-sediment concentration as an added surrogate variable. Cross-sectional measurements validated the use of surrogates during the monitoring period. During high freshwater flow advective and dispersive flux were in the seaward direction, while landward dispersive flux dominated and advective flux approached zero during low freshwater flow. Stokes drift flux was consistently in the landward direction. Wetter than average years led to net export from Suisun Bay, while dry years led to net sediment import. Relatively low watershed sediment fluxes to Suisun Bay contribute to net export during the wet season, while gravitational circulation in Carquinez Strait and higher suspended-sediment concentrations in San Pablo Bay (seaward end of Carquinez Strait) are responsible for the net import of sediment during the dry season. Annual predictions of suspended-sediment fluxes, using these methods, will allow for a sediment budget for Suisun Bay, which has implications for marsh restoration and nutrient/contaminant transport. These methods also provide a general framework for estimating sediment fluxes in estuarine environments, where temporal and spatial variability of transport are large. © 2006 Elsevier Ltd. All rights reserved.
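
    The surrogate approach above amounts to calibrating simple rating relations between directly measured flux components and continuously recorded surrogate variables, then integrating the predicted fluxes over a year. The sketch below shows that idea in Python; the variable names, the linear form of the relations, and all numbers are illustrative assumptions, not the study's calibrations.

      import numpy as np

      # Hypothetical calibration data from cross-sectional surveys: measured flux
      # components paired with the surrogates recorded at the same times.
      q_watershed   = np.array([120.0, 300.0, 80.0, 550.0, 200.0])   # landward discharge (m3/s)
      ssc_strait    = np.array([45.0, 90.0, 30.0, 160.0, 70.0])      # SSC at the index station (mg/L)
      dsal_dx       = np.array([0.40, 0.20, 0.50, 0.10, 0.30])       # longitudinal salinity gradient (psu/km)
      flux_advect   = np.array([12.0, 60.0, 5.0, 180.0, 32.0])       # measured advective flux (kg/s)
      flux_disperse = np.array([-8.0, -3.0, -11.0, -1.0, -6.0])      # measured dispersive flux (kg/s)

      # Calibrate simple linear rating relations by least squares.
      a_adv = np.polyfit(q_watershed * ssc_strait, flux_advect, deg=1)
      a_dis = np.polyfit(dsal_dx * ssc_strait, flux_disperse, deg=1)

      def annual_flux(q_series, ssc_series, dsal_series, dt_seconds=900.0):
          # Apply the calibrated relations to a continuous surrogate record
          # (e.g. 15-minute data) and integrate over the year.
          adv = np.polyval(a_adv, q_series * ssc_series)
          dis = np.polyval(a_dis, dsal_series * ssc_series)
          return float(np.sum((adv + dis) * dt_seconds))              # kg over the record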

  3. Different top-down approaches to estimate measurement uncertainty of whole blood tacrolimus mass concentration values.

    Science.gov (United States)

    Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca

    2018-05-08

    Values of mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective, so clinical laboratories must provide results that are as accurate as possible. Measurement uncertainty allows the reliability of these results to be ensured. The aim of this study was to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with the intermediate imprecision (using long-term internal quality control data) and the bias (using a certified reference material). Next, we combined them together with the uncertainties related to the calibrator-assigned values to obtain a combined uncertainty and, finally, calculated the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way to the single laboratory validation approach, but considering data from internal and external quality control schemes to estimate the uncertainty related to the bias. The estimated expanded uncertainties for single laboratory validation and for proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar. This confirms that either of the two approaches could be used to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
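
    The single laboratory validation route described above essentially combines relative standard uncertainties in quadrature and applies a coverage factor. A minimal sketch, with purely illustrative numbers and an assumed coverage factor k = 2, might look as follows.

      import math

      def expanded_uncertainty(cv_imprecision, u_bias, u_calibrator, k=2.0):
          # Combine relative standard uncertainties (in %) in quadrature and
          # expand with coverage factor k (top-down, single-laboratory style).
          u_combined = math.sqrt(cv_imprecision**2 + u_bias**2 + u_calibrator**2)
          return k * u_combined

      # Illustrative inputs only: long-term IQC CV, bias uncertainty from a CRM,
      # and the uncertainty of the calibrator-assigned values, all as percentages.
      print(expanded_uncertainty(4.8, 2.5, 1.9))   # expanded relative uncertainty, %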

  4. A direct-measurement technique for estimating discharge-chamber lifetime. [for ion thrusters

    Science.gov (United States)

    Beattie, J. R.; Garvin, H. L.

    1982-01-01

    The use of short-term measurement techniques for predicting the wearout of ion thrusters resulting from sputter-erosion damage is investigated. The laminar-thin-film technique is found to provide high precision erosion-rate data, although the erosion rates are generally substantially higher than those found during long-term erosion tests, so that the results must be interpreted in a relative sense. A technique for obtaining absolute measurements is developed using a masked-substrate arrangement. This new technique provides a means for estimating the lifetimes of critical discharge-chamber components based on direct measurements of sputter-erosion depths obtained during short-duration (approximately 1 hr) tests. Results obtained using the direct-measurement technique are shown to agree with sputter-erosion depths calculated for the plasma conditions of the test. The direct-measurement approach is found to be applicable to both mercury and argon discharge-plasma environments and will be useful for estimating the lifetimes of inert gas and extended performance mercury ion thrusters currently under development.

  5. Representivity of wind measurements for design wind speed estimations

    CSIR Research Space (South Africa)

    Goliger, Adam M

    2013-07-01

    Full Text Available of instrumentation sited according to World Meteorological Organization (WMO) requirements. With the advent of automatic weather station technology several decades ago, wind measurements have become much more cost-effective. While previously wind measurements were...

  6. Accuracy of modal wavefront estimation from eye transverse aberration measurements

    Science.gov (United States)

    Chyzh, Igor H.; Sokurenko, Vyacheslav M.

    2001-01-01

    The influence of random errors in the measurement of eye transverse aberrations on the accuracy of reconstructing the wave aberration, as well as the ametropia and astigmatism parameters, is investigated. The dependence of these errors on the ratio between the number of measurement points and the number of polynomial coefficients is found for different pupil locations of the measurement points. Recommendations are proposed for setting these ratios.

  7. Estimation of Aerodynamic Parameters in Conditions of Measurement

    Directory of Open Access Journals (Sweden)

    Htang Om Moung

    2017-01-01

    Full Text Available The paper discusses the problem of aircraft parameter identification in the presence of measurement noise. It is assumed that all the signals involved in the identification process are subject to measurement noise, that is, normally distributed random measurement errors. Simulation results are presented which show the relation between the noise standard deviations and the accuracy of identification.

  8. Effects of performance measure implementation on clinical manager and provider motivation.

    Science.gov (United States)

    Damschroder, Laura J; Robinson, Claire H; Francis, Joseph; Bentley, Douglas R; Krein, Sarah L; Rosland, Ann-Marie; Hofer, Timothy P; Kerr, Eve A

    2014-12-01

    Clinical performance measurement has been a key element of efforts to transform the Veterans Health Administration (VHA). However, there are a number of signs that current performance measurement systems used within and outside the VHA may be reaching the point of maximum benefit to care and in some settings, may be resulting in negative consequences to care, including overtreatment and diminished attention to patient needs and preferences. Our research group has been involved in a long-standing partnership with the office responsible for clinical performance measurement in the VHA to understand and develop potential strategies to mitigate the unintended consequences of measurement. Our aim was to understand how the implementation of diabetes performance measures (PMs) influences management actions and day-to-day clinical practice. This is a mixed methods study design based on quantitative administrative data to select study facilities and quantitative data from semi-structured interviews. Sixty-two network-level and facility-level executives, managers, front-line providers and staff participated in the study. Qualitative content analyses were guided by a team-based consensus approach using verbatim interview transcripts. A published interpretive motivation theory framework is used to describe potential contributions of local implementation strategies to unintended consequences of PMs. Implementation strategies used by management affect providers' response to PMs, which in turn potentially undermines provision of high-quality patient-centered care. These include: 1) feedback reports to providers that are dissociated from a realistic capability to address performance gaps; 2) evaluative criteria set by managers that are at odds with patient-centered care; and 3) pressure created by managers' narrow focus on gaps in PMs that is viewed as more punitive than motivating. Next steps include working with VHA leaders to develop and test implementation approaches to help

  9. Measuring self-control problems: a structural estimation

    NARCIS (Netherlands)

    Bucciol, A.

    2009-01-01

    We perform a structural estimation of the preference parameters in a buffer-stock consumption model augmented with temptation disutility. We adopt a two-stage Method of Simulated Moments methodology to match our simulated moments with those observed in the US Survey of Consumer Finances. To identify

  10. Effect of Smart Meter Measurements Data On Distribution State Estimation

    DEFF Research Database (Denmark)

    Pokhrel, Basanta Raj; Nainar, Karthikeyan; Bak-Jensen, Birgitte

    2018-01-01

    in the physical grid can impose significant stress not only on the communication infrastructure but also on the control algorithms. This paper proposes a methodology to analyze the real-time smart meter data needed from low voltage distribution grids and their applicability in distribution state estimation...

  11. Measuring the quality of provided services for patients with chronic kidney disease.

    Science.gov (United States)

    Bahadori, Mohammadkarim; Raadabadi, Mehdi; Heidari Jamebozorgi, Majid; Salesi, Mahmood; Ravangard, Ramin

    2014-09-01

    The healthcare organizations need to develop and implement quality improvement plans for their survival and success. Measuring quality in the competitive healthcare environment is an undeniable necessity for these organizations and will lead to improved patient satisfaction. This study aimed to measure the quality of services provided for patients with chronic kidney disease in Kerman in 2014. This cross-sectional, descriptive-analytic study was performed from 23 January 2014 to 14 February 2014 in four hemodialysis centers in Kerman. All of the patients on chronic hemodialysis (n = 195) who were referred to these four centers were selected and studied using the census method. The required data were collected using the SERVQUAL questionnaire, consisting of two parts: questions related to the patients' demographic characteristics, and 28 items to measure the patients' expectations and perceptions of the five dimensions of service quality, including tangibility, reliability, responsiveness, assurance, and empathy. The collected data were analyzed using SPSS 21.0 with the independent-samples t test, one-way ANOVA, and paired-samples t test. The results showed that the means of patients' expectations were greater than their perceptions of the quality of provided services in all dimensions, which indicated that there were gaps in all dimensions. The highest and lowest mean negative gaps were related to empathy (-0.52 ± 0.48) and tangibility (-0.29 ± 0.51). In addition, among the studied patients' demographic characteristics and the five dimensions of service quality, only the difference between the patients' income levels and the gap in assurance was statistically significant. The expectations of patients on hemodialysis were greater than their perceptions of the provided services. The healthcare providers and employees should pay more attention to the patients' opinions and comments and use their feedback to solve the workplace problems and
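
    For readers unfamiliar with the SERVQUAL gap calculation, the sketch below computes perception-minus-expectation gaps per dimension and a paired-samples t test for each, using made-up Likert scores rather than the study's data.

      import numpy as np
      from scipy import stats

      # Each row is one patient; columns are the five SERVQUAL dimensions.
      # Scores are illustrative 5-point Likert means per dimension (not study data).
      dimensions = ["tangibility", "reliability", "responsiveness", "assurance", "empathy"]
      expectations = np.array([[4.6, 4.7, 4.5, 4.8, 4.9],
                               [4.2, 4.5, 4.4, 4.6, 4.7],
                               [4.8, 4.9, 4.7, 4.9, 4.8]])
      perceptions  = np.array([[4.3, 4.2, 4.0, 4.4, 4.3],
                               [4.0, 4.1, 3.9, 4.2, 4.1],
                               [4.5, 4.4, 4.2, 4.5, 4.4]])

      gaps = perceptions - expectations            # negative values indicate unmet expectations
      for j, name in enumerate(dimensions):
          t, p = stats.ttest_rel(perceptions[:, j], expectations[:, j])   # paired-samples t test
          print(f"{name}: mean gap = {gaps[:, j].mean():+.2f}, p = {p:.3f}")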

  12. Measurements of the UVR protection provided by hats used at school.

    Science.gov (United States)

    Gies, Peter; Javorniczky, John; Roy, Colin; Henderson, Stuart

    2006-01-01

    The importance of protection against solar ultraviolet radiation (UVR) in childhood has led to SunSmart policies at Australian schools, in particular primary schools, where children are encouraged and in many cases required to wear hats at school. Hat styles change regularly and the UVR protection provided by some of the hat types currently used and recommended for sun protection by the various Australian state cancer councils had not been previously evaluated. The UVR protection of the hats was measured using UVR-sensitive polysulphone film badges attached to different facial sites on rotating headforms. The sun protection type hats included in this study were broad-brimmed hats, "bucket hats" and legionnaires hats. Baseball caps, which are very popular, were also included. The broad-brimmed hats and bucket hats provided the most UVR protection for the six different sites about the face and head. Legionnaires hats also provided satisfactory UVR protection, but the caps did not provide UVR protection to many of the facial sites. The highest measured UVR protection factors for facial sites other than the forehead were 8 to 10, indicating that, while some hats can be effective, they need to be used in combination with other forms of UVR protection.

  13. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children's Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)

    2014-07-15

    previously published pediatric patient doses that accounted for patient size in their dose calculation, and was found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusions: For organs fully covered within the scan volume, the average correlation of SSDE and organ absolute dose was found to be better than ±10%. In addition, this study provides a complete list of organ dose correlation factors (CF_SSDE,organ) for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE.
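
    In use, the methodology reduces to multiplying a patient's SSDE by an organ-specific correlation factor. The snippet below illustrates that single step with placeholder factors; the published CF values should be taken from the paper itself.

      # Minimal sketch of the organ-dose estimate described above:
      # organ dose ~= CF (organ-specific correlation factor) x SSDE.
      # The factor values below are placeholders, not the published CFs.
      ORGAN_CF = {"liver": 0.95, "kidneys": 1.00, "lungs": 0.90}   # hypothetical CF_SSDE,organ

      def organ_dose_mgy(ssde_mgy: float, organ: str) -> float:
          # Estimate organ absorbed dose (mGy) from a patient's SSDE (mGy).
          return ORGAN_CF[organ] * ssde_mgy

      print(organ_dose_mgy(12.0, "liver"))   # e.g. SSDE of 12 mGy -> ~11.4 mGy to the liver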

  14. Accuracy of Standing-Tree Volume Estimates Based on McClure Mirror Caliper Measurements

    Science.gov (United States)

    Noel D. Cost

    1971-01-01

    The accuracy of standing-tree volume estimates, calculated from diameter measurements taken by a mirror caliper and with sectional aluminum poles for height control, was compared with volume estimates calculated from felled-tree measurements. Twenty-five trees which varied in species, size, and form were used in the test. The results showed that two estimates of total...

  15. Radon measurements-discussion of error estimates for selected methods

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

    The main sources of uncertainty for grab sampling, short-term (charcoal canister) and long-term (track detector) measurements are: systematic bias of the reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration differ between the different kinds of instrumental measurements. The main sources of uncertainty for retrospective measurements conducted by the surface trap technique can be divided into two groups: errors in the measurement of surface 210Pb (210Po) activity, and uncertainties in the transfer from 210Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface trap retrospective technique can be decreased to 35%.

  16. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    OpenAIRE

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-01-01

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement, so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy use to how much energy would have been used in the absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the li...

  17. Nonparametric Estimation of Regression Parameters in Measurement Error Models

    Czech Academy of Sciences Publication Activity Database

    Ehsanes Saleh, A.K.M.D.; Picek, J.; Kalina, Jan

    2009-01-01

    Roč. 67, č. 2 (2009), s. 177-200 ISSN 0026-1424 Grant - others: GA AV ČR(CZ) IAA101120801; GA MŠk(CZ) LC06024 Institutional research plan: CEZ:AV0Z10300504 Keywords: asymptotic relative efficiency (ARE) * asymptotic theory * emaculate mode * Me model * R-estimation * Reliability ratio (RR) Subject RIV: BB - Applied Statistics, Operational Research

  18. Measures of serial extremal dependence and their estimation

    DEFF Research Database (Denmark)

    Davis, Richard A.; Mikosch, Thomas Valentin; Zhao, Yuwei

    2013-01-01

    extremal dependence is typically characterized by clusters of exceedances of high thresholds in the series. We start by discussing the notion of the extremal index of a univariate sequence, i.e. the reciprocal of the expected cluster size, which has attracted major attention in the extreme value literature... has attracted attention for modeling and statistical purposes. We apply the extremogram to max-stable processes. Finally, we discuss estimation of the extremogram both in the time and frequency domains.
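
    As a concrete illustration of the quantities mentioned above, the sketch below computes a sample extremogram and a simple blocks-type estimate of the extremal index (the reciprocal of the expected cluster size) for a univariate series; the threshold quantile, block length, and toy series are arbitrary choices for the example, not the paper's estimators.

      import numpy as np

      def sample_extremogram(x, lags, q=0.95):
          # Empirical extremogram rho(h) = P(X_{t+h} > u | X_t > u), with u the q-quantile.
          u = np.quantile(x, q)
          exceed = x > u
          return {h: np.mean(exceed[h:] & exceed[:-h]) / np.mean(exceed) for h in lags}

      def blocks_extremal_index(x, q=0.95, block_len=50):
          # Blocks estimator: (# blocks with at least one exceedance) / (total # exceedances),
          # i.e. an estimate of the reciprocal of the expected cluster size.
          u = np.quantile(x, q)
          exceed = x > u
          n_blocks = len(x) // block_len
          blocks = exceed[: n_blocks * block_len].reshape(n_blocks, block_len)
          return blocks.any(axis=1).sum() / max(exceed.sum(), 1)

      rng = np.random.default_rng(1)
      series = rng.standard_t(df=3, size=10_000)          # heavy-tailed toy series
      print(blocks_extremal_index(series), sample_extremogram(series, lags=[1, 2, 5]))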

  19. Seeing the Forest through the Trees: Citizen Scientists Provide Critical Data to Refine Aboveground Carbon Estimates in Restored Riparian Forests

    Science.gov (United States)

    Viers, J. H.

    2013-12-01

    Integrating citizen scientists into ecological informatics research can be difficult due to limited opportunities for meaningful engagement given vast data streams. This is particularly true for analysis of remotely sensed data, which are increasingly being used to quantify ecosystem services over space and time, and to understand how land uses deliver differing values to humans and thus inform choices about future human actions. Carbon storage and sequestration are such ecosystem services, and recent environmental policy advances in California (i.e., AB 32) have resulted in a nascent carbon market that is helping fuel the restoration of riparian forests in agricultural landscapes. Methods to inventory and monitor aboveground carbon for market accounting are increasingly relying on hyperspatial remotely sensed data, particularly the use of light detection and ranging (LiDAR) technologies, to estimate biomass. Because airborne discrete return LiDAR can inexpensively capture vegetation structural differences at high spatial resolution ( 1000 ha), its use is rapidly increasing, resulting in vast stores of point cloud and derived surface raster data. While established algorithms can quantify forest canopy structure efficiently, the highly complex nature of native riparian forests can result in highly uncertain estimates of biomass due to differences in composition (e.g., species richness, age class) and structure (e.g., stem density). This study presents the comparative results of standing carbon estimates refined with field data collected by citizen scientists at three different sites, each capturing a range of agricultural, remnant forest, and restored forest cover types. These citizen science data resolve uncertainty in composition and structure, and improve allometric scaling models of biomass and thus estimates of aboveground carbon. Results indicate that agricultural land and horticulturally restored riparian forests store similar amounts of aboveground carbon

  20. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    Science.gov (United States)

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
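
    For the blood alcohol case described in (ii), the calculation can be sketched as the root-mean-square relative deviation of the laboratory's proficiency results from the participant consensus means, expanded by a coverage factor. The values and the k = 2 factor below are illustrative assumptions, not data from the article.

      import numpy as np

      # Hypothetical proficiency-test history for blood alcohol (g/100 mL):
      # the laboratory's reported value vs. the all-participant mean for each test.
      lab_values      = np.array([0.081, 0.152, 0.247, 0.119, 0.098, 0.201])
      consensus_means = np.array([0.080, 0.150, 0.250, 0.121, 0.100, 0.198])

      # Work with relative differences so one estimate applies across concentrations.
      rel_diff = (lab_values - consensus_means) / consensus_means
      u_rel = np.sqrt(np.mean(rel_diff**2))      # RMS relative deviation as standard uncertainty
      U_rel = 2.0 * u_rel                        # expanded uncertainty, coverage factor k = 2

      print(f"standard uncertainty ~ {100*u_rel:.1f}%, expanded (k=2) ~ {100*U_rel:.1f}%")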

  1. Process measures or patient reported experience measures (PREMs) for comparing performance across providers? A study of measures related to access and continuity in Swedish primary care.

    Science.gov (United States)

    Glenngård, Anna H; Anell, Anders

    2018-01-01

    Aim To study (a) the covariation between patient reported experience measures (PREMs) and registered process measures of access and continuity when ranking providers in a primary care setting, and (b) whether registered process measures or PREMs provided more or less information about potential linkages between levels of access and continuity and explaining variables. Access and continuity are important objectives in primary care. They can be measured through registered process measures or PREMs. These measures do not necessarily converge in terms of outcomes. Patient views are affected by factors not necessarily reflecting quality of services. Results from surveys are often uncertain due to low response rates, particularly in vulnerable groups. The quality of process measures, on the other hand, may be influenced by registration practices and are often more easy to manipulate. With increased transparency and use of quality measures for management and governance purposes, knowledge about the pros and cons of using different measures to assess the performance across providers are important. Four regression models were developed with registered process measures and PREMs of access and continuity as dependent variables. Independent variables were characteristics of providers as well as geographical location and degree of competition facing providers. Data were taken from two large Swedish county councils. Findings Although ranking of providers is sensitive to the measure used, the results suggest that providers performing well with respect to one measure also tended to perform well with respect to the other. As process measures are easier and quicker to collect they may be looked upon as the preferred option. PREMs were better than process measures when exploring factors that contributed to variation in performance across providers in our study; however, if the purpose of comparison is continuous learning and development of services, a combination of PREMs and

  2. Measuring Healthcare Providers' Performances Within Managed Competition Using Multidimensional Quality and Cost Indicators.

    Science.gov (United States)

    Portrait, France R M; van der Galiën, Onno; Van den Berg, Bernard

    2016-04-01

    The Dutch healthcare system is in transition towards managed competition. In theory, a system of managed competition involves incentives for quality and efficiency of provided care. This is mainly because health insurers contract on behalf of their clients with healthcare providers on, potentially, quality and costs. The paper develops a strategy to comprehensively analyse available multidimensional data on quality and costs to assess and report on the relative performance of healthcare providers within managed competition. We had access to individual information on 2409 clients of 19 Dutch diabetes care groups on a broad range of (outcome and process related) quality and cost indicators. We carried out a cost-consequences analysis and corrected for differences in case mix to reduce incentives for risk selection by healthcare providers. There is substantial heterogeneity between diabetes care groups' performances as measured using multidimensional indicators on quality and costs. Better quality diabetes care can be achieved with lower or higher costs. Routine monitoring using multidimensional data on quality and costs merged at the individual level would allow a systematic and comprehensive analysis of healthcare providers' performances within managed competition. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Refining estimates of public health spending as measured in national health expenditure accounts: the Canadian experience.

    Science.gov (United States)

    Ballinger, Geoff

    2007-01-01

    The recent focus on public health stemming from, among other things, severe acute respiratory syndrome and avian flu has created an imperative to refine health-spending estimates in the Canadian Health Accounts. This article presents the Canadian experience in attempting to address the challenges associated with developing the needed taxonomies for systematically capturing, measuring, and analyzing the national investment in the Canadian public health system. The first phase of this process was completed in 2005, which was a 2-year project to estimate public health spending based on a more classic definition by removing the administration component of the previously combined public health and administration category. Comparing the refined public health estimate with recent data from the Organization for Economic Cooperation and Development still positions Canada with the highest share of total health expenditure devoted to public health than any other country reporting. The article also provides an analysis of the comparability of public health estimates across jurisdictions within Canada as well as a discussion of the recommendations for ongoing improvement of public health spending estimates. The Canadian Institute for Health Information is an independent, not-for-profit organization that provides Canadians with essential statistics and analysis on the performance of the Canadian health system, the delivery of healthcare, and the health status of Canadians. The Canadian Institute for Health Information administers more than 20 databases and registries, including Canada's Health Accounts, which tracks historically 40 categories of health spending by 5 sources of finance for 13 provincial and territorial jurisdictions. Until 2005, expenditure on public health services in the Canadian Health Accounts included measures to prevent the spread of communicable disease, food and drug safety, health inspections, health promotion, community mental health programs, public

  4. Practical estimation of the uncertainty of analytical measurement standards

    NARCIS (Netherlands)

    Peters, R.J.B.; Elbers, I.J.W.; Klijnstra, M.D.; Stolker, A.A.M.

    2011-01-01

    Nowadays, a lot of time and resources are used to determine the quality of goods and services. As a consequence, the quality of measurements themselves, e.g., the metrological traceability of the measured quantity values is essential to allow a proper evaluation of the results with regard to

  5. Estimating Radar Velocity using Direction of Arrival Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Horndt, Volker [General Atomics Aeronautical Systems, Inc., San Diego, CA (United States); Bickel, Douglas Lloyd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Naething, Richard M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    Direction of Arrival (DOA) measurements, as with a monopulse antenna, can be compared against Doppler measurements in a Synthetic Aperture Radar (SAR) image to determine an aircraft's forward velocity as well as its crab angle, to assist the aircraft's navigation as well as to improve high-performance SAR image formation and spatial calibration.

  6. Wind Speed Preview Measurement and Estimation for Feedforward Control of Wind Turbines

    Science.gov (United States)

    Simley, Eric J.

    Wind turbines typically rely on feedback controllers to maximize power capture in below-rated conditions and regulate rotor speed during above-rated operation. However, measurements of the approaching wind provided by Light Detection and Ranging (lidar) can be used as part of a preview-based, or feedforward, control system in order to improve rotor speed regulation and reduce structural loads. But the effectiveness of preview-based control depends on how accurately lidar can measure the wind that will interact with the turbine. In this thesis, lidar measurement error is determined using a statistical frequency-domain wind field model including wind evolution, or the change in turbulent wind speeds between the time they are measured and when they reach the turbine. Parameters of the National Renewable Energy Laboratory (NREL) 5-MW reference turbine model are used to determine measurement error for a hub-mounted circularly-scanning lidar scenario, based on commercially-available technology, designed to estimate rotor effective uniform and shear wind speed components. By combining the wind field model, lidar model, and turbine parameters, the optimal lidar scan radius and preview distance that yield the minimum mean square measurement error, as well as the resulting minimum achievable error, are found for a variety of wind conditions. With optimized scan scenarios, it is found that relatively low measurement error can be achieved, but the attainable measurement error largely depends on the wind conditions. In addition, the impact of the induction zone, the region upstream of the turbine where the approaching wind speeds are reduced, as well as turbine yaw error on measurement quality is analyzed. In order to minimize the mean square measurement error, an optimal measurement prefilter is employed, which depends on statistics of the correlation between the preview measurements and the wind that interacts with the turbine. However, because the wind speeds encountered by

  7. A comparative study of satellite estimation for solar insolation in Albania with ground measurements

    International Nuclear Information System (INIS)

    Mitrushi, Driada; Berberi, Pëllumb; Muda, Valbona; Buzra, Urim; Bërdufi, Irma; Topçiu, Daniela

    2016-01-01

    The main objective of this study is to compare data provided by the NASA database with available ground data for regions covered by the national meteorological network. NASA estimates that their measurements of average daily solar radiation have a root-mean-square deviation (RMSD) error of 35 W/m² (roughly 20% inaccuracy). Unfortunately, valid data from meteorological stations for the regions of interest are quite rare in Albania. In these cases, use of the NASA solar radiation database would be a satisfactory solution for different case studies. Using a statistical method allows the most probable margins between the two sources of data to be determined. Comparison of mean insolation data provided by NASA with ground data shows that the ground data for mean insolation are, in all cases, underestimated compared with the data provided by NASA. The conversion factor is 1.149.
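
    The comparison itself is a straightforward paired calculation of the root-mean-square deviation and the mean ratio between the two data sources, as sketched below with illustrative values (the 1.149 factor quoted above is the study's own result and is not reproduced here).

      import numpy as np

      # Paired daily mean insolation values (kWh/m2/day) for one site; values illustrative.
      nasa_sse = np.array([4.9, 5.3, 6.1, 6.8, 7.0, 6.4, 5.2])
      ground   = np.array([4.2, 4.6, 5.4, 5.9, 6.1, 5.6, 4.5])

      rmsd  = np.sqrt(np.mean((nasa_sse - ground) ** 2))        # root-mean-square deviation
      ratio = np.mean(nasa_sse) / np.mean(ground)               # multiplicative conversion factor
      print(f"RMSD = {rmsd:.2f} kWh/m2/day, NASA/ground ratio = {ratio:.3f}")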

  8. Estimate of rain evaporation rates from dual-wavelength lidar measurements: comparison against a model analytical solution

    Science.gov (United States)

    Lolli, Simone; Di Girolamo, Paolo; Demoz, Belay; Li, Xiaowen; Welton, Ellsworth J.

    2018-04-01

    Rain evaporation significantly contributes to moisture and heat cloud budgets. In this paper, we illustrate an approach to estimate the median volume raindrop diameter and the rain evaporation rate profiles from dual-wavelength lidar measurements. These observational results are compared with those provided by a model analytical solution. We made use of measurements from the multi-wavelength Raman lidar BASIL.

  9. An angle-dependent estimation of CT x-ray spectrum from rotational transmission measurements

    International Nuclear Information System (INIS)

    Lin, Yuan; Samei, Ehsan; Ramirez-Giraldo, Juan Carlos; Gauthier, Daniel J.; Stierstorfer, Karl

    2014-01-01

    Purpose: Computed tomography (CT) performance as well as dose and image quality is directly affected by the x-ray spectrum. However, the current assessment approaches of the CT x-ray spectrum require costly measurement equipment and complicated operational procedures, and are often limited to the spectrum corresponding to the center of rotation. In order to address these limitations, the authors propose an angle-dependent estimation technique, where the incident spectra across a wide range of angular trajectories can be estimated accurately with only a single phantom and a single axial scan in the absence of the knowledge of the bowtie filter. Methods: The proposed technique uses a uniform cylindrical phantom, made of ultra-high-molecular-weight polyethylene and positioned in an off-centered geometry. The projection data acquired with an axial scan have a twofold purpose. First, they serve as a reflection of the transmission measurements across different angular trajectories. Second, they are used to reconstruct the cross sectional image of the phantom, which is then utilized to compute the intersection length of each transmission measurement. With each CT detector element recording a range of transmission measurements for a single angular trajectory, the spectrum is estimated for that trajectory. A data conditioning procedure is used to combine information from hundreds of collected transmission measurements to accelerate the estimation speed, to reduce noise, and to improve estimation stability. The proposed spectral estimation technique was validated experimentally using a clinical scanner (Somatom Definition Flash, Siemens Healthcare, Germany) with spectra provided by the manufacturer serving as the comparison standard. Results obtained with the proposed technique were compared against those obtained from a second conventional transmission measurement technique with two materials (i.e., Cu and Al). After validation, the proposed technique was applied to measure

  10. Estimation of Apollo lunar dust transport using optical extinction measurements

    OpenAIRE

    Lane, John E.; Metzger, Philip T.

    2015-01-01

    A technique to estimate mass erosion rate of surface soil during landing of the Apollo Lunar Module (LM) and total mass ejected due to the rocket plume interaction is proposed and tested. The erosion rate is proportional to the product of the second moment of the lofted particle size distribution N(D), and third moment of the normalized soil size distribution S(D), divided by the integral of S(D)D^2/v(D), where D is particle diameter and v(D) is the vertical component of particle velocity. Th...
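
    The proportionality stated above can be evaluated numerically once the three distributions are specified. The sketch below does this with simple assumed forms for N(D), S(D), and v(D); the functional forms, grids, and constants are placeholders, not the paper's fitted distributions.

      import numpy as np

      # Illustrative inputs only: diameters in micrometres on a uniform grid.
      D = np.linspace(1.0, 100.0, 500)
      dD = D[1] - D[0]
      N = 1.0e4 * D**-3.0                   # assumed lofted particle size distribution N(D)
      S = D**2 * np.exp(-D / 20.0)          # assumed soil size distribution (then normalized)
      S = S / np.sum(S * dD)
      v = 2.0 + 0.05 * D                    # assumed vertical particle velocity v(D), m/s

      # Erosion rate proportional to: [2nd moment of N] * [3rd moment of S] / integral(S * D^2 / v)
      second_moment_N = np.sum(N * D**2 * dD)
      third_moment_S  = np.sum(S * D**3 * dD)
      denominator     = np.sum(S * D**2 / v * dD)
      erosion_rate_proxy = second_moment_N * third_moment_S / denominator
      print(erosion_rate_proxy)             # proportional value; the omitted constant sets the units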

  11. Poverty and Equity: Measurement, Policy, and Estimation with DAD ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2006-01-01

    Jan 1, 2006 ... Part III presents and develops recent methods for testing the ... a unique and broad mix of concepts, measurement methods, statistical tools, software, ...

  12. Front-Crawl Instantaneous Velocity Estimation Using a Wearable Inertial Measurement Unit

    Directory of Open Access Journals (Sweden)

    Kamiar Aminian

    2012-09-01

    Full Text Available Monitoring performance is a crucial task for elite sports during both training and competition. Velocity is the key performance parameter in swimming, but swimming performance evaluation remains immature due to the complexity of measurements in water. The purpose of this study is to use a single inertial measurement unit (IMU) to estimate front crawl velocity. Thirty swimmers, equipped with an IMU on the sacrum, each performed four different velocity trials of 25 m in ascending order. A tethered speedometer was used as the velocity measurement reference. Deployment of biomechanical constraints of front crawl locomotion and a change detection framework on the acceleration signal paved the way for a drift-free integration of forward acceleration from the IMU to estimate the swimmer's velocity. A difference of 0.6 ± 5.4 cm·s−1 in mean cycle velocity and an RMS difference of 11.3 cm·s−1 in instantaneous velocity estimation were observed between the IMU and the reference. The most important contribution of the study is a new practical tool for objective evaluation of swimming performance. A single body-worn IMU provides timely feedback for coaches and sport scientists without any complicated setup or restraining the swimmer's natural technique.
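
    A heavily simplified stand-in for the drift-free integration idea is sketched below: forward acceleration is integrated cycle by cycle after removing the per-cycle mean, which forces zero net velocity change over each stroke cycle. The actual method described above relies on biomechanical constraints and change detection; this sketch, including the toy signal and cycle marks, is illustrative only.

      import numpy as np

      def velocity_within_cycles(acc_forward, fs, cycle_starts):
          # Integrate forward acceleration to instantaneous velocity variation,
          # resetting drift at each detected stroke cycle by removing the cycle-mean
          # acceleration. Add the mean lap velocity (e.g. from timing) to obtain
          # absolute velocity.
          v = np.zeros_like(acc_forward, dtype=float)
          bounds = list(cycle_starts) + [len(acc_forward)]
          for a, b in zip(bounds[:-1], bounds[1:]):
              seg = acc_forward[a:b] - acc_forward[a:b].mean()   # zero net change per cycle
              v[a:b] = np.cumsum(seg) / fs
          return v

      # Toy usage: a 1.5 Hz sinusoidal intra-cycle acceleration sampled at 100 Hz.
      fs = 100.0
      t = np.arange(0.0, 10.0, 1.0 / fs)
      acc = 1.2 * np.sin(2 * np.pi * 1.5 * t) + 0.05          # a small bias would otherwise cause drift
      starts = np.arange(0, len(t), int(fs / 1.5))            # assume cycle starts have been detected
      v_var = velocity_within_cycles(acc, fs, starts)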

  13. Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2017-03-01

    Full Text Available The measurement errors of a capacitive voltage transformer (CVT) are related to its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, dielectric breakdown, etc., exert combined effects on the capacitive divider's insulation characteristics, leading to fluctuations in the equivalent parameters which result in measurement error. This paper proposes an equivalent circuit model to represent a CVT which incorporates the insulation characteristics of the capacitive divider. After software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of the insulation parameters in a CVT will cause a noticeable measurement error. From field tests and calculation, equivalent capacitance mainly affects the magnitude error, while dielectric loss mainly affects the phase error. As the capacitance changes by 0.2%, the magnitude error can reach −0.2%. As the dielectric loss factor changes by 0.2%, the phase error can reach 5′. An increase of equivalent capacitance and dielectric loss factor in the high-voltage capacitor will cause a positive real power measurement error. An increase of equivalent capacitance and dielectric loss factor in the low-voltage capacitor will cause a negative real power measurement error.

  14. Methods for Measuring and Estimating Methane Emission from Ruminants

    Directory of Open Access Journals (Sweden)

    Jørgen Madsen

    2012-04-01

    Full Text Available This paper is a brief introduction to the different methods used to quantify the enteric methane emission from ruminants. A thorough knowledge of the advantages and disadvantages of these methods is very important in order to plan experiments, understand and interpret experimental results, and compare them with other studies. The aim of the paper is to describe the principles, advantages and disadvantages of the different methods used to quantify the enteric methane emission from ruminants. The best-known methods (chambers/respiration chambers, the SF6 technique and the in vitro gas production technique) and the newer CO2 methods are described. Model estimations, which are used to calculate national budgets and single-cow enteric emissions from intake and diet composition, are also discussed. Other methods under development, such as the micrometeorological technique, combined feeder and CH4 analyzer, and proxy methods, are briefly mentioned. The method of choice for estimating enteric methane emission depends on the aim, equipment, knowledge, time and money available, but interpretation of results obtained with a given method can be improved if knowledge about the disadvantages and advantages is used in the planning of experiments.

  15. Effects of Measurement Errors on Individual Tree Stem Volume Estimates for the Austrian National Forest Inventory

    Science.gov (United States)

    Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens. Schadauer

    2014-01-01

    National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...
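
    The propagation of diameter and height measurement errors into volume can be illustrated with a small Monte Carlo sketch. The generic form-factor volume model and the error standard deviations below are assumptions for illustration, not the Austrian NFI volume functions or its measured error distributions.

      import numpy as np

      def volume_m3(dbh_cm, height_m, form_factor=0.5):
          # Generic stem volume model v = f * (pi/4) * d^2 * h (an assumption, not the NFI model).
          d_m = dbh_cm / 100.0
          return form_factor * np.pi / 4.0 * d_m**2 * height_m

      def volume_uncertainty(dbh_cm, height_m, sd_dbh_cm=0.5, sd_height_m=1.0, n=100_000, seed=0):
          # Propagate zero-mean Gaussian measurement errors through the volume model by Monte Carlo.
          rng = np.random.default_rng(seed)
          d = rng.normal(dbh_cm, sd_dbh_cm, n)
          h = rng.normal(height_m, sd_height_m, n)
          v = volume_m3(d, h)
          return v.mean(), v.std()

      mean_v, sd_v = volume_uncertainty(35.0, 28.0)
      print(f"volume ~ {mean_v:.3f} m3, sd from measurement error ~ {sd_v:.3f} m3")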

  16. The estimation of uncertainty of radioactivity measurement on gamma counters in radiopharmacy

    International Nuclear Information System (INIS)

    Jovanovic, M.S.; Orlic, M.; Vranjes, S.; Stamenkovic, Lj. (e-mail of corresponding author: nikijov@vin.bg.ac.yu)

    2005-01-01

    In this paper the estimation of the uncertainty of radioactivity measurement on a gamma counter in the Laboratory for Radioisotopes is presented. The uncertainty components that are important for these measurements are identified and taken into account when estimating the uncertainty of measurement. (author)

  17. Estimation and measurement of porosity change in cement paste

    International Nuclear Information System (INIS)

    Lee, Eunyong; Jung, Haeryong; Kwon, Ki-jung; Kim, Do-Gyeum

    2011-01-01

    Laboratory-scale experiments were performed to understand the porosity change of cement pastes. The cement pastes were prepared using commercially available Type-I ordinary Portland cement (OPC). When the cement pastes were exposed to water, their porosity sharply increased; however, a slow decrease of porosity was observed as the dissolution period was extended beyond 50 days. As expected, the dissolution reaction was significantly influenced by the w/c ratio and the ionic strength of the solution. A thermodynamic model was applied to simulate the porosity change of the cement pastes, which was strongly dependent on depth within the paste: porosity increased at the surface due to dissolution of hydration products such as portlandite, ettringite and CSH, whereas a decrease of porosity was estimated inside the cement pastes due to the precipitation of cement minerals. (author)

  18. Transport parameter estimation from lymph measurements and the Patlak equation.

    Science.gov (United States)

    Watson, P D; Wolf, M B

    1992-01-01

    Two methods of estimating protein transport parameters for plasma-to-lymph transport data are presented. Both use IBM-compatible computers to obtain least-squares parameters for the solvent drag reflection coefficient and the permeability-surface area product using the Patlak equation. A matrix search approach is described, and the speed and convenience of this are compared with a commercially available gradient method. The results from both of these methods were different from those of a method reported by Reed, Townsley, and Taylor [Am. J. Physiol. 257 (Heart Circ. Physiol. 26): H1037-H1041, 1989]. It is shown that the Reed et al. method contains a systematic error. It is also shown that diffusion always plays an important role for transmembrane transport at the exit end of a membrane channel under all conditions of lymph flow rate and that the statement that diffusion becomes zero at high lymph flow rate depends on a mathematical definition of diffusion.
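
    For orientation, a minimal fitting sketch along the lines described above: it assumes the commonly used Patlak relation for the lymph-to-plasma concentration ratio, CL/CP = (1 − σ) / (1 − σ·exp(−Pe)) with Pe = JV(1 − σ)/PS, and fits σ and PS to hypothetical lymph-flow data by least squares. The data values and the SciPy-based search are illustrative, not the authors' matrix-search or gradient code.

      import numpy as np
      from scipy.optimize import least_squares

      def patlak_ratio(jv, sigma, ps):
          """Steady-state lymph/plasma concentration ratio, CL/CP, from the Patlak relation."""
          pe = jv * (1.0 - sigma) / ps                  # Peclet number
          return (1.0 - sigma) / (1.0 - sigma * np.exp(-pe))

      # Hypothetical plasma-to-lymph data: lymph flow rates and measured CL/CP ratios.
      jv_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
      ratio_obs = np.array([0.78, 0.62, 0.45, 0.33, 0.27])

      def residuals(params):
          sigma, ps = params
          return patlak_ratio(jv_obs, sigma, ps) - ratio_obs

      fit = least_squares(residuals, x0=[0.5, 1.0], bounds=([0.0, 1e-6], [0.999, np.inf]))
      sigma_hat, ps_hat = fit.x
      print(f"sigma = {sigma_hat:.3f}, PS = {ps_hat:.3f}")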

  19. Estimating Turbulence Statistics and Parameters from Lidar Measurements. Remote Sensing Summer School

    DEFF Research Database (Denmark)

    Sathe, Ameya

    This report is prepared as a written contribution to the Remote Sensing Summer School, organized by the Department of Wind Energy, Technical University of Denmark. It provides an overview of the state of the art in estimating turbulence statistics from lidar measurements...... configuration. The so-called Velocity Azimuth Display (VAD) and the Doppler Beam Swinging (DBS) methods of post-processing the lidar data are investigated in greater detail, partly due to their wide use in commercial lidars. It is demonstrated that the VAD or DBS techniques result in introducing significant...

  20. New model for solar radiation estimation from measured air

    African Journals Online (AJOL)

    HOD

    RMSE) and correlation ... countries due to the unavailability of measured data in place [3-5]. ... models were used to predict solar radiation in Nigeria by. [12-15]. However ..... "Comparison of Gene Expression Programming with neuro-fuzzy and ...

  1. Distribution Line Parameter Estimation Under Consideration of Measurement Tolerances

    DEFF Research Database (Denmark)

    Prostejovsky, Alexander; Gehrke, Oliver; Kosek, Anna Magdalena

    2016-01-01

    conductance that the absolute compensated error is −1.05% and −1.07% for both representations, as opposed to the expected uncompensated error of −79.68%. Identification of a laboratory distribution line using real measurement data grid yields a deviation of 6.75% and 4.00%, respectively, from a calculation...

  2. Agreement between estimated and measured heights and weights ...

    African Journals Online (AJOL)

    index (BMI = kg/m2) and require accurate recording of a patient's height and weight.1. In reality, however, patients often cannot stand up straight for accurate height measurement, or are unable to step on a scale. In such cases, height and weight values are often obtained from the patient or their relatives, who either do not ...

  3. Improved flow velocity estimates from moving-boat ADCP measurements

    NARCIS (Netherlands)

    Vermeulen, B.; Hoitink, A.J.F.; Sassi, M.G.

    2014-01-01

    Acoustic Doppler current profilers (ADCPs) are the current standard for flow measurements in large-scale open water systems. Existing techniques to process vessel-mounted ADCP data assume homogeneous or linearly changing flow between the acoustic beams. This assumption is likely to fail but is

  4. Estimating Bandwidth Requirements using Flow-level Measurements

    NARCIS (Netherlands)

    Bruyère, P.; de Oliveira Schmidt, R.; Sperotto, Anna; Sadre, R.; Pras, Aiko

    Bandwidth provisioning is an important task of network management and it is done aiming to meet desired levels of quality of service. Current practices of provisioning are mostly based on rules-of-thumb and use coarse traffic measurements that may lead to problems of under and over dimensioning of

  5. Estimation of Live Weight of Calves from Body Measurements within ...

    African Journals Online (AJOL)

    All phenotypic correlations between body measurements were positive and significant (P<0.001). The highest correlation coefficient was found between chest girth and body weight. The polynomial equation using chest girth as an independent variable predicted body weight more accurately within breed as compared to the ...

  6. Measurement reduction for mutual coupling calibration in DOA estimation

    Science.gov (United States)

    Aksoy, Taylan; Tuncer, T. Engin

    2012-01-01

    Mutual coupling is an important source of error in antenna arrays that should be compensated for in super-resolution direction-of-arrival (DOA) algorithms, such as the Multiple Signal Classification (MUSIC) algorithm. A crucial step in array calibration is the determination of the mutual coupling coefficients for the antenna array. In this paper, a system-theoretic approach is presented for the mutual coupling characterization of antenna arrays. The comprehension and implementation of this approach is simple, leading to further advantages in reducing the number of calibration measurements. In this context, a measurement reduction method for antenna arrays with omni-directional and identical elements is proposed, based on the symmetry planes of the array geometry. The proposed method significantly decreases the number of measurements required during the calibration process. The method is evaluated using different array types whose responses and mutual coupling characteristics are obtained through numerical electromagnetic simulations. It is shown that a single calibration measurement is sufficient for uniform circular arrays. Certain important and interesting characteristics observed during the experiments are outlined.

  7. Estimation Of Body Weight From Linear Body Measurements In Two ...

    African Journals Online (AJOL)

    The prediction of body weight from body girth, keel length and thigh length was studied using one hundred Ross and one hundred Anak Titan broilers. Data were collected on the birds from day-old to 9 weeks of age. Body measurement was regressed against body weight at 9 weeks of age using simple linear and ...

  8. Height estimations based on eye measurements throughout a gait cycle

    DEFF Research Database (Denmark)

    Yang, Sylvia X M; Larsen, Peter K; Alkjær, Tine

    2014-01-01

    (EH) measurement, on the other hand, is less prone to concealment. The purpose of the present study was to investigate: (1) how the eye height varies during the gait cycle, and (2) how the eye height changes with head position. The eyes were plotted manually in APAS for 16 test subjects during...

  9. Underwater Acoustic Measurements to Estimate Wind and Rainfall in the Mediterranean Sea

    Directory of Open Access Journals (Sweden)

    Sara Pensieri

    2015-01-01

    Full Text Available Oceanic ambient noise measurements can be analyzed to obtain qualitative and quantitative information about wind and rainfall phenomena over the ocean, filling the existing gap in reliable meteorological observations at sea. The Ligurian Sea Acoustic Experiment was designed to collect long-term synergistic observations from a passive acoustic recorder and surface sensors (i.e., a buoy-mounted rain gauge and anemometer and a weather radar) to support error analysis of the rainfall-rate and wind-speed quantification techniques developed in past studies. The study period included a combination of high and low wind and rainfall episodes and two storm events that caused floods in the vicinity of La Spezia and in the city of Genoa in 2011. The availability of high-resolution in situ meteorological data allows the data processing techniques to be improved so as to detect and, especially, to provide effective estimates of wind and rainfall at sea. Results show a very good correspondence between the estimates provided by the passive acoustic recorder algorithms and in situ observations for both rainfall and wind, and demonstrate the potential of using measurements provided by passive acoustic instruments in the open sea for early warning of approaching coastal storms, which for the Mediterranean coastal areas constitute one of the main causes of recurrent floods.

  10. Saving lives in health: global estimates and country measurement.

    Directory of Open Access Journals (Sweden)

    Daniel Low-Beer

    2013-10-01

    Full Text Available Daniel Low-Beer and colleagues provide a response from The Global Fund on the PLOS Medicine article by David McCoy and colleagues critiquing their lives saved assessment models. Please see later in the article for the Editors' Summary.

  11. Estimation of uncertainty bounds for individual particle image velocimetry measurements from cross-correlation peak ratio

    International Nuclear Information System (INIS)

    Charonko, John J; Vlachos, Pavlos P

    2013-01-01

    Numerous studies have established firmly that particle image velocimetry (PIV) is a robust method for non-invasive, quantitative measurements of fluid velocity, and that when carefully conducted, typical measurements can accurately detect displacements in digital images with a resolution well below a single pixel (in some cases well below a hundredth of a pixel). However, to date, these estimates have only been able to provide guidance on the expected error for an average measurement under specific image quality and flow conditions. This paper demonstrates a new method for estimating the uncertainty bounds to within a given confidence interval for a specific, individual measurement. Here, cross-correlation peak ratio, the ratio of primary to secondary peak height, is shown to correlate strongly with the range of observed error values for a given measurement, regardless of flow condition or image quality. This relationship is significantly stronger for phase-only generalized cross-correlation PIV processing, while the standard correlation approach showed weaker performance. Using an analytical model of the relationship derived from synthetic data sets, the uncertainty bounds at a 95% confidence interval are then computed for several artificial and experimental flow fields, and the resulting errors are shown to match closely to the predicted uncertainties. While this method stops short of being able to predict the true error for a given measurement, knowledge of the uncertainty level for a PIV experiment should provide great benefits when applying the results of PIV analysis to engineering design studies and computational fluid dynamics validation efforts. Moreover, this approach is exceptionally simple to implement and requires negligible additional computational cost. (paper)
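
    The central quantity in this approach, the primary-to-secondary correlation peak ratio, can be computed from a correlation plane as in the minimal sketch below; the synthetic images and the simple FFT correlation are illustrative stand-ins for a real PIV evaluation and do not reproduce the paper's phase-only (generalized) correlation or its calibrated uncertainty model.

      import numpy as np

      def peak_ratio(corr, exclude=2):
          """Ratio of primary to secondary peak height in a cross-correlation plane."""
          c = corr.copy()
          i, j = np.unravel_index(np.argmax(c), c.shape)
          primary = c[i, j]
          # Blank out a small neighbourhood around the primary peak, then find the next one.
          c[max(0, i - exclude):i + exclude + 1, max(0, j - exclude):j + exclude + 1] = -np.inf
          secondary = c.max()
          return primary / secondary

      rng = np.random.default_rng(0)
      a = rng.random((32, 32))
      b = np.roll(a, shift=(3, 5), axis=(0, 1)) + 0.05 * rng.random((32, 32))  # shifted + noise

      # Standard FFT-based cross-correlation of the two interrogation windows.
      corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
      print(f"peak ratio: {peak_ratio(corr):.2f}")   # higher ratio -> lower expected uncertainty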

  12. A decision tree model to estimate the value of information provided by a groundwater quality monitoring network

    Science.gov (United States)

    Khader, A. I.; Rosenberg, D. E.; McKee, M.

    2013-05-01

    Groundwater contaminated with nitrate poses a serious health risk to infants when this contaminated water is used for culinary purposes. To avoid this health risk, people need to know whether their culinary water is contaminated or not. Therefore, there is a need to design an effective groundwater monitoring network, acquire information on groundwater conditions, and use acquired information to inform management options. These actions require time, money, and effort. This paper presents a method to estimate the value of information (VOI) provided by a groundwater quality monitoring network located in an aquifer whose water poses a spatially heterogeneous and uncertain health risk. A decision tree model describes the structure of the decision alternatives facing the decision-maker and the expected outcomes from these alternatives. The alternatives include (i) ignore the health risk of nitrate-contaminated water, (ii) switch to alternative water sources such as bottled water, or (iii) implement a previously designed groundwater quality monitoring network that takes into account uncertainties in aquifer properties, contaminant transport processes, and climate (Khader, 2012). The VOI is estimated as the difference between the expected costs of implementing the monitoring network and the lowest-cost uninformed alternative. We illustrate the method for the Eocene Aquifer, West Bank, Palestine, where methemoglobinemia (blue baby syndrome) is the main health problem associated with the principal contaminant nitrate. The expected cost of each alternative is estimated as the weighted sum of the costs and probabilities (likelihoods) associated with the uncertain outcomes resulting from the alternative. Uncertain outcomes include actual nitrate concentrations in the aquifer, concentrations reported by the monitoring system, whether people abide by manager recommendations to use/not use aquifer water, and whether people get sick from drinking contaminated water. Outcome costs
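
    As a toy illustration of the expected-cost comparison described above, the sketch below evaluates two uninformed alternatives and a monitoring alternative on a two-branch uncertainty (aquifer contaminated or not); all probabilities and costs are hypothetical, and the real decision tree in the paper has many more branches (reported concentrations, compliance, illness).

      # Hypothetical inputs for a two-branch decision tree.
      p_contaminated = 0.3      # prior probability the aquifer water is unsafe
      cost_illness   = 100.0    # cost incurred if contaminated water is used (arbitrary units)
      cost_bottled   = 20.0     # cost of always switching to bottled water
      cost_network   = 8.0      # cost of building/operating the monitoring network
      p_detect       = 0.9      # probability the network detects contamination when present

      # Expected costs of the uninformed alternatives.
      ec_ignore = p_contaminated * cost_illness      # use aquifer water regardless
      ec_switch = cost_bottled                       # always use bottled water

      # Monitoring: switch only when contamination is reported, otherwise use aquifer water
      # (false alarms from the network are neglected in this toy model).
      ec_monitor = (cost_network
                    + p_contaminated * (p_detect * cost_bottled
                                        + (1 - p_detect) * cost_illness)
                    + (1 - p_contaminated) * 0.0)

      best_uninformed = min(ec_ignore, ec_switch)
      voi = best_uninformed - ec_monitor             # value of information provided by the network
      print(f"E[ignore]={ec_ignore:.1f}  E[switch]={ec_switch:.1f}  "
            f"E[monitor]={ec_monitor:.1f}  VOI={voi:.1f}")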

  13. The health system burden of chronic disease care: an estimation of provider costs of selected chronic diseases in Uganda.

    Science.gov (United States)

    Settumba, Stella Nalukwago; Sweeney, Sedona; Seeley, Janet; Biraro, Samuel; Mutungi, Gerald; Munderi, Paula; Grosskurth, Heiner; Vassall, Anna

    2015-06-01

    To explore the chronic disease services in Uganda: their level of utilisation, the total service costs and unit costs per visit. Full financial and economic cost data were collected from 12 facilities in two districts, from the provider's perspective. A combination of ingredients-based and step-down allocation costing approaches was used. The diseases under study were diabetes, hypertension, chronic obstructive pulmonary disease (COPD), epilepsy and HIV infection. Data were collected through a review of facility records, direct observation and structured interviews with health workers. Provision of chronic care services was concentrated at higher-level facilities. Excluding drugs, the total costs for NCD care fell below 2% of total facility costs. Unit costs per visit varied widely, both across different levels of the health system, and between facilities of the same level. This variability was driven by differences in clinical and drug prescribing practices. Most patients reported directly to higher-level facilities, bypassing nearby peripheral facilities. NCD services in Uganda are underfunded particularly at peripheral facilities. There is a need to estimate the budget impact of improving NCD care and to standardise treatment guidelines. © 2015 The Authors. Tropical Medicine & International Health Published by John Wiley & Sons Ltd.

  14. Evaluating uncertainty estimates in hydrologic models: borrowing measures from the forecast verification community

    Directory of Open Access Journals (Sweden)

    K. J. Franz

    2011-11-01

    Full Text Available The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community still performs model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we review existing probabilistic forecast verification methods and apply them to evaluate and compare model ensembles produced from two different parameter uncertainty estimation methods: Generalized Likelihood Uncertainty Estimation (GLUE) and the Shuffled Complex Evolution Metropolis (SCEM) algorithm. Model ensembles are generated for the National Weather Service SACramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the presented probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of model performance associated with parameter uncertainty. Application of these methods requires no information beyond what is already available as part of traditional model validation methodology, and it considers the entire ensemble, or uncertainty range, in the approach.
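
    One widely used probabilistic verification score of the kind surveyed here is the continuous ranked probability score (CRPS); the sketch below computes the sample CRPS of an ensemble against an observation using the standard energy-form identity. It is a generic illustration, not the specific metric set used in the paper.

      import numpy as np

      def crps_ensemble(members, obs):
          """Sample CRPS of an ensemble forecast against a scalar observation.

          CRPS = mean|x_i - y| - 0.5 * mean|x_i - x_j|; lower is better.
          """
          x = np.asarray(members, dtype=float)
          term1 = np.mean(np.abs(x - obs))
          term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
          return term1 - term2

      ensemble = np.array([12.1, 9.8, 11.4, 10.6, 13.0])   # hypothetical streamflow members (m3/s)
      observation = 11.2
      print(f"CRPS = {crps_ensemble(ensemble, observation):.3f} m3/s")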

  15. Providing hierarchical approach for measuring supply chain performance using AHP and DEMATEL methodologies

    Directory of Open Access Journals (Sweden)

    Ali Najmi

    2010-06-01

    Full Text Available Measuring the performance of a supply chain is normally a function of various parameters. Such a problem often involves a multiple-criteria decision-making (MCDM) problem in which different criteria need to be defined and calculated properly. During the past two decades, the Analytic Hierarchy Process (AHP) and DEMATEL have been among the most popular MCDM approaches for prioritizing various attributes. This paper uses a new methodology, a combination of AHP and DEMATEL, to rank the various parameters affecting the performance of the supply chain. DEMATEL is used to understand the relationships between the comparison metrics, and AHP is used for their integration to provide a single value for the overall performance.
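
    For readers unfamiliar with the AHP step, the sketch below derives priority weights from a pairwise comparison matrix using the principal-eigenvector method and checks the consistency ratio; the 3x3 comparison values are hypothetical, and the DEMATEL influence step is not shown.

      import numpy as np

      # Hypothetical pairwise comparison matrix for three supply chain criteria
      # (e.g. cost, delivery reliability, flexibility), using Saaty's 1-9 scale.
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)              # principal eigenvalue
      w = np.abs(eigvecs[:, k].real)
      weights = w / w.sum()                    # AHP priority vector

      n = A.shape[0]
      ci = (eigvals[k].real - n) / (n - 1)     # consistency index
      ri = 0.58                                # random index for n = 3 (Saaty)
      print("weights:", np.round(weights, 3), " CR =", round(ci / ri, 3))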

  16. MEASURING INSTRUMENT CONSTRUCTION AND VALIDATION IN ESTIMATING UNICYCLING SKILL LEVEL

    Directory of Open Access Journals (Sweden)

    Ivan Granić

    2012-09-01

    Full Text Available Riding the unicycle presupposes knowledge of a set of elements that describe the motor skill, or at least a part of that set with which the level of that knowledge can be measured. Testing and evaluating all of the elements is time consuming. In order to design a single, composite measuring instrument that facilitates the evaluation of the initial level of unicycling skill, we tested 17 recreational subjects who learned to ride the unicycle in 15 hours of training; the absence of any previous knowledge or experience was verified before the beginning of the training. At the beginning and at the end of the training the subjects were tested on a set of 12 riding elements, recording only successful attempts, followed by a single SLALOM test that includes the previously tested elements. It was found that the SLALOM test has good metric properties, and a high regression coefficient showed that the SLALOM test could be used instead of the 12 elements of unicycle riding skill, i.e. as a uniform test to evaluate learned or existing knowledge. Because of its simplicity of administration and the possibility of testing several subjects simultaneously, the newly constructed test can be used to evaluate recreational unicycling level, but also for monitoring and programming training processes aimed at developing the motor skill of riding the unicycle. Because of these advantages, it is desirable to include unicycling in educational processes for learning new motor skills, which is supported by the results of this research. The results also indicate that the unicycle should be seriously considered as a piece of training equipment to refresh or expand recreational programs, without any fear that it is only for special people; previously learned motor skills (skiing, roller-skating, and cycling) had no effect on the results of the final testing.

  17. Store turnover as a predictor of food and beverage provider turnover and associated dietary intake estimates in very remote Indigenous communities.

    Science.gov (United States)

    Wycherley, Thomas; Ferguson, Megan; O'Dea, Kerin; McMahon, Emma; Liberato, Selma; Brimblecombe, Julie

    2016-12-01

    Determine how very remote Indigenous community (RIC) food and beverage (F&B) turnover quantities and associated dietary intake estimates derived from stores only compare with values derived from all community F&B providers. F&B turnover quantities and associated dietary intake estimates (energy, micro/macronutrients and major contributing food types) were derived from 12 months of transaction data from all F&B providers in three RICs (NT, Australia). F&B turnover quantities and dietary intake estimates from stores only (plus only the primary store in multiple-store communities) were expressed as a proportion of the complete F&B provider turnover values. Food types and macronutrient distribution (%E) estimates were quantitatively compared. Combined store F&B turnover accounted for the majority of F&B quantity (98.1%) and of the absolute dietary intake estimates (energy [97.8%], macronutrients [≥96.7%] and micronutrients [≥83.8%]). Macronutrient distribution estimates from the combined stores and from only the primary store closely aligned with the complete-provider estimates (≤0.9% absolute). Food types were similar using combined stores, the primary store, or complete provider turnover. Evaluating combined store F&B turnover represents an efficient method to estimate total F&B turnover quantity and associated dietary intake in RICs. In multiple-store communities, evaluating only primary store F&B turnover provides an efficient estimate of macronutrient distribution and major food types. © 2016 Public Health Association of Australia.

  18. Metric Indices for Performance Evaluation of a Mixed Measurement based State Estimator

    Directory of Open Access Journals (Sweden)

    Paula Sofia Vide

    2013-01-01

    Full Text Available With the development of synchronized phasor measurement technology in recent years, the use of PMU measurements to improve state estimation performance has gained great interest due to their synchronized characteristics and high data transmission speed. The ability of Phasor Measurement Units (PMUs) to directly measure the system state is a key advantage over the SCADA measurement system; PMU measurements are superior to conventional SCADA measurements in terms of resolution and accuracy. Since the majority of measurements in existing estimators come from the conventional SCADA measurement system, which is unlikely to be fully replaced by PMUs in the near future, state estimators that include both phasor and conventional SCADA measurements are being considered. In this paper, a mixed-measurement (SCADA and PMU) state estimator is proposed. Several useful measures for evaluating various aspects of the performance of the mixed-measurement state estimator are proposed and explained. The validity, performance and characteristics of the state estimator are presented for the IEEE 14-bus and IEEE 30-bus test systems.
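
    The core of most mixed estimators of this kind is a weighted least-squares (WLS) solve in which PMU channels simply enter as extra measurement rows with much smaller variances; the sketch below shows that mechanics on a linearized (DC-like) toy model with hypothetical numbers, not the paper's actual formulation.

      import numpy as np

      # Toy linearized model: state x = phase angles at buses 2 and 3 (bus 1 is reference).
      # Measurement model z = H x + noise; rows 0-2 are SCADA flow measurements,
      # row 3 is a PMU angle measurement of bus 2 (much more accurate).
      H = np.array([[ 1.0,  0.0],     # flow 1-2  ~ angle2
                    [ 0.0,  1.0],     # flow 1-3  ~ angle3
                    [ 1.0, -1.0],     # flow 2-3  ~ angle2 - angle3
                    [ 1.0,  0.0]])    # PMU: direct angle at bus 2
      z = np.array([0.102, 0.205, -0.110, 0.0999])   # measurements (rad / p.u.), hypothetical
      sigma = np.array([0.01, 0.01, 0.01, 0.001])    # SCADA vs PMU standard deviations

      W = np.diag(1.0 / sigma**2)                    # weights = inverse variances
      G = H.T @ W @ H                                # gain matrix
      x_hat = np.linalg.solve(G, H.T @ W @ z)        # WLS state estimate
      residuals = z - H @ x_hat
      print("estimated angles (rad):", np.round(x_hat, 4))
      print("weighted residual norm:", round(float(residuals @ W @ residuals), 6))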

  19. New measure of insulin sensitivity predicts cardiovascular disease better than HOMA estimated insulin resistance.

    Directory of Open Access Journals (Sweden)

    Kavita Venkataraman

    Full Text Available CONTEXT: Accurate assessment of insulin sensitivity may better identify individuals at increased risk of cardio-metabolic diseases. OBJECTIVES: To examine whether a combination of anthropometric, biochemical and imaging measures can better estimate the insulin sensitivity index (ISI) and provide improved prediction of cardio-metabolic risk, in comparison to HOMA-IR. DESIGN AND PARTICIPANTS: Healthy male volunteers (96 Chinese, 80 Malay, 77 Indian), 21 to 40 years, body mass index 18-30 kg/m2. Predicted ISI (ISI-cal) was generated using 45 randomly selected Chinese subjects through stepwise multiple linear regression, and validated in the rest using non-parametric correlation (Kendall's tau). In an independent longitudinal cohort, ISI-cal and HOMA-IR were compared for prediction of diabetes and cardiovascular disease (CVD), using ROC curves. SETTING: The study was conducted in a university academic medical centre. OUTCOME MEASURES: ISI measured by hyperinsulinemic euglycemic glucose clamp, along with anthropometric measurements, biochemical assessment and imaging; incident diabetes and CVD. RESULTS: A combination of fasting insulin, serum triglycerides and waist-to-hip ratio (WHR) provided the best estimate of clamp-derived ISI (adjusted R2 0.58 versus 0.32 for HOMA-IR). In an independent cohort, ROC areas under the curve were 0.77±0.02 for ISI-cal versus 0.76±0.02 for HOMA-IR (p>0.05) for incident diabetes, and 0.74±0.03 for ISI-cal versus 0.61±0.03 for HOMA-IR (p<0.001) for incident CVD. ISI-cal also had greater sensitivity than defined metabolic syndrome in predicting CVD, with a four-fold increase in the risk of CVD independent of metabolic syndrome. CONCLUSIONS: Triglycerides and WHR, combined with fasting insulin levels, provide a better estimate of the current insulin resistance state and improved identification of individuals with future risk of CVD, compared to HOMA-IR. This may be useful for estimating insulin sensitivity and cardio-metabolic risk in clinical and
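
    The construction of a surrogate index like ISI-cal boils down to an ordinary linear regression of clamp ISI on a few routine measures, followed by a discrimination check such as the ROC area; the sketch below shows that workflow on random synthetic data (the coefficients, outcome definition and variables are placeholders, not the published equation).

      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      fasting_insulin = rng.lognormal(2.0, 0.4, n)
      triglycerides = rng.lognormal(0.2, 0.3, n)
      whr = rng.normal(0.9, 0.06, n)
      # Synthetic "clamp ISI" that depends on the three predictors plus noise.
      isi_clamp = 12 - 0.15 * fasting_insulin - 2.0 * triglycerides - 4.0 * whr \
                  + rng.normal(0, 0.8, n)

      # Fit ISI-cal by least squares: ISI ~ b0 + b1*insulin + b2*TG + b3*WHR.
      X = np.column_stack([np.ones(n), fasting_insulin, triglycerides, whr])
      beta, *_ = np.linalg.lstsq(X, isi_clamp, rcond=None)
      isi_cal = X @ beta

      # Crude discrimination check: ROC AUC for a synthetic binary outcome
      # (here, "event" if the true ISI falls in the lowest quartile).
      event = (isi_clamp < np.quantile(isi_clamp, 0.25)).astype(int)
      score = -isi_cal                      # lower predicted sensitivity -> higher risk
      order = np.argsort(score)
      ranks = np.empty(n)
      ranks[order] = np.arange(1, n + 1)
      n1, n0 = event.sum(), n - event.sum()
      auc = (ranks[event == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)   # Mann-Whitney AUC
      print("coefficients:", np.round(beta, 3), " AUC:", round(auc, 3))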

  20. Estimates of CO2 traffic emissions from mobile concentration measurements

    Science.gov (United States)

    Maness, H. L.; Thurlow, M. E.; McDonald, B. C.; Harley, R. A.

    2015-03-01

    We present data from a new mobile system intended to aid in the design of upcoming urban CO2-monitoring networks. Our collected data include GPS probe data, video-derived traffic density, and accurate CO2 concentration measurements. The method described here is economical, scalable, and self-contained, allowing for potential future deployment in locations without existing traffic infrastructure or vehicle fleet information. Using a test data set collected on California Highway 24 over a 2 week period, we observe that on-road CO2 concentrations are elevated by a factor of 2 in congestion compared to free-flow conditions. This result is found to be consistent with a model including vehicle-induced turbulence and standard engine physics. In contrast to surface concentrations, surface emissions are found to be relatively insensitive to congestion. We next use our model for CO2 concentration together with our data to independently derive vehicle emission rate parameters. Parameters scaling the leading four emission rate terms are found to be within 25% of those expected for a typical passenger car fleet, enabling us to derive instantaneous emission rates directly from our data that compare generally favorably to predictive models presented in the literature. The present results highlight the importance of high spatial and temporal resolution traffic data for interpreting on- and near-road concentration measurements. Future work will focus on transport and the integration of mobile platforms into existing stationary network designs.

  1. Estimation of complete temperature fields from measured temperatures

    International Nuclear Information System (INIS)

    Clegg, S.T.; Roemer, R.B.

    1984-01-01

    In hyperthermia treatments, it is desirable to be able to predict complete tissue temperature fields from sampled temperatures taken at a few locations. This is a difficult problem in hyperthermia treatments since the tissue blood perfusion is unknown. An initial attempt to do this automatically using unconstrained optimization techniques to minimize the differences between steady state temperatures measured during a treatment and temperatures (at the same locations) predicted from treatment simulations has been previously reported. A second technique using transient temperatures following a step decrease in power has been developed. This technique, which appears to be able to better predict complete temperature fields is presented and both it and the steady state technique are applied to data from both simulated and experimental hyperthermia treatments. The results of applying the two techniques are compared for one-dimensional situations. One particularly important problem which the transient technique can solve (and the steady state technique does not seem to be able to do as well) is that of predicting the complete temperature field in situations where the true maximum and/or minimum temperatures present are not measured by the available instrumentation

  2. Information-geometric measures estimate neural interactions during oscillatory brain states

    Directory of Open Access Journals (Sweden)

    Yimin Nie

    2014-02-01

    Full Text Available The characterization of functional network structures among multiple neurons is essential to understanding neural information processing. Information geometry (IG), a theory developed for investigating a space of probability distributions, has recently been applied to spike-train analysis and has provided robust estimations of neural interactions. Although neural firing in the equilibrium state is often assumed in these studies, in reality, neural activity is non-stationary. The brain exhibits various oscillations depending on cognitive demands or when an animal is asleep. Therefore, the investigation of the IG measures during oscillatory network states is important for testing how the IG method can be applied to real neural data. Using model networks of binary neurons or more realistic spiking neurons, we studied how the single- and pairwise-IG measures were influenced by oscillatory neural activity. Two general oscillatory mechanisms, externally driven oscillations and internally induced oscillations, were considered. In both mechanisms, we found that the single-IG measure was linearly related to the magnitude of the external input, and that the pairwise-IG measure was linearly related to the sum of connection strengths between two neurons. We also observed that the pairwise-IG measure was not dependent on the oscillation frequency. These results are consistent with the previous findings that were obtained under the equilibrium conditions. Therefore, we demonstrate that the IG method provides useful insights into neural interactions under the oscillatory condition that can often be observed in the real brain.
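
    For a pair of binary neurons, the pairwise IG measure discussed above is the interaction term of the log-linear expansion of the joint firing probabilities, theta_12 = log(p11*p00 / (p10*p01)); the sketch below estimates it from simulated spike trains. The simulation parameters are hypothetical.

      import numpy as np

      def pairwise_ig(x, y, eps=1e-9):
          """Pairwise IG measure theta_12 = log(p11*p00 / (p10*p01)) for two binary spike trains."""
          p11 = np.mean((x == 1) & (y == 1)) + eps
          p00 = np.mean((x == 0) & (y == 0)) + eps
          p10 = np.mean((x == 1) & (y == 0)) + eps
          p01 = np.mean((x == 0) & (y == 1)) + eps
          return np.log(p11 * p00 / (p10 * p01))

      rng = np.random.default_rng(42)
      T = 100_000
      drive = rng.random(T) < 0.2                    # shared input makes the two neurons interact
      x = ((rng.random(T) < 0.05) | drive).astype(int)
      y = ((rng.random(T) < 0.05) | drive).astype(int)
      print(f"theta_12 = {pairwise_ig(x, y):.3f}")   # positive for positively coupled neurons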

  3. New measure of insulin sensitivity predicts cardiovascular disease better than HOMA estimated insulin resistance.

    Science.gov (United States)

    Venkataraman, Kavita; Khoo, Chin Meng; Leow, Melvin K S; Khoo, Eric Y H; Isaac, Anburaj V; Zagorodnov, Vitali; Sadananthan, Suresh A; Velan, Sendhil S; Chong, Yap Seng; Gluckman, Peter; Lee, Jeannette; Salim, Agus; Tai, E Shyong; Lee, Yung Seng

    2013-01-01

    Accurate assessment of insulin sensitivity may better identify individuals at increased risk of cardio-metabolic diseases. To examine whether a combination of anthropometric, biochemical and imaging measures can better estimate insulin sensitivity index (ISI) and provide improved prediction of cardio-metabolic risk, in comparison to HOMA-IR. Healthy male volunteers (96 Chinese, 80 Malay, 77 Indian), 21 to 40 years, body mass index 18-30 kg/m(2). Predicted ISI (ISI-cal) was generated using 45 randomly selected Chinese through stepwise multiple linear regression, and validated in the rest using non-parametric correlation (Kendall's tau τ). In an independent longitudinal cohort, ISI-cal and HOMA-IR were compared for prediction of diabetes and cardiovascular disease (CVD), using ROC curves. The study was conducted in a university academic medical centre. ISI measured by hyperinsulinemic euglycemic glucose clamp, along with anthropometric measurements, biochemical assessment and imaging; incident diabetes and CVD. A combination of fasting insulin, serum triglycerides and waist-to-hip ratio (WHR) provided the best estimate of clamp-derived ISI (adjusted R(2) 0.58 versus 0.32 HOMA-IR). In an independent cohort, ROC areas under the curve were 0.77±0.02 ISI-cal versus 0.76±0.02 HOMA-IR (p>0.05) for incident diabetes, and 0.74±0.03 ISI-cal versus 0.61±0.03 HOMA-IR (p<0.001) for incident CVD. ISI-cal also had greater sensitivity than defined metabolic syndrome in predicting CVD. Triglycerides and WHR, combined with fasting insulin levels, provide a better estimate of the current insulin resistance state and improved identification of individuals with future risk of CVD, compared to HOMA-IR. This may be useful for estimating insulin sensitivity and cardio-metabolic risk in clinical and epidemiological settings.

  4. Does the edge effect impact on the measure of spatial accessibility to healthcare providers?

    Science.gov (United States)

    Gao, Fei; Kihal, Wahida; Le Meur, Nolwenn; Souris, Marc; Deguen, Séverine

    2017-12-11

    autocorrelation index and local indicators of spatial autocorrelation) are not really impacted. Our research revealed only minor accessibility variation when the edge effect was considered in a French context. No general statement can be made, because the intensity of the impact varies according to healthcare provider type, territorial organization and the methodology used to measure accessibility to healthcare. Additional research is required in order to distinguish which findings are specific to a territory and which are common to different countries. This constitutes a promising direction for determining healthcare shortage areas more precisely and then fighting against social health inequalities.

  5. Uncertain estimation of activity measurement in Nuclear Medicine

    International Nuclear Information System (INIS)

    Lopez Diaz, A.; Palau, S.P.A.; Cardenas, T.A.I.; Garcia, A.I.; Tulio, H.A.

    2007-01-01

    Full text: Accuracy and precision of the dose are mandatory in radiopharmaceutical therapy procedures to guarantee treatment success. The evaluation of the uncertainty of dose measurement in the nuclear medicine laboratory therefore becomes a very important step, especially if the operational parameters differ between instruments. In order to assure the traceability of activity measurements and the quality assurance of dose administration, the behaviour of two dose calibrators, a Capintec CRC-15R and a PTW Curimentor 3, was studied. Accuracy, precision, linearity of the activity response, reproducibility, and the uncertainty of the activity determined with a secondary laboratory source were evaluated. Accuracy was evaluated using Tc-99m and I-131 standard sources (Secondary Standard Laboratory) for a 10R vial (V) and a 5 ml syringe (S) geometry, obtaining 1.3% (V, I-131), 1.4% (V, Tc-99m) and 0.8% (S, Tc-99m) for the CRC-15R, and 2.3% (V, I-131), 1.1% (V, Tc-99m) and 1.0% (S, Tc-99m) for the Curimentor 3. Precision and reproducibility were determined using a Cs-137 source; the reproducibility of the CRC-15R and the Curimentor 3 was less than 1.1% and 1.2%, respectively, during the whole evaluation period. The linearity of the activity response was evaluated only for Tc-99m, giving 672.97 ± 0.07 mCi (largest deviation 1.3%) for the CRC-15R and 670.91 ± 0.08 mCi (largest deviation 0.6%) for the Curimentor 3. The uncertainty was calculated using I-131 sources, taking into account the influence of the calibration factor, activity linearity, precision, reproducibility, background and half-life. The typical combined uncertainty for I-131 activity was 2.24% for the CRC-15R and 2.41% for the Curimentor 3. The results were traceable between the two instruments, and no statistically significant differences were found in any of the tests. Both instruments performed properly in the checked parameters, showing compliance with IAEA and national authority recommendations. Conclusion: The two instruments can be used in NM services with a high level of traceability and confidence, with
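
    Combining uncertainty components as done for the dose calibrators above typically follows the GUM root-sum-square rule for uncorrelated relative contributions; a minimal sketch with hypothetical component values:

      import math

      # Hypothetical relative standard uncertainties (%) of the individual components.
      components = {
          "calibration factor": 1.5,
          "linearity": 0.6,
          "precision / repeatability": 0.9,
          "reproducibility": 1.1,
          "background subtraction": 0.2,
          "half-life / decay correction": 0.1,
      }

      # Combined standard uncertainty for uncorrelated inputs: root sum of squares.
      u_c = math.sqrt(sum(u**2 for u in components.values()))
      U = 2.0 * u_c        # expanded uncertainty, coverage factor k = 2 (~95 %)
      print(f"combined standard uncertainty: {u_c:.2f} %   expanded (k=2): {U:.2f} %")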

  6. A Sensitive Measurement for Estimating Impressions of Image-Contents

    Science.gov (United States)

    Sato, Mie; Matouge, Shingo; Mori, Toshifumi; Suzuki, Noboru; Kasuga, Masao

    We have investigated Kansei Content, which conveys the maker's intention to the viewer's kansei. The SD method is a very good way to evaluate the subjective impression of image-contents. However, because the SD method is applied after subjects view the image-contents, it is difficult to examine impressions of detailed scenes of the image-contents in real time. To measure the viewer's impression of image-contents in real time, we have developed a Taikan sensor. With the Taikan sensor, we investigate the relations among the image-contents, grip strength and body temperature. We also explore the interface of the Taikan sensor so that it can be used easily. In our experiment, a horror movie is used, which strongly affects the emotions of the subjects. Our results show that the grip strength may increase when the subjects view a tense scene and that the Taikan sensor is easy to use without the circular base that is originally installed.

  7. A Study on Parametric Wave Estimation Based on Measured Ship Motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Iseki, Toshio

    2011-01-01

    The paper studies parametric wave estimation based on the ‘wave buoy analogy’, and data and results obtained from the training ship Shioji-maru are compared with estimates of the sea states obtained from other measurements and observations. Furthermore, the estimating characteristics of the parametric model are discussed by considering the results of a similar estimation concept based on Bayesian modelling. The purpose of the latter comparison is not to favour the one estimation approach over the other but rather to highlight some of the advantages and disadvantages of the two approaches.

  8. Power system low frequency oscillation mode estimation using wide area measurement systems

    Directory of Open Access Journals (Sweden)

    Papia Ray

    2017-04-01

    Full Text Available Oscillations in power systems are triggered by a wide variety of events. The system damps most of the oscillations, but a few undamped oscillations may remain, which may lead to system collapse. Therefore, inspection of low frequency oscillations is necessary in the context of recent power system operation and control. The ringdown portion of the signal provides rich information on the low frequency oscillatory modes and has been taken for analysis. This paper provides a practical case study in which seven signal-processing-based techniques, i.e. Prony Analysis (PA), Fast Fourier Transform (FFT), S-Transform (ST), Wigner-Ville Distribution (WVD), Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT), Hilbert-Huang Transform (HHT) and the Matrix Pencil Method (MPM), are presented for estimating the low frequency modes in a given ringdown signal. Preprocessing of the signal is done by detrending. The application of the signal processing techniques is illustrated using actual wide area measurement system (WAMS) data collected from four different Phasor Measurement Units (PMUs), i.e. Dadri, Vindyachal, Kanpur and Moga, which are located near the recent disturbance event in the Northern Grid of India. Simulation results show that all seven signal processing techniques (FFT, PA, ST, WVD, ESPRIT, HHT and MPM) estimate two common oscillatory frequency modes (0.2, 0.5) from the raw signal. Thus, these seven techniques provide satisfactory performance in determining small frequency modes of the signal without losing its valuable properties. A comparative study of the seven signal processing techniques has also been carried out in order to find the best one. It was found that FFT and ESPRIT give exact frequency modes compared to the other techniques, so they are recommended for estimation of low frequency modes. Further investigations were also carried out to estimate low frequency oscillatory modes in another case study from the Eastern Interconnect Phasor Project
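
    As a small illustration of the simplest of the seven techniques, the sketch below detrends a synthetic two-mode ringdown signal and picks the dominant frequencies from an FFT magnitude spectrum; the 0.2 Hz and 0.5 Hz modes and the sampling rate are made-up values echoing the modes reported above, not the actual PMU data.

      import numpy as np

      fs = 25.0                                   # PMU reporting rate (samples/s), hypothetical
      t = np.arange(0, 40, 1 / fs)
      # Synthetic ringdown: two damped modes at 0.2 Hz and 0.5 Hz on top of a linear trend.
      x = (1.0 * np.exp(-0.05 * t) * np.cos(2 * np.pi * 0.2 * t)
           + 0.6 * np.exp(-0.08 * t) * np.cos(2 * np.pi * 0.5 * t)
           + 0.01 * t + 0.3)

      detrended = x - np.polyval(np.polyfit(t, x, 1), t)     # detrending step
      mag = np.abs(np.fft.rfft(detrended * np.hanning(len(t))))
      freqs = np.fft.rfftfreq(len(t), 1 / fs)

      # Keep local spectral maxima only, then report the two strongest ones.
      peaks = [i for i in range(1, len(mag) - 1) if mag[i] > mag[i - 1] and mag[i] > mag[i + 1]]
      top2 = sorted(peaks, key=lambda i: mag[i], reverse=True)[:2]
      print("dominant low-frequency modes (Hz):", np.round(np.sort(freqs[top2]), 2))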

  9. Water, sanitation and hygiene interventions for acute childhood diarrhea: a systematic review to provide estimates for the Lives Saved Tool.

    Science.gov (United States)

    Darvesh, Nazia; Das, Jai K; Vaivada, Tyler; Gaffey, Michelle F; Rasanathan, Kumanan; Bhutta, Zulfiqar A

    2017-11-07

    In the Sustainable Development Goals (SDGs) era, there is growing recognition of the responsibilities of non-health sectors in improving the health of children. Interventions to improve access to clean water, sanitation facilities, and hygiene behaviours (WASH) represent key opportunities to improve child health and well-being by preventing the spread of infectious diseases and improving nutritional status. We conducted a systematic review of studies evaluating the effects of WASH interventions on childhood diarrhea in children 0-5 years old. Searches were run up to September 2016. We screened the titles and abstracts of retrieved articles, followed by screening of the full-text reports of relevant studies. We abstracted study characteristics and quantitative data, and assessed study quality. Meta-analyses were performed for similar intervention and outcome pairs. Pooled analyses showed diarrhea risk reductions from the following interventions: point-of-use water filtration (pooled risk ratio (RR): 0.47, 95% confidence interval (CI): 0.36-0.62), point-of-use water disinfection (pooled RR: 0.69, 95% CI: 0.60-0.79), and hygiene education with soap provision (pooled RR: 0.73, 95% CI: 0.57-0.94). Quality ratings were low or very low for most studies, and heterogeneity was high in pooled analyses. Improvements to the water supply and water disinfection at source did not show significant effects on diarrhea risk, nor did the one eligible study examining the effect of latrine construction. Various WASH interventions show diarrhea risk reductions between 27% and 53% in children 0-5 years old, depending on intervention type, providing ample evidence to support the scale-up of WASH in low and middle-income countries (LMICs). Due to the overall low quality of the evidence and high heterogeneity, further research is required to accurately estimate the magnitude of the effects of these interventions in different contexts.

  10. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  11. Oscillation estimates relative to p-homogeneous forms and Kato measures data

    Directory of Open Access Journals (Sweden)

    Marco Biroli

    2006-11-01

    Full Text Available We state a pointwise estimate for positive subsolutions associated with a p-homogeneous form and nonnegative Radon measure data. As a by-product we establish an oscillation estimate for solutions relative to Kato measure data.

  12. Detecting Topological Errors with Pre-Estimation Filtering of Bad Data in Wide-Area Measurements

    DEFF Research Database (Denmark)

    Møller, Jakob Glarbo; Sørensen, Mads; Jóhannsson, Hjörtur

    2017-01-01

    It is expected that bad data and missing topology information will become an issue of growing concern when power system state estimators are to exploit the high measurement reporting rates from phasor measurement units. This paper suggests designing state estimators with enhanced resilience again...

  13. Estimating retained gas volumes in the Hanford tanks using waste level measurements

    International Nuclear Information System (INIS)

    Whitney, P.D.; Chen, G.; Gauglitz, P.A.; Meyer, P.A.; Miller, N.E.

    1997-09-01

    The Hanford site is home to 177 large, underground nuclear waste storage tanks. Safety and environmental concerns surround these tanks and their contents. One such concern is the propensity for the waste in these tanks to generate and trap flammable gases. This report focuses on understanding and improving the quality of retained gas volume estimates derived from tank waste level measurements. While direct measurements of gas volume are available for a small number of the Hanford tanks, the increasingly wide availability of tank waste level measurements provides an opportunity for less expensive (than direct gas volume measurement) assessment of gas hazard for the Hanford tanks. Retained gas in the tank waste is inferred from level measurements -- either long-term increase in the tank waste level, or fluctuations in tank waste level with atmospheric pressure changes. This report concentrates on the latter phenomena. As atmospheric pressure increases, the pressure on the gas in the tank waste increases, resulting in a level decrease (as long as the tank waste is "soft" enough). Tanks with waste levels exhibiting fluctuations inversely correlated with atmospheric pressure fluctuations were catalogued in an earlier study. Additionally, models incorporating ideal-gas law behavior and waste material properties have been proposed. These models explicitly relate the retained gas volume in the tank with the magnitude of the waste level fluctuations, dL/dP. This report describes how these models compare with the tank waste level measurements
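
    The barometric-pressure method alluded to above can be reduced to a regression of waste level on atmospheric pressure followed by an ideal-gas conversion; a minimal sketch with hypothetical tank numbers (the actual models also account for waste properties and the gas depth):

      import numpy as np

      # Hypothetical daily records: atmospheric pressure (kPa) and waste level (cm).
      p_atm = np.array([100.2, 101.5, 99.8, 102.3, 100.9, 98.7, 101.1])
      level = np.array([752.10, 752.04, 752.12, 752.00, 752.07, 752.16, 752.06])

      # Slope dL/dP from a least-squares line fit (expected to be negative for trapped gas).
      dLdP, _ = np.polyfit(p_atm, level, 1)            # cm per kPa

      # Ideal-gas conversion: V_gas = -P_eff * A * dL/dP, with dV = A * dL.
      A = 410.0e4        # tank cross-sectional area in cm^2 (~23 m diameter tank), hypothetical
      p_eff = 115.0      # effective absolute pressure at the gas depth (kPa), hypothetical
      v_gas_cm3 = -p_eff * A * dLdP
      print(f"dL/dP = {dLdP:.4f} cm/kPa  ->  retained gas ~ {v_gas_cm3 / 1e6:.1f} m^3")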

  14. Mammography density estimation with automated volumetic breast density measurement

    International Nuclear Information System (INIS)

    Ko, Su Yeon; Kim, Eun Kyung; Kim, Min Jung; Moon, Hee Jung

    2014-01-01

    To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify the factors associated with technical failure of VBDM. In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara Version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, the frequency of mass > 3 cm, and breast density was analyzed. The agreement between breast density evaluations by radiologists and VBDM was fair (kappa = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (kappa = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p values 0.001 to 0.015). There is fair or moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density.
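
    The agreement statistic quoted above, Cohen's kappa, can be computed directly from the confusion matrix of the two readings; a minimal sketch with a hypothetical 2x2 table for the fatty/dense classification:

      import numpy as np

      def cohens_kappa(confusion):
          """Cohen's kappa from a square confusion matrix of two raters' categorical calls."""
          c = np.asarray(confusion, dtype=float)
          n = c.sum()
          p_observed = np.trace(c) / n
          p_expected = (c.sum(axis=0) * c.sum(axis=1)).sum() / n**2
          return (p_observed - p_expected) / (1.0 - p_expected)

      # Hypothetical counts: rows = radiologist (fatty, dense), columns = VBDM (fatty, dense).
      table = [[420, 130],
               [150, 429]]
      print(f"kappa = {cohens_kappa(table):.2f}")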

  15. Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements

    Directory of Open Access Journals (Sweden)

    Simón Ruiz

    2002-12-01

    Full Text Available Traditionally the horizontal orientation of a ship (heading) has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP). The three-dimensional positioning system (GPS 3DF) provides an independent heading measurement with accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate the gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994), which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology to mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in measured cross-track ADCP velocity of 24 cm s-1.
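
    The effect being corrected is essentially a rotation of the measured velocity vector by the heading error, so the cross-track component picks up roughly U·sin(Δθ); a minimal sketch with hypothetical numbers (chosen only to echo the magnitudes quoted above):

      import numpy as np

      def rotate(u, v, angle_deg):
          """Rotate a horizontal velocity vector (u = along-track, v = cross-track) by angle_deg."""
          a = np.radians(angle_deg)
          return u * np.cos(a) - v * np.sin(a), u * np.sin(a) + v * np.cos(a)

      ship_plus_current = 4.0    # along-track speed seen by the ADCP (m/s), hypothetical
      heading_error = 3.4        # gyrocompass error (degrees), within the range reported above

      u_err, v_err = rotate(ship_plus_current, 0.0, heading_error)
      print(f"spurious cross-track velocity ~ {v_err * 100:.0f} cm/s")   # ~24 cm/s here

      # Correction: rotate the measured ADCP velocities by the negative of the
      # (interpolated) heading error.
      u_corr, v_corr = rotate(u_err, v_err, -heading_error)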

  16. Measuring factors that influence the utilisation of preventive care services provided by general practitioners in Australia

    Directory of Open Access Journals (Sweden)

    Oldenburg Brian

    2009-12-01

    Full Text Available Abstract Background Relatively little research attention has been given to the development of standardised and psychometrically sound scales for measuring influences relevant to the utilisation of health services. This study aims to describe the development, validation and internal reliability of some existing and new scales to measure factors that are likely to influence utilisation of preventive care services provided by general practitioners in Australia. Methods Relevant domains of influence were first identified from a literature review and formative research. Items were then generated by using and adapting previously developed scales and published findings from these. The new items and scales were pre-tested, and qualitative feedback was obtained from a convenience sample of citizens from the community and a panel of experts. Principal Components Analysis (PCA) and internal reliability testing (Cronbach's alpha) were then conducted for all of the newly adapted or developed scales, utilising data collected from a self-administered mailed survey sent to a randomly selected population-based sample of 381 individuals (response rate 65.6 per cent). Results The PCA identified five scales with acceptable levels of internal consistency: (1) social support (ten items, alpha 0.86); (2) perceived interpersonal care (five items, alpha 0.87); (3) concerns about availability of health care and accessibility to health care (eight items, alpha 0.80); (4) value of good health (five items, alpha 0.79); and (5) attitudes towards health care (three items, alpha 0.75). Conclusion The five scales are suitable for further development and more widespread use in research aimed at understanding the determinants of preventive health services utilisation among adults in the general population.
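
    Internal consistency of a scale, as reported above, is usually summarized with Cronbach's alpha = (k/(k-1)) * (1 - sum of item variances / variance of the total score); the sketch below computes it for a hypothetical five-item scale.

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

      # Hypothetical responses of 8 people to a 5-item Likert scale (1-5).
      scores = np.array([[4, 4, 5, 4, 4],
                         [2, 3, 2, 2, 3],
                         [5, 5, 4, 5, 5],
                         [3, 3, 3, 4, 3],
                         [1, 2, 1, 2, 2],
                         [4, 5, 4, 4, 5],
                         [3, 2, 3, 3, 2],
                         [5, 4, 5, 5, 4]])
      print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")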

  17. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

    Full Text Available State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noise have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noise. Using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results quantitatively characterize the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) the pros and cons of typical SOC estimators in their robustness and reliability; and (3) guidelines for requirements on battery system identification and sensor selection.
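
    To make the filter family concrete, the sketch below runs a deliberately simplified one-state extended Kalman filter for SOC: coulomb counting in the prediction step and a terminal-voltage update through a toy open-circuit-voltage curve. The cell parameters, noise levels and OCV model are hypothetical, not the battery platform used in the paper.

      import numpy as np

      # --- hypothetical battery model -------------------------------------------
      Q_AS = 2.3 * 3600          # cell capacity in ampere-seconds (2.3 Ah)
      R0 = 0.01                  # ohmic resistance (ohm)
      def ocv(soc):              # toy open-circuit-voltage curve
          return 3.2 + 0.9 * soc
      def docv(soc):             # its derivative w.r.t. SOC (measurement Jacobian)
          return 0.9

      # --- one-state extended Kalman filter for SOC ------------------------------
      def ekf_soc(current, voltage, dt, soc0=0.5, P0=0.1, q=1e-7, r=1e-3):
          soc, P = soc0, P0
          out = []
          for i, v in zip(current, voltage):
              # predict: coulomb counting (discharge current positive)
              soc = soc - i * dt / Q_AS
              P = P + q
              # update with the terminal-voltage measurement v = OCV(soc) - i*R0 + noise
              H = docv(soc)
              K = P * H / (H * P * H + r)
              soc = soc + K * (v - (ocv(soc) - i * R0))
              P = (1 - K * H) * P
              out.append(soc)
          return np.array(out)

      # quick synthetic run: constant 1 A discharge for 30 minutes
      dt, n = 1.0, 1800
      true_soc = 0.9 - np.arange(n) * dt / Q_AS
      current = np.ones(n)
      voltage = ocv(true_soc) - current * R0 + np.random.default_rng(0).normal(0, 0.005, n)
      est = ekf_soc(current, voltage, dt, soc0=0.6)      # deliberately wrong initial SOC
      print(f"final true SOC: {true_soc[-1]:.3f}   final estimate: {est[-1]:.3f}")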

  18. Field data provide estimates of effective permeability, fracture spacing, well drainage area and incremental production in gas shales

    KAUST Repository

    Eftekhari, Behzad; Marder, M.; Patzek, Tadeusz

    2018-01-01

    the external unstimulated reservoir. This allows us to estimate for the first time the effective permeability of the unstimulated shale and the spacing of fractures in the stimulated region. From an analysis of wells in the Barnett shale, we find

  19. A procedure for the estimation over time of metabolic fluxes in scenarios where measurements are uncertain and/or insufficient

    Directory of Open Access Journals (Sweden)

    Picó Jesús

    2007-10-01

    Full Text Available Abstract Background An indirect approach is usually used to estimate the metabolic fluxes of an organism: couple the available measurements with known biological constraints (e.g. stoichiometry). Typically this estimation is done from a static point of view, and therefore the fluxes so obtained are only valid while the environmental conditions and the cell state remain stable. However, estimating the evolution of the metabolic fluxes over time is valuable for investigating the dynamic behaviour of an organism and also for monitoring industrial processes. Although Metabolic Flux Analysis can be applied successively with this aim, this approach has two drawbacks: (i) sometimes it cannot be used because there is a lack of measurable fluxes, and (ii) the uncertainty of experimental measurements cannot be considered. Flux Balance Analysis could be used instead, but the assumption of optimal behaviour of the organism brings other difficulties. Results We propose a procedure to estimate the evolution of the metabolic fluxes that is structured as follows: (1) measure the concentrations of extracellular species and biomass, (2) convert these data to measured fluxes and (3) estimate the non-measured fluxes using the Flux Spectrum Approach, a variant of Metabolic Flux Analysis that overcomes the difficulties mentioned above without assuming optimal behaviour. We apply the procedure to a real problem taken from the literature: estimating the metabolic fluxes during a cultivation of CHO cells in batch mode. We show that it provides a reliable and rich estimation of the non-measured fluxes, thanks to considering measurement uncertainty and reversibility constraints. We also demonstrate that this procedure can estimate the non-measured fluxes even when there is a lack of measurable species. In addition, it offers a new method to deal with inconsistency. Conclusion This work introduces a procedure to estimate time-varying metabolic fluxes that copes with the insufficiency of
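
    At its core, classical Metabolic Flux Analysis solves the stoichiometric balance S·v = 0 after substituting the measured fluxes, shown below as a least-squares problem on a tiny hypothetical network; the Flux Spectrum Approach used in the paper additionally propagates measurement intervals and irreversibility constraints, which this sketch does not do.

      import numpy as np

      # Hypothetical toy network with 3 internal metabolites (rows) and 5 fluxes (columns).
      # Steady-state balance: S @ v = 0.
      S = np.array([[ 1, -1,  0, -1,  0],
                    [ 0,  1, -1,  0,  0],
                    [ 0,  0,  1,  0, -1]], dtype=float)

      measured_idx = [0, 4]                 # fluxes obtained from extracellular measurements
      v_measured = np.array([10.0, 4.0])    # e.g. substrate uptake and product secretion rates

      free_idx = [i for i in range(S.shape[1]) if i not in measured_idx]
      # Move the measured part to the right-hand side: S_free @ v_free = -S_meas @ v_meas.
      rhs = -S[:, measured_idx] @ v_measured
      v_free, *_ = np.linalg.lstsq(S[:, free_idx], rhs, rcond=None)

      v = np.zeros(S.shape[1])
      v[measured_idx] = v_measured
      v[free_idx] = v_free
      print("estimated flux vector:", np.round(v, 2), " residual:", np.round(S @ v, 6))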

  20. Effects on the estimated cause-specific mortality fraction of providing physician reviewers with different formats of verbal autopsy data

    Directory of Open Access Journals (Sweden)

    Chow Clara

    2011-08-01

    a cause of death did not substantively influence the pattern of mortality estimated. Substantially abbreviated and simplified verbal autopsy questionnaires might provide robust information about high-level mortality patterns.

  1. Estimating random transverse velocities in the fast solar wind from EISCAT Interplanetary Scintillation measurements

    Directory of Open Access Journals (Sweden)

    A. Canals

    2002-09-01

    Full Text Available Interplanetary scintillation measurements can yield estimates of a large number of solar wind parameters, including bulk flow speed, variation in bulk velocity along the observing path through the solar wind and random variation in transverse velocity. This last parameter is of particular interest, as it can indicate the flux of low-frequency Alfvén waves, and the dissipation of these waves has been proposed as an acceleration mechanism for the fast solar wind. Analysis of IPS data is, however, a significantly unresolved problem and a variety of a priori assumptions must be made in interpreting the data. Furthermore, the results may be affected by the physical structure of the radio source and by variations in the solar wind along the scintillation ray path. We have used observations of simple point-like radio sources made with EISCAT between 1994 and 1998 to obtain estimates of random transverse velocity in the fast solar wind. The results obtained with various a priori assumptions made in the analysis are compared, and we hope thereby to be able to provide some indication of the reliability of our estimates of random transverse velocity and the variation of this parameter with distance from the Sun.Key words. Interplanetary physics (MHD waves and turbulence; solar wind plasma; instruments and techniques

  2. Estimating random transverse velocities in the fast solar wind from EISCAT Interplanetary Scintillation measurements

    Directory of Open Access Journals (Sweden)

    A. Canals

    Full Text Available Interplanetary scintillation measurements can yield estimates of a large number of solar wind parameters, including bulk flow speed, variation in bulk velocity along the observing path through the solar wind and random variation in transverse velocity. This last parameter is of particular interest, as it can indicate the flux of low-frequency Alfvén waves, and the dissipation of these waves has been proposed as an acceleration mechanism for the fast solar wind. Analysis of IPS data is, however, a significantly unresolved problem and a variety of a priori assumptions must be made in interpreting the data. Furthermore, the results may be affected by the physical structure of the radio source and by variations in the solar wind along the scintillation ray path. We have used observations of simple point-like radio sources made with EISCAT between 1994 and 1998 to obtain estimates of random transverse velocity in the fast solar wind. The results obtained with various a priori assumptions made in the analysis are compared, and we hope thereby to be able to provide some indication of the reliability of our estimates of random transverse velocity and the variation of this parameter with distance from the Sun.

    Key words. Interplanetary physics (MHD waves and turbulence; solar wind plasma; instruments and techniques

  3. Using NDACC column measurements of carbonyl sulfide to estimate its sources and sinks

    Science.gov (United States)

    Wang, Yuting; Marshall, Julia; Palm, Mathias; Deutscher, Nicholas; Roedenbeck, Christian; Warneke, Thorsten; Notholt, Justus; Baker, Ian; Berry, Joe; Suntharalingam, Parvadha; Jones, Nicholas; Mahieu, Emmanuel; Lejeune, Bernard; Hannigan, James; Conway, Stephanie; Strong, Kimberly; Campbell, Elliott; Wolf, Adam; Kremser, Stefanie

    2016-04-01

    Carbonyl sulfide (OCS) is taken up by plants during photosynthesis through a similar pathway as carbon dioxide (CO2), but is not emitted by respiration, and thus holds great promise as an additional constraint on the carbon cycle. It might act as a sort of tracer of photosynthesis, a way to separate gross primary productivity (GPP) from the net ecosystem exchange (NEE) that is typically derived from flux modeling. However, the estimates of OCS sources and sinks still have significant uncertainties, which make it difficult to use OCS as a photosynthetic tracer, and the existing long-term surface-based measurements are sparse. The NDACC-IRWG measures the absorption of OCS in the atmosphere, and provides a potential long-term database of OCS total/partial columns, which can be used to evaluate OCS fluxes. We have retrieved OCS columns from several NDACC sites around the globe, and compared them to model simulations with OCS land fluxes based on the simple biosphere model (SiB). The disagreement between the measurements and the forward simulations indicates that (1) the OCS land fluxes from SiB are too low in the northern boreal region; (2) the ocean fluxes need to be optimized. A statistical linear flux model describing OCS is developed in the TM3 inversion system, and is used to estimate the OCS fluxes. We performed flux inversions using only NOAA OCS surface measurements as an observational constraint and with both surface and NDACC OCS column measurements, and assessed the differences. The posterior uncertainties of the inverted OCS fluxes decreased with the inclusion of NDACC data compared to those using surface data only, and could be further reduced if more NDACC sites were included.

  4. Activity assays and immunoassays for plasma Renin and prorenin: information provided and precautions necessary for accurate measurement

    DEFF Research Database (Denmark)

    Campbell, Duncan J; Nussberger, Juerg; Stowasser, Michael

    2009-01-01

    into focus the differences in information provided by activity assays and immunoassays for renin and prorenin measurement and has drawn attention to the need for precautions to ensure their accurate measurement. CONTENT: Renin activity assays and immunoassays provide related but different information...... provided by these assays and of the precautions necessary to ensure their accuracy....

  5. Inclusion of Topological Measurements into Analytic Estimates of Effective Permeability in Fractured Media

    Science.gov (United States)

    Sævik, P. N.; Nixon, C. W.

    2017-11-01

    We demonstrate how topology-based measures of connectivity can be used to improve analytical estimates of effective permeability in 2-D fracture networks, which is one of the key parameters necessary for fluid flow simulations at the reservoir scale. Existing methods in this field usually compute fracture connectivity using the average fracture length. This approach is valid for ideally shaped, randomly distributed fractures, but is not immediately applicable to natural fracture networks. In particular, natural networks tend to be more connected than randomly positioned fractures of comparable lengths, since natural fractures often terminate in each other. The proposed topological connectivity measure is based on the number of intersections and fracture terminations per sampling area, which for statistically stationary networks can be obtained directly from limited outcrop exposures. To evaluate the method, numerical permeability upscaling was performed on a large number of synthetic and natural fracture networks, with varying topology and geometry. The proposed method was seen to provide much more reliable permeability estimates than the length-based approach, across a wide range of fracture patterns. We summarize our results in a single, explicit formula for the effective permeability.

  6. Measures of phylogenetic differentiation provide robust and complementary insights into microbial communities.

    Science.gov (United States)

    Parks, Donovan H; Beiko, Robert G

    2013-01-01

    High-throughput sequencing techniques have made large-scale spatial and temporal surveys of microbial communities routine. Gaining insight into microbial diversity requires methods for effectively analyzing and visualizing these extensive data sets. Phylogenetic β-diversity measures address this challenge by allowing the relationship between large numbers of environmental samples to be explored using standard multivariate analysis techniques. Despite the success and widespread use of phylogenetic β-diversity measures, an extensive comparative analysis of these measures has not been performed. Here, we compare 39 measures of phylogenetic β diversity in order to establish the relative similarity of these measures along with key properties and performance characteristics. While many measures are highly correlated, those commonly used within microbial ecology were found to be distinct from those popular within classical ecology, and from the recently recommended Gower and Canberra measures. Many of the measures are surprisingly robust to different rootings of the gene tree, the choice of similarity threshold used to define operational taxonomic units, and the presence of outlying basal lineages. Measures differ considerably in their sensitivity to rare organisms, and the effectiveness of measures can vary substantially under alternative models of differentiation. Consequently, the depth of sequencing required to reveal underlying patterns of relationships between environmental samples depends on the selected measure. Our results demonstrate that using complementary measures of phylogenetic β diversity can further our understanding of how communities are phylogenetically differentiated. Open-source software implementing the phylogenetic β-diversity measures evaluated in this manuscript is available at http://kiwi.cs.dal.ca/Software/ExpressBetaDiversity.

  7. Measured and estimated glomerular filtration rate. Numerous methods of measurements (Part I

    Directory of Open Access Journals (Sweden)

    Jaime Pérez Loredo

    2017-04-01

    Equations applied for estimating GFR in population studies should be reconsidered, given their imperfection and the difficulty for clinicians who are not specialists on the subject to interpret the results.

  8. Measurement and estimation of maximum skin dose to the patient for different interventional procedures

    International Nuclear Information System (INIS)

    Cheng Yuxi; Liu Lantao; Wei Kedao; Yu Peng; Yan Shulin; Li Tianchang

    2005-01-01

    Objective: To determine the dose distribution and maximum skin dose to the patient for four interventional procedures: coronary angiography (CA), hepatic angiography (HA), radiofrequency ablation (RF) and cerebral angiography (CAG), and to estimate the deterministic effects of radiation on the skin. Methods: Skin dose was measured using LiF:Mg,Cu,P TLD chips. A total of 9 measuring points were chosen on the back of the patient, with two TLDs placed at each point, for the CA, HA and RF interventional procedures, whereas two TLDs were placed on one point each at the postero-anterior (PA) and lateral (LAT) sides, respectively, during the CAG procedure. Results: The results revealed that the maximum skin dose to the patient was 1683.91 mGy for the HA procedure, with a mean value of 607.29 mGy. The maximum skin dose at the PA point was 959.3 mGy for CAG, with a mean value of 418.79 mGy, while the maximum and mean doses at the LAT point were 704 mGy and 191.52 mGy, respectively. For the RF procedure the maximum dose was 853.82 mGy and the mean was 219.67 mGy. For the CA procedure the maximum dose was 456.1 mGy and the mean was 227.63 mGy. Conclusion: All the dose values measured in this study are estimates and may not capture the true maximum value, because it is difficult to measure with a very large number of TLDs; moreover, a small area of skin exposed to a high dose could be missed, as the dose distribution is continuous. (authors)

  9. Viscosity estimation utilizing flow velocity field measurements in a rotating magnetized plasma

    International Nuclear Information System (INIS)

    Yoshimura, Shinji; Tanaka, Masayoshi Y.

    2008-01-01

    The importance of viscosity in determining plasma flow structures has been widely recognized. In laboratory plasmas, however, viscosity measurements have been seldom performed so far. In this paper we present and discuss an estimation method of effective plasma kinematic viscosity utilizing flow velocity field measurements. Imposing steady and axisymmetric conditions, we derive the expression for radial flow velocity from the azimuthal component of the ion fluid equation. The expression contains kinematic viscosity, vorticity of azimuthal rotation and its derivative, collision frequency, azimuthal flow velocity and ion cyclotron frequency. Therefore all quantities except the viscosity are given provided that the flow field can be measured. We applied this method to a rotating magnetized argon plasma produced by the Hyper-I device. The flow velocity field measurements were carried out using a directional Langmuir probe installed in a tilting motor drive unit. The inward ion flow in radial direction, which is not driven in collisionless inviscid plasmas, was clearly observed. As a result, we found the anomalous viscosity, the value of which is two orders of magnitude larger than the classical one. (author)

  10. Principal Curvature Measures Estimation and Application to 3D Face Recognition

    KAUST Repository

    Tang, Yinhang

    2017-04-06

    This paper presents an effective 3D face keypoint detection, description and matching framework based on three principle curvature measures. These measures give a unified definition of principle curvatures for both smooth and discrete surfaces. They can be reasonably computed based on the normal cycle theory and the geometric measure theory. The strong theoretical basis of these measures provides us a solid discrete estimation method on real 3D face scans represented as triangle meshes. Based on these estimated measures, the proposed method can automatically detect a set of sparse and discriminating 3D facial feature points. The local facial shape around each 3D feature point is comprehensively described by histograms of these principal curvature measures. To guarantee the pose invariance of these descriptors, three principle curvature vectors of these principle curvature measures are employed to assign the canonical directions. Similarity comparison between faces is accomplished by matching all these curvature-based local shape descriptors using the sparse representation-based reconstruction method. The proposed method was evaluated on three public databases, i.e. FRGC v2.0, Bosphorus, and Gavab. Experimental results demonstrated that the three principle curvature measures contain strong complementarity for 3D facial shape description, and their fusion can largely improve the recognition performance. Our approach achieves rank-one recognition rates of 99.6, 95.7, and 97.9% on the neutral subset, expression subset, and the whole FRGC v2.0 databases, respectively. This indicates that our method is robust to moderate facial expression variations. Moreover, it also achieves very competitive performance on the pose subset (over 98.6% except Yaw 90°) and the occlusion subset (98.4%) of the Bosphorus database. Even in the case of extreme pose variations like profiles, it also significantly outperforms the state-of-the-art approaches with a recognition rate of 57.1%. The

  11. Uncertainty in techno-economic estimates of cellulosic ethanol production due to experimental measurement uncertainty

    Directory of Open Access Journals (Sweden)

    Vicari Kristin J

    2012-04-01

    Full Text Available Abstract Background Cost-effective production of lignocellulosic biofuels remains a major financial and technical challenge at the industrial scale. A critical tool in biofuels process development is the techno-economic (TE) model, which calculates biofuel production costs using a process model and an economic model. The process model solves mass and energy balances for each unit, and the economic model estimates capital and operating costs from the process model based on economic assumptions. The process model inputs include experimental data on the feedstock composition and intermediate product yields for each unit. These experimental yield data are calculated from primary measurements. Uncertainty in these primary measurements is propagated to the calculated yields, to the process model, and ultimately to the economic model. Thus, outputs of the TE model have a minimum uncertainty associated with the uncertainty in the primary measurements. Results We calculate the uncertainty in the Minimum Ethanol Selling Price (MESP) estimate for lignocellulosic ethanol production via a biochemical conversion process: dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis and co-fermentation of the resulting sugars to ethanol. We perform a sensitivity analysis on the TE model and identify the feedstock composition and conversion yields from three unit operations (xylose from pretreatment, glucose from enzymatic hydrolysis, and ethanol from fermentation) as the most important variables. The uncertainty in the pretreatment xylose yield arises from multiple measurements, whereas the glucose and ethanol yields from enzymatic hydrolysis and fermentation, respectively, are dominated by a single measurement: the fraction of insoluble solids (fIS) in the biomass slurries. Conclusions We calculate a $0.15/gal uncertainty in MESP from the TE model due to uncertainties in primary measurements. This result sets a lower bound on the error bars of
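    The propagation of primary measurement uncertainty into the economic output can be pictured with a simple Monte Carlo sketch like the one below. The yield distributions and the toy cost relation are assumptions made purely for illustration; they are not the TE model or the numbers behind the $0.15/gal result.

```python
# Illustrative Monte Carlo propagation of measurement uncertainty into a cost figure.
# The cost relation, yields and uncertainties below are invented for demonstration.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Assumed primary measurements with (mean, standard deviation)
xylose_yield  = rng.normal(0.75, 0.03, n)   # pretreatment xylose yield
glucose_yield = rng.normal(0.80, 0.02, n)   # enzymatic-hydrolysis glucose yield
ethanol_yield = rng.normal(0.90, 0.02, n)   # fermentation ethanol yield

# Toy "techno-economic" relation: gallons of ethanol per tonne of feedstock and a
# selling price that must cover a fixed annualised cost (all numbers invented).
gal_per_tonne = 110.0 * glucose_yield * ethanol_yield + 35.0 * xylose_yield * ethanol_yield
annualised_cost_per_tonne = 280.0           # $/tonne, assumed constant here
mesp = annualised_cost_per_tonne / gal_per_tonne

lo, mid, hi = np.percentile(mesp, [2.5, 50, 97.5])
print(f"toy MESP median ${mid:.2f}/gal, 95% interval ${lo:.2f}-{hi:.2f}/gal")
```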

  12. BIM – New rules of measurement ontology for construction cost estimation

    Directory of Open Access Journals (Sweden)

    F.H. Abanda

    2017-04-01

    Full Text Available For generations, the process of cost estimation has been manual, time-consuming and error-prone. Emerging Building Information Modelling (BIM) can exploit standard measurement methods to automate the cost estimation process and reduce inaccuracies. Structuring standard measurement methods in an ontological and machine-readable format for BIM software can greatly facilitate the process of improving inaccuracies in cost estimation. This study explores the development of an ontology based on the New Rules of Measurement (NRM) for cost estimation during the tendering stages. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. To ensure the ontology is fit for purpose, cost estimation experts are employed to check the semantics, descriptive logic-based reasoners are used to syntactically check the ontology, and a leading 4D BIM modelling software is used on a case study building to test/validate the proposed ontology.

  13. Surface Runoff Estimation Using SMOS Observations, Rain-gauge Measurements and Satellite Precipitation Estimations. Comparison with Model Predictions

    Science.gov (United States)

    Garcia Leal, Julio A.; Lopez-Baeza, Ernesto; Khodayar, Samiro; Estrela, Teodoro; Fidalgo, Arancha; Gabaldo, Onofre; Kuligowski, Robert; Herrera, Eddy

    Surface runoff is defined as the amount of water that originates from precipitation, does not infiltrate due to soil saturation and therefore circulates over the surface. A good estimation of runoff is useful for the design of drainage systems, structures for flood control and soil utilisation. For runoff estimation there exist different methods such as (i) the rational method, (ii) the isochrone method, (iii) the triangular hydrograph, (iv) the non-dimensional SCS hydrograph, (v) the Temez hydrograph, (vi) the kinematic wave model, represented by the dynamic and kinematic equations for a uniform precipitation regime, and (vii) the SCS-CN (Soil Conservation Service Curve Number) model. This work presents a way of estimating precipitation runoff through the SCS-CN model, using SMOS (Soil Moisture and Ocean Salinity) mission soil moisture observations and rain-gauge measurements, as well as satellite precipitation estimations. The area of application is the Jucar River Basin Authority area, where one of the objectives is to develop the SCS-CN model in a spatial way. The results were compared to simulations performed with the 7-km COSMO-CLM (COnsortium for Small-scale MOdelling, COSMO model in CLimate Mode) model. The use of SMOS soil moisture as input to the COSMO-CLM model will certainly improve model simulations.
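    For reference, the SCS-CN relation used in such studies computes direct runoff from rainfall depth and a curve number; a minimal sketch follows, with a placeholder curve number and rainfall event, and without the soil-moisture (e.g. SMOS-based) adjustment of CN described above.

```python
# Standard SCS-CN runoff relation (SI form); the curve number and rainfall value are
# placeholders, and any soil-moisture adjustment of CN is not modelled here.
def scs_cn_runoff(p_mm: float, cn: float, lambda_ia: float = 0.2) -> float:
    """Direct runoff depth Q [mm] for rainfall P [mm] and curve number CN."""
    s = 25400.0 / cn - 254.0          # potential maximum retention [mm]
    ia = lambda_ia * s                # initial abstraction [mm]
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(p_mm=60.0, cn=78.0))   # about 18 mm of runoff for this assumed event
```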

  14. Spot urine sodium measurements do not accurately estimate dietary sodium intake in chronic kidney disease

    Science.gov (United States)

    Dougher, Carly E; Rifkin, Dena E; Anderson, Cheryl AM; Smits, Gerard; Persky, Martha S; Block, Geoffrey A; Ix, Joachim H

    2016-01-01

    Background: Sodium intake influences blood pressure and proteinuria, yet the impact on long-term outcomes is uncertain in chronic kidney disease (CKD). Accurate assessment is essential for clinical and public policy recommendations, but few large-scale studies use 24-h urine collections. Recent studies that used spot urine sodium and associated estimating equations suggest that they may provide a suitable alternative, but their accuracy in patients with CKD is unknown. Objective: We compared the accuracy of 4 equations [the Nerbass, INTERSALT (International Cooperative Study on Salt, Other Factors, and Blood Pressure), Tanaka, and Kawasaki equations] that use spot urine sodium to estimate 24-h sodium excretion in patients with moderate to advanced CKD. Design: We evaluated the accuracy of spot urine sodium to predict mean 24-h urine sodium excretion over 9 mo in 129 participants with stage 3–4 CKD. Spot morning urine sodium was used in 4 estimating equations. Bias, precision, and accuracy were assessed and compared across each equation. Results: The mean age of the participants was 67 y, 52% were female, and the mean estimated glomerular filtration rate was 31 ± 9 mL · min–1 · 1.73 m–2. The mean ± SD number of 24-h urine collections was 3.5 ± 0.8/participant, and the mean 24-h sodium excretion was 168.2 ± 67.5 mmol/d. Although the Tanaka equation demonstrated the least bias (mean: −8.2 mmol/d), all 4 equations had poor precision and accuracy. The INTERSALT equation demonstrated the highest accuracy but derived an estimate only within 30% of mean measured sodium excretion in only 57% of observations. Bland-Altman plots revealed systematic bias with the Nerbass, INTERSALT, and Tanaka equations, underestimating sodium excretion when intake was high. Conclusion: These findings do not support the use of spot urine specimens to estimate dietary sodium intake in patients with CKD and research studies enriched with patients with CKD. The parent data for this

  15. The concept of estimation of elevator shaft control measurement results in the local 3D coordinate system

    Directory of Open Access Journals (Sweden)

    Filipiak-Kowszyk Daria

    2018-01-01

    Full Text Available Geodetic control measurements play an important role because they provide information about the current condition of a structure, which has a direct impact on the assessment of its operational safety. The authors of this paper have focused on control measurements of an elevator shaft. The article discusses the problem of determining the deviation of elevator shaft walls from the vertical plane in a local 3D coordinate system. It presents a concept for the estimation of measurement results based on the parametric method with conditions on parameters. Simulated measurement results were used to verify the concept presented in the paper.

  16. Medium change based image estimation from application of inverse algorithms to coda wave measurements

    Science.gov (United States)

    Zhan, Hanyu; Jiang, Hanwan; Jiang, Ruinian

    2018-03-01

    Perturbations acting as extra scatterers cause coda waveform distortions; thus, coda waves, with their long propagation times and travel paths, are sensitive to micro-defects in strongly heterogeneous media such as concrete. In this paper, we apply varied external loads to a life-size concrete slab containing multiple existing micro-cracks, and a set of sources and receivers is installed to collect coda wave signals. The waveform decorrelation coefficients (DC) at different loads are calculated for all available source-receiver pair measurements. Inversions of the DC results are then applied to estimate the associated distribution density values in three-dimensional regions through a kernel sensitivity model and least-squares algorithms, which leads to images indicating the positions of the micro-cracks. This work provides an efficient, non-destructive approach to detecting internal defects and damage in large concrete structures.
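    A minimal sketch of the decorrelation coefficient computation is shown below for two synthetic waveforms in an assumed lapse-time window; the kernel-sensitivity inversion that maps DC values to a 3-D image is not reproduced here.

```python
# Minimal sketch: decorrelation coefficient (DC) between a reference and a perturbed
# coda waveform in a chosen lapse-time window. The signals and window are synthetic.
import numpy as np

def decorrelation(u_ref: np.ndarray, u_pert: np.ndarray) -> float:
    """DC = 1 - normalised zero-lag cross-correlation over the window."""
    num = np.sum(u_ref * u_pert)
    den = np.sqrt(np.sum(u_ref ** 2) * np.sum(u_pert ** 2))
    return 1.0 - num / den

# Synthetic example: a decaying coda plus a small phase distortion as "damage".
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(2)
coda = np.exp(-1.5 * t) * rng.normal(size=t.size)
perturbed = np.roll(coda, 3) + 0.05 * rng.normal(size=t.size)

window = (t > 0.5) & (t < 1.5)          # late coda window (assumed)
print(f"DC in window: {decorrelation(coda[window], perturbed[window]):.3f}")
```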

  17. Using linear time-invariant system theory to estimate kinetic parameters directly from projection measurements

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1995-01-01

    It is common practice to estimate kinetic parameters from dynamically acquired tomographic data by first reconstructing a dynamic sequence of three-dimensional reconstructions and then fitting the parameters to time activity curves generated from the time-varying reconstructed images. However, in SPECT, the pharmaceutical distribution can change during the acquisition of a complete tomographic data set, which can bias the estimated kinetic parameters. It is hypothesized that more accurate estimates of the kinetic parameters can be obtained by fitting to the projection measurements instead of the reconstructed time sequence. Estimation from projections requires knowledge of the relationship between the tissue regions of interest or voxels with particular kinetic parameters and the projection measurements, which results in a complicated nonlinear estimation problem with a series of exponential factors with multiplicative coefficients. A technique is presented in this paper where the exponential decay parameters are estimated separately using linear time-invariant system theory. Once the exponential factors are known, the coefficients of the exponentials can be estimated using linear estimation techniques. Computer simulations demonstrate that estimation of the kinetic parameters directly from the projections is more accurate than the estimation from the reconstructed images

  18. The estimation of effective doses using measurement of several relevant physical parameters from radon exposures

    International Nuclear Information System (INIS)

    Ridzikova, A; Fronka, A.; Maly, B.; Moucka, L.

    2003-01-01

    In the present investigation, we study the dose-relevant factors obtained from continuous monitoring in real homes in order to obtain a more accurate estimation of the effective dose from 222Rn. The dose-relevant parameters include the radon concentration, the equilibrium factor (f), the fraction (fp) of unattached radon decay products, and the real-time occupancy of people in the home. The results of the measurements are time courses of radon concentration, on which the estimation of effective doses is based, together with an assessment of the real-time occupancy of people indoors. Our analysis shows that the yearly effective dose is lower than the effective dose estimated following the ICRP recommendation from an integral measurement that includes only the average radon concentration. Our analysis of the estimation of effective doses using measurements of several physical parameters was made in only one case, and for better specification it is important to measure in houses with different real occupancy patterns. (authors)

  19. A Model of Gravity Vector Measurement Noise for Estimating Accelerometer Bias in Gravity Disturbance Compensation

    Science.gov (United States)

    Cao, Juliang; Cai, Shaokun; Wu, Meiping; Lian, Junxiang

    2018-01-01

    Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method. PMID:29547552

  20. A Model of Gravity Vector Measurement Noise for Estimating Accelerometer Bias in Gravity Disturbance Compensation.

    Science.gov (United States)

    Tie, Junbo; Cao, Juliang; Chang, Lubing; Cai, Shaokun; Wu, Meiping; Lian, Junxiang

    2018-03-16

    Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method.
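    A much-simplified sketch of the underlying idea, recovering a constant accelerometer bias from gravity-vector residuals by least squares, is given below. The attitudes, gravity vector and noise level are assumptions, and the paper's full measurement-noise model is not reproduced.

```python
# Simplified sketch: separate a constant accelerometer bias from static gravity-vector
# measurements by least squares. Attitudes, gravity and noise are assumed values.
import numpy as np

rng = np.random.default_rng(3)
g_n = np.array([0.0, 0.0, -9.81])              # gravity in the navigation frame (assumed)
true_bias = np.array([0.02, -0.01, 0.03])      # m/s^2, the quantity to recover

def rot_z(a):                                  # simple known attitude changes about z
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

residuals = []
for ang in np.linspace(0, 2 * np.pi, 50, endpoint=False):
    C_bn = rot_z(ang)                          # body-to-nav attitude (assumed known)
    f_body = C_bn.T @ (-g_n) + true_bias + rng.normal(0, 0.01, 3)   # specific force at rest
    residuals.append(f_body - C_bn.T @ (-g_n)) # gravity-vector measurement error

# With a constant bias and zero-mean noise, least squares reduces to the mean residual.
A = np.vstack([np.eye(3)] * len(residuals))
y = np.concatenate(residuals)
b_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated bias:", np.round(b_hat, 3))
```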

  1. REKF and RUKF for pico satellite attitude estimation in the presence of measurement faults

    Institute of Scientific and Technical Information of China (English)

    Halil Ersin Söken; Chingiz Hajiyev

    2014-01-01

    When a pico satellite is under normal operational conditions, a conventional Kalman filter, whether extended or unscented, gives sufficiently good estimation results. However, if the measurements are not reliable because of any kind of malfunction in the estimation system, the Kalman filter gives inaccurate results and diverges over time. This study compares two different robust Kalman filtering algorithms, the robust extended Kalman filter (REKF) and the robust unscented Kalman filter (RUKF), for the case of measurement malfunctions. In both filters, by the use of defined variables named measurement noise scale factors, the faulty measurements are taken into consideration with a small weight, and the estimations are corrected without affecting the characteristics of the accurate ones. The proposed robust Kalman filters are applied to the attitude estimation process of a pico satellite, and the results are compared.
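    The measurement noise scale factor idea can be pictured with the scalar sketch below: when the innovation is statistically too large, the measurement covariance is inflated so the faulty measurement is down-weighted. The system, threshold and scaling rule are illustrative assumptions, not the REKF/RUKF formulations of the paper.

```python
# Hedged scalar sketch of a robust Kalman filter with a measurement noise scale factor.
# Dynamics, noise levels, fault and scaling rule are all invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
a, q, r = 0.99, 1e-4, 0.05          # assumed scalar dynamics and noise variances
x_true, x_hat, p = 0.0, 0.0, 1.0

for k in range(300):
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    z = x_true + rng.normal(0, np.sqrt(r))
    if 100 <= k < 120:              # injected measurement fault
        z += 5.0
    # prediction
    x_hat, p = a * x_hat, a * p * a + q
    # innovation test: inflate R when the innovation is too large for its covariance
    nu = z - x_hat
    s = p + r
    scale = max(1.0, nu * nu / (3.0 * s))   # scale factor >= 1 for suspect measurements
    s = p + scale * r
    kgain = p / s
    x_hat = x_hat + kgain * nu
    p = (1.0 - kgain) * p

print(f"final estimate {x_hat:.3f} vs truth {x_true:.3f}")
```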

  2. Spacecraft Trajectory Estimation Using a Sampled-Data Extended Kalman Filter with Range-Only Measurements

    National Research Council Canada - National Science Library

    Erwin, R. S; Bernstein, Dennis S

    2005-01-01

    .... In this paper we use a sampled-data extended Kalman filter to estimate the trajectory of a target satellite when only range measurements are available from a constellation of orbiting spacecraft...

  3. Arm-associated measurements as estimates of true height in black ...

    African Journals Online (AJOL)

    arm-associated measurements to true height included that of the World Health ... Conclusion: Findings indicate the need for gender and race-specific height estimation ...

  4. System and Method for Providing Vertical Profile Measurements of Atmospheric Gases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A system and method for using an air collection device to collect a continuous air sample as the device descends through the atmosphere are provided. The air...

  5. Block Volume Estimation from the Discontinuity Spacing Measurements of Mesozoic Limestone Quarries, Karaburun Peninsula, Turkey

    OpenAIRE

    Elci, Hakan; Turk, Necdet

    2014-01-01

    Block volumes are generally estimated by analyzing the discontinuity spacing measurements obtained either from the scan lines placed over the rock exposures or the borehole cores. Discontinuity spacing measurements made at the Mesozoic limestone quarries in Karaburun Peninsula were used to estimate the average block volumes that could be produced from them using the suggested methods in the literature. The Block Quality Designation (BQD) ratio method proposed by the authors has been found to ...

  6. The international food unit: a new measurement aid that can improve portion size estimation.

    Science.gov (United States)

    Bucher, T; Weltert, M; Rollo, M E; Smith, S P; Jia, W; Collins, C E; Sun, M

    2017-09-12

    Portion size education tools, aids and interventions can be effective in helping prevent weight gain. However, consumers have difficulties in estimating food portion sizes and are confused by inconsistencies in measurement units and terminologies currently used. Visual cues are an important mediator of portion size estimation, but standardized measurement units are required. In the current study, we present a new food volume estimation tool and test the ability of young adults to accurately quantify food volumes. The International Food Unit™ (IFU™) is a 4x4x4 cm cube (64 cm3), subdivided into eight 2 cm sub-cubes for estimating smaller food volumes. Compared with currently used measures such as cups and spoons, the IFU™ standardizes estimation of food volumes with metric measures. The IFU™ design is based on binary dimensional increments, and the cubic shape facilitates portion size education and training, memory and recall, and computer processing, which is binary in nature. The performance of the IFU™ was tested in a randomized between-subject experiment (n = 128 adults, 66 men) that estimated volumes of 17 foods using four methods: the IFU™ cube, a deformable modelling clay cube, a household measuring cup or no aid (weight estimation). Estimation errors were compared between groups using Kruskal-Wallis tests and post-hoc comparisons. Estimation errors differed significantly between groups (H(3) = 28.48, p studies should investigate whether the IFU™ can facilitate portion size training and whether portion size education using the IFU™ is effective and sustainable without the aid. A 3-dimensional IFU™ could serve as a reference object for estimating food volume.

  7. Estimation of thermal transmittance based on temperature measurements with the application of perturbation numbers

    Science.gov (United States)

    Nowoświat, Artur; Skrzypczyk, Jerzy; Krause, Paweł; Steidl, Tomasz; Winkler-Skalna, Agnieszka

    2018-05-01

    Fast estimation of thermal transmittance based on temperature measurements is uncertain, and the results obtained can be burdened with a large error. Nevertheless, such attempts should be undertaken because a precise measurement by means of heat flux sensors is not always possible in field conditions (residents are often reluctant to allow measurements inside their living quarters), and calculation methods do not allow for the nonlinearity of thermal insulation, thermal bridges or other parts of the building envelope with differing thermal conductivity. The present paper offers an estimation of thermal transmittance and internal surface resistance with the use of temperature measurements (in particular with the use of thermography). The proposed method has been verified through tests carried out on a laboratory test stand built in the open air and subjected to real meteorological conditions. The present elaboration involves the estimation of thermal transmittance by means of temperature measurements; based on this estimation, the authors present correction coefficients which affect the estimation accuracy. Furthermore, in the final part of the paper, various types of disturbance are allowed for using perturbation numbers, and the authors' proposed "credibility area of thermal transmittance estimation" is determined.
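    For context, the quasi-steady temperature-based estimate of thermal transmittance commonly takes the form sketched below, with an assumed internal surface heat-transfer coefficient; the temperatures are placeholders and the perturbation-number analysis of the paper is not reproduced.

```python
# Common temperature-based U-value estimate: U is inferred from indoor air, indoor
# surface and outdoor air temperatures using an assumed internal surface coefficient.
# All numbers are placeholders, not values from the cited study.
def u_value(t_in: float, t_surface_in: float, t_out: float, h_i: float = 7.69) -> float:
    """Thermal transmittance estimate [W/(m^2 K)] from quasi-steady temperatures [deg C]."""
    if t_in == t_out:
        raise ValueError("indoor/outdoor temperature difference must be non-zero")
    return h_i * (t_in - t_surface_in) / (t_in - t_out)

# Example: 20 degC indoors, 17.5 degC on the inner wall surface, 0 degC outdoors.
print(f"U ~ {u_value(20.0, 17.5, 0.0):.2f} W/(m2 K)")
```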

  8. Crustal composition in the Hidaka Metamorphic Belt estimated from seismic velocity by laboratory measurements

    Science.gov (United States)

    Yamauchi, K.; Ishikawa, M.; Sato, H.; Iwasaki, T.; Toyoshima, T.

    2015-12-01

    To understand the dynamics of the lithosphere in subduction systems, knowledge of rock composition is important. However, the rock composition of the overriding plate is still poorly understood. To estimate the rock composition of the lithosphere, an effective method is to compare elastic wave velocities measured under high-pressure and high-temperature conditions with seismic velocities obtained from active-source experiments and earthquake observations. Due to an arc-arc collision in central Hokkaido, the middle to lower crust is exposed along the Hidaka Metamorphic Belt (HMB), providing exceptional opportunities to study the crustal composition of an island arc. Across the HMB, a P-wave velocity model has been constructed by refraction/wide-angle reflection seismic profiling (Iwasaki et al., 2004). Furthermore, based on the interpretation of the crustal structure (Ito, 2000), we can follow a continuous path from the surface to the middle-lower crust. We collected representative rock samples from the HMB and measured ultrasonic P-wave (Vp) and S-wave velocities (Vs) at pressures up to 1.0 GPa in a temperature range from 25 to 400 °C. For example, the Vp values measured at 25 °C and 0.5 GPa are 5.88 km/s for the granite (74.29 wt.% SiO2), 6.02-6.34 km/s for the tonalites (66.31-68.92 wt.% SiO2), 6.34 km/s for the gneiss (64.69 wt.% SiO2), 6.41-7.05 km/s for the amphibolites (50.06-51.13 wt.% SiO2), and 7.42 km/s for the mafic granulite (50.94 wt.% SiO2). The Vp of the tonalites showed a correlation with SiO2 (wt.%). Comparing these with the velocity profiles across the HMB (Iwasaki et al., 2004), we estimate that the lower to middle crust consists of amphibolite and tonalite, and the estimated acoustic impedance contrast between them suggests the existence of a clear reflective boundary, which accords well with the obtained seismic reflection profile (Iwasaki et al., 2014). The same tendency is obtained by comparing the measured Vp/Vs ratios with the Vp/Vs ratio structure model

  9. An Implementation of Error Minimization Position Estimate in Wireless Inertial Measurement Unit using Modification ZUPT

    Directory of Open Access Journals (Sweden)

    Adytia Darmawan

    2016-12-01

    Full Text Available Position estimation using a WIMU (Wireless Inertial Measurement Unit) is one of the emerging technologies in the field of indoor positioning systems. A WIMU can detect movement and does not depend on GPS signals. The position is estimated using a modified ZUPT (Zero Velocity Update) method that uses Filter Magnitude Acceleration (FMA), Variance Magnitude Acceleration (VMA) and Angular Rate (AR) estimation. The performance of this method was evaluated on a six-legged robot navigation system. Experimental results show that the VMA-AR combination gives the best position estimation.
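    A hedged sketch of a stance-phase (zero-velocity) detector combining the three kinds of quantities mentioned above (acceleration magnitude, its variance and angular rate) is given below; the thresholds, window length and synthetic data are assumptions rather than the WIMU parameters of the study.

```python
# Illustrative zero-velocity detector for a ZUPT-style scheme. Thresholds, window
# length and the synthetic data are assumptions made for demonstration only.
import numpy as np

def zero_velocity_flags(acc, gyr, fs=100.0, win=0.2,
                        g=9.81, acc_tol=0.4, var_tol=0.05, gyr_tol=0.5):
    """acc, gyr: (N, 3) arrays in m/s^2 and rad/s; returns boolean stance flags."""
    n = acc.shape[0]
    w = max(1, int(win * fs))
    acc_mag = np.linalg.norm(acc, axis=1)
    gyr_mag = np.linalg.norm(gyr, axis=1)
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
        cond_mag = abs(acc_mag[i] - g) < acc_tol          # magnitude close to gravity
        cond_var = np.var(acc_mag[lo:hi]) < var_tol       # little motion in the window
        cond_gyr = gyr_mag[i] < gyr_tol                   # low angular rate
        flags[i] = cond_mag and cond_var and cond_gyr
    return flags

# During flagged samples, an INS would apply a zero-velocity pseudo-measurement
# (velocity = 0) to bound the drift of the integrated position estimate.
rng = np.random.default_rng(5)
acc = np.tile([0.0, 0.0, 9.81], (200, 1)) + rng.normal(0, 0.05, (200, 3))
gyr = rng.normal(0, 0.02, (200, 3))
print(zero_velocity_flags(acc, gyr).mean())   # fraction of samples detected as stance
```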

  10. Estimation of uncertainty of measurements of 3D mechanisms after kinematic calibration

    International Nuclear Information System (INIS)

    Takamasu, K; Sato, O; Shimojima, K; Takahashi, S; Furutani, R

    2005-01-01

    Calibration methods for 3D mechanisms are necessary in order to use the mechanisms as coordinate measuring machines. The calibration of a coordinate measuring machine using artifacts, the artifact calibration method, is proposed taking into account the traceability of the mechanism. The geometric parameters describing the forward kinematics of the mechanism comprise kinematic parameters and form-deviation parameters. In this article, methods for estimating the uncertainties of the calibrated coordinate measuring machine after calibration are formulated. First, the calculation method that derives the values of the kinematic parameters using the least squares method is formulated. Second, the uncertainty of the measuring machine is estimated using the error propagation method

  11. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

    We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators

  12. Estimation of release of tritium from measurements of air concentrations in reactor building of PHWR

    International Nuclear Information System (INIS)

    Purohit, R.G.; Sarkar, P.K.

    2010-01-01

    In this paper an attempt has been made to estimate the releases from measured air concentrations of tritium at various locations in the Reactor Building (RB). Design data of the Kaiga Generating Station and sample measurements of tritium concentrations at various locations in the RB, together with discharges over a fortnight, were used. A comparison has also been made with actual measurements. It has been observed that the estimated and actual measurements of tritium release agree well on some days, while on other days there are large differences

  13. Field data provide estimates of effective permeability, fracture spacing, well drainage area and incremental production in gas shales

    KAUST Repository

    Eftekhari, Behzad

    2018-05-23

    About half of US natural gas comes from gas shales. It is valuable to study field production well by well. We present a field data-driven solution for long-term shale gas production from a horizontal, hydrofractured well far from other wells and reservoir boundaries. Our approach is a hybrid between an unstructured big-data approach and physics-based models. We extend a previous two-parameter scaling theory of shale gas production by adding a third parameter that incorporates gas inflow from the external unstimulated reservoir. This allows us to estimate for the first time the effective permeability of the unstimulated shale and the spacing of fractures in the stimulated region. From an analysis of wells in the Barnett shale, we find that on average stimulation fractures are spaced every 20 m, and the effective permeability of the unstimulated region is 100 nanodarcy. We estimate that over 30 years on production the Barnett wells will produce on average about 20% more gas because of inflow from the outside of the stimulated volume. There is a clear tradeoff between production rate and ultimate recovery in shale gas development. In particular, our work has strong implications for well spacing in infill drilling programs.

  14. Headphone-To-Ear Transfer Function Estimation Using Measured Acoustic Parameters

    Directory of Open Access Journals (Sweden)

    Jinlin Liu

    2018-06-01

    Full Text Available This paper proposes to use an optimal five-microphone array method to measure the headphone acoustic reflectance and equivalent sound sources needed in the estimation of headphone-to-ear transfer functions (HpTFs). The performance of this method is theoretically analyzed and experimentally investigated. With the measured acoustic parameters, HpTFs for different headphones and ear canal area functions are estimated based on a computational acoustic model. The estimation results show that HpTFs vary considerably with headphones and ear canals, which suggests that individualized compensations for HpTFs are necessary for headphones to reproduce desired sounds for different listeners.

  15. Connections of geometric measure of entanglement of pure symmetric states to quantum state estimation

    International Nuclear Information System (INIS)

    Chen Lin; Zhu Huangjun; Wei, Tzu-Chieh

    2011-01-01

    We study the geometric measure of entanglement (GM) of pure symmetric states related to rank 1 positive-operator-valued measures (POVMs) and establish a general connection with quantum state estimation theory, especially the maximum likelihood principle. Based on this connection, we provide a method for computing the GM of these states and demonstrate its additivity property under certain conditions. In particular, we prove the additivity of the GM of pure symmetric multiqubit states whose Majorana points under Majorana representation are distributed within a half sphere, including all pure symmetric three-qubit states. We then introduce a family of symmetric states that are generated from mutually unbiased bases and derive an analytical formula for their GM. These states include Dicke states as special cases, which have already been realized in experiments. We also derive the GM of symmetric states generated from symmetric informationally complete POVMs (SIC POVMs) and use it to characterize all inequivalent SIC POVMs in three-dimensional Hilbert space that are covariant with respect to the Heisenberg-Weyl group. Finally, we describe an experimental scheme for creating the symmetric multiqubit states studied in this article and a possible scheme for measuring the permanence of the related Gram matrix.

  16. Atomic bomb made in Germany. Geo-radar measurements provide new insights

    International Nuclear Information System (INIS)

    Hauk, Rolf-Guenter; Focken, Christel

    2017-01-01

    The authors describe new georadar measurements in the Jonastal and discuss the results in relation to rumors about German efforts to build an atomic bomb during the Second World War. The book includes available documentation on German and American research and technological activities (Manhattan project).

  17. Field experiment provides ground truth for surface nuclear magnetic resonance measurement

    Science.gov (United States)

    Knight, R.; Grunewald, E.; Irons, T.; Dlubac, K.; Song, Y.; Bachman, H.N.; Grau, B.; Walsh, D.; Abraham, J.D.; Cannia, J.

    2012-01-01

    The need for sustainable management of fresh water resources is one of the great challenges of the 21st century. Since most of the planet's liquid fresh water exists as groundwater, it is essential to develop non-invasive geophysical techniques to characterize groundwater aquifers. A field experiment was conducted in the High Plains Aquifer, central United States, to explore the mechanisms governing the non-invasive Surface NMR (SNMR) technology. We acquired both SNMR data and logging NMR data at a field site, along with lithology information from drill cuttings. This allowed us to directly compare the NMR relaxation parameter measured during logging, T2, to the relaxation parameter T2* measured using the SNMR method. The latter can be affected by inhomogeneity in the magnetic field, thus obscuring the link between the NMR relaxation parameter and the hydraulic conductivity of the geologic material. When the logging T2 data were transformed to pseudo-T2* data, by accounting for inhomogeneity in the magnetic field and instrument dead time, we found good agreement with T2* obtained from the SNMR measurement. These results, combined with the additional information about lithology at the site, allowed us to delineate the physical mechanisms governing the SNMR measurement. Such understanding is a critical step in developing SNMR as a reliable geophysical method for the assessment of groundwater resources.

  18. Measuring Ucrit and endurance: equipment choice influences estimates of fish swimming performance.

    Science.gov (United States)

    Kern, P; Cramp, R L; Gordos, M A; Watson, J R; Franklin, C E

    2018-01-01

    This study compared the critical swimming speed (Ucrit) and endurance performance of three Australian freshwater fish species in different swim-test apparatus. Estimates of Ucrit measured in a large recirculating flume were greater for all species compared with estimates from a smaller model of the same recirculating flume. Large differences were also observed for estimates of endurance swimming performance between these recirculating flumes and a free-surface swim tunnel. Differences in estimates of performance may be attributable to variation in flow conditions within different types of swim chambers. Variation in estimates of swimming performance between different types of flumes complicates the application of laboratory-based measures to the design of fish passage infrastructure. © 2017 The Fisheries Society of the British Isles.
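    For reference, Ucrit in incremental swim tests is conventionally computed with Brett's formula, sketched below with invented numbers; the flume comparison above concerns how the measured quantities feeding this formula differ between apparatus.

```python
# Brett's conventional critical swimming speed formula, with invented example numbers.
def u_crit(u_last_completed: float, dt_failed: float, dt_step: float, du_step: float) -> float:
    """Ucrit = U_f + (t/T) * dU, where t is time swum in the final, uncompleted step."""
    return u_last_completed + (dt_failed / dt_step) * du_step

# e.g. last completed speed 0.50 m/s, fatigue 6 min into a 20-min step of +0.10 m/s
print(f"Ucrit = {u_crit(0.50, 6.0, 20.0, 0.10):.2f} m/s")   # 0.53 m/s
```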

  19. Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2013-01-01

    In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators...

  20. Estimating the Wind Resource in Uttarakhand: Comparison of Dynamic Downscaling with Doppler Lidar Wind Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Lundquist, J. K. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Pukayastha, A. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Martin, C. [Univ. of Colorado, Boulder, CO (United States); Newsom, R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-03-01

    Previous estimates of the wind resources in Uttarakhand, India, suggest minimal wind resources in this region. To explore whether or not the complex terrain in fact provides localized regions of wind resource, the authors of this study employed a dynamic downscaling method with the Weather Research and Forecasting model, providing detailed estimates of winds at approximately 1 km resolution in the finest nested simulation.

  1. An Adaptive Low-Cost INS/GNSS Tightly-Coupled Integration Architecture Based on Redundant Measurement Noise Covariance Estimation.

    Science.gov (United States)

    Li, Zheng; Zhang, Hai; Zhou, Qifan; Che, Huan

    2017-09-05

    The main objective of this study is to design an adaptive Inertial Navigation System/Global Navigation Satellite System (INS/GNSS) tightly-coupled integration system that can provide more reliable navigation solutions by making full use of an adaptive Kalman filter (AKF) and a satellite selection algorithm. To achieve this goal, we develop a novel redundant measurement noise covariance estimation (RMNCE) theorem, which adaptively estimates measurement noise properties by analyzing the difference sequences of system measurements. The proposed RMNCE approach is then applied to design both a modified weighted satellite selection algorithm and a type of adaptive unscented Kalman filter (UKF) to improve the performance of the tightly-coupled integration system. In addition, an adaptive measurement noise covariance expanding algorithm is developed to mitigate outliers under heavy multipath and other harsh conditions. Both semi-physical simulations and field experiments were conducted to evaluate the performance of the proposed architecture, and the results were compared with state-of-the-art algorithms. The results validate that the RMNCE provides a significant improvement in the measurement noise covariance estimation and that the proposed architecture can improve the accuracy and reliability of INS/GNSS tightly-coupled systems. The proposed architecture can effectively limit positioning errors under conditions of poor GNSS measurement quality and outperforms all the compared schemes.
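    The redundant-measurement idea can be pictured with the small sketch below: with three redundant measurements of the same quantity, the variances of their pairwise difference sequences yield the three unknown noise variances. The synthetic signals are assumptions, and the full RMNCE derivation and its use inside the tightly-coupled filter are not reproduced.

```python
# Sketch of noise-variance recovery from difference sequences of redundant measurements.
# Three synthetic sensors observe the same signal with unknown, independent noise levels.
import numpy as np

rng = np.random.default_rng(6)
truth = np.cumsum(rng.normal(0, 0.1, 5000))          # common underlying signal
sig = np.array([0.5, 1.0, 2.0])                      # true noise std devs to recover
z = truth + rng.normal(0, 1.0, (3, 5000)) * sig[:, None]

d12 = np.var(z[0] - z[1])    # = R1 + R2 (the signal cancels in the difference)
d13 = np.var(z[0] - z[2])    # = R1 + R3
d23 = np.var(z[1] - z[2])    # = R2 + R3

R1 = 0.5 * (d12 + d13 - d23)
R2 = 0.5 * (d12 + d23 - d13)
R3 = 0.5 * (d13 + d23 - d12)
print("estimated noise std devs:", np.round(np.sqrt([R1, R2, R3]), 2))
```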

  2. Estimation of uncertainty in tracer gas measurement of air change rates.

    Science.gov (United States)

    Iizuka, Atsushi; Okuizumi, Yumiko; Yanagisawa, Yukio

    2010-12-01

    Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, this is made more complex by the fact many buildings are not a single fully mixed zone. This means many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of air change rate in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of air change rate can be avoided. The proposed estimation method will be useful in practical ventilation measurements.
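    For the single fully mixed zone that the multi-zone discussion builds on, the classic tracer-gas decay estimate is sketched below with synthetic concentrations; the multi-zone (n ≥ 2) treatment of the paper is not reproduced.

```python
# Classic single-zone tracer-gas decay relation: in one fully mixed zone the air change
# rate is minus the slope of log concentration versus time. Concentrations are synthetic.
import numpy as np

t_h = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # time [h]
c = 800.0 * np.exp(-0.8 * t_h) * (1 + np.random.default_rng(7).normal(0, 0.02, 5))

# Fit ln C = ln C0 - n t  ->  air change rate n [1/h] is minus the slope.
slope, _ = np.polyfit(t_h, np.log(c), 1)
print(f"estimated air change rate: {-slope:.2f} per hour")
```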

  3. A Scale Elasticity Measure for Directional Distance Function and its Dual: Theory and DEA Estimation

    OpenAIRE

    Valentin Zelenyuk

    2012-01-01

    In this paper we focus on scale elasticity measure based on directional distance function for multi-output-multi-input technologies, explore its fundamental properties and show its equivalence with the input oriented and output oriented scale elasticity measures. We also establish duality relationship between the scale elasticity measure based on the directional distance function with scale elasticity measure based on the profit function. Finally, we discuss the estimation issues of the scale...

  4. Interdependence between measures of extent and severity of myocardial perfusion defects provided by automatic quantification programs

    DEFF Research Database (Denmark)

    El-Ali, Henrik Hussein; Palmer, John; Carlsson, Marcus

    2005-01-01

    To evaluate the accuracy of the values of lesion extent and severity provided by the two automatic quantification programs AutoQUANT and 4D-MSPECT using myocardial perfusion images generated by Monte Carlo simulation of a digital phantom. The combination between a realistic computer phantom and a...

  5. Patient-provider concordance with behavioral change goals drives measures of motivational interviewing consistency.

    Science.gov (United States)

    Laws, Michael Barton; Rose, Gary S; Beach, Mary Catherine; Lee, Yoojin; Rogers, William S; Velasco, Alyssa Bianca; Wilson, Ira B

    2015-06-01

    Motivational Interviewing (MI) consistent talk by a counselor is thought to produce "change talk" in clients. However, it is possible that client resistance to behavior change can produce MI inconsistent counselor behavior. We applied a coding scheme which identifies all of the behavioral counseling about a given issue during a visit ("episodes"), assesses patient concordance with the behavioral goal, and labels providers' counseling style as facilitative or directive, to a corpus of routine outpatient visits by people with HIV. Using a different data set of comparable encounters, we applied the concepts of episode and concordance, and coded using the Motivational Interviewing Treatment Integrity system. Patient concordance/discordance was not observed to change during any episode. Provider directiveness was strongly associated with patient discordance in the first study, and MI inconsistency was strongly associated with discordance in the second. Observations that MI-consistent behavior by medical providers is associated with patient change talk or outcomes should be evaluated cautiously, as patient resistance may provoke MI-inconsistency. Counseling episodes in routine medical visits are typically too brief for client talk to evolve toward change. Providers with limited training may have particular difficulty maintaining MI consistency with resistant clients. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Measuring Healthcare Providers' Performances Within Managed Competition using Multidimensional Quality and Cost Indicators

    NARCIS (Netherlands)

    Portrait, F.R.M.; van den Berg, B.

    2015-01-01

    Background and objectives: The Dutch healthcare system is in transition towards managed competition. In theory, a system of managed competition involves incentives for quality and efficiency of provided care. This is mainly because health insurers contract on behalf of their clients with healthcare

  7. Stroke Volume estimation using aortic pressure measurements and aortic cross sectional area: Proof of concept.

    Science.gov (United States)

    Kamoi, S; Pretty, C G; Chiew, Y S; Pironet, A; Davidson, S; Desaive, T; Shaw, G M; Chase, J G

    2015-08-01

    Accurate Stroke Volume (SV) monitoring is essential for patients with cardiovascular dysfunction. However, direct SV measurements are not clinically feasible due to the highly invasive nature of measurement devices. Current devices for indirect monitoring of SV are shown to be inaccurate during sudden hemodynamic changes. This paper presents a novel SV estimation using readily available aortic pressure measurements and aortic cross sectional area, using data from a porcine experiment where medical interventions such as fluid replacement, dobutamine infusions, and recruitment maneuvers induced SV changes in a pig with circulatory shock. Measurements of left ventricular volume, proximal aortic pressure, and descending aortic pressure waveforms were made simultaneously during the experiment. From the measured data, proximal aortic pressure was separated into reservoir and excess pressures. Beat-to-beat aortic characteristic impedance values were calculated using both aortic pressure measurements and an estimate of the aortic cross sectional area. SV was estimated using the calculated aortic characteristic impedance and the excess pressure component of the proximal aorta. The median difference between directly measured SV and estimated SV was -1.4 ml, with 95% limits of agreement of +/- 6.6 ml. This method demonstrates that SV can be accurately captured beat-to-beat during sudden changes in hemodynamic state. This novel SV estimation could enable improved cardiac and circulatory treatment in the critical care environment by titrating treatment to the effect on SV.
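
    The reservoir-excess pressure decomposition described above lends itself to a compact calculation. The sketch below (Python, with hypothetical variable names and synthetic data; the actual algorithm of Kamoi et al. may differ in detail) approximates aortic flow as the excess pressure divided by the characteristic impedance Zc and integrates it over the beat to obtain SV.

        import numpy as np

        def stroke_volume(p_excess, z_c, dt):
            """Estimate stroke volume for one beat.

            p_excess : excess aortic pressure over the beat [mmHg]
            z_c      : aortic characteristic impedance [mmHg*s/ml]
            dt       : sampling interval [s]

            Flow is approximated as q(t) = P_excess(t) / Zc, and SV is
            the time integral of q(t) over the ejection period.
            """
            q = p_excess / z_c            # flow waveform [ml/s]
            return np.trapz(q, dx=dt)     # stroke volume [ml]

        # Hypothetical example: 0.3 s ejection sampled at 1 kHz
        t = np.arange(0.0, 0.3, 0.001)
        p_excess = 40.0 * np.sin(np.pi * t / 0.3)   # synthetic excess pressure [mmHg]
        z_c = 0.12                                  # assumed characteristic impedance
        print(f"SV ~ {stroke_volume(p_excess, z_c, 0.001):.1f} ml")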

  8. Sourceless formation evaluation. An LWD solution providing density and neutron measurements without the use of radioisotopes

    Energy Technology Data Exchange (ETDEWEB)

    Griffiths, R.; Reichel, N. [Schlumberger, Sungai Buloh (Malaysia)

    2013-08-01

    For many years the industry has been searching for a way to eliminate the logistical difficulties and risk associated with deployment of radioisotopes for formation evaluation. The traditional gamma-gamma density (GGD) measurement uses the scattering of 662-keV gamma rays from a 137Cs radioisotopic source, with a 30.17-year half-life, to determine formation density. The traditional neutron measurement uses an Am-Be source emitting neutrons with an energy around 4 MeV, with a half-life of 432 years. Both these radioisotopic sources pose health, security, and environmental risks. Pulsed-neutron generators have been used in the industry for several decades in wireline tools and more recently in logging-while-drilling tools. These generators produce 14-MeV neutrons, many of which interact with the nuclei in the formation. Elastic collisions allow a neutron porosity measurement to be derived, which has been available to the industry since 2005. Inelastic interactions are typically followed by the emission of a variety of high-energy gamma rays. Similar to the case of the GGD measurement, the transport and attenuation of these gamma rays is a strong function of the formation density. However, the gamma-ray source is now distributed over a volume within the formation, where gamma rays have been induced by neutron interactions and the source can no longer be considered to be a point as in the case of a radioisotopic source. In addition, the extent of the induced source region depends on the transport of the fast neutrons from the source to the point of gamma-ray production. Even though the physics is more complex, it is possible to measure the formation density if the fast neutron transport is taken into account when deriving the density answer. This paper briefly reviews the physics underlying the sourceless neutron porosity and recently introduced neutron-gamma density (SNGD) measurement, demonstrates how they can be used in traditional workflows and illustrates their

  9. Estimation and improvement of the RF government plan for providing the sustainable social-economic development of Russia in 2016

    Directory of Open Access Journals (Sweden)

    Dmitriy V. Manushin

    2016-09-01

    Full Text Available Objective: to assess the anticrisis plan of the RF Government of January 27, 2015, to project it onto the anticrisis plan of the RF Government dated March 1, 2016, to identify its problems and to suggest measures for their solution. Methods: abstract-logical. Results: the modern Russian economy is facing severe challenges posed by the crisis in the economy, Western sanctions and the foreign policy of the country. The government, aware of these problems, has for the second year in a row adopted and implemented a program of anticrisis measures. Analysis of the scientific literature and of reports of the government and supervisory bodies made it possible to formulate conclusions on the low efficiency of the anticrisis plan of 27 January 2015. Most of the indicators have not been achieved; in many areas the objectives and their implementation have been haphazard, with immeasurable results and a lack of accountability for results. Analysis of the structure of the anticrisis measures of the Russian government for 2016 demonstrated the persistence of old problems and the emergence of new ones. The number of indicators increased, while funding was simultaneously reduced, as was the number of specialists engaged in substantive anticrisis measures. An inefficient structure of the anticrisis measures is identified, in which priority is given to inappropriate support of the regions and the domestic auto industry to the detriment of the social component, along with other problems. As a result, measures are proposed to address the identified problems in the anticrisis plans of the Russian Government dated 27.01.2015 and 01.03.2016. Scientific novelty: the following basic steps are formulated to address the identified challenges: to modernize the work of the crisis staff; to specify the anticrisis measures, developing a detailed mechanism for their implementation and indicators for assessing their effectiveness; to increase the motivation of civil servants; to increase the minimum wage to the subsistence minimum; to grant

  10. Derelict fishing line provides a useful proxy for estimating levels of non-compliance with no-take marine reserves.

    Directory of Open Access Journals (Sweden)

    David H Williamson

    Full Text Available No-take marine reserves (NTMRs) are increasingly being established to conserve or restore biodiversity and to enhance the sustainability of fisheries. Although effectively designed and protected NTMR networks can yield conservation and fishery benefits, reserve effects often fail to manifest in systems where there are high levels of non-compliance by fishers (poaching). Obtaining reliable estimates of NTMR non-compliance can be expensive and logistically challenging, particularly in areas with limited or non-existent resources for conducting surveillance and enforcement. Here we assess the utility of density estimates and re-accumulation rates of derelict (lost and abandoned) fishing line as a proxy for fishing effort and NTMR non-compliance on fringing coral reefs in three island groups of the Great Barrier Reef Marine Park (GBRMP), Australia. Densities of derelict fishing line were consistently lower on reefs within old (>20 year) NTMRs than on non-NTMR reefs (significantly in the Palm and Whitsunday Islands), whereas line densities did not differ significantly between reefs in new NTMRs (5 years of protection) and non-NTMR reefs. A manipulative experiment in which derelict fishing lines were removed from a subset of the monitoring sites demonstrated that lines re-accumulated on NTMR reefs at approximately one third (32.4%) of the rate observed on non-NTMR reefs over a thirty-two month period. Although these inshore NTMRs have long been considered some of the best protected within the GBRMP, evidence presented here suggests that the level of non-compliance with NTMR regulations is higher than previously assumed.

  11. Brittleness estimation from seismic measurements in unconventional reservoirs: Application to the Barnett shale

    Science.gov (United States)

    Perez Altimar, Roderick

    Brittleness is a key characteristic for effective reservoir stimulation and is mainly controlled by mineralogy in unconventional reservoirs. Unfortunately, there is no universally accepted means of predicting brittleness from measurements made in wells or from surface seismic data. Brittleness indices (BI) are based on mineralogy, while brittleness average estimates are based on Young's modulus and Poisson's ratio. I evaluate two of the more popular brittleness estimation techniques and apply them to a Barnett Shale seismic survey in order to estimate its geomechanical properties. Using specialized logging tools such as an elemental capture tool, density, and P- and S-wave sonic logs calibrated to previous core descriptions and laboratory measurements, I create a survey-specific BI template in Young's modulus versus Poisson's ratio or, alternatively, lambda-rho versus mu-rho space. I use this template to predict BI from elastic parameters computed from surface seismic data, providing a continuous BI estimate across the Barnett Shale survey. Extracting lambda-rho and mu-rho values at microseismic event locations, I compute the brittleness index from the template and find that most microseismic events occur in the more brittle part of the reservoir. My template is validated through a suite of microseismic experiments that shows most events occurring in brittle zones, fewer events in the ductile shale, and fewer events still in the limestone fracture barriers. Estimated ultimate recovery (EUR) is an estimate of the expected total production of oil and/or gas for the economic life of a well and is widely used in the evaluation of resource play reserves. In the literature it is possible to find several approaches for forecasting purposes and economic analyses. However, the extension to newer infill wells is somewhat challenging because production forecasts in unconventional reservoirs are a function of both completion effectiveness and reservoir quality. For shale gas reservoirs
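
    The brittleness-average approach mentioned above combines Young's modulus and Poisson's ratio after rescaling each to a fixed range. A minimal Python sketch, assuming the common min-max normalization; the bounds used for the Barnett template are not given in this record, so the limits below are placeholders.

        def brittleness_average(E, nu, E_min=10.0, E_max=80.0, nu_min=0.10, nu_max=0.40):
            """Brittleness average (percent) from Young's modulus E (GPa) and
            Poisson's ratio nu, each rescaled to 0-1 between assumed bounds.
            Brittle rock has high E and low nu, giving a high index."""
            e_term = (E - E_min) / (E_max - E_min)
            nu_term = (nu_max - nu) / (nu_max - nu_min)
            return 50.0 * (e_term + nu_term)   # average of the two terms, in percent

        # Hypothetical Barnett-like values
        print(brittleness_average(E=45.0, nu=0.22))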

  12. Robust estimation of partially linear models for longitudinal data with dropouts and measurement error.

    Science.gov (United States)

    Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing

    2016-12-20

    Outliers, measurement error, and missing data are commonly seen in longitudinal data because of the data collection process. However, no existing method addresses all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. Comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.

  13. A calibration facility to provide traceable calibration to upper air humidity measuring sensors

    Science.gov (United States)

    Cuccaro, Rugiada; Rosso, Lucia; Smorgon, Denis; Beltramino, Giulio; Fernicola, Vito

    2017-04-01

    Accurate knowledge and high quality measurement of upper air humidity and of its profile in the atmosphere are essential in many areas of atmospheric research, for example in weather forecasting, environmental pollution studies and research in meteorology and climatology. Moving from the troposphere to the stratosphere, the water vapour amount varies between a few percent and a few parts per million. For this reason, through the years, several methods and instruments have been developed for the measurement of humidity in the atmosphere. Among the instruments used for atmospheric sounding, radiosondes, airborne and balloon-borne chilled mirror hygrometers (CMH) and tunable diode laser absorption spectrometers (TDLAS) play a key role. To avoid the presence of unknown biases and systematic errors and to obtain accurate and reliable humidity measurements, these instruments need a SI-traceable calibration, preferably carried out in conditions similar to those expected in the field. To satisfy such a need, a new calibration facility has been developed at INRIM. The facility is based on a thermodynamic frost-point generator designed to achieve complete saturation of the carrier gas with a single passage through an isothermal saturator. The humidity generator covers the frost point temperature range between -98 °C and -20 °C and is able to work at any controlled pressure between 200 hPa and 1000 hPa (corresponding to a barometric altitude between ground level and approximately 12000 m). The paper reports the work carried out to test the generator's performance, discusses the results and presents the evaluation of the measurement uncertainty. The present work was carried out within the European Joint Research Project "MeteoMet 2 - Metrology for Essential Climate Variables" co-funded by the European Metrology Research Programme (EMRP). The EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union.

  14. Estimating drizzle drop size and precipitation rate using two-colour lidar measurements

    Directory of Open Access Journals (Sweden)

    C. D. Westbrook

    2010-06-01

    Full Text Available A method to estimate the size and liquid water content of drizzle drops using lidar measurements at two wavelengths is described. The method exploits the differential absorption of infrared light by liquid water at 905 nm and 1.5 μm, which leads to a different backscatter cross section for water drops larger than ≈50 μm. The ratio of backscatter measured from drizzle samples below cloud base at these two wavelengths (the colour ratio) provides a measure of the median volume drop diameter D0. This is a strong effect: for D0=200 μm, a colour ratio of ≈6 dB is predicted. Once D0 is known, the measured backscatter at 905 nm can be used to calculate the liquid water content (LWC) and other moments of the drizzle drop distribution.

    The method is applied to observations of drizzle falling from stratocumulus and stratus clouds. High resolution (32 s, 36 m) profiles of D0, LWC and precipitation rate R are derived. The main sources of error in the technique are the need to assume a value for the dispersion parameter μ in the drop size spectrum (leading to at most a 35% error in R) and the influence of aerosol returns on the retrieval (≈10% error in R for the cases considered here). Radar reflectivities are also computed from the lidar data, and compared to independent measurements from a colocated cloud radar, offering independent validation of the derived drop size distributions.

  15. Measurement of MMP-9 and -12 degraded elastin (ELM) provides unique information on lung tissue degradation

    DEFF Research Database (Denmark)

    Skjøt-Arkil, Helene; Clausen, Rikke E; Nguyen, Quoc Hai Trieu

    2012-01-01

    Elastin is an essential component of selected connective tissues that provides a unique physiological elasticity. Elastin may be considered a signature protein of the lungs, where matrix metalloproteases (MMP) -9 and -12 may be considered the signature proteases of the macrophages, which in part... are responsible for tissue damage during disease progression. Thus, we hypothesized that a MMP-9/-12 generated fragment of elastin may be a relevant biochemical marker for lung diseases....

  16. Time-dependent inversion estimates of global biomass-burning CO emissions using Measurement of Pollution in the Troposphere (MOPITT) measurements

    Science.gov (United States)

    Arellano, Avelino F.; Kasibhatla, Prasad S.; Giglio, Louis; van der Werf, Guido R.; Randerson, James T.; Collatz, G. James

    2006-05-01

    We present an inverse-modeling analysis of CO emissions using column CO retrievals from the Measurement of Pollution in the Troposphere (MOPITT) instrument and a global chemical transport model (GEOS-CHEM). We first focus on the information content of MOPITT CO column retrievals in terms of constraining CO emissions associated with biomass burning and fossil fuel/biofuel use. Our analysis shows that seasonal variation of biomass-burning CO emissions in Africa, South America, and Southeast Asia can be characterized using monthly mean MOPITT CO columns. For the fossil fuel/biofuel source category the derived monthly mean emission estimates are noisy even when the error statistics are accurately known, precluding a characterization of seasonal variations of regional CO emissions for this source category. The derived estimate of CO emissions from biomass burning in southern Africa during the June-July 2000 period is significantly higher than the prior estimate (prior, 34 Tg; posterior, 13 Tg). We also estimate that emissions are higher relative to the prior estimate in northern Africa during December 2000 to January 2001 and lower relative to the prior estimate in Central America and Oceania/Indonesia during April-May and September-October 2000, respectively. While these adjustments provide better agreement of the model with MOPITT CO column fields and with independent measurements of surface CO from National Oceanic and Atmospheric Administration Climate Monitoring and Diagnostics Laboratory at background sites in the Northern Hemisphere, some systematic differences between modeled and measured CO fields persist, including model overestimation of background surface CO in the Southern Hemisphere. Characterizing and accounting for underlying biases in the measurement model system are needed to improve the robustness of the top-down estimates.
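
    The inverse-modeling step can be summarized by the standard Bayesian (optimal estimation) update, in which prior emissions are adjusted using the mismatch between observed and modeled CO columns. The sketch below is generic and is not the specific GEOS-CHEM/MOPITT implementation; the Jacobian H, prior covariance S_a and observation error covariance S_e are assumed inputs.

        import numpy as np

        def bayesian_emission_update(x_a, S_a, H, y, S_e):
            """Posterior emission estimate x_hat and covariance S_hat for a
            linear observation model y = H x + noise, given the prior
            (x_a, S_a) and the observation error covariance S_e."""
            K = S_a @ H.T @ np.linalg.inv(H @ S_a @ H.T + S_e)   # gain matrix
            x_hat = x_a + K @ (y - H @ x_a)
            S_hat = (np.eye(len(x_a)) - K @ H) @ S_a
            return x_hat, S_hat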

  17. Speech graphs provide a quantitative measure of thought disorder in psychosis.

    Science.gov (United States)

    Mota, Natalia B; Vasconcelos, Nivaldo A P; Lemos, Nathalia; Pieretti, Ana C; Kinouchi, Osame; Cecchi, Guillermo A; Copelli, Mauro; Ribeiro, Sidarta

    2012-01-01

    Psychosis has various causes, including mania and schizophrenia. Since the differential diagnosis of psychosis is exclusively based on subjective assessments of oral interviews with patients, an objective quantification of the speech disturbances that characterize mania and schizophrenia is in order. In principle, such quantification could be achieved by the analysis of speech graphs. A graph represents a network with nodes connected by edges; in speech graphs, nodes correspond to words and edges correspond to semantic and grammatical relationships. To quantify speech differences related to psychosis, interviews with schizophrenics, manics and normal subjects were recorded and represented as graphs. Manics scored significantly higher than schizophrenics in ten graph measures. Psychopathological symptoms such as logorrhea, poor speech, and flight of thoughts were grasped by the analysis even when verbosity differences were discounted. Binary classifiers based on speech graph measures sorted schizophrenics from manics with up to 93.8% of sensitivity and 93.7% of specificity. In contrast, sorting based on the scores of two standard psychiatric scales (BPRS and PANSS) reached only 62.5% of sensitivity and specificity. The results demonstrate that alterations of the thought process manifested in the speech of psychotic patients can be objectively measured using graph-theoretical tools, developed to capture specific features of the normal and dysfunctional flow of thought, such as divergence and recurrence. The quantitative analysis of speech graphs is not redundant with standard psychometric scales but rather complementary, as it yields a very accurate sorting of schizophrenics and manics. Overall, the results point to automated psychiatric diagnosis based not on what is said, but on how it is said.
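
    A minimal sketch of the graph construction and a few generic measures, assuming, as in word-adjacency graphs, that an edge links each word to the word that follows it; the specific set of ten measures used by the authors is not reproduced here.

        import networkx as nx

        def speech_graph_measures(transcript: str) -> dict:
            """Build a directed word graph from a transcript and return simple
            graph measures (node/edge counts, density, size of the largest
            strongly connected component) as proxies for recurrence and
            connectedness of the flow of speech."""
            words = transcript.lower().split()
            g = nx.DiGraph()
            g.add_edges_from(zip(words, words[1:]))   # edge: word -> next word
            largest_scc = max(nx.strongly_connected_components(g), key=len)
            return {
                "nodes": g.number_of_nodes(),
                "edges": g.number_of_edges(),
                "density": nx.density(g),
                "largest_scc": len(largest_scc),
            }

        print(speech_graph_measures("the dog chased the cat and the cat ran"))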

  18. Speech graphs provide a quantitative measure of thought disorder in psychosis.

    Directory of Open Access Journals (Sweden)

    Natalia B Mota

    Full Text Available BACKGROUND: Psychosis has various causes, including mania and schizophrenia. Since the differential diagnosis of psychosis is exclusively based on subjective assessments of oral interviews with patients, an objective quantification of the speech disturbances that characterize mania and schizophrenia is in order. In principle, such quantification could be achieved by the analysis of speech graphs. A graph represents a network with nodes connected by edges; in speech graphs, nodes correspond to words and edges correspond to semantic and grammatical relationships. METHODOLOGY/PRINCIPAL FINDINGS: To quantify speech differences related to psychosis, interviews with schizophrenics, manics and normal subjects were recorded and represented as graphs. Manics scored significantly higher than schizophrenics in ten graph measures. Psychopathological symptoms such as logorrhea, poor speech, and flight of thoughts were grasped by the analysis even when verbosity differences were discounted. Binary classifiers based on speech graph measures sorted schizophrenics from manics with up to 93.8% of sensitivity and 93.7% of specificity. In contrast, sorting based on the scores of two standard psychiatric scales (BPRS and PANSS) reached only 62.5% of sensitivity and specificity. CONCLUSIONS/SIGNIFICANCE: The results demonstrate that alterations of the thought process manifested in the speech of psychotic patients can be objectively measured using graph-theoretical tools, developed to capture specific features of the normal and dysfunctional flow of thought, such as divergence and recurrence. The quantitative analysis of speech graphs is not redundant with standard psychometric scales but rather complementary, as it yields a very accurate sorting of schizophrenics and manics. Overall, the results point to automated psychiatric diagnosis based not on what is said, but on how it is said.

  19. Does Environmental Enrichment Reduce Stress? An Integrated Measure of Corticosterone from Feathers Provides a Novel Perspective

    Science.gov (United States)

    Fairhurst, Graham D.; Frey, Matthew D.; Reichert, James F.; Szelest, Izabela; Kelly, Debbie M.; Bortolotti, Gary R.

    2011-01-01

    Enrichment is widely used as a tool for managing fearfulness, undesirable behaviors, and stress in captive animals, and for studying exploration and personality. Inconsistencies in previous studies of physiological and behavioral responses to enrichment led us to hypothesize that enrichment and its removal are stressful environmental changes to which the hormone corticosterone and fearfulness, activity, and exploration behaviors ought to be sensitive. We conducted two experiments with a captive population of wild-caught Clark's nutcrackers (Nucifraga columbiana) to assess responses to short- (10-d) and long-term (3-mo) enrichment, their removal, and the influence of novelty, within the same animal. Variation in an integrated measure of corticosterone from feathers, combined with video recordings of behaviors, suggests that how individuals perceive enrichment and its removal depends on the duration of exposure. Short- and long-term enrichment elicited different physiological responses, with the former acting as a stressor and birds exhibiting acclimation to the latter. Non-novel enrichment evoked the strongest corticosterone responses of all the treatments, suggesting that the second exposure to the same objects acted as a physiological cue, and that acclimation was overridden by negative past experience. Birds showed weak behavioral responses that were not related to corticosterone. By demonstrating that an integrated measure of glucocorticoid physiology varies significantly with changes to enrichment in the absence of agonistic interactions, our study sheds light on potential mechanisms driving physiological and behavioral responses to environmental change. PMID:21412426

  20. A new sentence generator providing material for maximum reading speed measurement.

    Science.gov (United States)

    Perrin, Jean-Luc; Paillé, Damien; Baccino, Thierry

    2015-12-01

    A new method is proposed to generate text material for assessing the maximum reading speed of adult readers. The described procedure allows one to generate a vast number of equivalent short sentences. These sentences can be displayed for different durations in order to determine the reader's maximum speed using a psychophysical threshold algorithm. Each sentence is built so that it is either true or false according to common knowledge. The actual reading is verified by asking the reader to determine the truth value of each sentence. We based our design on the generator described by Crossland et al. and upgraded it. The new generator handles concepts distributed in an ontology, which allows an easy determination of the sentences' truth value and control of lexical and psycholinguistic parameters. In this way many equivalent sentences can be generated and displayed to perform the measurement. Maximum reading speed scores obtained with pseudo-randomly chosen sentences from the generator were strongly correlated with maximum reading speed scores obtained with traditional MNREAD sentences (r = .836). Furthermore, the large number of sentences that can be generated makes it possible to perform repeated measurements, since the possibility of a reader learning individual sentences is eliminated. Researchers interested in within-reader performance variability could use the proposed method for this purpose.

  1. Measuring the benefits of using market based approaches to provide water and sanitation in humanitarian contexts.

    Science.gov (United States)

    Martin-Simpson, S; Parkinson, J; Katsou, E

    2018-06-15

    The use of cash transfers and market based programming (CT/MBP) to increase the efficiency and effectiveness of emergency responses is gaining prominence in the humanitarian sector. However, there is a lack of existing indicators and methodologies to monitor activities designed to strengthen water and sanitation (WaSH) markets. Gender and vulnerability markers to measure the impact of such activities on different stakeholders are also missing. This study identifies parameters to monitor, evaluate and determine the added value of utilising CT/MBP to achieve WaSH objectives in humanitarian response. The results of the work revealed that CT/MBP can be used to support household, community and market level interventions to effectively reduce transmission of faeco-oral diseases. Efficiency, effectiveness, sustainability, appropriateness and equity were identified as useful parameters which correlated to widely accepted frameworks against which to evaluate humanitarian action. The parameters were found to be directly applicable to the case of increasing demand and supply of point of use water treatment technology for a) disaster resilience activities, and b) post-crisis response. The need for peer review of the parameters and indicators and pilot measurement in humanitarian contexts was recognised. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Estimating drain flow from measured water table depth in layered soils under free and controlled drainage

    Science.gov (United States)

    Saadat, Samaneh; Bowling, Laura; Frankenberger, Jane; Kladivko, Eileen

    2018-01-01

    Long records of continuous drain flow are important for quantifying annual and seasonal changes in the subsurface drainage flow from drained agricultural land. Missing data due to equipment malfunction and other challenges have limited conclusions that can be made about annual flow and thus nutrient loads from field studies, including assessments of the effect of controlled drainage. Water table depth data may be available during gaps in flow data, providing a basis for filling missing drain flow data; therefore, the overall goal of this study was to examine the potential to estimate drain flow using water table observations. The objectives were to evaluate how the shape of the relationship between drain flow and water table height above drain varies depending on the soil hydraulic conductivity profile, to quantify how well the Hooghoudt equation represented the water table-drain flow relationship in five years of measured data at the Davis Purdue Agricultural Center (DPAC), and to determine the impact of controlled drainage on drain flow using the filled dataset. The shape of the drain flow-water table height relationship was found to depend on the selected hydraulic conductivity profile. Estimated drain flow using the Hooghoudt equation with measured water table height for both free draining and controlled periods compared well to observed flow with Nash-Sutcliffe Efficiency values above 0.7 and 0.8 for calibration and validation periods, respectively. Using this method, together with linear regression for the remaining gaps, a long-term drain flow record for a controlled drainage experiment at the DPAC was used to evaluate the impacts of controlled drainage on drain flow. In the controlled drainage sites, annual flow was 14-49% lower than free drainage.
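
    The Hooghoudt equation used in the study links drain flow to the water table height midway between drains. A minimal Python sketch of one common steady-state form, assuming a two-layer conductivity profile and an equivalent-depth term; the calibrated parameter values for the DPAC site are not given in this record, so the numbers below are placeholders.

        def hooghoudt_drain_flow(m, L, K_above, K_below, d_e):
            """Steady-state drain flow q (per unit area, e.g. m/d) from the
            Hooghoudt equation:
                q = (8 * K_below * d_e * m + 4 * K_above * m**2) / L**2
            m        : water table height above drain level midway between drains
            L        : drain spacing
            K_above  : hydraulic conductivity above drain level
            K_below  : hydraulic conductivity below drain level
            d_e      : Hooghoudt equivalent depth of the layer below the drains
            """
            return (8.0 * K_below * d_e * m + 4.0 * K_above * m**2) / L**2

        # Hypothetical values: m = 0.6 m, L = 20 m, K = 0.5 / 0.3 m/d, d_e = 1.2 m
        print(hooghoudt_drain_flow(0.6, 20.0, 0.5, 0.3, 1.2))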

  3. Joint release rate estimation and measurement-by-measurement model correction for atmospheric radionuclide emission in nuclear accidents: An application to wind tunnel experiments.

    Science.gov (United States)

    Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng

    2018-03-05

    The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
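
    The joint solution via alternating minimization can be illustrated with a much-simplified, hypothetical variant: it alternates least-squares updates of a scalar release rate q and per-measurement multiplicative correction factors c, with a ridge penalty keeping c near 1. The actual cost function and correction matrix of Li et al. are not reproduced here.

        import numpy as np

        def joint_estimate(phi, y, n_iter=50, lam=1.0):
            """phi : model-predicted concentrations for a unit release rate
            y   : measured concentrations
            Returns (q, c): release rate and per-measurement correction factors."""
            q, c = 1.0, np.ones_like(y, dtype=float)
            for _ in range(n_iter):
                # Step 1: fix c, least-squares update of the release rate q
                pred = c * phi
                q = float(pred @ y / (pred @ pred))
                # Step 2: fix q, closed-form update of each correction factor
                # with a penalty lam*(c_i - 1)**2 pulling it towards 1
                c = (q * phi * y + lam) / ((q * phi) ** 2 + lam)
                c = np.clip(c, 0.1, 10.0)   # keep corrections physically plausible
            return q, c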

  4. Assessment of a Technique for Estimating Total Column Water Vapor Using Measurements of the Infrared Sky Temperature

    Science.gov (United States)

    Merceret, Francis J.; Huddleston, Lisa L.

    2014-01-01

    A method for estimating the integrated precipitable water (IPW) content of the atmosphere using measurements of indicated infrared zenith sky temperature was validated over east-central Florida. The method uses inexpensive, commercial off the shelf, hand-held infrared thermometers (IRT). Two such IRTs were obtained from a commercial vendor, calibrated against several laboratory reference sources at KSC, and used to make IR zenith sky temperature measurements in the vicinity of KSC and Cape Canaveral Air Force Station (CCAFS). The calibration and comparison data showed that these inexpensive IRTs provided reliable, stable IR temperature measurements that were well correlated with the NOAA IPW observations.
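
    The record does not state the functional form relating indicated zenith sky temperature to IPW; a minimal sketch assuming a simple linear regression calibrated against coincident reference IPW observations (all numbers below are placeholders, not the Florida data).

        import numpy as np

        # Hypothetical coincident observations: IR zenith sky temperature [deg C]
        # and reference integrated precipitable water [cm]
        sky_temp = np.array([-40.0, -30.0, -20.0, -10.0, 0.0, 5.0])
        ipw_ref  = np.array([  1.5,   2.0,   2.8,   3.6, 4.5, 5.0])

        slope, intercept = np.polyfit(sky_temp, ipw_ref, 1)   # linear calibration

        def estimate_ipw(t_sky):
            """Estimate IPW [cm] from an indicated IR zenith sky temperature."""
            return slope * t_sky + intercept

        print(estimate_ipw(-15.0))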

  5. Influence of temporally variable groundwater flow conditions on point measurements and contaminant mass flux estimations

    DEFF Research Database (Denmark)

    Rein, Arno; Bauer, S; Dietrich, P

    2009-01-01

    Monitoring of contaminant concentrations, e.g., for the estimation of mass discharge or contaminant degradation rates, often is based on point measurements at observation wells. In addition to the problem that point measurements may not be spatially representative, a further complication may ari...

  6. Water storage change estimation from in situ shrinkage measurements of clay soils

    NARCIS (Netherlands)

    Brake, te B.; Ploeg, van der M.J.; Rooij, de G.H.

    2013-01-01

    The objective of this study is to assess the applicability of clay soil elevation change measurements to estimate soil water storage changes, using a simplified approach. We measured moisture contents in aggregates by EC-5 sensors, and in multiple aggregate and inter-aggregate spaces (bulk soil) by

  7. Estimating product-to-product variations in metal forming using force measurements

    NARCIS (Netherlands)

    Havinga, Gosse Tjipke; Van Den Boogaard, Ton

    2017-01-01

    The limits of production accuracy of metal forming processes can be stretched by the development of control systems for compensation of product-to-product variations. Such systems require the use of measurements from each semi-finished product. These measurements must be used to estimate the final

  8. Can i just check...? Effects of edit check questions on measurement error and survey estimates

    NARCIS (Netherlands)

    Lugtig, Peter; Jäckle, Annette

    2014-01-01

    Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to

  9. Comparing Evapotranspiration Rates Estimated from Atmospheric Flux and TDR Soil Moisture Measurements

    DEFF Research Database (Denmark)

    Schelde, Kirsten; Ringgaard, Rasmus; Herbst, Mathias

    2011-01-01

    limit estimate (disregarding dew evaporation) of evapotranspiration on dry days. During a period of 7 wk, the two independent measuring techniques were applied in a barley (Hordeum vulgare L.) field, and six dry periods were identified. Measurements of daily root zone soil moisture depletion were...

  10. A super-resolution approach for uncertainty estimation of PIV measurements

    NARCIS (Netherlands)

    Sciacchitano, A.; Wieneke, B.; Scarano, F.

    2012-01-01

    A super-resolution approach is proposed for the a posteriori uncertainty estimation of PIV measurements. The measured velocity field is employed to determine the displacement of individual particle images. A disparity set is built from the residual distance between paired particle images of

  11. Coupling impedance of an in-vacuum undulator: Measurement, simulation, and analytical estimation

    Directory of Open Access Journals (Sweden)

    Victor Smaluk

    2014-07-01

    Full Text Available One of the important issues of the in-vacuum undulator design is the coupling impedance of the vacuum chamber, which includes tapered transitions with variable gap size. To get complete and reliable information on the impedance, analytical estimate, numerical simulations and beam-based measurements have been performed at Diamond Light Source, a forthcoming upgrade of which includes introducing additional insertion device (ID) straights. The impedance of an already existing ID vessel geometrically similar to the new one has been measured using the orbit bump method. The measurement results in comparison with analytical estimations and numerical simulations are discussed in this paper.

  12. Coupling impedance of an in-vacuum undulator: Measurement, simulation, and analytical estimation

    Science.gov (United States)

    Smaluk, Victor; Fielder, Richard; Blednykh, Alexei; Rehm, Guenther; Bartolini, Riccardo

    2014-07-01

    One of the important issues of the in-vacuum undulator design is the coupling impedance of the vacuum chamber, which includes tapered transitions with variable gap size. To get complete and reliable information on the impedance, analytical estimate, numerical simulations and beam-based measurements have been performed at Diamond Light Source, a forthcoming upgrade of which includes introducing additional insertion device (ID) straights. The impedance of an already existing ID vessel geometrically similar to the new one has been measured using the orbit bump method. The measurement results in comparison with analytical estimations and numerical simulations are discussed in this paper.

  13. Measurement error in income and schooling, and the bias of linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data...... with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...

  14. Ultra-small time-delay estimation via a weak measurement technique with post-selection

    International Nuclear Information System (INIS)

    Fang, Chen; Huang, Jing-Zheng; Yu, Yang; Li, Qinzheng; Zeng, Guihua

    2016-01-01

    Weak measurement is a novel technique for parameter estimation with higher precision. In this paper we develop a general theory for the parameter estimation based on a weak measurement technique with arbitrary post-selection. The weak-value amplification model and the joint weak measurement model are two special cases in our theory. Applying the developed theory, time-delay estimation is investigated in both theory and experiments. The experimental results show that when the time delay is ultra-small, the joint weak measurement scheme outperforms the weak-value amplification scheme, and is robust against not only misalignment errors but also the wavelength dependence of the optical components. These results are consistent with theoretical predictions that have not been previously verified by any experiment. (paper)

  15. Estimation of the thermal diffusion coefficient in fusion plasmas taking frequency measurement uncertainties into account

    International Nuclear Information System (INIS)

    Van Berkel, M; Hogeweij, G M D; Van den Brand, H; De Baar, M R; Zwart, H J; Vandersteen, G

    2014-01-01

    In this paper, the estimation of the thermal diffusivity from perturbative experiments in fusion plasmas is discussed. The measurements used to estimate the thermal diffusivity suffer from stochastic noise. Accurate estimation of the thermal diffusivity should take this into account. It will be shown that formulas found in the literature often result in a thermal diffusivity that has a bias (a difference between the estimated value and the actual value that remains even if more measurements are added) or have an unnecessarily large uncertainty. This will be shown by modeling a plasma using only diffusion as heat transport mechanism and measurement noise based on ASDEX Upgrade measurements. The Fourier coefficients of a temperature perturbation will exhibit noise from the circular complex normal distribution (CCND). Based on Fourier coefficients distributed according to a CCND, it is shown that the resulting probability density function of the thermal diffusivity is an inverse non-central chi-squared distribution. The thermal diffusivity that is found by sampling this distribution will always be biased, and averaging of multiple estimated diffusivities will not necessarily improve the estimation. Confidence bounds are constructed to illustrate the uncertainty in the diffusivity using several formulas that are equivalent in the noiseless case. Finally, a different method of averaging, that reduces the uncertainty significantly, is suggested. The methodology is also extended to the case where damping is included, and it is explained how to include the cylindrical geometry. (paper)

  16. Beak measurements of octopus ( Octopus variabilis) in Jiaozhou Bay and their use in size and biomass estimation

    Science.gov (United States)

    Xue, Ying; Ren, Yiping; Meng, Wenrong; Li, Long; Mao, Xia; Han, Dongyan; Ma, Qiuyun

    2013-09-01

    Cephalopods play key roles in global marine ecosystems as both predators and prey. Regression-based estimation of the original size and weight of a cephalopod from beak measurements is a powerful tool for investigating the feeding ecology of predators at higher trophic levels. In this study, regression relationships among beak measurements and body length and weight were determined for an octopus species (Octopus variabilis), an important endemic cephalopod species in the northwest Pacific Ocean. A total of 193 individuals (63 males and 130 females) were collected at a monthly interval from Jiaozhou Bay, China. Regression relationships among 6 beak measurements (upper hood length, UHL; upper crest length, UCL; lower hood length, LHL; lower crest length, LCL; and upper and lower beak weights) and mantle length (ML), total length (TL) and body weight (W) were determined. Results showed that the relationships between beak size and TL and between beak size and ML were linear, while those between beak size and W fitted a power function model. LHL and UCL were the most useful measurements for estimating the size and biomass of O. variabilis. The relationships among beak measurements and body length (either ML or TL) were not significantly different between the two sexes, while those among several beak measurements (UHL, LHL and LBW) and body weight (W) were sexually different. Since male individuals of this species have a slightly greater body weight distribution than female individuals, body weight was not an appropriate measurement for estimating size and biomass, especially when the sex of individuals in the stomachs of predators was unknown. These relationships provide essential information for future use in size and biomass estimation of O. variabilis, as well as the estimation of predator/prey size ratios in the diet of top predators.
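
    A sketch of the two regression forms reported above: a linear fit of mantle length on lower hood length (LHL), and a power-law fit of body weight on LHL obtained by linear regression in log-log space. The data below are placeholders, not the Jiaozhou Bay measurements.

        import numpy as np

        # Hypothetical beak/body data (LHL in mm, ML in mm, W in g)
        lhl = np.array([4.1, 5.0, 5.8, 6.5, 7.2, 8.0])
        ml  = np.array([55., 68., 80., 90., 101., 112.])
        w   = np.array([45., 78., 120., 165., 225., 300.])

        # Linear model: ML = a * LHL + b
        a, b = np.polyfit(lhl, ml, 1)

        # Power model: W = c * LHL**d, fitted as log W = log c + d * log LHL
        d, log_c = np.polyfit(np.log(lhl), np.log(w), 1)
        c = np.exp(log_c)

        print(f"ML ~ {a:.1f}*LHL + {b:.1f};  W ~ {c:.2f}*LHL^{d:.2f}")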

  17. Interpolating and Estimating Horizontal Diffuse Solar Irradiation to Provide UK-Wide Coverage: Selection of the Best Performing Models

    Directory of Open Access Journals (Sweden)

    Diane Palmer

    2017-02-01

    Full Text Available Plane-of-array (PoA) irradiation data is a requirement to simulate the energetic performance of photovoltaic devices (PVs). Normally, solar data is only available as global horizontal irradiation, for a limited number of locations, and typically in hourly time resolution. One approach to handling this restricted data is to enhance it initially by interpolation to the location of interest; next, it must be translated to PoA data by separately considering the diffuse and the beam components. There are many methods of interpolation. This research selects ordinary kriging as the best performing technique by studying mathematical properties, experimentation and leave-one-out cross-validation. Likewise, a number of different translation models have been developed, most of them parameterised for specific measurement setups and locations. The work presented identifies the optimum approach for the UK on a national scale. The global horizontal irradiation will be split into its constituent parts. Diverse separation models were tried. The results of each separation algorithm were checked against measured data distributed across the UK. It became apparent that while there is little difference between procedures (14 Wh/m2 mean bias error (MBE), 12 Wh/m2 root mean square error (RMSE)), the Ridley, Boland, Lauret equation (a universal split algorithm) consistently performed well. The combined interpolation/separation RMSE is 86 Wh/m2.
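
    Ordinary kriging of horizontal irradiation to an unmeasured site can be sketched as below, e.g. with the open-source pykrige package. Station coordinates and values are placeholders, and the variogram model actually selected in the study is not stated in this record.

        import numpy as np
        from pykrige.ok import OrdinaryKriging

        # Hypothetical station data: easting, northing [km] and hourly global
        # horizontal irradiation [Wh/m2]
        x = np.array([0.0, 35.0, 80.0, 120.0, 150.0])
        y = np.array([0.0, 60.0, 20.0,  90.0,  40.0])
        ghi = np.array([310.0, 295.0, 340.0, 280.0, 325.0])

        ok = OrdinaryKriging(x, y, ghi, variogram_model="spherical")

        # Interpolate to a single target location (e.g. the PV site of interest)
        z_est, z_var = ok.execute("points", np.array([70.0]), np.array([50.0]))
        print(float(z_est[0]), float(z_var[0]))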

  18. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

    Full Text Available Abstract Background: In epidemiological studies, it is often not possible to measure accurately the exposures of participants even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are commonly assigned the sample mean of exposure measurements from their group when evaluating the effect of exposure on the response. Therefore, exposure is estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from 'large' samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors, and complete response data collected with ascertainment. Methods: In workplaces, groups/jobs are naturally ordered, and this can be incorporated in the estimation procedure by constrained estimation methods together with the expectation and maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, the constrained GBS (CGBS), and the constrained expectation and maximization (CEM). We illustrated the methods in the analysis of decline in lung function due to exposures to carbon black. Results: Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be best among them when within each exposure group at least a 'moderate' number of individuals have their

  19. Auroral Electrojet Index Designed to Provide a Global Measure, l-minute Intervals, of Auroral Zone Magnetic Activity

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Auroral Electrojet index (AE) is designed to provide a global quantitative measure of auroral zone magnetic activity produced by enhanced ionospheric currents...

  20. Auroral Electrojet Indices Designed to Provide a Global Measure, 2.5-Minute Intervals, of Auroral Zone Magnetic Activity

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Auroral Electrojet index (AE) is designed to provide a global quantitative measure of auroral zone magnetic activity produced by enhanced ionospheric currents...

  1. Estimates of Shear Stress and Measurements of Water Levels in the Lower Fox River near Green Bay, Wisconsin

    Science.gov (United States)

    Westenbroek, Stephen M.

    2006-01-01

    Turbulent shear stress in the boundary layer of a natural river system largely controls the deposition and resuspension of sediment, as well as the longevity and effectiveness of granular-material caps used to cover and isolate contaminated sediments. This report documents measurements and calculations made in order to estimate shear stress and shear velocity on the Lower Fox River, Wisconsin. Velocity profiles were generated using an acoustic Doppler current profiler (ADCP) mounted on a moored vessel. This method of data collection yielded 158 velocity profiles on the Lower Fox River between June 2003 and November 2004. Of these profiles, 109 were classified as valid and were used to estimate the bottom shear stress and velocity using log-profile and turbulent kinetic energy methods. Estimated shear stress ranged from 0.09 to 10.8 dynes per centimeter squared. Estimated coefficients of friction ranged from 0.001 to 0.025. This report describes both the field and data-analysis methods used to estimate shear-stress parameters for the Lower Fox River. Summaries of the estimated values for bottom shear stress, shear velocity, and coefficient of friction are presented. Confidence intervals about the shear-stress estimates are provided.
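
    The log-profile method mentioned above fits the law of the wall to near-bed velocities. A minimal sketch, assuming u(z) = (u*/kappa) * ln(z/z0) with kappa = 0.41 and a least-squares fit of velocity against ln(z); the ADCP bin heights and velocities below are placeholders, not the Lower Fox River data.

        import numpy as np

        KAPPA = 0.41        # von Karman constant
        RHO = 1000.0        # water density [kg/m3]

        def log_profile_shear(z, u):
            """Fit u(z) = (u_star/kappa) * ln(z/z0) to a velocity profile.
            z : heights above the bed [m], u : velocities [m/s].
            Returns (u_star [m/s], tau [Pa])."""
            slope, intercept = np.polyfit(np.log(z), u, 1)
            u_star = KAPPA * slope          # shear velocity
            tau = RHO * u_star**2           # bed shear stress
            return u_star, tau

        # Hypothetical near-bed ADCP bins
        z = np.array([0.25, 0.50, 0.75, 1.00, 1.50])    # m above bed
        u = np.array([0.18, 0.22, 0.25, 0.27, 0.29])    # m/s
        u_star, tau = log_profile_shear(z, u)
        print(f"u* = {u_star:.3f} m/s, tau = {tau:.2f} Pa ({tau*10:.1f} dyn/cm2)")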

  2. Analytical estimation of control rod shadowing effect for excess reactivity measurement of HTTR

    International Nuclear Information System (INIS)

    Nakano, Masaaki; Fujimoto, Nozomu; Yamashita, Kiyonobu

    1999-01-01

    The fuel addition method is generally used for the excess reactivity measurement of the initial core. The control rod shadowing effect on the excess reactivity measurement has been estimated analytically for the High Temperature Engineering Test Reactor (HTTR). 3-dimensional whole core analyses were carried out. The movements of control rods during the measurements were simulated in the calculation. It was made clear that the value of excess reactivity strongly depends on the combination of measuring control rods and compensating control rods. The differences in excess reactivity between combinations come from the control rod shadowing effect. The shadowing effect is reduced by using several measuring and compensating control rods, to prevent their deep insertion into the core. The measured excess reactivity in the experiments is, however, smaller than the estimated value with the shadowing effect. (author)

  3. Novel experimental measuring techniques required to provide data for CFD validation

    International Nuclear Information System (INIS)

    Prasser, H.-M.

    2008-01-01

    CFD code validation requires experimental data that characterize the distributions of parameters within large flow domains. On the other hand, the development of geometry-independent closure relations for CFD codes has to rely on instrumentation and experimental techniques appropriate for the phenomena that are to be modelled, which usually requires high spatial and time resolution. The paper reports on the use of wire-mesh sensors to study turbulent mixing processes in single-phase flow as well as to characterize the dynamics of the gas-liquid interface in a vertical pipe flow. Experiments at a pipe of a nominal diameter of 200 mm are taken as the basis for the development and test of closure relations describing bubble coalescence and break-up, interfacial momentum transfer and turbulence modulation for a multi-bubble-class model. This is done by measuring the evolution of the flow structure along the pipe. The transferability of the extended CFD code to more complicated 3D flow situations is assessed against measured data from tests involving two-phase flow around an asymmetric obstacle placed in a vertical pipe. The obstacle, a half-moon-shaped diaphragm, is movable in the direction of the pipe axis; this allows the 3D gas fraction field to be recorded without changing the sensor position. In the outlook, the pressure chamber of TOPFLOW is presented, which will be used as the containment for a test facility in which experiments can be conducted in pressure equilibrium with the inner atmosphere of the tank. In this way, flow structures can be observed by optical means through large-scale windows even at pressures of up to 5 MPa. The so-called 'Diving Chamber' technology will be used for Pressurized Thermal Shock (PTS) tests. Finally, some important trends in instrumentation for multi-phase flows will be given. This includes the state of the art of X-ray and gamma tomography, new multi-component wire-mesh sensors, and a discussion of the potential of other non

  4. Novel experimental measuring techniques required to provide data for CFD validation

    International Nuclear Information System (INIS)

    Prasser, H.M.

    2007-01-01

    CFD code validation requires experimental data that characterize distributions of parameters within large flow domains. On the other hand, the development of geometry-independent closure relations for CFD codes has to rely on instrumentation and experimental techniques appropriate for the phenomena that are to be modelled, which usually requires high spatial and time resolution. The presentation reports on the use of wire-mesh sensors to study turbulent mixing processes in single-phase flow as well as to characterize the dynamics of the gas-liquid interface in a vertical pipe flow. Experiments at a pipe of a nominal diameter of 200 mm are taken as the basis for the development and test of closure relations describing bubble coalescence and break-up, interfacial momentum transfer and turbulence modulation for a multi-bubble-class model. This is done by measuring the evolution of the flow structure along the pipe. The transferability of the extended CFD code to more complicated 3D flow situations is assessed against measured data from tests involving two-phase flow around an asymmetric obstacle placed in a vertical pipe. The obstacle, a half-moon-shaped diaphragm, is movable in the direction of the pipe axis; this allows the 3D gas fraction field to be recorded without changing the sensor position. In the outlook, the pressure chamber of TOPFLOW is presented, which will be used as the containment for a test facility in which experiments can be conducted in pressure equilibrium with the inner atmosphere of the tank. In this way, flow structures can be observed by optical means through large-scale windows even at pressures of up to 5 MPa. The so-called 'Diving Chamber' technology will be used for Pressurized Thermal Shock (PTS) tests. Finally, some important trends in instrumentation for multi-phase flows will be given. This includes the state of the art of X-ray and gamma tomography, new multi-component wire-mesh sensors, and a discussion of the potential of

  5. Supersonic shear imaging provides a reliable measurement of resting muscle shear elastic modulus

    International Nuclear Information System (INIS)

    Lacourpaille, Lilian; Hug, François; Bouillard, Killian; Nordez, Antoine; Hogrel, Jean-Yves

    2012-01-01

    The aim of the present study was to assess the reliability of shear elastic modulus measurements performed using supersonic shear imaging (SSI) in nine resting muscles (i.e. gastrocnemius medialis, tibialis anterior, vastus lateralis, rectus femoris, triceps brachii, biceps brachii, brachioradialis, adductor pollicis obliquus and abductor digiti minimi) of different architectures and typologies. Thirty healthy subjects were randomly assigned to the intra-session reliability (n = 20), inter-day reliability (n = 21) and the inter-observer reliability (n = 16) experiments. Muscle shear elastic modulus ranged from 2.99 (gastrocnemius medialis) to 4.50 kPa (adductor digiti minimi and tibialis anterior). On the whole, very good reliability was observed, with a coefficient of variation (CV) ranging from 4.6% to 8%, except for the inter-operator reliability of adductor pollicis obliquus (CV = 11.5%). The intraclass correlation coefficients were good (0.871 ± 0.045 for the intra-session reliability, 0.815 ± 0.065 for the inter-day reliability and 0.709 ± 0.141 for the inter-observer reliability). Both the reliability and the ease of use of SSI make it a potentially interesting technique that would be of benefit to fundamental, applied and clinical research projects that need an accurate assessment of muscle mechanical properties. (note)

  6. Improved Ribosome-Footprint and mRNA Measurements Provide Insights into Dynamics and Regulation of Yeast Translation

    Science.gov (United States)

    2016-02-11

    Ribosome-footprint profiling provides genome-wide snapshots of translation, but ... tend to slow translation. With the improved mRNA measurements, the variation attributable to translational control in exponentially growing yeast was

  7. Comparative study of speed estimators with highly noisy measurement signals for Wind Energy Generation Systems

    Energy Technology Data Exchange (ETDEWEB)

    Carranza, O. [Escuela Superior de Computo, Instituto Politecnico Nacional, Av. Juan de Dios Batiz S/N, Col. Lindavista, Del. Gustavo A. Madero 7738, D.F. (Mexico); Figueres, E.; Garcera, G. [Grupo de Sistemas Electronicos Industriales, Departamento de Ingenieria Electronica, Universidad Politecnica de Valencia, Camino de Vera S/N, 7F, 46020 Valencia (Spain); Gonzalez, L.G. [Departamento de Ingenieria Electronica, Universidad de los Andes, Merida (Venezuela)

    2011-03-15

    This paper presents a comparative study of several speed estimators to implement a sensorless speed control loop in Wind Energy Generation Systems driven by power factor correction three-phase boost rectifiers. This rectifier topology reduces the low-frequency harmonic content of the generator currents and, consequently, the generator power factor approaches unity whereas undesired vibrations of the mechanical system decrease. For implementation of the speed estimators, the compared techniques start from the measurement of electrical variables like currents and voltages, which contain low frequency harmonics of the fundamental frequency of the wind generator, as well as switching frequency components due to the boost rectifier. In this noisy environment, the performance of the following estimation techniques has been analyzed: Synchronous Reference Frame Phase Locked Loop, speed reconstruction by measuring the dc current and voltage of the rectifier, and speed estimation by means of both an Extended Kalman Filter and a Linear Kalman Filter. (author)
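
    As an illustration of the last of the listed techniques, the sketch below applies a linear Kalman filter with a constant-acceleration process model to a noisy speed signal. The model structure, noise settings and test signal are assumptions chosen for illustration and are not taken from the paper.

```python
# Minimal sketch: linear Kalman filter smoothing a noisy rotor-speed signal.
# Assumes a constant-acceleration process model; all numbers are illustrative,
# not taken from the paper.
import numpy as np

def kalman_speed(z, dt=1e-3, q=50.0, r=25.0):
    """Estimate speed (and acceleration) from noisy speed measurements z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state: [speed, acceleration]
    H = np.array([[1.0, 0.0]])                 # only speed is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.array([z[0], 0.0])
    P = np.eye(2) * 10.0
    out = []
    for zk in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = zk - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Illustrative use: slow speed variation plus switching-like ripple and white noise.
t = np.arange(0, 1, 1e-3)
true_speed = 150 + 5 * np.sin(2 * np.pi * 2 * t)
noisy = true_speed + 3 * np.sin(2 * np.pi * 100 * t) + np.random.normal(0, 2, t.size)
est = kalman_speed(noisy)
```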

  8. Epithelium percentage estimation facilitates epithelial quantitative protein measurement in tissue specimens.

    Science.gov (United States)

    Chen, Jing; Toghi Eshghi, Shadi; Bova, George Steven; Li, Qing Kay; Li, Xingde; Zhang, Hui

    2013-12-01

    The rapid advancement of high-throughput tools for quantitative measurement of proteins has demonstrated the potential for the identification of proteins associated with cancer. However, the quantitative results on cancer tissue specimens are usually confounded by tissue heterogeneity, e.g. regions with cancer usually have significantly higher epithelium content yet lower stromal content. It is therefore necessary to develop a tool to facilitate the interpretation of the results of protein measurements in tissue specimens. Epithelial cell adhesion molecule (EpCAM) and cathepsin L (CTSL) are two epithelial proteins whose expression in normal and tumorous prostate tissues was confirmed by measuring staining intensity with immunohistochemical staining (IHC). The expression of these proteins was measured by ELISA in protein extracts from OCT-embedded frozen prostate tissues. To eliminate the influence of tissue heterogeneity on epithelial protein quantification measured by ELISA, a color-based segmentation method was developed in-house for estimation of epithelium content using H&E histology slides from the same prostate tissues, and the estimated epithelium percentage was used to normalize the ELISA results. The epithelium contents of the same slides were also estimated by a pathologist and used to normalize the ELISA results. The computer-based results were compared with the pathologist's reading. We found that both EpCAM and CTSL levels, measured by the ELISA assays themselves, were greatly affected by epithelium content in the tissue specimens. Without adjusting for epithelium percentage, both EpCAM and CTSL levels appeared significantly higher in tumor tissues than normal tissues with a p value less than 0.001. However, after normalization by the epithelium percentage, ELISA measurements of both EpCAM and CTSL were in agreement with IHC staining results, showing a significant increase only in EpCAM with no difference in CTSL expression in cancer tissues. These results

  9. A multitower measurement network estimate of California's methane emissions

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Seongeun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Hsu, Ying-Kuang [California Air Resources Board, Sacramento, CA (United States); Andrews, Arlyn E. [National Oceanic and Atmospheric Administration (NOAA), Boulder, CO (United States). Earth System Research Lab.; Bianco, Laura [National Oceanic and Atmospheric Administration (NOAA), Boulder, CO (United States). Earth System Research Lab.; Univ. of Colorado, Boulder, CO (United States). Cooperative Inst. for Research in Environmental Sciences; Vaca, Patrick [California Air Resources Board, Sacramento, CA (United States); Wilczak, James M. [National Oceanic and Atmospheric Administration (NOAA), Boulder, CO (United States). Earth System Research Lab.; Fischer, Marc L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; California State Univ. (CalState East Bay), Hayward, CA (United States). Dept. of Anthropology, Geography and Environmental Studies

    2013-09-20

    In this paper, we present an analysis of methane (CH4) emissions using atmospheric observations from five sites in California's Central Valley across different seasons (September 2010 to June 2011). CH4 emissions for spatial regions and source sectors are estimated by comparing measured CH4 mixing ratios with transport model (Weather Research and Forecasting and Stochastic Time-Inverted Lagrangian Transport) predictions based on two 0.1° CH4 (seasonally varying “California-specific” (California Greenhouse Gas Emission Measurements, CALGEM) and a static global (Emission Database for Global Atmospheric Research, release version 42, EDGAR42)) prior emission models. Region-specific Bayesian analyses indicate that for California's Central Valley, the CALGEM- and EDGAR42-based inversions provide consistent annual total CH4 emissions (32.87 ± 2.09 versus 31.60 ± 2.17 Tg CO2eq yr-1; 68% confidence interval (CI), assuming uncorrelated errors between regions). Summing across all regions of California, optimized CH4 emissions are only marginally consistent between CALGEM- and EDGAR42-based inversions (48.35 ± 6.47 versus 64.97 ± 11.85 Tg CO2eq), because emissions from coastal urban regions (where landfill and natural gas emissions are much higher in EDGAR than CALGEM) are not strongly constrained by the measurements. Combining our results with those from a recent study of the South Coast Air Basin narrows the range of estimates to 43–57 Tg CO2eq yr-1 (1.3–1.8 times higher than the current state inventory). Finally, these results suggest that the combination of rural and urban measurements will be necessary to verify future changes in California's total CH4 emissions.
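
    The region-specific Bayesian analysis described above is, at its core, a linear-Gaussian update of prior emission scaling factors against measured mixing-ratio enhancements. The sketch below shows that generic update; the footprint matrix, covariances and observations are random placeholders, not the CALGEM/EDGAR inputs used in the study.

```python
# Minimal sketch of a linear-Gaussian Bayesian inversion for regional emission
# scaling factors: y = K @ x + noise, with a Gaussian prior on x.  All inputs are
# random placeholders, not the transport footprints or priors used in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_regions = 200, 13
K = rng.uniform(0, 1, (n_obs, n_regions))      # transport/footprint operator (ppb per unit scaling)
x_prior = np.ones(n_regions)                   # prior scaling of the emission model
S_prior = np.diag(np.full(n_regions, 0.5**2))  # prior covariance
S_obs = np.diag(np.full(n_obs, 10.0**2))       # model-data mismatch covariance
y = K @ (x_prior * 1.3) + rng.normal(0, 10.0, n_obs)  # synthetic "measured" enhancements

# Posterior mean and covariance (standard linear-Gaussian update).
G = S_prior @ K.T @ np.linalg.inv(K @ S_prior @ K.T + S_obs)
x_post = x_prior + G @ (y - K @ x_prior)
S_post = S_prior - G @ K @ S_prior
print(x_post.round(2))
```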

  10. Measurement of MMP-9 and -12 degraded elastin (ELM) provides unique information on lung tissue degradation

    Science.gov (United States)

    2012-01-01

    Background Elastin is an essential component of selected connective tissues that provides a unique physiological elasticity. Elastin may be considered a signature protein of the lungs, where matrix metalloproteases (MMP) -9 and -12 may be considered the signature proteases of the macrophages, which in part are responsible for tissue damage during disease progression. Thus, we hypothesized that an MMP-9/-12 generated fragment of elastin may be a relevant biochemical marker for lung diseases. Methods Elastin fragments were identified by mass-spectrometry and one sequence, generated by MMP-9 and -12 (ELN-441), was selected for monoclonal antibody generation and used in the development of an ELISA. Soluble and insoluble elastin from lung was cleaved in vitro and the time-dependent release of fragments was assessed in the ELN-441 assay. The release of ELN-441 in human serum from patients with chronic obstructive pulmonary disease (COPD) (n = 10) and idiopathic pulmonary fibrosis (IPF) (n = 29) was compared to healthy matched controls (n = 11). Results The sequence ELN-441 was exclusively generated by MMP-9 and -12 and was time-dependently released from soluble lung elastin. ELN-441 levels were 287% higher in patients diagnosed with COPD (p elastin. This fragment was elevated in serum from patients with the lung diseases IPF and COPD, however, these data need to be validated in larger clinical settings. PMID:22818364

  11. Contralateral delay activity provides a neural measure of the number of representations in visual working memory.

    Science.gov (United States)

    Ikkai, Akiko; McCollough, Andrew W; Vogel, Edward K

    2010-04-01

    Visual working memory (VWM) helps to temporarily represent information from the visual environment and is severely limited in capacity. Recent work has linked various forms of neural activity to the ongoing representations in VWM. One piece of evidence comes from human event-related potential studies, which find a sustained contralateral negativity during the retention period of VWM tasks. This contralateral delay activity (CDA) has previously been shown to increase in amplitude as the number of memory items increases, up to the individual's working memory capacity limit. However, significant alternative hypotheses remain regarding the true nature of this activity. Here we test whether the CDA is modulated by the perceptual requirements of the memory items as well as whether it is determined by the number of locations that are being attended within the display. Our results provide evidence against these two alternative accounts and instead strongly support the interpretation that this activity reflects the current number of objects that are being represented in VWM.

  12. ELABORATION OF HIGH-VOLTAGE PULSE INSTALLATIONS AND PROVIDING THEIR OPERATION PROTECTIVE MEASURES

    Directory of Open Access Journals (Sweden)

    А. М. Hashimov

    2016-01-01

    Full Text Available The article presents design engineering methods for high-voltage pulse installations intended for disinfection of drinking water, sewage, and edible liquids by high-field micro- and nanosecond pulsed exposure. The design possibilities of the principal elements of the high-voltage part and the discharge circuit of the installations are considered with a view to the most efficient on-load utilization of the source energy and safe operation of the high-voltage equipment. The study shows that, for disinfection of drinking water and sewage, it is expedient to apply microsecond pulses causing the electrohydraulic effect in aqueous media, with its associated complex of physical processes (ultraviolet emission, generation of ozone and atomic oxygen, mechanical compression waves, etc.) having a detrimental effect on the life activity of the microorganisms. For disinfecting edible liquids it is recommended to use nanosecond pulses capable of directly permeating the biological cell nucleus and inactivating it; meanwhile, the nutritive and biological value of the foodstuffs is preserved and their organoleptic properties are improved. It is noted that, in the development of high-frequency pulse installations, special consideration should be given to operating personnel safety and to securing uninterrupted performance of the entire installation. With this objective in view, the necessary requirements should be fulfilled on shielding the high- and low-voltage parts of the installation against high-frequency electromagnetic emissions, registered by special differential sensors. Simultaneously, noise abatement measures should be applied to reduce the operational noise level of the high-voltage equipment. The authors offer a technique for noise abatement to admissible levels (lower than 80 dB(A)) by means of coating the inside surface of the shielded enclosure with densely-packed abutting sheets of porous electro-acoustic insulating

  13. Drag and Lift Estimation from 3-D Velocity Field Data Measured by Multi-Plane Stereo PIV

    OpenAIRE

    加藤, 裕之; 松島, 紀佐; 上野, 真; 小池, 俊輔; 渡辺, 重哉; Kato, Hiroyuki; Matsushima, Kisa; Ueno, Makoto; Koike, Shunsuke; Watanabe, Shigeya

    2013-01-01

    For airplane design, it is crucial to have tools that can accurately predict airplane drag and lift. Usually, drag and lift are predicted by force measurement using a wind tunnel balance. Unfortunately, balance data do not provide information on the contribution of individual airplane components to drag and lift, which is needed for more precise and competitive airplane design. To obtain such information, a wake integration method for drag and lift estimation was developed for use in wake survey data analysis. Wake s...

  14. Comparison of NIS and NHIS/NIPRCS vaccination coverage estimates. National Immunization Survey. National Health Interview Survey/National Immunization Provider Record Check Study.

    Science.gov (United States)

    Bartlett, D L; Ezzati-Rice, T M; Stokley, S; Zhao, Z

    2001-05-01

    The National Immunization Survey (NIS) and the National Health Interview Survey (NHIS) produce national coverage estimates for children aged 19 months to 35 months. The NIS is a cost-effective, random-digit-dialing telephone survey that produces national and state-level vaccination coverage estimates. The National Immunization Provider Record Check Study (NIPRCS) is conducted in conjunction with the annual NHIS, which is a face-to-face household survey. As the NIS is a telephone survey, potential coverage bias exists as the survey excludes children living in nontelephone households. To assess the validity of estimates of vaccine coverage from the NIS, we compared 1995 and 1996 NIS national estimates with results from the NHIS/NIPRCS for the same years. Both the NIS and the NHIS/NIPRCS produce similar results. The NHIS/NIPRCS supports the findings of the NIS.

  15. State of charge estimation for lithium-ion pouch batteries based on stress measurement

    International Nuclear Information System (INIS)

    Dai, Haifeng; Yu, Chenchen; Wei, Xuezhe; Sun, Zechang

    2017-01-01

    State of charge (SOC) estimation is one of the important tasks of the battery management system (BMS). Unlike other studies, a novel method of SOC estimation for pouch lithium-ion battery cells based on stress measurement is proposed. Through a comprehensive experimental study, we find that the stress of the battery during charge/discharge is composed of a static stress and a dynamic stress. The static stress, which is the stress measured in the equilibrium state, corresponds to SOC; this phenomenon facilitates the design of our stress-based SOC estimation. The dynamic stress, on the other hand, is influenced by multiple factors including charge accumulation or depletion, current and operating history, so a multiple regression model of the dynamic stress is established. Based on the relationship between static stress and SOC, as well as the dynamic stress model, the SOC estimation method is established. Experimental results show that the stress-based method performs with good accuracy, and this method offers a novel perspective for SOC estimation. - Highlights: • A state of charge estimator based on stress measurement is proposed. • The stress during charge and discharge is investigated with comprehensive experiments. • The effects of SOC, current, and operating history on battery stress are studied. • A multiple regression model of the dynamic stress is established.
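
    A minimal sketch of the two-part decomposition described above, assuming an invented monotonic static stress–SOC calibration curve and invented regression coefficients for the dynamic stress term; it is meant to illustrate the idea, not the authors' fitted model.

```python
# Sketch of the stress-based SOC idea: subtract a regressed dynamic-stress term
# from the measured stress and invert an (assumed, monotonic) static stress-SOC
# curve.  The curve and regression coefficients below are invented placeholders.
import numpy as np

# Assumed calibration of static stress (kPa) vs SOC, measured at equilibrium.
soc_grid = np.linspace(0.0, 1.0, 11)
static_stress_grid = 20.0 + 35.0 * soc_grid + 10.0 * soc_grid**2  # placeholder curve

# Placeholder multiple-regression model of the dynamic stress component.
def dynamic_stress(current_A, charge_throughput_Ah, history_term):
    b0, b1, b2, b3 = 0.5, 0.8, 1.2, 0.3     # invented coefficients
    return b0 + b1 * current_A + b2 * charge_throughput_Ah + b3 * history_term

def estimate_soc(measured_stress, current_A, throughput_Ah, history_term):
    static = measured_stress - dynamic_stress(current_A, throughput_Ah, history_term)
    # Invert the monotonic static stress-SOC relation by interpolation.
    return float(np.interp(static, static_stress_grid, soc_grid))

print(estimate_soc(measured_stress=48.0, current_A=2.0, throughput_Ah=0.5, history_term=0.1))
```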

  16. Height and Weight Estimation From Anthropometric Measurements Using Machine Learning Regressions.

    Science.gov (United States)

    Rativa, Diego; Fernandes, Bruno J T; Roque, Alexandre

    2018-01-01

    Height and weight are measurements used to track nutritional disease, energy expenditure, clinical conditions, drug dosages, and infusion rates. Many patients are not ambulant or may be unable to communicate, and these factors may prevent accurate measurement; in those cases, height and weight can be estimated approximately by anthropometric means. Different groups have proposed different linear or non-linear equations whose coefficients are obtained using single or multiple linear regressions. In this paper, we present a complete study of the application of different learning models to estimate height and weight from anthropometric measurements: support vector regression, Gaussian processes, and artificial neural networks. The predicted values are significantly more accurate than those obtained with conventional linear regressions. In all cases, the predictions are insensitive to ethnicity and to gender if more than two anthropometric parameters are analyzed. The learning model analysis creates new opportunities for anthropometric applications in industry, textile technology, security, and health care.
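
    The sketch below illustrates the kind of comparison reported above, pitting a linear baseline against support vector regression on synthetic anthropometric data; the feature set and data-generating model are invented for illustration.

```python
# Sketch comparing a linear baseline with support vector regression for weight
# prediction from anthropometric inputs.  The synthetic data and feature set
# (arm span, knee height, arm circumference) are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
arm_span = rng.normal(170, 10, n)          # cm
knee_height = rng.normal(50, 4, n)         # cm
arm_circ = rng.normal(29, 4, n)            # cm
weight = 0.3 * arm_span + 0.5 * knee_height + 1.4 * arm_circ + rng.normal(0, 4, n)

X = np.column_stack([arm_span, knee_height, arm_circ])
linear = LinearRegression()
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))

for name, model in [("linear", linear), ("svr", svr)]:
    r2 = cross_val_score(model, X, weight, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.3f}")
```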

  17. Quantitative estimation of defects from measurement obtained by remote field eddy current inspection

    International Nuclear Information System (INIS)

    Davoust, M.E.; Fleury, G.

    1999-01-01

    The remote field eddy current technique is used for dimensioning grooves that may occur in ferromagnetic pipes. This paper proposes a method to estimate the depth and the length of corrosion grooves from measurements of a pick-up coil signal phase at different positions close to the defect. Groove dimensioning requires knowledge of the physical relation between measurements and defect dimensions. Finite element calculations are therefore performed to obtain a parametric algebraic function of the physical phenomena. By means of this model and a previously defined general approach, an estimate of groove size may be given. In this approach, algebraic function parameters and groove dimensions are linked through a polynomial function. In order to validate this estimation procedure, a statistical study has been performed. The approach is proved to be suitable for real measurements. (authors)

  18. Compensating for evanescent modes and estimating characteristic impedance in waveguide acoustic impedance measurements

    DEFF Research Database (Denmark)

    Nørgaard, Kren Rahbek; Fernandez Grande, Efren

    2017-01-01

    The ear-canal acoustic impedance and reflectance are useful for assessing conductive hearing disorders and calibrating stimulus levels in situ. However, such probe-based measurements are affected by errors due to the presence of evanescent modes and incorrect estimates or assumptions regarding...... characteristic impedance. This paper proposes a method to compensate for evanescent modes in measurements of acoustic impedance, reflectance, and sound pressure in waveguides, as well as estimating the characteristic impedance immediately in front of the probe. This is achieved by adjusting the characteristic...... impedance and subtracting an acoustic inertance from the measured impedance such that the non-causality in the reflectance is minimized in the frequency domain using the Hilbert transform. The method is thus capable of estimating plane-wave quantities of the sought-for parameters by supplying only...

  19. Peak Measurement for Vancomycin AUC Estimation in Obese Adults Improves Precision and Lowers Bias.

    Science.gov (United States)

    Pai, Manjunath P; Hong, Joseph; Krop, Lynne

    2017-04-01

    Vancomycin area under the curve (AUC) estimates may be skewed in obese adults due to weight-dependent pharmacokinetic parameters. We demonstrate that peak and trough measurements reduce bias and improve the precision of vancomycin AUC estimates in obese adults (n = 75) and validate this in an independent cohort (n = 31). The precision and mean percent bias of Bayesian vancomycin AUC estimates are comparable between covariate-dependent (R² = 0.774, 3.55%) and covariate-independent (R² = 0.804, 3.28%) models when peaks and troughs are measured but not when measurements are restricted to troughs only (R² = 0.557, 15.5%). Copyright © 2017 American Society for Microbiology.

  20. iPad-assisted measurements of duration estimation in psychiatric patients and healthy control subjects.

    Directory of Open Access Journals (Sweden)

    Irene Preuschoff

    Full Text Available Handheld devices with touchscreen controls have become widespread in the general population. In this study, we examined the duration estimates (explicit timing made by patients in a major general hospital and healthy control subjects using a custom iPad application. We methodically assessed duration estimates using this novel device. We found that both psychiatric and non-psychiatric patients significantly overestimated time periods compared with healthy control subjects, who estimated elapsed time very precisely. The use of touchscreen-based methodologies can provide valuable information about patients.

  1. Stability Analysis for Li-Ion Battery Model Parameters and State of Charge Estimation by Measurement Uncertainty Consideration

    Directory of Open Access Journals (Sweden)

    Shifei Yuan

    2015-07-01

    Full Text Available Accurate estimation of model parameters and state of charge (SoC) is crucial for the lithium-ion battery management system (BMS). In this paper, the stability of the model parameter and SoC estimation under measurement uncertainty is evaluated by three different factors: (i) sampling periods of 1/0.5/0.1 s; (ii) current sensor precisions of ±5/±50/±500 mA; and (iii) voltage sensor precisions of ±1/±2.5/±5 mV. Firstly, the numerical model stability analysis and parametric sensitivity analysis for the battery model parameters are conducted under sampling frequencies of 1–50 Hz. A perturbation analysis of the effect of current/voltage measurement uncertainty on model parameter variation is performed theoretically. Secondly, the impact of the three factors on the model parameter and SoC estimation was evaluated with the federal urban driving sequence (FUDS) profile. The bias-correction recursive least squares (CRLS) and adaptive extended Kalman filter (AEKF) algorithms were adopted to estimate the model parameters and SoC jointly. Finally, the simulation results were compared and some insightful findings were drawn. For the given battery model and parameter estimation algorithm, the sampling period and the current/voltage sampling accuracy had a non-negligible effect on the model parameter estimation results. This research reveals the influence of measurement uncertainty on model parameter estimation and provides guidelines for selecting a reasonable sampling period and the current/voltage sensor sampling precisions in engineering applications.
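
    The sketch below shows a plain recursive least-squares update of the kind used for online identification of battery model parameters; the paper's bias-correction step is omitted, and the regressor structure and synthetic data are assumptions for illustration.

```python
# Sketch of a recursive least-squares (RLS) update of the kind used for online
# battery model identification.  This is the plain RLS form (the bias-correction
# step is omitted); the regressor construction and data are illustrative.
import numpy as np

def rls_update(theta, P, phi, y, lam=0.999):
    """One RLS step: theta = parameters, P = covariance, phi = regressor, y = measurement."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)          # gain
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam
    return theta, P

# Identify y_k = a*y_{k-1} + b*u_k on synthetic data (true a = 0.95, b = 0.05).
rng = np.random.default_rng(0)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.95 * y[k-1] + 0.05 * u[k] + rng.normal(0, 0.001)

theta = np.zeros(2)
P = np.eye(2) * 1e3
for k in range(1, 500):
    phi = np.array([y[k-1], u[k]])
    theta, P = rls_update(theta, P, phi, y[k])
print(theta)   # should approach [0.95, 0.05]
```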

  2. Block volume estimation from the discontinuity spacing measurements of mesozoic limestone quarries, Karaburun Peninsula, Turkey.

    Science.gov (United States)

    Elci, Hakan; Turk, Necdet

    2014-01-01

    Block volumes are generally estimated by analyzing discontinuity spacing measurements obtained either from scan lines placed over rock exposures or from borehole cores. Discontinuity spacing measurements made at the Mesozoic limestone quarries on the Karaburun Peninsula were used to estimate the average block volumes that could be produced from them using the methods suggested in the literature. The Block Quality Designation (BQD) ratio method proposed by the authors was found to give block volume estimates of the same order as the volumetric joint count (J(v)) method. Moreover, the dimensions of the 2378 blocks produced between 2009 and 2011 in the working quarries were recorded. Assuming that each block surface is a discontinuity, the mean block volume (V(b)), the mean volumetric joint count (J(vb)) and the mean block shape factor of the blocks are determined and compared with the mean in situ block volumes (V(in)) and volumetric joint count (J(vi)) values estimated from the in situ discontinuity measurements. The established relations are presented as a chart to be used in practice for estimating the mean volume of blocks that can be obtained from a quarry site by analyzing rock mass discontinuity spacing measurements.
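
    For orientation, the sketch below shows the generic volumetric-joint-count route to a mean block volume (a Palmström-style relation Vb = β·Jv⁻³, with β = 27 for cubic blocks); this is not the authors' BQD method, and the spacings are invented.

```python
# Sketch of the volumetric-joint-count route to mean block volume that the
# record compares against: Jv from set spacings, then Vb = beta * Jv**-3 with a
# block shape factor beta (27 corresponds to cubic blocks).  This is the generic
# Palmstrom-style relation, not the authors' BQD method; spacings are invented.
def volumetric_joint_count(spacings_m):
    """Jv = sum of 1/spacing over the discontinuity sets (joints per m^3)."""
    return sum(1.0 / s for s in spacings_m)

def mean_block_volume(jv, beta=27.0):
    """Mean block volume (m^3) from the volumetric joint count."""
    return beta * jv ** -3

spacings = [0.4, 0.6, 0.9]          # mean set spacings in metres (illustrative)
jv = volumetric_joint_count(spacings)
print(f"Jv = {jv:.2f} joints/m^3, Vb ≈ {mean_block_volume(jv):.3f} m^3")
```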

  3. Block Volume Estimation from the Discontinuity Spacing Measurements of Mesozoic Limestone Quarries, Karaburun Peninsula, Turkey

    Directory of Open Access Journals (Sweden)

    Hakan Elci

    2014-01-01

    Full Text Available Block volumes are generally estimated by analyzing discontinuity spacing measurements obtained either from scan lines placed over rock exposures or from borehole cores. Discontinuity spacing measurements made at the Mesozoic limestone quarries on the Karaburun Peninsula were used to estimate the average block volumes that could be produced from them using the methods suggested in the literature. The Block Quality Designation (BQD) ratio method proposed by the authors was found to give block volume estimates of the same order as the volumetric joint count (Jv) method. Moreover, the dimensions of the 2378 blocks produced between 2009 and 2011 in the working quarries were recorded. Assuming that each block surface is a discontinuity, the mean block volume (Vb), the mean volumetric joint count (Jvb) and the mean block shape factor of the blocks are determined and compared with the mean in situ block volumes (Vin) and volumetric joint count (Jvi) values estimated from the in situ discontinuity measurements. The established relations are presented as a chart to be used in practice for estimating the mean volume of blocks that can be obtained from a quarry site by analyzing rock mass discontinuity spacing measurements.

  4. Estimate of the uncertainty in measurement for the determination of mercury in seafood by TDA AAS.

    Science.gov (United States)

    Torres, Daiane Placido; Olivares, Igor R B; Queiroz, Helena Müller

    2015-01-01

    An approach for estimating the uncertainty in measurement is proposed for the determination of mercury in seafood by thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS), considering the individual sources related to the different steps of the method under evaluation as well as the uncertainties estimated from the validation data. The method has been fully optimized and validated in an official laboratory of the Ministry of Agriculture, Livestock and Food Supply of Brazil, in order to comply with national and international food regulations and quality assurance, and has been accredited under the ISO/IEC 17025 norm since 2010. The estimate of the uncertainty in measurement in the present work was based on six sources of uncertainty for mercury determination in seafood by TDA AAS, following the validation process: linear least-squares regression, repeatability, intermediate precision, correction factor of the analytical curve, sample mass, and standard reference solution. Those that most influenced the uncertainty in measurement were sample weight, repeatability, intermediate precision and the calibration curve. The resulting estimate of the uncertainty in measurement reached a value of 13.39%, which complies with European Regulation EC 836/2011. This figure represents a very realistic estimate of routine conditions, since it fairly encompasses the dispersion between the value attributed to the sample and the value measured by the laboratory analysts. From this outcome, it is possible to infer that the validation data (based on the calibration curve, recovery and precision), together with the variation in sample mass, can offer a proper estimate of the uncertainty in measurement.
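
    The sketch below shows the standard way such individual contributions are combined: relative standard uncertainties summed in quadrature and expanded with a coverage factor k = 2. The numbers are invented placeholders, not the laboratory's actual uncertainty budget.

```python
# Sketch of combining six relative uncertainty contributions in quadrature
# (GUM-style) and expanding with k = 2.  The numbers are invented placeholders,
# not the laboratory's actual budget.
import math

relative_u = {                     # relative standard uncertainties (fractions)
    "calibration_curve": 0.040,
    "repeatability": 0.035,
    "intermediate_precision": 0.030,
    "curve_correction_factor": 0.015,
    "sample_mass": 0.020,
    "reference_solution": 0.010,
}

u_combined = math.sqrt(sum(u ** 2 for u in relative_u.values()))
U_expanded = 2 * u_combined        # coverage factor k = 2 (~95 % confidence)
print(f"combined: {100*u_combined:.2f} %, expanded (k=2): {100*U_expanded:.2f} %")
```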

  5. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...

  6. Relationship between parental estimate and an objective measure of child television watching

    OpenAIRE

    Roemmich James N; Fuerch Janene H; Winiewicz Dana D; Robinson Jodie L; Epstein Leonard H

    2006-01-01

    Many young children have televisions in their bedrooms, which may influence the relationship between parental estimate and objective measures of child television usage/week. Parental estimates of child television time of eighty 4–7 year old children (6.0 ± 1.2 years) at the 75th BMI percentile or greater (90.8 ± 6.8 BMI percentile) were compared to an objective measure of television time obtained from TV Allowance™ devices attached to every television in the home over a three week pe...

  7. Estimations of On-site Directional Wave Spectra from Measured Ship Responses

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2006-01-01

    include an equivalence of energy in the governing equations and, as regards the parametric concept, a frequency-dependent spreading of the waves is introduced. The paper includes an extensive analysis of full-scale measurements for which the directional wave spectra are estimated by the two ship response......In general, two main concepts can be applied to estimate the on-site directional wave spectrum on the basis of ship response measurements: 1) a parametric method which assumes the wave spectrum to be composed of parameterised wave spectra, or 2) a non-parametric method where the directional wave

  8. Water storage change estimation from in situ shrinkage measurements of clay soils

    Directory of Open Access Journals (Sweden)

    B. te Brake

    2013-05-01

    Full Text Available The objective of this study is to assess the applicability of clay soil elevation change measurements to estimate soil water storage changes, using a simplified approach. We measured moisture contents in aggregates by EC-5 sensors, and in multiple aggregate and inter-aggregate spaces (bulk soil) by CS616 sensors. In a long dry period, the assumption of constant isotropic shrinkage proved invalid and a soil-moisture-dependent geometry factor was applied. The relative overestimation made by assuming constant isotropic shrinkage in the linear (basic) shrinkage phase was 26.4% (17.5 mm) for the actively shrinking layer between 0 and 60 cm. Aggregate-scale water storage and volume change revealed a linear relation for layers ≥ 30 cm depth. The range of basic shrinkage in the bulk soil was limited by delayed drying of deep soil layers, and maximum water loss in the structural shrinkage phase was 40% of total water loss in the 0–60 cm layer, and over 60% in deeper layers. In the dry period, fitted slopes of the ΔV–ΔW relationship ranged from 0.41 to 0.56 (EC-5) and 0.42 to 0.55 (CS616). Under a dynamic drying and wetting regime, slopes ranged from 0.21 to 0.38 (EC-5) and 0.22 to 0.36 (CS616). Alternating shrinkage and incomplete swelling resulted in limited volume change relative to water storage change. The slope of the ΔV–ΔW relationship depended on the drying regime, measurement scale and combined effect of different soil layers. Therefore, solely relying on surface level elevation changes to infer soil water storage changes will lead to large underestimations. Recent and future developments might provide a basis for application of shrinkage relations to field situations, but in situ observations will be required to do so.
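
    A minimal sketch of the conversion chain discussed above, assuming a Bronswijk-type geometry relation between layer thickness change and volume change and a fitted ΔV–ΔW slope; the layer thickness, geometry factor and slope values are illustrative, not the paper's results.

```python
# Sketch of converting a measured surface elevation change to a layer volume
# change with a shrinkage geometry factor r_s (r_s = 3 is isotropic shrinkage),
# and then to a water storage change via an assumed dV-dW slope.  All values
# below are illustrative, not the paper's fitted results.
def volume_change_from_subsidence(dz_m, layer_thickness_m, r_s=3.0):
    """Layer volume change (m^3 per m^2) from a measured subsidence dz."""
    relative_dz = dz_m / layer_thickness_m
    relative_dV = 1.0 - (1.0 - relative_dz) ** r_s
    return relative_dV * layer_thickness_m

def storage_change(dz_m, layer_thickness_m, r_s=3.0, slope=1.0):
    """Water storage change (m); slope = dV/dW, < 1 mimics the fitted ratios above."""
    return volume_change_from_subsidence(dz_m, layer_thickness_m, r_s) / slope

print(storage_change(dz_m=0.01, layer_thickness_m=0.6, r_s=3.0, slope=0.5))
```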

  9. Comparison of measured and estimated maximum skin doses during CT fluoroscopy lung biopsies

    Energy Technology Data Exchange (ETDEWEB)

    Zanca, F., E-mail: Federica.Zanca@med.kuleuven.be [Department of Radiology, Leuven University Center of Medical Physics in Radiology, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium and Imaging and Pathology Department, UZ Leuven, Herestraat 49, Box 7003 3000 Leuven (Belgium); Jacobs, A. [Department of Radiology, Leuven University Center of Medical Physics in Radiology, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium); Crijns, W. [Department of Radiotherapy, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium); De Wever, W. [Imaging and Pathology Department, UZ Leuven, Herestraat 49, Box 7003 3000 Leuven, Belgium and Department of Radiology, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium)

    2014-07-15

    Purpose: To measure patient-specific maximum skin dose (MSD) associated with CT fluoroscopy (CTF) lung biopsies and to compare measured MSD with the MSD estimated from phantom measurements, as well as with the CTDIvol of patient examinations. Methods: Data from 50 patients with lung lesions who underwent a CT fluoroscopy-guided biopsy were collected. The CT protocol consisted of a low-kilovoltage (80 kV) protocol used in combination with an algorithm for dose reduction to the radiology staff during the interventional procedure, HandCare (HC). MSD was assessed during each intervention using EBT2 gafchromic films positioned on patient skin. Lesion size, position, total fluoroscopy time, and patient-effective diameter were registered for each patient. Dose rates were also estimated at the surface of a normal-size anthropomorphic thorax phantom using a 10 cm pencil ionization chamber placed at every 30°, for a full rotation, with and without HC. Measured MSD was compared with MSD values estimated from the phantom measurements and with the cumulative CTDIvol of the procedure. Results: The median measured MSD was 141 mGy (range 38–410 mGy) while the median cumulative CTDIvol was 72 mGy (range 24–262 mGy). The ratio between the MSD estimated from phantom measurements and the measured MSD was 0.87 (range 0.12–4.1) on average. In 72% of cases the estimated MSD underestimated the measured MSD, while in 28% of the cases it overestimated it. The same trend was observed for the ratio of cumulative CTDIvol and measured MSD. No trend was observed as a function of patient size. Conclusions: On average, estimated MSD from dose rate measurements on phantom as well as from CTDIvol of patient examinations underestimates the measured value of MSD. This can be attributed to deviations of the patient's body habitus from the standard phantom size and to patient positioning in the gantry during the procedure.

  10. Comparison of measured and estimated maximum skin doses during CT fluoroscopy lung biopsies

    International Nuclear Information System (INIS)

    Zanca, F.; Jacobs, A.; Crijns, W.; De Wever, W.

    2014-01-01

    Purpose: To measure patient-specific maximum skin dose (MSD) associated with CT fluoroscopy (CTF) lung biopsies and to compare measured MSD with the MSD estimated from phantom measurements, as well as with the CTDIvol of patient examinations. Methods: Data from 50 patients with lung lesions who underwent a CT fluoroscopy-guided biopsy were collected. The CT protocol consisted of a low-kilovoltage (80 kV) protocol used in combination with an algorithm for dose reduction to the radiology staff during the interventional procedure, HandCare (HC). MSD was assessed during each intervention using EBT2 gafchromic films positioned on patient skin. Lesion size, position, total fluoroscopy time, and patient-effective diameter were registered for each patient. Dose rates were also estimated at the surface of a normal-size anthropomorphic thorax phantom using a 10 cm pencil ionization chamber placed at every 30°, for a full rotation, with and without HC. Measured MSD was compared with MSD values estimated from the phantom measurements and with the cumulative CTDIvol of the procedure. Results: The median measured MSD was 141 mGy (range 38–410 mGy) while the median cumulative CTDIvol was 72 mGy (range 24–262 mGy). The ratio between the MSD estimated from phantom measurements and the measured MSD was 0.87 (range 0.12–4.1) on average. In 72% of cases the estimated MSD underestimated the measured MSD, while in 28% of the cases it overestimated it. The same trend was observed for the ratio of cumulative CTDIvol and measured MSD. No trend was observed as a function of patient size. Conclusions: On average, estimated MSD from dose rate measurements on phantom as well as from CTDIvol of patient examinations underestimates the measured value of MSD. This can be attributed to deviations of the patient's body habitus from the standard phantom size and to patient positioning in the gantry during the procedure

  11. Do centimetres matter? Self-reported versus estimated height measurements in parents.

    Science.gov (United States)

    Gozzi, T; Flück, Ce; L'allemand, D; Dattani, M T; Hindmarsh, P C; Mullis, P E

    2010-04-01

    An impressive discrepancy between reported and measured parental height is often observed. The aims of this study were: (a) to assess whether there is a significant difference between the reported and measured parental height; (b) to focus on the reported and, thereafter, measured height of the partner; (c) to analyse its impact on the calculated target height range. A total of 1542 individual parents were enrolled. The parents were subdivided into three groups: normal height (3rd–97th centile), short (<3rd centile) and tall (>97th centile) stature. Overall, compared with men, women were far better at estimating their own height (p ...). Women of normal stature underestimated the short partner and overestimated the tall partner, whereas male partners of normal stature overestimated both their short as well as tall partners. Women of tall stature estimated the heights of their short partners correctly, whereas heights of normal-statured men were underestimated. On the other hand, tall men overestimated the heights of their female partners who are of normal and short stature. Furthermore, women of short stature estimated the partners of normal stature adequately, and the heights of their tall partners were overestimated. Interestingly, the short men significantly underestimated the normal, but overestimated tall female partners. Only measured heights should be used to perform accurate evaluations of height, particularly when diagnostic tests or treatment interventions are contemplated. For clinical trials, we suggest that only quality measured parental heights are acceptable, as the errors incurred in estimates may enhance/conceal true treatment effects.

  12. Facing a Problem of Electrical Energy Quality in Ship Networks-measurements, Estimation, Control

    Institute of Scientific and Technical Information of China (English)

    Tomasz Tarasiuk; Janusz Mindykowski; Xiaoyan Xu

    2003-01-01

    In this paper, electrical energy quality and its indices in ship electric networks are introduced, in particular the meaning of electrical energy quality terms for voltage and for active and reactive power distribution indices. Methods for measuring marine electrical energy indices are then introduced in detail, and a microprocessor measurement-diagnosis system with measurement and control functions is designed. Afterwards, estimation and control of electrical power quality in marine electrical power networks are introduced. Finally, based on the existing methods for measurement and control of electrical power quality in ship power networks, improvements to the relevant methods are proposed.

  13. Velocity-Aided Attitude Estimation for Helicopter Aircraft Using Microelectromechanical System Inertial-Measurement Units

    Directory of Open Access Journals (Sweden)

    Sang Cheol Lee

    2016-12-01

    Full Text Available This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial-measurement unit. In general, high-performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low-cost accelerometer and a gyro, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that occurs primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than gravitational ones can be considered as disturbances. Therefore, errors increase in accordance with the flight dynamics. The proposed algorithm is designed for using velocity as an aid for high accuracy at low cost. It effectively eliminates the disturbances of accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data. The algorithm performance was confirmed through a comparison with an attitude estimate obtained from an attitude heading reference system based on a high accuracy optic gyro, which was employed as core attitude equipment in the helicopter.
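
    The sketch below illustrates the velocity-aiding idea in its simplest form: the turn-induced (ω × v) and longitudinal-acceleration terms are removed from the measured specific force before roll and pitch are extracted from the remaining gravity vector. The axis conventions (body x-forward, z-down, NED gravity) and the sample numbers are assumptions for illustration, not the paper's filter.

```python
# Sketch of velocity aiding: remove the turn-induced (omega x v) and
# longitudinal-acceleration terms from the accelerometer's specific force,
# then extract roll and pitch from the remaining gravity vector.
# Axis conventions and numbers are illustrative assumptions.
import numpy as np

G = 9.81

def gravity_aided(f_b, omega_b, v_b, v_b_dot):
    """Gravity vector in the body frame from specific force f_b, body rates
    omega_b, body-frame velocity v_b and its derivative v_b_dot."""
    return v_b_dot + np.cross(omega_b, v_b) - f_b

def roll_pitch_from_gravity(g_b):
    gx, gy, gz = g_b
    pitch = np.arcsin(np.clip(-gx / G, -1.0, 1.0))
    roll = np.arctan2(gy, gz)
    return roll, pitch

# Coordinated-turn example: 40 m/s forward speed, 0.2 rad/s yaw rate, wings level.
v_b = np.array([40.0, 0.0, 0.0])
omega_b = np.array([0.0, 0.0, 0.2])
v_b_dot = np.zeros(3)
f_b = v_b_dot + np.cross(omega_b, v_b) - np.array([0.0, 0.0, G])  # consistent "measurement"

g_b = gravity_aided(f_b, omega_b, v_b, v_b_dot)
roll, pitch = roll_pitch_from_gravity(g_b)
print(np.degrees([roll, pitch]))   # ~[0, 0]: the turn no longer biases the attitude
```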

  14. Velocity-Aided Attitude Estimation for Helicopter Aircraft Using Microelectromechanical System Inertial-Measurement Units

    Science.gov (United States)

    Lee, Sang Cheol; Hong, Sung Kyung

    2016-01-01

    This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial-measurement unit. In general, high-performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low-cost accelerometer and a gyro, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that occurs primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than gravitational ones can be considered as disturbances. Therefore, errors increase in accordance with the flight dynamics. The proposed algorithm is designed for using velocity as an aid for high accuracy at low cost. It effectively eliminates the disturbances of accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data. The algorithm performance was confirmed through a comparison with an attitude estimate obtained from an attitude heading reference system based on a high accuracy optic gyro, which was employed as core attitude equipment in the helicopter. PMID:27973429

  15. Measurement of circulation around wing-tip vortices and estimation of lift forces using stereo PIV

    Science.gov (United States)

    Asano, Shinichiro; Sato, Haru; Sakakibara, Jun

    2017-11-01

    The application of flapping flight to the development of aircraft such as Mars probes and small aircraft called MAVs (Micro Air Vehicles) is being considered. This is because the Reynolds number assumed for these aircraft is low and similar to that of insects and small birds flapping on Earth. However, it is difficult to measure the flow around an airfoil in flapping flight directly because of its three-dimensional and unsteady characteristics. Hence, attempts are made to estimate the flow field and aerodynamics by measuring the wake of the airfoil using PIV, for example the lift estimation method based on the wing-tip vortex. In this study, we measured the wing-tip vortex of a rectangular-planform airfoil with a NACA 0015 cross-section using stereo PIV at angles of attack up to and beyond stall. The circulation of the wing-tip vortex was calculated from the obtained velocity field, and the lift force was estimated based on the Kutta-Joukowski theorem. The validity of this estimation method was then examined by comparing the estimated lift force with force balance data at various angles of attack. The experimental results will be presented at the conference.
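
    A minimal sketch of the two steps described above: integrate the streamwise vorticity of the measured crossflow field to obtain the circulation, then apply the Kutta-Joukowski theorem. A synthetic Lamb-Oseen vortex stands in for the stereo-PIV data, and the simple L = ρ·U∞·Γ·b lift formula is an illustrative assumption, not the authors' exact procedure.

```python
# Sketch: circulation of the wing-tip vortex as the area integral of streamwise
# vorticity over the measured (y, z) plane, then lift via Kutta-Joukowski,
# L = rho * U_inf * Gamma * span.  A synthetic Lamb-Oseen vortex replaces PIV data.
import numpy as np

rho, U_inf, span = 1.2, 10.0, 0.3          # air density, freestream, span (illustrative)
Gamma_true, r_core = 0.5, 0.01             # Lamb-Oseen vortex parameters

# Synthetic crossflow velocity field on a PIV-like grid.
y, z = np.meshgrid(np.linspace(-0.1, 0.1, 201), np.linspace(-0.1, 0.1, 201))
r2 = y**2 + z**2 + 1e-12
v_theta = Gamma_true / (2 * np.pi * np.sqrt(r2)) * (1 - np.exp(-r2 / r_core**2))
v = -v_theta * z / np.sqrt(r2)             # in-plane velocity components
w = v_theta * y / np.sqrt(r2)

# Circulation = integral of streamwise vorticity over the plane.
dy = y[0, 1] - y[0, 0]
dz = z[1, 0] - z[0, 0]
omega_x = np.gradient(w, dy, axis=1) - np.gradient(v, dz, axis=0)
Gamma = omega_x.sum() * dy * dz
lift = rho * U_inf * Gamma * span          # simple finite-wing application of K-J
print(Gamma, lift)
```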

  16. The Euler equation with habits and measurement errors: Estimates on Russian micro data

    Directory of Open Access Journals (Sweden)

    Khvostova Irina

    2016-01-01

    Full Text Available This paper presents estimates of the consumption Euler equation for Russia. The estimation is based on micro-level panel data and accounts for the heterogeneity of agents’ preferences and measurement errors. The presence of multiplicative habits is checked using the Lagrange multiplier (LM) test in a generalized method of moments (GMM) framework. We obtain estimates of the elasticity of intertemporal substitution and of the subjective discount factor, which are consistent with the theoretical model and can be used for the calibration and the Bayesian estimation of dynamic stochastic general equilibrium (DSGE) models for the Russian economy. We also show that the effects of habit formation are not significant. The hypotheses of multiplicative habits (external, internal, and both external and internal) are not supported by the data.

  17. Relative contributions of sampling effort, measuring, and weighing to precision of larval sea lamprey biomass estimates

    Science.gov (United States)

    Slade, Jeffrey W.; Adams, Jean V.; Cuddy, Douglas W.; Neave, Fraser B.; Sullivan, W. Paul; Young, Robert J.; Fodale, Michael F.; Jones, Michael L.

    2003-01-01

    We developed two weight-length models from 231 populations of larval sea lampreys (Petromyzon marinus) collected from tributaries of the Great Lakes: Lake Ontario (21), Lake Erie (6), Lake Huron (67), Lake Michigan (76), and Lake Superior (61). Both models were mixed models, which used population as a random effect and additional environmental factors as fixed effects. We resampled weights and lengths 1,000 times from data collected in each of 14 other populations not used to develop the models, obtaining a weight and length distribution from each resampling. To test model performance, we applied the two weight-length models to the resampled length distributions and calculated the predicted mean weights. We also calculated the observed mean weight for each resampling and for each of the original 14 data sets. When the average of predicted means was compared to means from the original data in each stream, inclusion of environmental factors did not consistently improve the performance of the weight-length model. We estimated the variance associated with measures of abundance and mean weight for each of the 14 selected populations and determined that a conservative estimate of the proportional contribution to variance associated with estimating abundance accounted for 32% to 95% of the variance (mean = 66%). Variability in the biomass estimate appears more affected by variability in estimating abundance than in converting length to weight. Hence, efforts to improve the precision of biomass estimates would be aided most by reducing the variability associated with estimating abundance.

  18. Estimation of the POD function and the LOD of a qualitative microbiological measurement method.

    Science.gov (United States)

    Wilrich, Cordula; Wilrich, Peter-Theodor

    2009-01-01

    Qualitative microbiological measurement methods in which the measurement results are either 0 (microorganism not detected) or 1 (microorganism detected) are discussed. The performance of such a measurement method is described by its probability of detection as a function of the contamination (CFU/g or CFU/mL) of the test material, or by the LOD(p), i.e., the contamination that is detected (measurement result 1) with a specified probability p. A complementary log-log model was used to statistically estimate these performance characteristics. An intralaboratory experiment for the detection of Listeria monocytogenes in various food matrixes illustrates the method. The estimate of LOD50% is compared with the Spearman-Kaerber method.
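
    The sketch below fits the complementary log-log POD model to detected/total counts by maximum likelihood and inverts it for LOD(p); the contamination levels and counts are invented example data, not those of the intralaboratory experiment.

```python
# Sketch of fitting the complementary log-log POD model,
#   POD(c) = 1 - exp(-exp(a + b*ln(c))),
# to detected/total counts by maximum likelihood, then inverting it for LOD_p.
# The contamination levels and counts below are invented example data.
import numpy as np
from scipy.optimize import minimize

conc = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])    # CFU/g
n_tested = np.array([20, 20, 20, 20, 20, 20])
n_detected = np.array([1, 4, 9, 16, 19, 20])

def neg_log_lik(params):
    a, b = params
    pod = 1.0 - np.exp(-np.exp(a + b * np.log(conc)))
    pod = np.clip(pod, 1e-9, 1 - 1e-9)
    return -np.sum(n_detected * np.log(pod) + (n_tested - n_detected) * np.log(1 - pod))

fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = fit.x

def lod(p):
    """Contamination detected with probability p under the fitted model."""
    return np.exp((np.log(-np.log(1.0 - p)) - a_hat) / b_hat)

print(f"LOD50% = {lod(0.5):.3f} CFU/g, LOD95% = {lod(0.95):.3f} CFU/g")
```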

  19. Using Indirect Turbulence Measurements for Real-Time Parameter Estimation in Turbulent Air

    Science.gov (United States)

    Martos, Borja; Morelli, Eugene A.

    2012-01-01

    The use of indirect turbulence measurements for real-time estimation of parameters in a linear longitudinal dynamics model in atmospheric turbulence was studied. It is shown that measuring the atmospheric turbulence makes it possible to treat the turbulence as a measured explanatory variable in the parameter estimation problem. Commercial off-the-shelf sensors were researched and evaluated, then compared to air data booms. Sources of colored noise in the explanatory variables resulting from typical turbulence measurement techniques were identified and studied. A major source of colored noise in the explanatory variables was identified as frequency dependent upwash and time delay. The resulting upwash and time delay corrections were analyzed and compared to previous time shift dynamic modeling research. Simulation data as well as flight test data in atmospheric turbulence were used to verify the time delay behavior. Recommendations are given for follow on flight research and instrumentation.

  20. Estimation of Uncertainty in Tracer Gas Measurement of Air Change Rates

    Directory of Open Access Journals (Sweden)

    Atsushi Iizuka

    2010-12-01

    Full Text Available Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, this is made more complex by the fact that many buildings are not a single fully mixed zone. This means that many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of air change rate in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of
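
    For reference, the sketch below shows the two textbook single-zone estimates that passive tracer methods build on: the concentration-decay method and the constant-dosing method. The numbers are illustrative.

```python
# Sketch of the two textbook single-zone estimates behind passive tracer methods:
# (i) concentration decay, n = ln(C0/Ct)/t, and (ii) constant dosing at steady
# state, n = E/(V*C).  Values are illustrative.
import math

def ach_from_decay(c0_ppm, ct_ppm, hours):
    """Air change rate (1/h) from tracer decay between two samples."""
    return math.log(c0_ppm / ct_ppm) / hours

def ach_from_constant_dosing(emission_mg_per_h, volume_m3, conc_mg_per_m3):
    """Air change rate (1/h) from a passive constant-rate doser at steady state."""
    return emission_mg_per_h / (volume_m3 * conc_mg_per_m3)

print(ach_from_decay(c0_ppm=50.0, ct_ppm=20.0, hours=2.0))                   # ~0.46 1/h
print(ach_from_constant_dosing(30.0, volume_m3=60.0, conc_mg_per_m3=1.0))    # 0.5 1/h
```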

  1. The continental source of glyoxal estimated by the synergistic use of spaceborne measurements and inverse modelling

    Directory of Open Access Journals (Sweden)

    A. Richter

    2009-11-01

    Full Text Available Tropospheric glyoxal and formaldehyde columns retrieved from the SCIAMACHY satellite instrument in 2005 are used with the IMAGESv2 global chemistry-transport model and its adjoint in a two-compound inversion scheme designed to estimate the continental source of glyoxal. The formaldehyde observations provide an important constraint on the production of glyoxal from isoprene in the model, since the degradation of isoprene constitutes an important source of both glyoxal and formaldehyde. Current modelling studies underestimate largely the observed glyoxal satellite columns, pointing to the existence of an additional land glyoxal source of biogenic origin. We include an extra glyoxal source in the model and we explore its possible distribution and magnitude through two inversion experiments. In the first case, the additional source is represented as a direct glyoxal emission, and in the second, as a secondary formation through the oxidation of an unspecified glyoxal precursor. Besides this extra source, the inversion scheme optimizes the primary glyoxal and formaldehyde emissions, as well as their secondary production from other identified non-methane volatile organic precursors of anthropogenic, pyrogenic and biogenic origin.

    In the first inversion experiment, the additional direct source, estimated at 36 Tg/yr, represents 38% of the global continental source, whereas the contribution of isoprene is equally important (30%), the remainder being accounted for by anthropogenic (20%) and pyrogenic fluxes. The inversion succeeds in reducing the underestimation of the glyoxal columns by the model, but it leads to a severe overestimation of glyoxal surface concentrations in comparison with in situ measurements. In the second scenario, the inferred total global continental glyoxal source is estimated at 108 Tg/yr, almost two times higher than the global a priori source. The extra secondary source is the largest contribution to the global glyoxal

  2. Measurement and estimation of photosynthetically active radiation from 1961 to 2011 in Central China

    International Nuclear Information System (INIS)

    Wang, Lunche; Gong, Wei; Li, Chen; Lin, Aiwen; Hu, Bo; Ma, Yingying

    2013-01-01

    Highlights: • 6-Year observations were used to show the temporal variability of PAR and PAR/G. • Dependence of PAR on clearness index was studied in model development. • Newly developed models performed very well at different time scales. • The new all-weather model provided good estimates of PAR at two other sites. • Long-term variations of PAR from 1961 to 2011 in Central China were analyzed. - Abstract: Measurements of photosynthetically active radiation (PAR) and global solar radiation (G) at WHU, Central China during 2006–2011 were used to investigate the seasonal characteristics of PAR and PAR/G (PAR fraction). Both PAR and PAR fraction showed similar seasonal features that peaked in summer and reached their lowest in winter, with annual mean values of 22.39 mol m⁻² d⁻¹ and 1.9 mol MJ⁻¹, respectively. By analyzing the dependence of PAR on the cosine of the solar zenith angle and the clearness index at WHU, an efficient all-weather model was developed for estimating PAR values under various sky conditions, which also produced acceptable estimates with high accuracy at Lhasa and Fukang. The PAR dataset was then reconstructed from G for 1961–2011 using the newly developed model. Annual mean daily PAR was about 23.12 mol m⁻² d⁻¹; there was a significant decreasing trend (11.2 mol m⁻² per decade) during the last 50 years in Central China, with the sharpest decreases in summer (−24.67 mol m⁻² per decade) and relatively small decreases in spring. Meanwhile, results also revealed that PAR began to increase at a rate of 0.1 mol m⁻² per year from 1991 to 2011, which was consistent with variation patterns of global solar radiation in the study area. The proposed all-weather PAR model would be of vital importance for ecological modeling, atmospheric environment, agricultural processes and solar energy application

  3. Measurements for kinetic parameters estimation in the RA-0 research reactor

    International Nuclear Information System (INIS)

    Gomez, A; Bellino, P A

    2012-01-01

    In the present work, measurements based on the neutron noise technique and the inverse kinetics method were performed to estimate the different kinetic parameters of the reactor in its critical state. By means of the neutron noise technique, we obtained the current calibration factor of the ionization chamber M6 belonging to the power range channels of the reactor instrumentation. The maximum allowed current, compatible with the maximum power authorized by the operating license, was also obtained. The neutron noise technique was also used to estimate the reduced mean reproduction time (Λ*), a parameter that plays a fundamental role in the deterministic analysis of criticality accidents. Comparison with previous values justified performing new measurements to study systematic trends in the value of Λ*. Using the inverse kinetics method, the reactivity worth of the control rods was estimated, confirming the existence of spatial effects and trends previously observed. (author)

  4. Estimation and empirical properties of a firm-year measure of accounting conservatism

    OpenAIRE

    Khan, Mozaffar Nayim; Watts, Ross Leslie

    2009-01-01

    We estimate a firm-year measure of accounting conservatism, examine its empirical properties as a metric, and illustrate applications by testing new hypotheses that shed further light on the nature and effects of conservatism. The results are consistent with the measure, C_Score, capturing variation in conservatism and also predicting asymmetric earnings timeliness at horizons of up to three years ahead. Cross-sectional hypothesis tests suggest firms with longer investment cycles, higher idio...

  5. Method for Estimating Evaporative Potential (IM/CLO) from ASTM Standard Single Wind Velocity Measures

    Science.gov (United States)

    2016-08-10

    USARIEM Technical Report T16-14: Method for Estimating Evaporative Potential (IM/CLO) from ASTM Standard Single Wind Velocity Measures. Adam W. Potter, Biophysics and Biomedical Modeling Division, U.S. Army Research Institute of Environmental Medicine.

  6. Precipitation Estimation Using Combined Radar/Radiometer Measurements Within the GPM Framework

    Science.gov (United States)

    Hou, Arthur

    2012-01-01

    satellite of JAXA, (3) the Multi-Frequency Microwave Scanning Radiometer (MADRAS) and the multi-channel microwave humidity sounder (SAPHIR) on the French-Indian Megha- Tropiques satellite, (4) the Microwave Humidity Sounder (MHS) on the National Oceanic and Atmospheric Administration (NOAA)-19, (5) MHS instruments on MetOp satellites launched by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), (6) the Advanced Technology Microwave Sounder (ATMS) on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP), and (7) ATMS instruments on the NOAA-NASA Joint Polar Satellite System (JPSS) satellites. Data from Chinese and Russian microwave radiometers may also become available through international collaboration under the auspices of the Committee on Earth Observation Satellites (CEOS) and Group on Earth Observations (GEO). The current generation of global rainfall products combines observations from a network of uncoordinated satellite missions using a variety of merging techniques. GPM will provide next-generation precipitation products characterized by: (1) more accurate instantaneous precipitation estimate (especially for light rain and cold-season solid precipitation), (2) intercalibrated microwave brightness temperatures from constellation radiometers within a consistent framework, and (3) unified precipitation retrievals from constellation radiometers using a common a priori hydrometeor database constrained by combined radar/radiometer measurements provided by the GPM Core Observatory.

  7. Uncertainty estimation and multi sensor fusion for kinematic laser tracker measurements

    Science.gov (United States)

    Ulrich, Thomas

    2013-08-01

    Laser trackers are widely used to measure kinematic tasks such as tracking robot movements. Common methods to evaluate the uncertainty in the kinematic measurement include approximations specified by the manufacturers, various analytical adjustment methods and the Kalman filter. In this paper a new, real-time technique is proposed, which estimates the 4D-path (3D-position + time) uncertainty of an arbitrary path in space. Here a hybrid system estimator is applied in conjunction with the kinematic measurement model. This method can be applied to processes that include various types of kinematic behaviour, such as constant velocity, variable acceleration or variable turn rates. The new approach is compared with the Kalman filter and a manufacturer's approximations. The comparison was made using data obtained by tracking an industrial robot's tool centre point with a Leica laser tracker AT901 and a Leica laser tracker LTD500. It shows that the new approach is more appropriate for analysing kinematic processes than the Kalman filter, as it reduces overshoots and decreases the estimated variance. In comparison with the manufacturer's approximations, the new approach takes account of kinematic behaviour with an improved description of the real measurement process and a reduction in estimated variance. This approach is therefore well suited to the analysis of kinematic processes with unknown changes in kinematic behaviour, as well as to data fusion among laser trackers.

  8. Water storage change estimation from in situ shrinkage measurements of clay soils

    NARCIS (Netherlands)

    Brake, te B.; Ploeg, van der M.J.; Rooij, de G.H.

    2012-01-01

    Water storage in the unsaturated zone is a major determinant of the hydrological behaviour of the soil, but methods to quantify soil water storage are limited. The objective of this study is to assess the applicability of clay soil surface elevation change measurements to estimate soil water storage

  9. Estimation of magnetic field in a region from measurements of the field at discrete points

    International Nuclear Information System (INIS)

    Alexopoulos, Theodore; Dris, Manolis; Lucas, Demetrios.

    1984-12-01

    A method is given to estimate the magnetic field in a region from measurements of the field on its surface and in its interior. The method might be useful in high-energy physics and other experiments that use large-area magnets. (author)

  10. Estimation and prediction of convection-diffusion-reaction systems from point measurement

    NARCIS (Netherlands)

    Vries, D.

    2008-01-01

    Different procedures with respect to estimation and prediction of systems characterized by convection, diffusion and reactions on the basis of point measurement data, have been studied. Two applications of these convection-diffusion-reaction (CDR) systems have been used as a case study of the

  11. Measurement-Based Transmission Line Parameter Estimation with Adaptive Data Selection Scheme

    DEFF Research Database (Denmark)

    Li, Changgang; Zhang, Yaping; Zhang, Hengxu

    2017-01-01

    Accurate parameters of transmission lines are critical for power system operation and control decision making. Transmission line parameter estimation based on measured data is an effective way to enhance the validity of the parameters. This paper proposes a multi-point transmission line parameter...
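
    The record above is truncated, but the underlying idea of measurement-based line parameter estimation can be illustrated with the standard nominal pi-model of a transmission line: given synchronized voltage and current phasors at both ends, the series impedance and shunt admittance follow directly. The sketch below is a generic single-snapshot identification, not the adaptive multi-point scheme proposed in the paper, and the phasor values are made up for illustration.

```python
import numpy as np

def pi_model_parameters(V_s, I_s, V_r, I_r):
    """Estimate series impedance Z and total shunt admittance Y of a line
    modelled as a nominal pi-section, from one set of terminal phasors.

    Conventions: I_s flows into the line at the sending end, I_r flows out
    of the line at the receiving end (all quantities are complex phasors).
    """
    # Charging current splits between the two ends: I_s - I_r = (Y/2)(V_s + V_r)
    Y = 2.0 * (I_s - I_r) / (V_s + V_r)
    # The series branch carries I_r plus the receiving-end charging current
    Z = (V_s - V_r) / (I_r + Y * V_r / 2.0)
    return Z, Y

if __name__ == "__main__":
    # Hypothetical per-phase phasors, generated from assumed "true" parameters.
    V_r = 220e3 / np.sqrt(3) * np.exp(1j * 0.0)
    I_r = 400.0 * np.exp(-1j * 0.30)
    Z_true, Y_true = 4.0 + 40.0j, 1j * 3.0e-4
    I_line = I_r + Y_true / 2 * V_r
    V_s = V_r + Z_true * I_line
    I_s = I_line + Y_true / 2 * V_s

    Z_est, Y_est = pi_model_parameters(V_s, I_s, V_r, I_r)
    print("Z =", Z_est, "ohm;  Y =", Y_est, "S")  # recovers Z_true and Y_true
```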

  12. Measuring Cross-Section and Estimating Uncertainties with the fissionTPC

    Energy Technology Data Exchange (ETDEWEB)

    Bowden, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manning, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sangiorgio, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Seilhan, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-30

    The purpose of this document is to outline the prescription for measuring fission cross-sections with the NIFFTE fissionTPC and estimating the associated uncertainties. As such it will serve as a work planning guide for NIFFTE collaboration members and facilitate clear communication of the procedures used to the broader community.

  13. Estimating values for the moisture source load and buffering capacities from indoor climate measurements

    NARCIS (Netherlands)

    Schijndel, van A.W.M.

    2008-01-01

    The objective of this study is to investigate the potential for estimating values for the total size of human induced moisture source load and the total buffering (moisture storage) capacity of the interior objects with the use of relatively simple measurements and the use of heat, air, and moisture

  14. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    Science.gov (United States)

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.
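
    To make the constrained-optimization idea concrete, here is a minimal sketch of estimating non-negative component fractions of a mixture from a few aggregate measurements. The measurement matrix and values below are invented placeholders; the actual study uses a far richer model of sulfation patterns along disaccharide chains.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: x are unknown fractions of a few compositional classes,
# A maps fractions to predicted aggregate measurements, b are measured values.
A = np.array([[1.0, 2.0, 3.0],
              [0.5, 1.5, 2.5]])
b = np.array([2.2, 1.7])

def misfit(x):
    r = A @ x - b
    return r @ r  # least-squares misfit between predicted and measured values

res = minimize(
    misfit,
    x0=np.full(3, 1.0 / 3.0),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * 3,                                       # fractions in [0, 1]
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],  # fractions sum to 1
)
print("estimated fractions:", np.round(res.x, 3))

# Bounds on a derived property (e.g. mean sulfation level) can be obtained by
# minimizing and maximizing that property subject to the same constraints,
# which is one way to quantify a "characterization extent".
```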

  15. Comparison of Quality Oncology Practice Initiative (QOPI) Measure Adherence Between Oncology Fellows, Advanced Practice Providers, and Attending Physicians.

    Science.gov (United States)

    Zhu, Jason; Zhang, Tian; Shah, Radhika; Kamal, Arif H; Kelley, Michael J

    2015-12-01

    Quality improvement measures are uniformly applied to all oncology providers, regardless of their roles. Little is known about differences in adherence to these measures between oncology fellows, advanced practice providers (APPs), and attending physicians. We investigated conformance across Quality Oncology Practice Initiative (QOPI) measures for oncology fellows, advanced practice providers, and attending physicians at the Durham Veterans Affairs Medical Center (DVAMC). Using data collected from the Spring 2012 and 2013 QOPI cycles, we abstracted charts of patients and separated them based on their primary provider. Descriptive statistics and the chi-square test were calculated for each QOPI measure between fellows, APPs, and attending physicians. A total of 169 patients were reviewed. Of these, 31 patients had a fellow, 39 had an APP, and 99 had an attending as their primary oncology provider. Fellows and attending physicians performed similarly on 90 of 94 QOPI metrics. High-performing metrics included several core QOPI measures, including documenting consent for chemotherapy, recommending adjuvant chemotherapy when appropriate, and prescribing serotonin antagonists when prescribing emetogenic chemotherapies. Low-performing metrics included documentation of the treatment summary and taking action to address problems with emotional well-being by the second office visit. Attendings documented the plan for oral chemotherapy more often (92% vs. 63%, P=0.049). However, after the chart audit, we found that fellows actually documented the plan for oral chemotherapy 88% of the time (P=0.73). APPs and attendings performed similarly on 88 of 90 QOPI measures. The quality of oncology care tends to be similar between attendings and fellows overall; some of the significant differences do not remain significant after a second manual chart review, highlighting that the use of manual data collection for QOPI analysis is an imperfect system, and there may

  16. Estimating Concentrations of Road-Salt Constituents in Highway-Runoff from Measurements of Specific Conductance

    Science.gov (United States)

    Granato, Gregory E.; Smith, Kirk P.

    1999-01-01

    Discrete or composite samples of highway runoff may not adequately represent in-storm water-quality fluctuations because continuous records of water stage, specific conductance, pH, and temperature of the runoff indicate that these properties fluctuate substantially during a storm. Continuous records of water-quality properties can be used to maximize the information obtained about the stormwater runoff system being studied and can provide the context needed to interpret analyses of water samples. Concentrations of the road-salt constituents calcium, sodium, and chloride in highway runoff were estimated from theoretical and empirical relations between specific conductance and the concentrations of these ions. These relations were examined using the analysis of 233 highway-runoff samples collected from August 1988 through March 1995 at four highway-drainage monitoring stations along State Route 25 in southeastern Massachusetts. Theoretically, the specific conductance of a water sample is the sum of the individual conductances attributed to each ionic species in solution (the product of the concentrations of each ion in milliequivalents per liter (meq/L) multiplied by the equivalent ionic conductance at infinite dilution), thereby establishing the principle of superposition. Superposition provides an estimate of actual specific conductance that is within measurement error throughout the conductance range of many natural waters, with errors of less than ±5 percent below 1,000 microsiemens per centimeter (µS/cm) and ±10 percent between 1,000 and 4,000 µS/cm if all major ionic constituents are accounted for. A semi-empirical method (adjusted superposition) was used to adjust for concentration effects (superposition-method prediction errors at high and low concentrations) and to relate measured specific conductance to that calculated using superposition. The adjusted superposition method, which was developed to interpret the State Route 25 highway-runoff records, accounts for
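
    The superposition principle described above can be written down directly: the estimated specific conductance is the sum, over the major ions, of concentration (meq/L) times the equivalent ionic conductance at infinite dilution. The sketch below uses approximate literature values for the ionic conductances at 25 °C; these particular numbers are assumptions, not taken from the report, and should be verified before reuse.

```python
# Approximate equivalent ionic conductances at infinite dilution, 25 degC,
# in S cm^2 per equivalent (indicative literature values; verify before use).
LAMBDA_0 = {
    "Ca2+": 59.5, "Na+": 50.1, "K+": 73.5, "Mg2+": 53.1,
    "Cl-": 76.3, "HCO3-": 44.5, "SO42-": 80.0,
}

def superposition_conductance(meq_per_l):
    """Estimate specific conductance (uS/cm) from major-ion concentrations
    given in milliequivalents per litre, by simple superposition."""
    return sum(LAMBDA_0[ion] * c for ion, c in meq_per_l.items())

if __name__ == "__main__":
    # Hypothetical road-salt dominated runoff sample (meq/L).
    sample = {"Na+": 8.0, "Cl-": 8.5, "Ca2+": 1.0, "HCO3-": 0.8}
    print(f"estimated specific conductance ~ {superposition_conductance(sample):.0f} uS/cm")
```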

  17. Improved Forest Biomass and Carbon Estimations Using Texture Measures from WorldView-2 Satellite Data

    Directory of Open Access Journals (Sweden)

    Sandra Eckert

    2012-03-01

    Accurate estimation of aboveground biomass and carbon stock has gained importance in the context of the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol. In order to develop improved forest stratum–specific aboveground biomass and carbon estimation models for humid rainforest in northeast Madagascar, this study analyzed texture measures derived from WorldView-2 satellite data. A forest inventory was conducted to develop stratum-specific allometric equations for dry biomass. On this basis, carbon was calculated by applying a conversion factor. After satellite data preprocessing, vegetation indices, principal components, and texture measures were calculated. The strength of their relationships with the stratum-specific plot data was analyzed using Pearson’s correlation. Biomass and carbon estimation models were developed by performing stepwise multiple linear regression. Pearson’s correlation coefficients revealed that (a) texture measures correlated more with biomass and carbon than spectral parameters, and (b) correlations were stronger for degraded forest than for non-degraded forest. For degraded forest, the texture measures of Correlation, Angular Second Moment, and Contrast, derived from the red band, contributed to the best estimation model, which explained 84% of the variability in the field data (relative RMSE = 6.8%). For non-degraded forest, the vegetation index EVI and the texture measures of Variance, Mean, and Correlation, derived from the newly introduced coastal blue band, both NIR bands, and the red band, contributed to the best model, which explained 81% of the variability in the field data (relative RMSE = 11.8%). These results indicate that estimation of tropical rainforest biomass/carbon, based on very high resolution satellite data, can be improved by (a) developing and applying forest stratum–specific models, and (b) including textural information in addition to spectral information.

  18. Multi-slice echo-planar spectroscopic MR imaging provides both global and local metabolite measures in multiple sclerosis

    DEFF Research Database (Denmark)

    Mathiesen, Henrik Kahr; Tscherning, Thomas; Sorensen, Per Soelberg

    2005-01-01

    MR spectroscopy (MRS) provides information about neuronal loss or dysfunction by measuring decreases in N-acetyl aspartate (NAA), a metabolite widely believed to be a marker of neuronal viability. In multiple sclerosis (MS), whole-brain NAA (WBNAA) has been suggested as a marker of disease...... progression and treatment efficacy in treatment trials, and the ability to measure NAA loss in specific brain regions early in the evolution of this disease may have prognostic value. Most spectroscopic studies to date have been limited to single voxels or nonlocalized measurements of WBNAA only...

  19. Estimating body weight and body composition of chickens by using noninvasive measurements.

    Science.gov (United States)

    Latshaw, J D; Bishop, B L

    2001-07-01

    The major objective of this research was to develop equations to estimate BW and body composition using measurements taken with inexpensive instruments. We used five groups of chickens that were created with different genetic stocks and feeding programs. Four of the five groups were from broiler genetic stock, and one was from sex-linked heavy layers. The goal was to sample six males from each group when the group weight was 1.20, 1.75, and 2.30 kg. Each male was weighed and measured for back length, pelvis width, circumference, breast width, keel length, and abdominal skinfold thickness. A cloth tape measure, calipers, and skinfold calipers were used for measurement. Chickens were scanned for total body electrical conductivity (TOBEC) before being euthanized and frozen. Six females were selected at weights similar to those for males and were measured in the same way. Each whole chicken was ground, and a portion of ground material of each was used to measure water, fat, ash, and energy content. Multiple linear regression was used to estimate BW from body measurements. The best single measurement was pelvis width, with an R2 = 0.67. Inclusion of three body measurements in an equation resulted in R2 = 0.78 and the following equation: BW (g) = -930.0 + 68.5 (breast, cm) + 48.5 (circumference, cm) + 62.8 (pelvis, cm). The best single measurement to estimate body fat was abdominal skinfold thickness, expressed as a natural logarithm. Inclusion of weight and skinfold thickness resulted in R2 = 0.63 for body fat according to the following equation: fat (%) = 24.83 + 6.75 (skinfold, ln cm) - 3.87 (wt, kg). Inclusion of the result of TOBEC and the effect of sex improved the R2 to 0.78 for body fat. Regression analysis was used to develop additional equations, based on fat, to estimate water and energy contents of the body. The body water content (%) = 72.1 - 0.60 (body fat, %), and body energy (kcal/g) = 1.097 + 0.080 (body fat, %). The results of the present study
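
    The regression equations quoted in this abstract can be turned directly into a small estimator; the coefficients below are exactly the ones reported above, so the code is only a convenience wrapper around them. The example measurement values in the usage block are made up.

```python
import math

def estimate_live_weight(breast_cm, circumference_cm, pelvis_cm):
    """Body weight (g) from the three-measurement equation quoted above."""
    return -930.0 + 68.5 * breast_cm + 48.5 * circumference_cm + 62.8 * pelvis_cm

def estimate_body_fat(skinfold_cm, weight_kg):
    """Body fat (%) from abdominal skinfold thickness (cm) and weight (kg)."""
    return 24.83 + 6.75 * math.log(skinfold_cm) - 3.87 * weight_kg

def body_water_and_energy(fat_pct):
    """Body water (%) and body energy (kcal/g) from body fat (%)."""
    water_pct = 72.1 - 0.60 * fat_pct
    energy_kcal_per_g = 1.097 + 0.080 * fat_pct
    return water_pct, energy_kcal_per_g

if __name__ == "__main__":
    bw_g = estimate_live_weight(breast_cm=10.5, circumference_cm=28.0, pelvis_cm=7.5)
    fat = estimate_body_fat(skinfold_cm=0.4, weight_kg=bw_g / 1000.0)
    water, energy = body_water_and_energy(fat)
    print(f"BW ~ {bw_g:.0f} g, fat ~ {fat:.1f} %, "
          f"water ~ {water:.1f} %, energy ~ {energy:.2f} kcal/g")
```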

  20. Comparison of Broselow tape measurements versus mother estimations of pediatric weights

    Directory of Open Access Journals (Sweden)

    Sherafat Akaberian

    2013-06-01

    Background: Pediatric resuscitation is challenging for the treating team, and most physicians have limited experience in dealing with this situation. Appropriate drug dosing depends on the child's body weight, which usually cannot be measured directly, so there is a need for a fast, convenient and reliable method of body weight estimation in children. The aim of this study was to assess the accuracy of the Broselow tape in children of Bushehr city. Material and Methods: This cross-sectional study was conducted in the emergency department of Aliasghar hospital on 334 children aged between 1 month and 14 years; children with chronic disease and critically ill children were excluded from the study. Estimated weight was obtained with the Broselow tape and actual weight was measured with a digital scale, and the estimated and actual weights were then compared. The results were analyzed with SPSS software (version 18) using t-tests and chi-square tests. Results: 43.2% of the subjects were female, and the mean age was 43 months. 72.5% of tape-based body weights were within ±10% of the actual body weights, and 78.9% were within ±15%. There was no significant difference between boys and girls. Conclusion: The Broselow tape was an easy, fast and accurate method for body weight estimation in emergency situations, and was more accurate than estimation by parents or the treating team, so it can help the emergency department team in calculating medication doses and selecting equipment sizes.

  1. Estimation of the terrestrial gamma-ray levels from car-borne measurements

    International Nuclear Information System (INIS)

    Badran, H.M.

    1998-01-01

    Place-to-place variation of gamma radiation has been measured. The terrestrial gamma-ray levels were obtained with a portable NaI(Tl) detector. Gamma-ray levels were measured inside a car over a distance of about 220 km, from Norman to Tulsa, Oklahoma, USA. Simultaneous measurements were also carried out outside the vehicle and at distances of 1 m and 5 m from the car. A series of data was collected every 1 mile (∼1.6 km). The measurements were also repeated at different times under different conditions. The measured car-borne levels were correlated with the equivalent outdoor levels at 1 m above flat ground. The result permits a good estimation of outdoor gamma-ray levels from the car measurements after correction for vehicle shielding.

  2. Progress on Poverty? New Estimates of Historical Trends Using an Anchored Supplemental Poverty Measure

    Science.gov (United States)

    Wimer, Christopher; Fox, Liana; Garfinkel, Irwin; Kaushal, Neeraj; Waldfogel, Jane

    2016-01-01

    This study examines historical trends in poverty using an anchored version of the U.S. Census Bureau’s recently developed Research Supplemental Poverty Measure (SPM) estimated back to 1967. Although the SPM is estimated each year using a quasi-relative poverty threshold that varies over time with changes in families’ expenditures on a core basket of goods and services, this study explores trends in poverty using an absolute, or anchored, SPM threshold. We believe the anchored measure offers two advantages. First, setting the threshold at the SPM’s 2012 levels and estimating it back to 1967, adjusted only for changes in prices, is more directly comparable to the approach taken in official poverty statistics. Second, it allows for a better accounting of the roles that social policy, the labor market, and changing demographics play in trends in poverty rates over time, given that changes in the threshold are held constant. Results indicate that unlike official statistics that have shown poverty rates to be fairly flat since the 1960s, poverty rates have dropped by 40 % when measured using a historical anchored SPM over the same period. Results obtained from comparing poverty rates using a pretax/pretransfer measure of resources versus a posttax/posttransfer measure of resources further show that government policies, not market incomes, are driving the declines observed over time. PMID:27352076

  3. Progress on Poverty? New Estimates of Historical Trends Using an Anchored Supplemental Poverty Measure.

    Science.gov (United States)

    Wimer, Christopher; Fox, Liana; Garfinkel, Irwin; Kaushal, Neeraj; Waldfogel, Jane

    2016-08-01

    This study examines historical trends in poverty using an anchored version of the U.S. Census Bureau's recently developed Research Supplemental Poverty Measure (SPM) estimated back to 1967. Although the SPM is estimated each year using a quasi-relative poverty threshold that varies over time with changes in families' expenditures on a core basket of goods and services, this study explores trends in poverty using an absolute, or anchored, SPM threshold. We believe the anchored measure offers two advantages. First, setting the threshold at the SPM's 2012 levels and estimating it back to 1967, adjusted only for changes in prices, is more directly comparable to the approach taken in official poverty statistics. Second, it allows for a better accounting of the roles that social policy, the labor market, and changing demographics play in trends in poverty rates over time, given that changes in the threshold are held constant. Results indicate that unlike official statistics that have shown poverty rates to be fairly flat since the 1960s, poverty rates have dropped by 40 % when measured using a historical anchored SPM over the same period. Results obtained from comparing poverty rates using a pretax/pretransfer measure of resources versus a post-tax/post-transfer measure of resources further show that government policies, not market incomes, are driving the declines observed over time.

  4. Setting the light conditions for measuring root transparency for age-at-death estimation methods.

    Science.gov (United States)

    Adserias-Garriga, Joe; Nogué-Navarro, Laia; Zapico, Sara C; Ubelaker, Douglas H

    2018-03-01

    Age-at-death estimation is one of the main goals in forensic identification, being an essential parameter to determine the biological profile and narrowing the possibility of identification in cases involving missing persons and unidentified bodies. The study of dental tissues has long been considered a proper tool for age estimation, with several age estimation methods based on them. Dental age estimation methods can be divided into three categories: tooth formation and development, post-formation changes, and histological changes. While tooth formation and growth changes are most useful for fetal and infant ages, once dental and skeletal growth has ended, post-formation or biochemical changes can be applied. Lamendin et al. in J Forensic Sci 37:1373-1379, (1992) developed an adult age estimation method based on root transparency and periodontal recession. The regression formula demonstrated its accuracy for 40 to 70-year-old individuals. Later on, Prince and Ubelaker in J Forensic Sci 47(1):107-116, (2002) evaluated the effects of ancestry and sex and incorporated root height into the equation, developing four new regression formulas for males and females of African and European ancestry. Even though root transparency is a key element in the method, the conditions for measuring this element have not been established. The aim of the present study is to set the light conditions, measured in lumens, that offer greater accuracy when applying the Lamendin et al. method as modified by Prince and Ubelaker. The results must also be taken into account in the application of other age estimation methodologies that use root transparency to estimate age-at-death.
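
    For context, Lamendin-type regressions use periodontosis and transparency heights normalized by root height. The sketch below implements the commonly cited form of the original Lamendin et al. equation; the coefficients are quoted from the literature from memory and should be verified against the original paper, and the Prince and Ubelaker variants use different, ancestry- and sex-specific coefficients. The example measurements are hypothetical.

```python
def lamendin_age(periodontosis_mm, transparency_mm, root_height_mm):
    """Adult age-at-death estimate (years) from single-rooted tooth measurements.

    Commonly cited form of the Lamendin et al. (1992) regression:
        A = 0.18 * P + 0.42 * T + 25.53
    where P and T are periodontosis and transparency heights expressed as a
    percentage of root height. Coefficients quoted from memory; verify before use.
    """
    P = periodontosis_mm * 100.0 / root_height_mm
    T = transparency_mm * 100.0 / root_height_mm
    return 0.18 * P + 0.42 * T + 25.53

if __name__ == "__main__":
    # Hypothetical measurements (mm), for illustration only.
    print(f"estimated age ~ {lamendin_age(3.0, 6.0, 13.0):.1f} years")
```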

  5. Measurement error in mobile source air pollution exposure estimates due to residential mobility during pregnancy.

    Science.gov (United States)

    Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A

    2017-09-01

    Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (rS > 0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from -2% to -10% bias).
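
    A time-weighted exposure of the kind described above (a weighted average based on time spent at each address) is straightforward to compute. The sketch below assumes each residence comes with a start date, an end date and a modeled concentration, all invented for illustration.

```python
from datetime import date

def time_weighted_exposure(residences):
    """Average exposure over pregnancy, weighting each address's modeled
    concentration by the number of days spent there.

    `residences` is a list of (start_date, end_date, concentration) tuples.
    """
    total_days = 0
    weighted_sum = 0.0
    for start, end, conc in residences:
        days = (end - start).days
        total_days += days
        weighted_sum += conc * days
    return weighted_sum / total_days

if __name__ == "__main__":
    # Hypothetical pregnancy with one move; concentrations in ug/m^3.
    history = [
        (date(2016, 1, 10), date(2016, 6, 1), 1.8),   # first address
        (date(2016, 6, 1), date(2016, 10, 15), 0.9),  # second address
    ]
    print(f"time-weighted exposure ~ {time_weighted_exposure(history):.2f} ug/m^3")
```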

  6. Estimating the Uncertainty of Tensile Strength Measurement for A Photocured Material Produced by Additive Manufacturing

    Directory of Open Access Journals (Sweden)

    Adamczak Stanisław

    2014-08-01

    The aim of this study was to estimate the measurement uncertainty for a material produced by additive manufacturing. The material investigated was FullCure 720 photocured resin, which was used to fabricate tensile specimens with a Connex 350 3D printer based on PolyJet technology. The tensile strength of the specimens, established through static tensile testing, was used to determine the measurement uncertainty. There is a need for extensive research into the performance of model materials obtained via 3D printing, as they have not been studied as thoroughly as metal alloys or plastics, the most common structural materials. In this analysis, the measurement uncertainty was estimated using a larger number of samples than usual, i.e., thirty instead of the typical ten. The results can be very useful to engineers who design models and finished products using this material. The investigations also show how wide the scatter of results is.
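
    A minimal Type A evaluation of measurement uncertainty for a series of repeated tensile tests, in the spirit of the GUM, looks like the sketch below. The strength values are invented, and a coverage factor k = 2 is assumed for roughly 95 % confidence; this is a generic illustration, not the study's exact procedure.

```python
import math

def type_a_uncertainty(values, coverage_factor=2.0):
    """Return (mean, standard uncertainty of the mean, expanded uncertainty)."""
    n = len(values)
    mean = sum(values) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))  # sample std dev
    u = s / math.sqrt(n)  # standard uncertainty of the mean
    return mean, u, coverage_factor * u

if __name__ == "__main__":
    # Hypothetical tensile strengths (MPa) of printed specimens.
    strengths = [58.1, 59.4, 57.8, 60.2, 58.9, 59.1, 57.5, 58.6, 59.8, 58.3]
    mean, u, U = type_a_uncertainty(strengths)
    print(f"mean = {mean:.2f} MPa, u = {u:.2f} MPa, U (k=2) = {U:.2f} MPa")
```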

  7. Measurement of total risk of spontaneous abortion: the virtue of conditional risk estimation

    DEFF Research Database (Denmark)

    Modvig, J; Schmidt, L; Damsgaard, M T

    1990-01-01

    The concepts, methods, and problems of measuring spontaneous abortion risk are reviewed. The problems touched on include the process of pregnancy verification, the changes in risk by gestational age and maternal age, and the presence of induced abortions. Methods used in studies of spontaneous...... abortion risk include biochemical assays as well as life table technique, although the latter appears in two different forms. The consequences of using either of these are discussed. It is concluded that no study design so far is appropriate for measuring the total risk of spontaneous abortion from early...... conception to the end of the 27th week. It is proposed that pregnancy may be considered to consist of two or three specific periods and that different study designs should concentrate on measuring the conditional risk within each period. A careful estimate using this principle leads to an estimate of total...

  8. Estimating air emissions from a remediation of a petroleum sump using direct measurement and modeling

    International Nuclear Information System (INIS)

    Schmidt, C.E.

    1991-01-01

    A technical approach was developed for the remediation of a petroleum sump near a residential neighborhood. The approach evolved around sludge handling/in-situ solidification and on-site disposal. As part of the development of the engineering approach, a field investigation and modeling program was conducted to predict air emissions from the proposed remediation. Field measurements using the EPA recommended surface isolation flux chamber were conducted to represent each major activity or air exposure involving waste at the site. Air emissions from freshly disturbed petroleum waste, along with engineering estimates were used to predict emissions from each phase of the engineering approach. This paper presents the remedial approach and the measurement/modeling technologies used to predict air toxic emissions from the remediation. Emphasis will be placed on the measurement approaches used in obtaining the emission rate data and the assumptions used in the modeling to estimate emissions from engineering scenarios

  9. Power system observability and dynamic state estimation for stability monitoring using synchrophasor measurements

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Kai; Qi, Junjian; Kang, Wei

    2016-08-01

    Growing penetration of intermittent resources such as renewable generations increases the risk of instability in a power grid. This paper introduces the concept of observability and its computational algorithms for a power grid monitored by the wide-area measurement system (WAMS) based on synchrophasors, e.g. phasor measurement units (PMUs). The goal is to estimate real-time states of generators, especially for potentially unstable trajectories, the information that is critical for the detection of rotor angle instability of the grid. The paper studies the number and siting of synchrophasors in a power grid so that the state of the system can be accurately estimated in the presence of instability. An unscented Kalman filter (UKF) is adopted as a tool to estimate the dynamic states that are not directly measured by synchrophasors. The theory and its computational algorithms are illustrated in detail by using a 9-bus 3-generator power system model and then tested on a 140-bus 48-generator Northeast Power Coordinating Council power grid model. Case studies on those two systems demonstrate the performance of the proposed approach using a limited number of synchrophasors for dynamic state estimation for stability assessment and its robustness against moderate inaccuracies in model parameters.

  10. A Review of Sea State Estimation Procedures Based on Measured Vessel Responses

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2016-01-01

    The operation of ships requires careful monitoring of the related costs while, at the same time, ensuring a high level of safety. A ship's performance with respect to safety and fuel efficiency may be compromised by the encountered waves. Consequently, it is important to estimate the surrounding ... Procedures for shipboard SSE using measured vessel responses, resembling the concept of traditional wave rider buoys, are reviewed. Moreover, newly developed ideas for shipboard sea state estimation are introduced. The presented material is all based on the author's personal experience, developed within extensive work on the subject.

  11. Effect of large weight reductions on measured and estimated kidney function

    DEFF Research Database (Denmark)

    von Scholten, Bernt Johan; Persson, Frederik; Svane, Maria S

    2017-01-01

    GFR (creatinine-based equations), whereas measured GFR (mGFR) and cystatin C-based eGFR would be unaffected if adjusted for body surface area. METHODS: Prospective, intervention study including 19 patients. All attended a baseline visit before gastric bypass surgery followed by a visit six months post-surgery. m...... for body surface area was unchanged. Estimates of GFR based on creatinine overestimate renal function likely due to changes in muscle mass, whereas cystatin C based estimates are unaffected. TRIAL REGISTRATION: ClinicalTrials.gov, NCT02138565 . Date of registration: March 24, 2014....

  12. Automated procedure for volumetric measurement of metastases. Estimation of tumor burden

    International Nuclear Information System (INIS)

    Fabel, M.; Bolte, H.

    2008-01-01

    Cancer is a common and increasing disease worldwide. Therapy monitoring in oncologic patient care requires accurate and reliable measurement methods for evaluation of the tumor burden. RECIST (Response Evaluation Criteria In Solid Tumors) and WHO criteria are still the current standards for therapy response evaluation, with inherent disadvantages due to considerable interobserver variation in the manual diameter estimations. Volumetric analysis of, e.g., lung, liver and lymph node metastases promises to be a more accurate, precise and objective method for tumor burden estimation. (orig.) [de

  13. Family-centred services in the Netherlands : validating a self-report measure for paediatric service providers

    NARCIS (Netherlands)

    Siebes, RC; Ketelaar, M; Wijnroks, L; van Schie, PE; Nijhuis, Bianca J G; Vermeer, A; Gorter, JW

    Objective: To validate the Dutch translation of the Canadian Measure of Processes of Care for Service Providers questionnaire (MPOC-SP) for use in paediatric rehabilitation settings in the Netherlands. Design: The construct validity, content validity, face validity, and reliability of the Dutch

  14. The Total Deviation Index estimated by Tolerance Intervals to evaluate the concordance of measurement devices

    Directory of Open Access Journals (Sweden)

    Ascaso Carlos

    2010-04-01

    Background: In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices of normally-distributed measurements, and describe its utility to evaluate inter- and intra-rater agreement if more than one reading per subject is available for each device. Methods: We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices, and thereafter we derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds of the coverage probability. Results: The approach is illustrated in a real case example where the agreement between two instruments, a hand-held mercury sphygmomanometer device and an OMRON 711 automatic device, is assessed in a sample of 384 subjects in which measures of systolic blood pressure were taken twice by each device. A simulation study is implemented to evaluate and compare the accuracy of the approach to two already established methods, showing that the TI approximation produces accurate empirical confidence levels which are reasonably close to the nominal confidence level. Conclusions: The method proposed is straightforward since the TDI estimate is derived directly from a probability interval of a normally-distributed variable in its original scale, without further transformations. Thereafter, a natural way of making inferences about this estimate is to derive the appropriate TI. Constructions of TIs based on normal populations are implemented in most standard statistical packages, thus making it simpler for any practitioner to implement our proposal to assess agreement.
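
    As a rough numerical companion to the TDI idea, the sketch below computes, for paired differences assumed normally distributed, (i) the TDI point estimate for a chosen coverage (the value k with P(|D| < k) equal to the coverage) and (ii) a simple upper confidence bound built from a standard two-sided normal tolerance factor. This is only one normal-based sketch in the spirit of the approach, not the paper's exact procedure, and the simulated differences are placeholders.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def tdi_point_estimate(mean_d, sd_d, coverage=0.90):
    """Smallest k such that P(|D| < k) = coverage for D ~ N(mean_d, sd_d^2)."""
    def covered(k):
        return (stats.norm.cdf((k - mean_d) / sd_d)
                - stats.norm.cdf((-k - mean_d) / sd_d)) - coverage
    upper = abs(mean_d) + 10 * sd_d  # bracket guaranteed to exceed the root
    return brentq(covered, 0.0, upper)

def tdi_upper_bound(diffs, coverage=0.90, confidence=0.95):
    """Conservative upper bound on the TDI via a normal two-sided tolerance
    factor (Howe-type approximation)."""
    d = np.asarray(diffs, dtype=float)
    n, nu = d.size, d.size - 1
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, nu)   # lower-tail chi-square quantile
    k2 = z * np.sqrt(nu * (1 + 1 / n) / chi2)   # approximate tolerance factor
    return abs(d.mean()) + k2 * d.std(ddof=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    diffs = rng.normal(1.0, 4.0, size=384)  # hypothetical paired BP differences (mmHg)
    print("TDI point estimate:", round(tdi_point_estimate(diffs.mean(), diffs.std(ddof=1)), 2))
    print("TDI 95% upper bound:", round(tdi_upper_bound(diffs), 2))
```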

  15. Measuring patient-provider communication skills in Rwanda: Selection, adaptation and assessment of psychometric properties of the Communication Assessment Tool.

    Science.gov (United States)

    Cubaka, Vincent Kalumire; Schriver, Michael; Vedsted, Peter; Makoul, Gregory; Kallestrup, Per

    2018-04-23

    To identify, adapt and validate a measure for providers' communication and interpersonal skills in Rwanda. After selection, translation and piloting of the measure, structural validity, test-retest reliability, and differential item functioning were assessed. Identification and adaptation: The 14-item Communication Assessment Tool (CAT) was selected and adapted. Content validation found all items highly relevant in the local context except two, which were retained upon understanding the reasoning applied by patients. Eleven providers and 291 patients were involved in the field-testing. Confirmatory factor analysis showed a good fit for the original one factor model. Test-retest reliability assessment revealed a mean quadratic weighted Kappa = 0.81 (range: 0.69-0.89, N = 57). The average proportion of excellent scores was 15.7% (SD: 24.7, range: 9.9-21.8%, N = 180). Differential item functioning was not observed except for item 1, which focuses on greetings, for age groups (p = 0.02, N = 180). The Kinyarwanda version of CAT (K-CAT) is a reliable and valid patient-reported measure of providers' communication and interpersonal skills. K-CAT was validated on nurses and its use on other types of providers may require further validation. K-CAT is expected to be a valuable feedback tool for providers in practice and in training. Copyright © 2018 Elsevier B.V. All rights reserved.
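
    Test-retest agreement of ordinal item scores, as reported above with quadratic weighted kappa, can be computed per item with scikit-learn; the example ratings below are fabricated and do not correspond to the study data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores for one CAT item (1-5 ordinal scale) from the same
# patients at two administrations of the questionnaire.
first  = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
second = [5, 4, 3, 3, 5, 2, 4, 4, 3, 4]

kappa = cohen_kappa_score(first, second, weights="quadratic")
print(f"quadratic weighted kappa = {kappa:.2f}")
```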

  16. Should English healthcare providers be penalised for failing to collect patient-reported outcome measures? A retrospective analysis.

    Science.gov (United States)

    Gutacker, Nils; Street, Andrew; Gomes, Manuel; Bojke, Chris

    2015-08-01

    The best practice tariff for hip and knee replacement in the English National Health Service (NHS) rewards providers based on improvements in patient-reported outcome measures (PROMs) collected before and after surgery. Providers only receive a bonus if at least 50% of their patients complete the preoperative questionnaire. We determined how many providers failed to meet this threshold prior to the policy introduction and assessed longitudinal stability of participation rates. Retrospective observational study using data from Hospital Episode Statistics and the national PROM programme from April 2009 to March 2012. We calculated participation rates based on either (a) all PROM records or (b) only those that could be linked to inpatient records; constructed confidence intervals around rates to account for sampling variation; applied precision weighting to allow for volume; and applied risk adjustment. NHS hospitals and private providers in England. NHS patients undergoing elective unilateral hip and knee replacement surgery. Number of providers with participation rates statistically significantly below 50%. Crude rates identified many providers that failed to achieve the 50% threshold but there were substantially fewer after adjusting for uncertainty and precision. While important, risk adjustment required restricting the analysis to linked data. Year-on-year correlation between provider participation rates was moderate. Participation rates have improved over time and only a small number of providers now fall below the threshold, but administering preoperative questionnaires remains problematic in some providers. We recommend that participation rates are based on linked data and take into account sampling variation. © The Royal Society of Medicine.
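
    Deciding whether a provider's participation rate is statistically significantly below 50% amounts to putting a confidence interval around an observed proportion. A minimal sketch using the Wilson score interval is shown below, with made-up counts; the study itself additionally applies precision weighting and risk adjustment.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    if n == 0:
        return 0.0, 1.0
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

def significantly_below_threshold(successes, n, threshold=0.5):
    """True only if the whole confidence interval lies below the threshold."""
    lower, upper = wilson_interval(successes, n)
    return upper < threshold

if __name__ == "__main__":
    # Hypothetical providers: (pre-operative questionnaires returned, eligible patients)
    for returned, eligible in [(40, 100), (230, 500), (12, 20)]:
        flag = significantly_below_threshold(returned, eligible)
        print(f"{returned}/{eligible}: below 50% with statistical significance = {flag}")
```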

  17. Reproducibility of estimation of blood flow in the human masseter muscle from measurements of 133Xe clearance

    International Nuclear Information System (INIS)

    Monteiro, A.A.; Kopp, S.

    1989-01-01

    The reproducibility of estimations of the masseter intramuscular blood flow (IMBF) was assessed bilaterally within and between clinical sessions. The 133Xe clearance in nine normal individuals was measured before, during, and immediately after endurance of isometric contraction at an attempted level of 50% of maximum voluntary clenching contraction. An overall low reproducibility of the estimations was found. This result was probably caused by uncertainties about the exact site of intramuscular 133Xe deposition, errors in assessment of the plots of clearance, and variability in the relative contraction levels sustained, especially in the overall muscle effort. In agreement with previous reports concerning other skeletal muscles, the 133Xe clearance method provided inconsistent estimates of absolute values of IMBF also in this clinical setting. Although there was a high intra-individual variation in the relative level of isometric contraction sustained, the endurance test induced distinct changes in IMBF, among which the estimate of post-endurance hyperemia was the most consistent for each individual. Therefore, measurements of 133Xe clearance seem to be useful to detect intra-individual changes in masseter IMBF resulting from isometric work. 21 refs

  18. Formulation of uncertainty relation of error and disturbance in quantum measurement by using quantum estimation theory

    International Nuclear Information System (INIS)

    Yu Watanabe; Masahito Ueda

    2012-01-01

    Full text: When we try to obtain information about a quantum system, we need to perform a measurement on the system. The measurement process causes unavoidable state change. Heisenberg discussed a thought experiment of the position measurement of a particle by using a gamma-ray microscope, and found a trade-off relation between the error of the measured position and the disturbance in the momentum caused by the measurement process. The trade-off relation epitomizes the complementarity in quantum measurements: we cannot perform a measurement of an observable without causing disturbance in its canonically conjugate observable. However, at the time Heisenberg found the complementarity, quantum measurement theory was not established yet, and Kennard and Robertson's inequality was erroneously interpreted as a mathematical formulation of the complementarity. Kennard and Robertson's inequality actually implies the indeterminacy of the quantum state: non-commuting observables cannot have definite values simultaneously. However, Kennard and Robertson's inequality reflects the inherent nature of a quantum state alone, and does not concern any trade-off relation between the error and disturbance in the measurement process. In this talk, we report a resolution to the complementarity in quantum measurements. First, we find that it is necessary to involve the estimation process from the outcome of the measurement for quantifying the error and disturbance in the quantum measurement. We clarify the implicitly involved estimation process in Heisenberg's gamma-ray microscope and other measurement schemes, and formulate the error and disturbance for an arbitrary quantum measurement by using quantum estimation theory. The error and disturbance are defined in terms of the Fisher information, which gives the upper bound of the accuracy of the estimation. Second, we obtain uncertainty relations between the measurement errors of two observables [1], and between the error and disturbance in the

  19. Relationship between parental estimate and an objective measure of child television watching

    Directory of Open Access Journals (Sweden)

    Roemmich James N

    2006-11-01

    Many young children have televisions in their bedrooms, which may influence the relationship between parental estimate and objective measures of child television usage/week. Parental estimates of child television time of eighty 4–7 year old children (6.0 ± 1.2 years) at the 75th BMI percentile or greater (90.8 ± 6.8 BMI percentile) were compared to an objective measure of television time obtained from TV Allowance™ devices attached to every television in the home over a three week period. Results showed that parents overestimate their child's television time compared to an objective measure when no television is present in the bedroom by 4 hours/week (25.4 ± 11.5 vs. 21.4 ± 9.1), in comparison to underestimating television time by over 3 hours/week (26.5 ± 17.2 vs. 29.8 ± 14.4) when the child has a television in their bedroom (p = 0.02). Children with a television in their bedroom spend more objectively measured hours in television time than children without a television in their bedroom (29.8 ± 14.2 versus 21.4 ± 9.1, p = 0.003). Research on child television watching should take into account television watching in bedrooms, since it may not be adequately assessed by parental estimates.

  20. Measurement of natural radionuclides in Malaysian bottled mineral water and consequent health risk estimation

    Energy Technology Data Exchange (ETDEWEB)

    Priharti, W.; Samat, S. B.; Yasir, M. S. [School of Applied Physics, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia)

    2015-09-25

    The radionuclides 226Ra, 232Th and 40K were measured in ten mineral water samples; from the activities obtained, the ingestion doses for infants, children and adults were calculated and the cancer risk for adults was estimated. Results showed that the calculated ingestion doses for the three age categories are much lower than the average worldwide ingestion exposure of 0.29 mSv/y, and the estimated cancer risk is much lower than the cancer risk of 8.40 × 10−3 (estimated from the total natural radiation dose of 2.40 mSv/y). The present study concludes that the bottled mineral water produced in Malaysia is safe for daily human consumption.

  1. Oxygen transfer rate estimation in oxidation ditches from clean water measurements.

    Science.gov (United States)

    Abusam, A; Keesman, K J; Meinema, K; Van Straten, G

    2001-06-01

    Standard methods for the determination of oxygen transfer rate are based on assumptions that are not valid for oxidation ditches. This paper presents a realistic and simple new method to be used in the estimation of oxygen transfer rate in oxidation ditches from clean water measurements. The new method uses a loop-of-CSTRs model, which can be easily incorporated within control algorithms, for modelling oxidation ditches. Further, this method assumes zero oxygen transfer rates (KLa) in the unaerated CSTRs. Application of a formal estimation procedure to real data revealed that the aeration constant (k = KLa·VA, where VA is the volume of the aerated CSTR) can be determined significantly more accurately than KLa and VA. Therefore, the new method estimates k instead of KLa. From application to real data, this method proved to be more accurate than the commonly used Dutch standard method (STORA, 1980).
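
    For reference, the classical clean-water reaeration test estimates KLa by fitting the exponential approach of dissolved oxygen to saturation. A generic nonlinear least-squares sketch is given below; the DO time series is synthetic, and this is the standard single-tank formulation rather than the loop-of-CSTRs model proposed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def do_curve(t, c_sat, c0, kla):
    """Clean-water DO response: C(t) = C_sat - (C_sat - C_0) * exp(-KLa * t)."""
    return c_sat - (c_sat - c0) * np.exp(-kla * t)

if __name__ == "__main__":
    # Synthetic reaeration data: time in hours, DO in mg/L, with a little noise.
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 2.0, 25)
    true = do_curve(t, c_sat=9.1, c0=1.0, kla=3.5)
    measured = true + rng.normal(0.0, 0.05, t.size)

    (c_sat, c0, kla), _ = curve_fit(do_curve, t, measured, p0=[9.0, 1.0, 2.0])
    print(f"C_sat ~ {c_sat:.2f} mg/L, C0 ~ {c0:.2f} mg/L, KLa ~ {kla:.2f} 1/h")
```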

  2. Measurement of natural radionuclides in Malaysian bottled mineral water and consequent health risk estimation

    Science.gov (United States)

    Priharti, W.; Samat, S. B.; Yasir, M. S.

    2015-09-01

    The radionuclides 226Ra, 232Th and 40K were measured in ten mineral water samples; from the activities obtained, the ingestion doses for infants, children and adults were calculated and the cancer risk for adults was estimated. Results showed that the calculated ingestion doses for the three age categories are much lower than the average worldwide ingestion exposure of 0.29 mSv/y, and the estimated cancer risk is much lower than the cancer risk of 8.40 × 10−3 (estimated from the total natural radiation dose of 2.40 mSv/y). The present study concludes that the bottled mineral water produced in Malaysia is safe for daily human consumption.
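
    The ingestion-dose arithmetic behind such assessments is typically annual intake × activity concentration × a radionuclide- and age-specific dose coefficient, summed over radionuclides, with a nominal risk coefficient applied to the dose. The sketch below uses indicative ICRP-72 adult ingestion dose coefficients, a nominal annual water intake and an ICRP-103-style risk coefficient; these numbers are assumptions for illustration and are not taken from the record.

```python
# Indicative ICRP-72 adult ingestion dose coefficients (Sv/Bq); verify before use.
DOSE_COEFF = {"Ra-226": 2.8e-7, "Th-232": 2.3e-7, "K-40": 6.2e-9}
ANNUAL_INTAKE_L = 730.0   # assumed 2 L/day of bottled water
RISK_PER_SV = 5.5e-2      # nominal cancer risk coefficient per sievert

def annual_ingestion_dose(activity_bq_per_l):
    """Annual committed effective dose (Sv) from drinking-water activities."""
    return sum(DOSE_COEFF[nuc] * conc * ANNUAL_INTAKE_L
               for nuc, conc in activity_bq_per_l.items())

if __name__ == "__main__":
    # Hypothetical activity concentrations in Bq/L.
    sample = {"Ra-226": 0.02, "Th-232": 0.01, "K-40": 0.5}
    dose = annual_ingestion_dose(sample)
    print(f"dose ~ {dose * 1e3:.3f} mSv/y, associated risk ~ {dose * RISK_PER_SV:.2e}")
```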

  3. Estimating the operator's performance time of emergency procedural tasks based on a task complexity measure

    International Nuclear Information System (INIS)

    Jung, Won Dae; Park, Jink Yun

    2012-01-01

    It is important to understand the amount of time required to execute an emergency procedural task in a high-stress situation for managing human performance under emergencies in a nuclear power plant. However, the time to execute an emergency procedural task is highly dependent upon expert judgment due to the lack of actual data. This paper proposes an analytical method to estimate the operator's performance time (OPT) of a procedural task, which is based on a measure of the task complexity (TACOM). The proposed method for estimating an OPT is an equation that uses the TACOM as a variable, and the OPT of a procedural task can be calculated if its relevant TACOM score is available. The validity of the proposed equation is demonstrated by comparing the estimated OPTs with the observed OPTs for emergency procedural tasks in a steam generator tube rupture scenario.

  4. A comparison of two measures of HIV diversity in multi-assay algorithms for HIV incidence estimation.

    Directory of Open Access Journals (Sweden)

    Matthew M Cousins

    Multi-assay algorithms (MAAs) can be used to estimate HIV incidence in cross-sectional surveys. We compared the performance of two MAAs that use HIV diversity as one of four biomarkers for analysis of HIV incidence. Both MAAs included two serologic assays (LAg-Avidity assay and BioRad-Avidity assay), HIV viral load, and an HIV diversity assay. HIV diversity was quantified using either a high resolution melting (HRM) diversity assay that does not require HIV sequencing (HRM score for a 239 base pair env region) or sequence ambiguity (the percentage of ambiguous bases in a 1,302 base pair pol region). Samples were classified as MAA positive (likely from individuals with recent HIV infection) if they met the criteria for all of the assays in the MAA. The following performance characteristics were assessed: (1) the proportion of samples classified as MAA positive as a function of duration of infection, (2) the mean window period, (3) the shadow (the time period before sample collection that is being assessed by the MAA), and (4) the accuracy of cross-sectional incidence estimates for three cohort studies. The proportion of samples classified as MAA positive as a function of duration of infection was nearly identical for the two MAAs. The mean window period was 141 days for the HRM-based MAA and 131 days for the sequence ambiguity-based MAA. The shadows for both MAAs were <1 year. Both MAAs provided cross-sectional HIV incidence estimates that were very similar to longitudinal incidence estimates based on HIV seroconversion. MAAs that include the LAg-Avidity assay, the BioRad-Avidity assay, HIV viral load, and HIV diversity can provide accurate HIV incidence estimates. Sequence ambiguity measures obtained using a commercially-available HIV genotyping system can be used as an alternative to HRM scores in MAAs for cross-sectional HIV incidence estimation.
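
    In its simplest form (ignoring false-recency corrections), the cross-sectional incidence estimator behind an MAA divides the number of MAA-positive samples by the product of the number of HIV-negative individuals and the mean window period. The sketch below shows that arithmetic together with the all-criteria classification rule; the thresholds are placeholders, not the validated assay cut-offs.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    lag_odn: float      # LAg-Avidity result
    biorad_ai: float    # BioRad-Avidity result
    viral_load: float   # copies/mL
    diversity: float    # HRM score or % sequence ambiguity

def maa_positive(s: Sample) -> bool:
    """A sample is MAA positive only if it meets the criteria of all four assays.
    Thresholds below are placeholders for illustration."""
    return (s.lag_odn < 2.8 and s.biorad_ai < 95.0
            and s.viral_load > 400.0 and s.diversity < 1.0)

def incidence_per_year(samples, n_hiv_negative, window_days=140.0):
    """Simplest cross-sectional incidence estimate (cases per person-year)."""
    n_recent = sum(maa_positive(s) for s in samples)
    return n_recent / (n_hiv_negative * (window_days / 365.25))

if __name__ == "__main__":
    hiv_positive_samples = [Sample(1.5, 60.0, 12000.0, 0.4),
                            Sample(3.5, 99.0, 800.0, 2.1)]
    est = incidence_per_year(hiv_positive_samples, n_hiv_negative=5000)
    print(f"incidence ~ {est:.5f} per person-year")
```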

  5. Magnetic Waveform Measurements of the PS Injection Kicker KFA45 and Future Emittance Growth Estimates

    CERN Document Server

    Forte, Vincenzo; Ferrero Colomo, Alvaro; CERN. Geneva. ATS Department

    2018-01-01

    In the framework of the LHC Injectors Upgrade (LIU) project [1], this document summarises the beam-based measurement of the magnetic waveform of the PS injection kicker KFA45 [2], from data collected during several Machine Development (MD) sessions in 2016 and 2017. In the first part of the document, the measurement methodology is introduced and the results presented and compared with the specification required for a clean transfer of the bunches coming from the PSB after the upgrade. These measurements represent, to date, the only way to reconstruct the magnetic waveform. In the second part, kicker magnetic waveform PSpice®[3] simulations are compared and tuned to the measurements. Finally the simulated (validated through measurements) waveforms are used to estimate the future expected emittance growth for the different PS injection schemes, both for (LIU target) LHC and fixed target beams.

  6. Estimation Method of Center of Inertia Frequency based on Multiple Synchronized Phasor Measurement Data

    Science.gov (United States)

    Hashiguchi, Takuhei; Watanabe, Masayuki; Goda, Tadahiro; Mitani, Yasunori; Saeki, Osamu; Hojo, Masahide; Ukai, Hiroyuki

    Open access and deregulation have been introduced in Japan, and some independent power producers (IPPs) and power producers and suppliers (PPSs) are participating in the power generation business, which may make power system dynamics more complex. To maintain power system conditions under various situations, it is essential that a real-time wide-area measurement system be available. We therefore started a project in Japan to construct an original measurement system based on phasor measurement units (PMUs). This paper describes a method for estimating the center-of-inertia frequency from actual measurement data. The application of this method enables us to extract power system oscillations from measurement data appropriately. Moreover, an analysis of power system dynamics for oscillations occurring in the western Japan 60 Hz system is shown. These results will help clarify power system dynamics and may make it possible to monitor power system oscillations associated with power system stability.
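
    The center-of-inertia (COI) frequency is conventionally defined as the inertia-weighted average of individual generator or area frequencies. A minimal sketch of that weighting, applied to PMU-derived frequencies, is shown below; the inertia constants, ratings and frequency snapshot are invented for illustration.

```python
def center_of_inertia_frequency(freq_hz, inertia_s, rating_mva):
    """Inertia-weighted average frequency:
    f_COI = sum(H_i * S_i * f_i) / sum(H_i * S_i)."""
    weights = [h * s for h, s in zip(inertia_s, rating_mva)]
    return sum(w * f for w, f in zip(weights, freq_hz)) / sum(weights)

if __name__ == "__main__":
    # Hypothetical snapshot of area frequencies estimated from PMU phase angles.
    f = [59.98, 60.01, 59.97, 60.02]    # Hz
    H = [4.0, 5.5, 3.2, 6.0]            # inertia constants (s)
    S = [800.0, 1200.0, 600.0, 1500.0]  # machine ratings (MVA)
    print(f"f_COI ~ {center_of_inertia_frequency(f, H, S):.3f} Hz")
```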

  7. Multi-slice echo-planar spectroscopic MR imaging provides both global and local metabolite measures in multiple sclerosis

    DEFF Research Database (Denmark)

    Mathiesen, Henrik Kahr; Tscherning, Thomas; Sorensen, Per Soelberg

    2005-01-01

    MR spectroscopy (MRS) provides information about neuronal loss or dysfunction by measuring decreases in N-acetyl aspartate (NAA), a metabolite widely believed to be a marker of neuronal viability. In multiple sclerosis (MS), whole-brain NAA (WBNAA) has been suggested as a marker of disease...... progression and treatment efficacy in treatment trials, and the ability to measure NAA loss in specific brain regions early in the evolution of this disease may have prognostic value. Most spectroscopic studies to date have been limited to single voxels or nonlocalized measurements of WBNAA only......, measurements of metabolites in specific brain areas chosen after image acquisition (e.g., normal-appearing white matter (NAWM), gray matter (GM), and lesions) can be obtained. The identification and exclusion of regions that are inadequate for spectroscopic evaluation in global assessments can significantly...

  8. Design and construction of a cryogenic facility providing absolute measurements of radon 222 activity for developing a primary standard

    International Nuclear Information System (INIS)

    Picolo, Jean-Louis

    1995-06-01

    Radon 222 metrology is required to obtain higher accuracy in assessing human health risks from exposure to natural radiation. This paper describes the development of a cryogenic facility that allows absolute measurements of radon 222 in order to obtain a primary standard. The method selected is the condensation of a radon 222 sample on a geometrically defined cold surface with a constant, well-known and adjustable temperature, facing an alpha-particle detector. Counting of the alpha particles reaching the detector and the precisely known detection geometry provide an absolute measurement of the source activity. After describing the cryogenic facility, the measurement accuracy and precision are discussed and a comparison made with other measurement systems. The relative uncertainty is below 1% (1 σ). The facility can also be used to improve our knowledge of the nuclear properties of radon 222 and to produce secondary standards. (author) [fr

  9. Validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land in southern Italy

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.

    2012-01-01

    Soil erosion represents an important threat to the long-term sustainability of agriculture and forestry in many areas of the world, including southern Italy. Numerous models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution, based on the local topography, hydrometeorology, soil type and land management. However, there remains an important need for empirical measurements to provide a basis for validating and calibrating such models and prediction procedures as well as to support specific investigations and experiments. In this context, erosion plots provide useful information on gross rates of soil loss, but are unable to document the efficiency of the onward transfer of the eroded sediment within a field and towards the stream system, and thus net rates of soil loss from larger areas. The use of environmental radionuclides, particularly caesium-137 (137Cs) and excess lead-210 (210Pbex), as a means of estimating rates of soil erosion and deposition has attracted increasing attention in recent years and the approach has now been recognised as possessing several important advantages. In order to provide further confirmation of the validity of the estimates of longer-term erosion and soil redistribution rates provided by 137Cs and 210Pbex measurements, there is a need for studies aimed explicitly at validating the results obtained. In this context, the authors directed attention to the potential offered by a set of small erosion plots located near Reggio Calabria in southern Italy, for validating estimates of soil loss provided by 137Cs and 210Pbex measurements. A preliminary assessment suggested that, notwithstanding the limitations and constraints involved, a worthwhile investigation aimed at validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land could be undertaken. The results demonstrate a close consistency between the measured rates of soil loss and

  10. General problems of metrology and indirect measuring in cardiology: error estimation criteria for indirect measurements of heart cycle phase durations

    Directory of Open Access Journals (Sweden)

    Konstantine K. Mamberger

    2012-11-01

    Full Text Available Aims This paper treats general problems of metrology and indirect measurement methods in cardiology. It is aimed at identifying error estimation criteria for indirect measurements of heart cycle phase durations. Materials and methods A comparative analysis of an ECG of the ascending aorta recorded with the Hemodynamic Analyzer Cardiocode (HDA) lead versus conventional V3, V4, V5, V6 lead system ECGs is presented herein. Criteria for heart cycle phase boundaries are identified with graphic mathematical differentiation. Stroke volumes of blood (SV) calculated on the basis of the HDA phase duration measurements are compared with echocardiography data. Results The comparative data obtained in the study show an averaged difference at the level of 1%. An innovative noninvasive measuring technology originally developed by a Russian R & D team offers measurement of the stroke volume of blood SV with high accuracy. Conclusion In practice, it is necessary to take into account possible errors in measurements caused by hardware. Special attention should be paid to systematic errors.

  11. Pursuing atmospheric water vapor retrieval through NDSA measurements between two LEO satellites: evaluation of estimation errors in spectral sensitivity measurements

    Science.gov (United States)

    Facheris, L.; Cuccoli, F.; Argenti, F.

    2008-10-01

    NDSA (Normalized Differential Spectral Absorption) is a novel differential measurement method to estimate the total content of water vapor (IWV, Integrated Water Vapor) along a tropospheric propagation path between two Low Earth Orbit (LEO) satellites. A transmitter onboard the first LEO satellite and a receiver onboard the second one are required. The NDSA approach is based on the simultaneous estimate of the total attenuations at two relatively close frequencies in the Ku/K bands and of a "spectral sensitivity parameter" that can be directly converted into IWV. The spectral sensitivity has the potential to emphasize the water vapor contribution, to cancel out all spectrally flat unwanted contributions and to limit the impairments due to tropospheric scintillation. Based on a previous Monte Carlo simulation approach, through which we analyzed the measurement accuracy of the spectral sensitivity parameter at three different and complementary frequencies, in this work we examine such accuracy for a particularly critical atmospheric status as simulated through the pressure, temperature and water vapor profiles measured by a high resolution radiosonde. We confirm the validity of an approximate expression of the accuracy and discuss the problems that may arise when tropospheric water vapor concentration is lower than expected.

  12. Estimation of inflation parameters for Perturbed Power Law model using recent CMB measurements

    International Nuclear Information System (INIS)

    Mukherjee, Suvodip; Das, Santanu; Souradeep, Tarun; Joy, Minu

    2015-01-01

    Cosmic Microwave Background (CMB) measurements are an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, which is a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher-order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass m_eff for the inflaton field. At leading order, the higher-order derivatives of the Hubble parameter source a constant difference between the spectral indices for scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two observable independent parameters, namely the spectral index for tensor perturbations ν_t and the change in the spectral index for scalar perturbations ν_st, to explain the observed features in the scalar and tensor power spectra of perturbations. From the recent measurements of CMB power spectra by WMAP, Planck and BICEP-2 for temperature and polarization, we estimate the feasibility of the PPL model against the standard ΛCDM model. Although BICEP-2 claimed a detection of r = 0.2, estimates of dust contamination provided by Planck have left open the possibility that only an upper bound on r will be expected in a joint analysis. As a result we consider different upper bounds on the value of r and show that the PPL model can explain a lower value of the tensor-to-scalar ratio (r < 0.1 or r < 0.01) for a scalar spectral index of n_s = 0.96 by having a non-zero value of the effective mass of the inflaton field, m_eff²/H². The analysis with the WP + Planck likelihood shows a non-zero detection of m_eff²/H² at 5.7σ and 8.1σ for r < 0.1 and r < 0.01, respectively, whereas with the BICEP-2 likelihood m_eff²/H² = −0.0237 ± 0.0135, which is consistent with zero.

  13. On the effect of correlated measurements on the performance of distributed estimation

    KAUST Repository

    Ahmed, Mohammed

    2013-06-01

    We address the distributed estimation of an unknown scalar parameter in Wireless Sensor Networks (WSNs). Sensor nodes transmit their noisy observations over a multiple access channel to a Fusion Center (FC) that reconstructs the source parameter. The received signal is corrupted by noise and channel fading, so the FC objective is to minimize the Mean-Square Error (MSE) of the estimate. In this paper, we assume sensor node observations to be correlated with the source signal and correlated with each other as well. The correlation coefficient between two observations decays exponentially with the distance separating them. The effect of the distance-based correlation on the estimation quality is demonstrated and compared with the case of unity correlated observations. Moreover, a closed-form expression for the outage probability is derived and its dependency on the correlation coefficients is investigated. Numerical simulations are provided to verify our analytic results. © 2013 IEEE.
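
    A minimal numerical sketch of why observation correlation matters for the fusion-center estimate (assumptions: scalar source in additive Gaussian noise whose correlation decays exponentially with sensor separation, ideal channels, best linear unbiased estimator; none of this is taken from the paper's exact system model):

```python
import numpy as np

# K sensors observe a scalar theta in zero-mean Gaussian noise whose correlation
# decays exponentially with inter-sensor distance, C_ij = sigma^2 * exp(-d_ij/d0).
# The fusion centre's best linear unbiased estimate has variance 1/(1^T C^-1 1);
# compare against fully (unity) correlated observations.
rng = np.random.default_rng(1)
K, sigma2, d0 = 8, 0.5, 50.0
pos = rng.uniform(0.0, 200.0, size=K)                # sensor positions on a line (m)
d = np.abs(pos[:, None] - pos[None, :])              # pairwise distances
C_dist = sigma2 * np.exp(-d / d0)                    # distance-based correlation
C_unity = sigma2 * np.ones((K, K))                   # fully correlated observations

def blue_mse(cov):
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov + 1e-9 * np.eye(cov.shape[0]), ones)  # jitter for the rank-1 case
    return 1.0 / ones.dot(w)

print("estimator MSE, distance-based correlation: %.4f" % blue_mse(C_dist))
print("estimator MSE, unity correlation:          %.4f" % blue_mse(C_unity))
```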

  14. Estimation of sex from the anthropometric ear measurements of a Sudanese population.

    Science.gov (United States)

    Ahmed, Altayeb Abdalla; Omer, Nosyba

    2015-09-01

    The external ear and its prints have multifaceted roles in medico-legal practice, e.g., identification and facial reconstruction. Furthermore, its norms are essential in the diagnosis of congenital anomalies and the design of hearing aids. Body part dimensions vary in different ethnic groups, so the most accurate statistical estimations of biological attributes are developed using population-specific standards. Sudan lacks comprehensive data about ear norms; moreover, there is a universal rarity in assessing the possibility of sex estimation from ear dimensions using robust statistical techniques. Therefore, this study attempts to establish data for normal adult Sudanese Arabs, assessing the existence of asymmetry and developing a population-specific equation for sex estimation. The study sample comprised 200 healthy Sudanese Arab volunteers (100 males and 100 females) in the age range of 18-30 years. The physiognomic ear length and width, lobule length and width, and conchal length and width measurements were obtained by direct anthropometry, using a digital sliding caliper. Moreover, indices and asymmetry were assessed. Data were analyzed using basic descriptive statistics and discriminant function analyses employing jackknife validations of classification results. All linear dimensions used were sexually dimorphic except lobular lengths. Some of the variables and indices show asymmetry. Ear dimensions showed cross-validated sex classification accuracy ranging between 60.5% and 72%. Hence, the ear measurements cannot be used as an effective tool in the estimation of sex. However, in the absence of other more reliable means, they can still be considered a supportive trait in sex estimation. Further, asymmetry should be considered in identification from ear measurements. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
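
    A hedged sketch of the statistical procedure named above (discriminant function analysis with jackknife-style validation), run on synthetic stand-in measurements rather than the Sudanese sample:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic ear dimensions (mm) for two groups; the means and spreads are invented.
rng = np.random.default_rng(0)
n = 100
males = rng.normal([62.0, 32.0, 26.0, 17.0], 3.0, size=(n, 4))     # ear length/width, conchal length/width
females = rng.normal([59.0, 30.5, 24.5, 16.0], 3.0, size=(n, 4))

X = np.vstack([males, females])
y = np.array([1] * n + [0] * n)                      # 1 = male, 0 = female

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=LeaveOneOut())   # jackknife-style (leave-one-out) validation
print("cross-validated sex classification accuracy: %.1f%%" % (100.0 * acc.mean()))
```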

  15. Estimation of excitation forces for wave energy converters control using pressure measurements

    Science.gov (United States)

    Abdelkhalik, O.; Zou, S.; Robinett, R.; Bacelli, G.; Wilson, D.

    2017-08-01

    Most control algorithms of wave energy converters require prediction of wave elevation or excitation force for a short future horizon, to compute the control in an optimal sense. This paper presents an approach that requires the estimation of the excitation force and its derivatives at present time with no need for prediction. An extended Kalman filter is implemented to estimate the excitation force. The measurements in this approach are selected to be the pressures at discrete points on the buoy surface, in addition to the buoy heave position. The pressures on the buoy surface are more directly related to the excitation force on the buoy as opposed to wave elevation in front of the buoy. These pressure measurements are also more accurate and easier to obtain. A singular arc control is implemented to compute the steady-state control using the estimated excitation force. The estimated excitation force is expressed in the Laplace domain and substituted in the control, before the latter is transformed to the time domain. Numerical simulations are presented for a Bretschneider wave case study.
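
    The following is a simplified stand-in for the estimator described above: a linear Kalman filter on a 1-DOF heave model with the unknown excitation force appended to the state as a random walk, driven by noisy heave-position measurements only. The paper itself uses surface pressure measurements and an extended Kalman filter; all parameter values here are illustrative.

```python
import numpy as np

# 1-DOF heave model m*z'' + c*z' + k*z = F_exc, with F_exc modelled as a random
# walk and appended to the state [z, z_dot, F_exc]. A linear Kalman filter
# estimates F_exc from noisy heave-position measurements. All values invented.
m, c, k = 2.0e4, 1.0e4, 5.0e4
dt = 0.05
A = np.eye(3) + dt * np.array([[0.0, 1.0, 0.0],
                               [-k / m, -c / m, 1.0 / m],
                               [0.0, 0.0, 0.0]])        # forward-Euler discretisation
H = np.array([[1.0, 0.0, 0.0]])                         # measure heave position only
Q = np.diag([1e-8, 1e-8, 1e6])                          # large process noise on F_exc
R = np.array([[1e-4]])

rng = np.random.default_rng(2)
T = 600
t = np.arange(T) * dt
F_true = 3.0e4 * np.sin(2 * np.pi * t / 8.0)            # "true" wave excitation (N)

x_true = np.zeros(3)
x_hat, P = np.zeros(3), np.eye(3)
F_est = np.zeros(T)
for i in range(T):
    # simulate the buoy and a noisy position measurement
    x_true[2] = F_true[i]
    x_true = A @ x_true
    z_meas = x_true[0] + rng.normal(0.0, 1e-2)

    # Kalman predict / update
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R
    K_gain = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K_gain @ (np.array([z_meas]) - H @ x_hat)
    P = (np.eye(3) - K_gain @ H) @ P
    F_est[i] = x_hat[2]

err = np.sqrt(np.mean((F_est[200:] - F_true[200:]) ** 2))
print("RMS excitation-force estimation error after the transient: %.0f N" % err)
```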

  16. Reactor building indoor wireless network channel quality estimation using RSSI measurement of wireless sensor network

    International Nuclear Information System (INIS)

    Merat, S.

    2008-01-01

    Expanding wireless communication network reception inside reactor buildings (RB) and service wings (SW) has always been a technical challenge for the operations service team. This is driven by the volume of metal equipment inside the Reactor Buildings (RB), which blocks and partially shields the signal along the link. In this study, to improve wireless reception inside the Reactor Building (RB), an experimental model using an indoor localization mesh based on IEEE 802.15 is developed to implement a wireless sensor network. This experimental model estimates the distance between different nodes by measuring the RSSI (Received Signal Strength Indicator). Then, using triangulation and RSSI measurement, the validity of the estimation techniques is verified by simulating the physical environmental obstacles that block signal transmission. (author)

  17. Reactor building indoor wireless network channel quality estimation using RSSI measurement of wireless sensor network

    Energy Technology Data Exchange (ETDEWEB)

    Merat, S. [Wardrop Engineering Inc., Toronto, Ontario (Canada)

    2008-07-01

    Expanding wireless communication network reception inside reactor buildings (RB) and service wings (SW) has always been a technical challenge for the operations service team. This is driven by the volume of metal equipment inside the Reactor Buildings (RB), which blocks and partially shields the signal along the link. In this study, to improve wireless reception inside the Reactor Building (RB), an experimental model using an indoor localization mesh based on IEEE 802.15 is developed to implement a wireless sensor network. This experimental model estimates the distance between different nodes by measuring the RSSI (Received Signal Strength Indicator). Then, using triangulation and RSSI measurement, the validity of the estimation techniques is verified by simulating the physical environmental obstacles that block signal transmission. (author)
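
    A sketch of the kind of RSSI-based distance estimation and triangulation described in the two records above, under the common log-distance path-loss assumption (reference RSSI, path-loss exponent, and anchor geometry are invented; a large exponent roughly mimics heavy metal shielding):

```python
import numpy as np

# Log-distance path-loss model: RSSI(d) = RSSI0 - 10*n*log10(d/d0); distances
# recovered from RSSI are then combined by linearised least-squares
# trilateration against three anchors with known positions. All values invented.
rng = np.random.default_rng(3)
RSSI0, d0, n_exp = -40.0, 1.0, 3.5                           # reference RSSI at 1 m, path-loss exponent

anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0]])   # anchor node positions (m)
true_pos = np.array([12.0, 8.0])                             # unknown node (for simulation only)
d_true = np.linalg.norm(anchors - true_pos, axis=1)
rssi_meas = RSSI0 - 10.0 * n_exp * np.log10(d_true / d0) + rng.normal(0.0, 1.0, 3)

# invert the path-loss model to get distance estimates
d = d0 * 10.0 ** ((RSSI0 - rssi_meas) / (10.0 * n_exp))

# linearised trilateration: subtract the first circle equation from the others
A = 2.0 * (anchors[1:] - anchors[0])
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
pos, *_ = np.linalg.lstsq(A, b, rcond=None)

print("estimated distances (m):", np.round(d, 1))
print("estimated position (m): ", np.round(pos, 2), " true:", true_pos)
```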

  18. A new ore reserve estimation method, Yang Chizhong filtering and inferential measurement method, and its application

    International Nuclear Information System (INIS)

    Wu Jingqin.

    1989-01-01

    The Yang Chizhong filtering and inferential measurement method is a new method for the variable statistics of ore deposits. In order to apply this theory to estimate uranium ore reserves under the circumstances of regular or irregular prospecting grids, small ore bodies, few sampling points, and complex occurrence, the author has used this method to estimate the ore reserves in five ore bodies of two deposits and achieved satisfactory results. It is demonstrated that, compared with the traditional block measurement method, this method is simple and clear in its formulas, convenient to apply, rapid in calculation, accurate in its results, less expensive, and offers high economic benefits. The procedure for applying the method, the experience gained in its application, and a preliminary evaluation of its results are described.

  19. Estimation of Dynamic Errors in Laser Optoelectronic Dimension Gauges for Geometric Measurement of Details

    Directory of Open Access Journals (Sweden)

    Khasanov Zimfir

    2018-01-01

    Full Text Available The article reviews the capabilities and particularities of an approach to improving the metrological characteristics of fiber-optic pressure sensors (FOPS) based on the estimation of dynamic errors in laser optoelectronic dimension gauges for geometric measurement of details. It is shown that the proposed criteria provide new methods for conjugation of the optoelectronic converters in the dimension gauge for geometric measurements, in order to reduce the speed and volume requirements for the Random Access Memory (RAM) of the video controller which processes the signal. It is found that the lower the relative error, the higher the interrogation speed of the CCD array. It is thus shown that the maximum achievable dynamic accuracy characteristics of the optoelectronic gauge are determined by the following conditions: the parameter stability of the electronic circuits in the CCD array and the microprocessor calculator; linearity of characteristics; and error dynamics and noise in all electronic circuits of the CCD array and microprocessor calculator.

  20. Inertial Measurement Units-Based Probe Vehicles: Automatic Calibration, Trajectory Estimation, and Context Detection

    KAUST Repository

    Mousa, Mustafa

    2017-12-06

    Most probe vehicle data is generated using satellite navigation systems, such as the Global Positioning System (GPS), Globalnaya navigatsionnaya sputnikovaya Sistema (GLONASS), or Galileo systems. However, because of their high cost, relatively high position uncertainty in cities, and low sampling rate, a large quantity of satellite positioning data is required to estimate traffic conditions accurately. To address this issue, we introduce a new type of traffic monitoring system based on inexpensive inertial measurement units (IMUs) as probe sensors. IMUs as traffic probes pose unique challenges in that they need to be precisely calibrated, do not generate absolute position measurements, and their position estimates are subject to accumulating errors. In this paper, we address each of these challenges and demonstrate that the IMUs can reliably be used as traffic probes. After discussing the sensing technique, we present an implementation of this system using a custom-designed hardware platform, and validate the system with experimental data.

  1. Inertial Measurement Units-Based Probe Vehicles: Automatic Calibration, Trajectory Estimation, and Context Detection

    KAUST Repository

    Mousa, Mustafa; Sharma, Kapil; Claudel, Christian G.

    2017-01-01

    Most probe vehicle data is generated using satellite navigation systems, such as the Global Positioning System (GPS), Globalnaya navigatsionnaya sputnikovaya Sistema (GLONASS), or Galileo systems. However, because of their high cost, relatively high position uncertainty in cities, and low sampling rate, a large quantity of satellite positioning data is required to estimate traffic conditions accurately. To address this issue, we introduce a new type of traffic monitoring system based on inexpensive inertial measurement units (IMUs) as probe sensors. IMUs as traffic probes pose unique challenges in that they need to be precisely calibrated, do not generate absolute position measurements, and their position estimates are subject to accumulating errors. In this paper, we address each of these challenges and demonstrate that the IMUs can reliably be used as traffic probes. After discussing the sensing technique, we present an implementation of this system using a custom-designed hardware platform, and validate the system with experimental data.
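
    A minimal dead-reckoning sketch (illustrative only, not the authors' calibration or trajectory pipeline) showing why IMU position estimates accumulate error and how a crude static-interval bias calibration helps:

```python
import numpy as np

# Double integration of a (biased, noisy) longitudinal acceleration into speed
# and travelled distance. The first 5 s are assumed static and used for a crude
# bias calibration. Signals and noise levels are synthetic.
dt = 0.01
t = np.arange(0.0, 60.0, dt)
accel_true = np.where(t < 5.0, 0.0, 0.3 * np.sin(2 * np.pi * (t - 5.0) / 20.0))  # m/s^2

rng = np.random.default_rng(4)
bias = 0.02                                              # residual accelerometer bias (m/s^2)
accel_meas = accel_true + bias + rng.normal(0.0, 0.05, t.size)

def integrate(a, dt):
    v = np.cumsum(a) * dt            # speed
    x = np.cumsum(v) * dt            # position
    return v, x

_, x_true = integrate(accel_true, dt)
_, x_raw = integrate(accel_meas, dt)

a_cal = accel_meas - accel_meas[t < 5.0].mean()          # static-interval bias estimate
_, x_cal = integrate(a_cal, dt)

print("position error after 60 s, uncalibrated:    %.1f m" % abs(x_raw[-1] - x_true[-1]))
print("position error after 60 s, bias-calibrated: %.1f m" % abs(x_cal[-1] - x_true[-1]))
```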

  2. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information

    International Nuclear Information System (INIS)

    Burr, T.; Croft, S.; Krieger, T.; Martin, K.; Norman, C.; Walsh, S.

    2016-01-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors
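
    A generic bottom-up propagation sketch in the spirit of the cases listed above (classical straight-line calibration followed by inversion, first-order Gaussian propagation, negligible error in the predictors); the numbers and the assumed response model are illustrative, not taken from the paper:

```python
import numpy as np

# Straight-line calibration y = a + b*x fitted to standards; an unknown item's
# response y0 is inverted to x_hat = (y0 - a)/b and the uncertainty is obtained
# by first-order (Gaussian) propagation of the fit covariance and of sigma_y0.
rng = np.random.default_rng(5)
x_std = np.array([1.0, 2.0, 3.0, 4.0, 5.0])              # reference values of the standards
sigma_y = 0.3                                            # assumed response error on the standards
y_std = 10.0 + 4.0 * x_std + rng.normal(0.0, sigma_y, x_std.size)

X = np.vstack([np.ones_like(x_std), x_std]).T
cov_fit = np.linalg.inv(X.T @ X) * sigma_y ** 2          # covariance of (a, b)
a, b = np.linalg.lstsq(X, y_std, rcond=None)[0]

y0, sigma_y0 = 24.0, 0.4                                 # unknown item's response and its error
x_hat = (y0 - a) / b

J = np.array([-1.0 / b, -(y0 - a) / b ** 2])             # d x_hat / d(a, b)
var_x = J @ cov_fit @ J + (sigma_y0 / b) ** 2            # + (d x_hat / d y0)^2 * sigma_y0^2
print("x_hat = %.3f +/- %.3f (1 sigma)" % (x_hat, np.sqrt(var_x)))
```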

  3. Rangeland monitoring using remote sensing: comparison of cover estimates from field measurements and image analysis

    Directory of Open Access Journals (Sweden)

    Ammon Boswell

    2017-01-01

    Full Text Available Rangeland monitoring is important for evaluating and assessing semi-arid plant communities. Remote sensing provides an effective tool for rapidly and accurately assessing rangeland vegetation and other surface attributes such as bare soil and rock. The purpose of this study was to evaluate the efficacy of remote sensing as a surrogate for field-based sampling techniques in detecting ground cover features (i.e., trees, shrubs, herbaceous cover, litter, surface) and comparing results with field-based measurements collected by the Utah Division of Wildlife Resources Range Trend Program. In the field, five 152 m long transects were used to sample plant, litter, rock, and bare-ground cover using the Daubenmire ocular estimate method. At the same location as each field plot, a 4-band (R, G, B, NIR), 25 cm pixel resolution, remotely sensed image was taken from a fixed-wing aircraft. Each image was spectrally classified, producing 4 cover classes (tree, shrub, herbaceous, surface). No significant differences were detected between canopy cover collected remotely and in the field for tree (P = 0.652), shrub (P = 0.800), and herbaceous vegetation (P = 0.258). Surface cover was higher in field plots (P < 0.001), likely in response to the methods used to sample surface features by field crews. Accurately classifying vegetation and other features from remotely sensed information can improve the efficiency of collecting vegetation and surface data. This information can also be used to improve data collection frequency for rangeland monitoring and to efficiently quantify ecological succession patterns.

  4. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV...... application despite the large sample. Unit root tests based on the IV estimator have better finite sample properties in this context....

  5. Longshore sediment transport rate-measurement and estimation, central west coast of India

    Digital Repository Service at National Institute of Oceanography (India)

    SanilKumar, V.; Anand, N.M.; Chandramohan, P.; Naik, G.N.

    … engineering designs. The longshore current generated by obliquely incident breaking waves plays an important role in transporting sediment in the surf zone. The longshore current velocity varies across the surf zone, reaching a maximum value close to the wave…

  6. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  7. Regional inversion of CO2 ecosystem fluxes from atmospheric measurements. Reliability of the uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell' Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)

    2013-07-01

    The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5° resolution are applied for the western European domain where ~50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38%. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than

  8. Estimation of thorium lung burden in mineral separation plant workers by thoron-in-breath measurements

    International Nuclear Information System (INIS)

    Radhakrishnan, Sujata; Sreekumar, K.; Tripathi, R.M.; Puranik, V.D.; Selvan, Esai

    2010-01-01

    The Minerals Separation Plant (MSP) of M/s Indian Rare Earths Ltd. (IREL) at Manavalakurichi in Tamil Nadu is engaged in the processing of beach sands to separate ilmenite, monazite, rutile, sillimanite, garnet, and zircon. The present study has been carried out on nearly 200 workers of the mineral separation plant who are chronically exposed to radiation hazards. Measurement of thoron in the exhaled breath of the worker is an indirect method of estimating the body burden with regard to Th

  9. Chapter 21: Estimating Net Savings - Common Practices. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Violette, Daniel M. [Navigant, Boulder, CO (United States); Rathbun, Pamela [Tetra Tech, Madison, WI (United States)

    2017-11-02

    This chapter focuses on the methods used to estimate net energy savings in evaluation, measurement, and verification (EM and V) studies for energy efficiency (EE) programs. The chapter provides a definition of net savings, which remains an unsettled topic both within the EE evaluation community and across the broader public policy evaluation community, particularly in the context of attribution of savings to a program. The chapter differs from the measure-specific Uniform Methods Project (UMP) chapters in both its approach and work product. Unlike other UMP resources that provide recommended protocols for determining gross energy savings, this chapter describes and compares the current industry practices for determining net energy savings but does not prescribe methods.

  10. Estimating the acute health effects of coarse particulate matter accounting for exposure measurement error.

    Science.gov (United States)

    Chang, Howard H; Peng, Roger D; Dominici, Francesca

    2011-10-01

    In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.

  11. Comparison of Satellite Rainfall Estimates and Rain Gauge Measurements in Italy, and Impact on Landslide Modeling

    Directory of Open Access Journals (Sweden)

    Mauro Rossi

    2017-12-01

    Full Text Available Landslides can be triggered by intense or prolonged rainfall. Rain gauge measurements are commonly used to predict landslides even if satellite rainfall estimates are available. Recent research focuses on the comparison of satellite estimates and gauge measurements. The rain gauge data from the Italian network (collected in the system database "Verifica Rischio Frana", VRF) are compared with the National Aeronautics and Space Administration (NASA) Tropical Rainfall Measuring Mission (TRMM) products. For this purpose, we couple point gauge and satellite rainfall estimates at individual grid cells, evaluating the correlation between gauge and satellite data in different morpho-climatological conditions. We then analyze the statistical distributions of both rainfall data types and the rainfall events derived from them. Results show that satellite data underestimate ground data, with the largest differences in mountainous areas. Power-law models are more appropriate to correlate gauge and satellite data. The gauge- and satellite-based products exhibit different statistical distributions, and the rainfall events derived from them differ. In conclusion, satellite rainfall cannot be directly compared with ground data, requiring local investigation to account for specific morpho-climatological settings. Results suggest that satellite data can be used for forecasting landslides only after performing a local scaling between satellite and ground data.
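
    A sketch of the local power-law scaling suggested above, fitted in log-log space to co-located daily gauge/satellite pairs (all rainfall values are synthetic placeholders, not VRF or TRMM data):

```python
import numpy as np

# Fit gauge = a * satellite**b by linear regression in log-log space, then use
# the fitted relation to correct new satellite estimates. Values are synthetic.
rng = np.random.default_rng(5)
sat = rng.gamma(shape=2.0, scale=5.0, size=300) + 0.1            # satellite estimates (mm/day)
gauge = 1.6 * sat ** 0.9 * rng.lognormal(0.0, 0.25, sat.size)    # co-located gauge values with scatter

b, log_a = np.polyfit(np.log(sat), np.log(gauge), 1)
a = np.exp(log_a)
print("fitted power law: gauge ~= %.2f * satellite^%.2f" % (a, b))

sat_new = np.array([2.0, 10.0, 40.0])
print("corrected estimates (mm/day):", np.round(a * sat_new ** b, 1))
```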

  12. Real-time estimation of helicopter rotor blade kinematics through measurement of rotation induced acceleration

    Science.gov (United States)

    Allred, C. Jeff; Churchill, David; Buckner, Gregory D.

    2017-07-01

    This paper presents a novel approach to monitoring rotor blade flap, lead-lag and pitch using an embedded gyroscope and symmetrically mounted MEMS accelerometers. The central hypothesis is that differential accelerometer measurements are proportional only to blade motion; fuselage acceleration and blade bending are inherently compensated for. The inverse kinematic relationships (from blade position to acceleration and angular rate) are derived and simulated to validate this hypothesis. An algorithm to solve the forward kinematic relationships (from sensor measurement to blade position) is developed using these simulation results. This algorithm is experimentally validated using a prototype device. The experimental results justify continued development of this kinematic estimation approach.

  13. Classical and modern power spectrum estimation for tune measurement in CSNS RCS

    International Nuclear Information System (INIS)

    Yang Xiaoyu; Xu Taoguang; Fu Shinian; Zeng Lei; Bian Xiaojuan

    2013-01-01

    Precise measurement of the betatron tune is required for good operating conditions of the CSNS RCS. The fractional part of the betatron tune is important, and it can be measured by analyzing the beam position signals from a designated BPM. These signals are usually contaminated during the acquisition process, so several power spectrum methods are used to improve the frequency resolution. In this article, classical and modern power spectrum methods are used. To compare their performance, results from simulation data and from IQT data from the J-PARC RCS are discussed. It is shown that modern power spectrum estimation performs better than the classical methods, though the calculation is more complex. (authors)
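
    An illustrative comparison on synthetic turn-by-turn data (not CSNS or J-PARC measurements): the fractional tune is read from a classical periodogram and from a parametric autoregressive (Yule-Walker) spectrum, a representative "modern" method.

```python
import numpy as np
from scipy.signal import periodogram

# Synthetic turn-by-turn BPM signal with fractional tune q = 0.23 plus noise.
rng = np.random.default_rng(6)
q_frac, turns = 0.23, 256
n = np.arange(turns)
x = np.cos(2 * np.pi * q_frac * n + 0.7) + 0.5 * rng.normal(size=turns)

# classical estimate: periodogram
f_per, P_per = periodogram(x, fs=1.0)
print("periodogram tune estimate:      %.4f" % f_per[np.argmax(P_per)])

# parametric estimate: AR spectrum from the Yule-Walker equations
def ar_spectrum(x, order, freqs):
    x = x - x.mean()
    N = len(x)
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(order + 1)])  # autocorrelation
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])          # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:order + 1])       # driving-noise variance
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    return sigma2 / np.abs(1.0 - z @ a) ** 2

freqs = np.linspace(0.0, 0.5, 2000)
P_ar = ar_spectrum(x, order=12, freqs=freqs)
print("AR (Yule-Walker) tune estimate: %.4f" % freqs[np.argmax(P_ar)])
```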

  14. Comparing Food Provided and Wasted before and after Implementing Measures against Food Waste in Three Healthcare Food Service Facilities

    Directory of Open Access Journals (Sweden)

    Christina Strotmann

    2017-08-01

    Full Text Available The aim of the study was to reduce food waste in a hospital, a hospital cafeteria, and a residential home by applying a participatory approach in which the employees were integrated into the process of developing and implementing measures. Initially, a process analysis was undertaken to identify the processes and structures existing in each institution. This included a 2-week measurement of the quantities of food produced and wasted. After implementing the measures, a second measurement was conducted and the results of the two measurements were compared. The average waste rate in the residential home was significantly reduced from 21.4% to 13.4% and from 19.8% to 12.8% in the cafeteria. In the hospital, the average waste rate remained constant (25.6% and 26.3% during the reference and control measurements. However, quantities of average daily food provided and wasted per person in the hospital declined. Minimizing overproduction, i.e., aligning the quantity of meals produced to that required, is essential to reducing serving losses. Compliance of meal quality and quantity with customer expectations, needs, and preferences, i.e., the individualization of food supply, reduces plate waste. Moreover, establishing an efficient communication structure involving all actors along the food supply chain contributes to decreasing food waste.

  15. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques

    International Nuclear Information System (INIS)

    Pontailler, J.-Y.; Hymus, G.J.; Drake, B.G.

    2003-01-01

    This study took place in an evergreen scrub oak ecosystem in Florida. Vegetation reflectance was measured in situ with a laboratory-made sensor in the red (640-665 nm) and near-infrared (750-950 nm) bands to calculate the normalized difference vegetation index (NDVI) and derive the leaf area index (LAI). LAI estimates from this technique were compared with two other nondestructive techniques, intercepted photosynthetically active radiation (PAR) and hemispherical photographs, in four contrasting 4 m² plots in February 2000 and two 4 m² plots in June 2000. We used Beer's law to derive LAI from PAR interception and gap fraction distribution to derive LAI from photographs. The plots were harvested manually after the measurements to determine a 'true' LAI value and to calculate a light extinction coefficient (k). The technique based on Beer's law was affected by a large variation of the extinction coefficient, owing to the larger impact of branches in winter when LAI was low. Hemispherical photographs provided satisfactory estimates, slightly overestimated in winter because of the impact of branches or underestimated in summer because of foliage clumping. NDVI provided the best fit, showing only saturation in the densest plot (LAI = 3.5). We conclude that in situ measurement of NDVI is an accurate and simple technique to nondestructively assess LAI in experimental plots or in crops if saturation remains acceptable. (author)

  16. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques

    Energy Technology Data Exchange (ETDEWEB)

    Pontailler, J.-Y. [Univ. Paris-Sud XI, Dept. d' Ecophysiologie Vegetale, Orsay Cedex (France); Hymus, G.J.; Drake, B.G. [Smithsonian Environmental Research Center, Kennedy Space Center, Florida (United States)

    2003-06-01

    This study took place in an evergreen scrub oak ecosystem in Florida. Vegetation reflectance was measured in situ with a laboratory-made sensor in the red (640-665 nm) and near-infrared (750-950 nm) bands to calculate the normalized difference vegetation index (NDVI) and derive the leaf area index (LAI). LAI estimates from this technique were compared with two other nondestructive techniques, intercepted photosynthetically active radiation (PAR) and hemispherical photographs, in four contrasting 4 m² plots in February 2000 and two 4 m² plots in June 2000. We used Beer's law to derive LAI from PAR interception and gap fraction distribution to derive LAI from photographs. The plots were harvested manually after the measurements to determine a 'true' LAI value and to calculate a light extinction coefficient (k). The technique based on Beer's law was affected by a large variation of the extinction coefficient, owing to the larger impact of branches in winter when LAI was low. Hemispherical photographs provided satisfactory estimates, slightly overestimated in winter because of the impact of branches or underestimated in summer because of foliage clumping. NDVI provided the best fit, showing only saturation in the densest plot (LAI = 3.5). We conclude that in situ measurement of NDVI is an accurate and simple technique to nondestructively assess LAI in experimental plots or in crops if saturation remains acceptable. (author)
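
    A minimal sketch of the two indirect relationships used in the two records above: NDVI from red/near-infrared reflectance, LAI from PAR interception via Beer's law, and a simple empirical NDVI-LAI fit against harvested values (all numbers and the exponential model form are illustrative assumptions, not the site calibration):

```python
import numpy as np

# NDVI from band reflectances, LAI from PAR interception via Beer's law, and a
# simple exponential NDVI-LAI fit against "harvested" LAI. All numbers invented.
def ndvi(red, nir):
    return (nir - red) / (nir + red)

def lai_beer(par_below, par_above, k):
    # Beer's law inversion: I/I0 = exp(-k * LAI)  =>  LAI = -ln(I/I0) / k
    return -np.log(par_below / par_above) / k

red = np.array([0.08, 0.06, 0.05, 0.04])
nir = np.array([0.30, 0.38, 0.45, 0.52])
par_above = 1500.0
par_below = np.array([700.0, 420.0, 260.0, 170.0])
lai_harvest = np.array([1.4, 2.2, 3.0, 3.6])            # destructive-harvest reference

v = ndvi(red, nir)
print("NDVI:                  ", np.round(v, 3))
print("LAI from PAR (k = 0.5):", np.round(lai_beer(par_below, par_above, 0.5), 2))

# one common empirical choice (not the paper's calibration): LAI = a * exp(b * NDVI)
b, log_a = np.polyfit(v, np.log(lai_harvest), 1)
print("fitted NDVI-LAI relation: LAI ~= %.2f * exp(%.2f * NDVI)" % (np.exp(log_a), b))
```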

  17. A New Proxy Measurement Algorithm with Application to the Estimation of Vertical Ground Reaction Forces Using Wearable Sensors.

    Science.gov (United States)

    Guo, Yuzhu; Storm, Fabio; Zhao, Yifan; Billings, Stephen A; Pavic, Aleksandar; Mazzà, Claudia; Guo, Ling-Zhong

    2017-09-22

    Measurement of the ground reaction forces (GRF) during walking is typically limited to laboratory settings, and only short observations using wearable pressure insoles have been reported so far. In this study, a new proxy measurement method is proposed to estimate the vertical component of the GRF (vGRF) from wearable accelerometer signals. The accelerations are used as the proxy variable. An orthogonal forward regression algorithm (OFR) is employed to identify the dynamic relationships between the proxy variables and the measured vGRF using pressure-sensing insoles. The obtained model, which represents the connection between the proxy variable and the vGRF, is then used to predict the latter. The results have been validated using pressure insoles data collected from nine healthy individuals under two outdoor walking tasks in non-laboratory settings. The results show that the vGRFs can be reconstructed with high accuracy (with an average prediction error of less than 5.0%) using only one wearable sensor mounted at the waist (L5, fifth lumbar vertebra). Proxy measures with different sensor positions are also discussed. Results show that the waist acceleration-based proxy measurement is more stable with less inter-task and inter-subject variability than the proxy measures based on forehead level accelerations. The proposed proxy measure provides a promising low-cost method for monitoring ground reaction forces in real-life settings and introduces a novel generic approach for replacing the direct determination of difficult to measure variables in many applications.
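
    A hedged sketch of the proxy-measurement idea: waist acceleration (with lagged copies) is regressed onto insole-measured vGRF during a calibration segment, and the fitted model then predicts vGRF from acceleration alone. Ordinary least squares on lagged terms stands in here for the orthogonal forward regression (OFR) used in the paper, and all signals are synthetic.

```python
import numpy as np

# Identify a linear lagged model vGRF(t) ~ f(acc(t), acc(t-1), ...) on the first
# half of a synthetic walking record, then predict vGRF on the second half.
rng = np.random.default_rng(7)
fs, dur = 100, 20
t = np.arange(0, dur, 1.0 / fs)
m, g = 70.0, 9.81
acc = 2.0 * np.sin(2 * np.pi * 1.8 * t) + 0.3 * rng.normal(size=t.size)   # waist vertical accel (m/s^2)
vgrf = m * (g + acc) + 20.0 * rng.normal(size=t.size)                     # insole vGRF (N)

def lagged(x, n_lags):
    cols = [np.roll(x, k) for k in range(n_lags)]
    X = np.column_stack(cols)[n_lags:]                 # drop rows with wrapped-around samples
    return np.column_stack([np.ones(len(X)), X])

n_lags = 10
X, y = lagged(acc, n_lags), vgrf[n_lags:]
half = len(y) // 2
coef, *_ = np.linalg.lstsq(X[:half], y[:half], rcond=None)   # identify on the first half
y_pred = X[half:] @ coef                                      # predict on the second half

err = 100.0 * np.mean(np.abs(y_pred - y[half:])) / np.mean(y[half:])
print("mean absolute prediction error: %.1f%% of mean vGRF" % err)
```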

  18. A family-specific use of the Measure of Processes of Care for Service Providers (MPOC-SP).

    Science.gov (United States)

    Siebes, R C; Nijhuis, B J G; Boonstra, A M; Ketelaar, M; Wijnroks, L; Reinders-Messelink, H A; Postema, K; Vermeer, A

    2008-03-01

    To examine the validity and utility of the Dutch Measure of Processes of Care for Service Providers (MPOC-SP) as a family-specific measure. A validation study. Five paediatric rehabilitation settings in the Netherlands. The MPOC-SP was utilized in a general way (reflecting on services provided for all clients and clients' families) and in a family-specific way (filled out in reference to a particular child and his or her family). Professionals providing rehabilitation and educational services to children with cerebral palsy. For construct validity, Pearson's product-moment correlation coefficients (r) between the scales were calculated. The ability of service providers to discriminate between general and family-specific ratings was examined by exploration of absolute difference scores. One hundred and sixteen service professionals filled out 240 family-specific MPOC-SPs. In addition, a subgroup of 81 professionals filled out a general MPOC-SP. For each professional, family-specific and general scores were paired, resulting in 151 general-family-specific MPOC-SP pairs. The construct validity analyses confirmed the scale structure: 21 items (77.8%) loaded highest in the original MPOC-SP factors, and all items correlated best and significantly with their own scale score (r = 0.565 to 0.897; P < …). Service providers were able to discriminate between general and family-specific MPOC-SP item ratings. The family-specific MPOC-SP is a valid measure that can be used for individual evaluation of family-centred services and can be the impetus for family-related quality improvement.

  19. An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements

    Science.gov (United States)

    Kang, D.

    2015-12-01

    In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean with a large number of samples is used in place of the ensemble mean. However, in many situations the samples of data are taken from multiple levels. It is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows the MOST. An optimized estimate is obtained when the results yield a minimum cost function, defined as a weighted summation of the error variances at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat using a radiosonde on a tethered balloon, where temperature and relative humidity profiles in the lowest 50 m were made repeatedly in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantify the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
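
    A simplified sketch of fitting a similarity profile to multi-level measurements by minimising a weighted cost function. For brevity a neutral surface layer is assumed, u(z) = (u*/κ) ln(z/z0), so the stability corrections of full MOST are omitted; heights, weights, and "measurements" are invented.

```python
import numpy as np

# Weighted least-squares fit of the neutral log profile u(z) = (u*/kappa) ln(z/z0)
# to multi-level wind measurements; weights are inverse error variances.
kappa = 0.4
rng = np.random.default_rng(8)
z = np.array([2.0, 5.0, 10.0, 20.0, 40.0])               # measurement heights (m)
u_star_true, z0_true = 0.35, 1.0e-3
u_meas = (u_star_true / kappa) * np.log(z / z0_true) + rng.normal(0.0, 0.2, z.size)
w = 1.0 / np.array([0.3, 0.2, 0.2, 0.25, 0.4]) ** 2      # per-level weights (1 / error variance)

# the model is linear in (beta0, beta1): u = beta0 + beta1 * ln z,
# with beta1 = u*/kappa and beta0 = -(u*/kappa) * ln z0
X = np.vstack([np.ones_like(z), np.log(z)]).T
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ u_meas)    # minimises the weighted cost function
u_star = kappa * beta[1]
z0 = np.exp(-beta[0] / beta[1])
tau = 1.2 * u_star ** 2                                  # momentum flux with rho = 1.2 kg/m^3
print("u* = %.3f m/s, z0 = %.2e m, momentum flux = %.3f N/m^2" % (u_star, z0, tau))
```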

  20. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  1. Routine outcome measurement in mental health service consumers: who should provide support for the self-assessments?

    Science.gov (United States)

    Gelkopf, Marc; Pagorek-Eshel, Shira; Trauer, Tom; Roe, David

    2015-06-01

    This study examined whether mental health community service users completed outcome self-reports differently when assessments were supervised by internal vs. external staff. The examination of potential differences between the two has useful implications for mental health systems that take upon themselves the challenge of Routine Outcome Measurement (ROM), as it might impact allocation of public resources and managed care program planning. 73 consumers completed the Manchester Short Assessment of Quality of Life (MANSA), a shortened version of the Recovery Assessment Scale (RAS), and a functioning questionnaire. Questionnaires were administered twice, once with support provided by internal staff and once with support provided by external professional staff, with a one-month interval and in random order. A repeated-measures MANOVA showed no differences in quality of life and recovery outcomes between internal and external support. Functioning scores were higher for the internal support when the internal assessments were performed first. Overall, except for the differences in functioning assessment, outcome scores were not determined by the supporting agency. This might indicate that when measuring quality of life and recovery, different supporting methods can be used to gather outcome measures and internal staff might be a good default agency to do this. Differences found in functioning assessment are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Merging Psychophysical and Psychometric Theory to Estimate Global Visual State Measures from Forced-Choices

    International Nuclear Information System (INIS)

    Massof, Robert W; Schmidt, Karen M; Laby, Daniel M; Kirschen, David; Meadows, David

    2013-01-01

    Visual acuity, a forced-choice psychophysical measure of visual spatial resolution, is the sine qua non of clinical visual impairment testing in ophthalmology and optometry patients with visual system disorders ranging from refractive error to retinal, optic nerve, or central visual system pathology. Visual acuity measures are standardized against a norm, but it is well known that visual acuity depends on a variety of stimulus parameters, including contrast and exposure duration. This paper asks if it is possible to estimate a single global visual state measure from visual acuity measures as a function of stimulus parameters that can represent the patient's overall visual health state with a single variable. Psychophysical theory (at the sensory level) and psychometric theory (at the decision level) are merged to identify the conditions that must be satisfied to derive a global visual state measure from parameterised visual acuity measures. A global visual state measurement model is developed and tested with forced-choice visual acuity measures from 116 subjects with no visual impairments and 560 subjects with uncorrected refractive error. The results are in agreement with the expectations of the model

  3. Continuous estimates of dynamic cerebral autoregulation: influence of non-invasive arterial blood pressure measurements

    International Nuclear Information System (INIS)

    Panerai, R B; Smith, S M; Rathbone, W E; Samani, N J; Sammons, E L; Bentley, S; Potter, J F

    2008-01-01

    Temporal variability of parameters which describe dynamic cerebral autoregulation (CA), usually quantified by the short-term relationship between arterial blood pressure (BP) and cerebral blood flow velocity (CBFV), could result from continuous adjustments in physiological regulatory mechanisms or could be the result of artefacts in methods of measurement, such as the use of non-invasive measurements of BP in the finger. In 27 subjects (61 ± 11 years old) undergoing coronary artery angioplasty, BP was continuously recorded at rest with the Finapres device and in the ascending aorta (Millar catheter, BP_AO), together with bilateral transcranial Doppler ultrasound in the middle cerebral artery, surface ECG and transcutaneous CO2. Dynamic CA was expressed by the autoregulation index (ARI), ranging from 0 (absence of CA) to 9 (best CA). Time-varying, continuous estimates of ARI (ARI(t)) were obtained with an autoregressive moving-average (ARMA) model applied to a 60 s sliding data window. No significant differences were observed in the accuracy and precision of ARI(t) between estimates derived from the Finapres and BP_AO. Highly significant correlations were obtained between ARI(t) estimates from the right and left middle cerebral artery (MCA) (Finapres r = 0.60 ± 0.20; BP_AO r = 0.56 ± 0.22) and also between the ARI(t) estimates from the Finapres and BP_AO (right MCA r = 0.70 ± 0.22; left MCA r = 0.74 ± 0.22). Surrogate data showed that ARI(t) was highly sensitive to the presence of noise in the CBFV signal, with both the bias and dispersion of estimates increasing for lower values of ARI(t). This effect could explain the sudden drops of ARI(t) to zero as reported previously. Simulated sudden changes in ARI(t) can be detected by the Finapres, but the bias and variability of estimates also increase for lower values of ARI. In summary, the Finapres does not distort time-varying estimates of dynamic CA obtained with a sliding window combined with an ARMA model

  4. A Comparison of Two Measures of HIV Diversity in Multi-Assay Algorithms for HIV Incidence Estimation

    Science.gov (United States)

    Cousins, Matthew M.; Konikoff, Jacob; Sabin, Devin; Khaki, Leila; Longosz, Andrew F.; Laeyendecker, Oliver; Celum, Connie; Buchbinder, Susan P.; Seage, George R.; Kirk, Gregory D.; Moore, Richard D.; Mehta, Shruti H.; Margolick, Joseph B.; Brown, Joelle; Mayer, Kenneth H.; Kobin, Beryl A.; Wheeler, Darrell; Justman, Jessica E.; Hodder, Sally L.; Quinn, Thomas C.; Brookmeyer, Ron; Eshleman, Susan H.

    2014-01-01

    Background Multi-assay algorithms (MAAs) can be used to estimate HIV incidence in cross-sectional surveys. We compared the performance of two MAAs that use HIV diversity as one of four biomarkers for analysis of HIV incidence. Methods Both MAAs included two serologic assays (LAg-Avidity assay and BioRad-Avidity assay), HIV viral load, and an HIV diversity assay. HIV diversity was quantified using either a high resolution melting (HRM) diversity assay that does not require HIV sequencing (HRM score for a 239 base pair env region) or sequence ambiguity (the percentage of ambiguous bases in a 1,302 base pair pol region). Samples were classified as MAA positive (likely from individuals with recent HIV infection) if they met the criteria for all of the assays in the MAA. The following performance characteristics were assessed: (1) the proportion of samples classified as MAA positive as a function of duration of infection, (2) the mean window period, (3) the shadow (the time period before sample collection that is being assessed by the MAA), and (4) the accuracy of cross-sectional incidence estimates for three cohort studies. Results The proportion of samples classified as MAA positive as a function of duration of infection was nearly identical for the two MAAs. The mean window period was 141 days for the HRM-based MAA and 131 days for the sequence ambiguity-based MAA. The shadows for both MAAs were […]. Cross-sectional HIV incidence estimates were very similar to longitudinal incidence estimates based on HIV seroconversion. Conclusions MAAs that include the LAg-Avidity assay, the BioRad-Avidity assay, HIV viral load, and HIV diversity can provide accurate HIV incidence estimates. Sequence ambiguity measures obtained using a commercially-available HIV genotyping system can be used as an alternative to HRM scores in MAAs for cross-sectional HIV incidence estimation. PMID:24968135

  5. Estimation of daily global solar irradiation by coupling ground measurements of bright sunshine hours to satellite imagery

    International Nuclear Information System (INIS)

    Ener Rusen, Selmin; Hammer, Annette; Akinoglu, Bulent G.

    2013-01-01

    In this work, the current version of the satellite-based HELIOSAT method and ground-based linear Ångström–Prescott type relations are used in combination. The first approach is based on the use of a correlation between daily bright sunshine hours (s) and cloud index (n). In the second approach a new correlation, based on a physical parameterization, is proposed between daily solar irradiation and daily data of s and n. The performances of the two proposed combined models are tested against conventional methods. We also test the use of the obtained correlation coefficients for nearby locations. Our results show that the use of sunshine duration together with the cloud index is quite satisfactory in the estimation of daily horizontal global solar irradiation. We propose to use the new approaches to estimate daily global irradiation when bright sunshine hours data are available for the location of interest, provided that some regression coefficients are determined using the data of a nearby station. In addition, if surface data for a close location do not exist, then it is recommended to use satellite models like HELIOSAT or the new approaches instead of the Ångström-type models. - Highlights: • Satellite imagery together with surface measurements in solar radiation estimation. • The new coupled and conventional models (satellite and ground-based) are analyzed. • New models result in highly accurate estimation of daily global solar irradiation
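
    The ground-based branch of the approach builds on the classical Ångström–Prescott relation between the daily clearness index and relative sunshine duration. A minimal sketch is given below; the coefficients a and b are generic placeholders that would in practice be regressed from data of a nearby station, and the paper's coupling with the cloud index n is not reproduced.

        def angstrom_prescott(H0, s, S0, a=0.25, b=0.50):
            """Daily global irradiation H from extraterrestrial irradiation H0,
            bright sunshine hours s and day length S0: H/H0 = a + b * (s / S0).
            a and b are site-specific regression coefficients (generic placeholders here)."""
            return H0 * (a + b * s / S0)

        # Illustrative numbers only (MJ m-2 day-1 for H0, hours for s and S0)
        H = angstrom_prescott(H0=35.0, s=8.2, S0=12.5)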

  6. Estimation of Uncertainty in Aerosol Concentration Measured by Aerosol Sampling System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Chan; Song, Yong Jae; Jung, Woo Young; Lee, Hyun Chul; Kim, Gyu Tae; Lee, Doo Yong [FNC Technology Co., Yongin (Korea, Republic of)

    2016-10-15

    FNC Technology Co., Ltd has developed test facilities for aerosol generation, mixing, sampling and measurement under high pressure and high temperature conditions. The aerosol generation system is connected to the aerosol mixing system, which injects a SiO{sub 2}/ethanol mixture. In the sampling system, a glass fiber membrane filter has been used to measure the average mass concentration. Based on experimental results using a steam-air mixture as the main carrier gas, the uncertainty of the sampled aerosol concentration was estimated by applying the Gaussian error propagation law. FNC Technology Co., Ltd. has developed the experimental facilities for aerosol measurement under high pressure and high temperature. The purpose of the tests is to develop a commercial test module for aerosol generation, mixing and sampling systems applicable to the environmental industry and to safety-related systems in nuclear power plants. For the uncertainty calculation, the value of the sampled aerosol concentration is not measured directly but must be calculated from other quantities. The uncertainty of the sampled aerosol concentration is a function of the flow rates of air and steam, the sampled mass, the sampling time, the condensed steam mass and their absolute errors. The errors in these variables propagate through the combination of variables in the function. Using the operating parameters and their individual errors from the aerosol test cases performed at FNC, the uncertainty of the aerosol concentration evaluated by the Gaussian error propagation law is less than 1%. The results of the uncertainty estimation in the aerosol sampling system will be utilized as system performance data.
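
    A minimal sketch of Gaussian (first-order) error propagation for a sampled concentration of the form C = m / V, where the sampled gas volume is built from the air and steam flow rates, the sampling time and the condensed steam mass. The functional form, the assumed steam density and all numerical values are illustrative assumptions, not the facility's actual data-reduction equation.

        import numpy as np

        def concentration(m_sample, q_air, q_steam, t, m_cond):
            """Illustrative model: aerosol mass divided by sampled gas volume,
            with the condensed steam mass converted to an equivalent gas volume
            using an assumed effective steam density."""
            rho_steam = 0.6e3  # g per m3 at sampling conditions (assumed)
            volume = (q_air + q_steam) * t - m_cond / rho_steam
            return m_sample / volume

        def propagate(func, values, sigmas, rel_step=1e-6):
            """Gaussian error propagation with central-difference derivatives:
            u_C^2 = sum_i (dC/dx_i)^2 * u_i^2 for independent inputs."""
            values = np.asarray(values, dtype=float)
            var = 0.0
            for i, s in enumerate(sigmas):
                h = rel_step * max(abs(values[i]), 1.0)
                up, dn = values.copy(), values.copy()
                up[i] += h
                dn[i] -= h
                dfdx = (func(*up) - func(*dn)) / (2.0 * h)
                var += (dfdx * s) ** 2
            return np.sqrt(var)

        vals = (0.12, 0.010, 0.005, 600.0, 30.0)   # g, m3/s, m3/s, s, g (illustrative)
        sigs = (0.001, 1e-4, 5e-5, 1.0, 0.5)
        u_c = propagate(concentration, vals, sigs)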

  7. Estimation of potassium concentration in coconut water by beta radioactivity measurement

    International Nuclear Information System (INIS)

    Reddy, P.J.; Narayani, K.; Bhade, S.P.D.; Anilkumar, S.; Kolekar, R.V.; Singh, Rajvir; Pradeepkumar, K.S.

    2014-01-01

    Potassium is widely distributed in soil, in all vegetables, fruits and animal tissues. Approximately half the radioactivity found in humans comes from 40K. Potassium is an essential element in our diet since it is required for proper nerve and muscle function, as well as for maintaining the fluid balance of cells and heart rhythm. Potassium enters the body mainly through the consumption of fruits, vegetables and other foods. Tender coconut water is widely consumed as a natural refreshing drink that is rich in potassium. The simplest way to determine 40K activity is gamma-ray spectrometry. However, the low abundance of its gamma emission makes the technique less sensitive than gross beta measurement. Many analytical methods have been reported for potassium estimation, but they are time-consuming and destructive in nature. A unique way to estimate 40K from its beta activity is the Cerenkov counting technique using a liquid scintillation analyzer. A much lower detection limit is also achieved, allowing for greater precision. In this work, we have compared two methods to arrive at the potassium concentration in tender and matured coconut water by measuring 40K. One is a non-scintillator method based on measurement of the Cerenkov radiation generated by the high-energy β particles of 40K. The second method is based on beta activity measurement using a low-background gas-flow counter.
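
    As a rough sketch of how a net beta (Cerenkov) count rate can be converted to a potassium concentration, one can use the specific activity of 40K in natural potassium (approximately 31 Bq per gram of potassium). The counting efficiency, background and sample volume below are illustrative placeholders that would come from calibration with a potassium standard; this is not the exact data-reduction procedure of the study.

        SPECIFIC_ACTIVITY_K40 = 31.0   # Bq of 40K per gram of natural potassium (approximate)

        def potassium_concentration(gross_cpm, bkg_cpm, efficiency, volume_ml):
            """Potassium concentration (g/L) from a Cerenkov/beta measurement.

            efficiency: calibrated counting efficiency for 40K betas (placeholder value);
            volume_ml: counted sample volume.
            """
            net_cps = (gross_cpm - bkg_cpm) / 60.0
            activity_bq = net_cps / efficiency                 # 40K activity in the aliquot
            grams_k = activity_bq / SPECIFIC_ACTIVITY_K40      # grams of natural K in the aliquot
            return grams_k / (volume_ml / 1000.0)              # g of K per litre

        # Illustrative numbers only
        conc = potassium_concentration(gross_cpm=51.0, bkg_cpm=30.0, efficiency=0.45, volume_ml=10.0)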

  8. Providers' perceptions of the implementation of a performance measurement system for substance abuse treatment: A process evaluation of the Service Quality Measures initiative.

    Science.gov (United States)

    Myers, Bronwyn; Williams, Petal Petersen; Johnson, Kim; Govender, Rajen; Manderscheid, Ron; Koch, J Randy

    2016-02-22

    In South Africa, concerns exist about the quality of substance abuse treatment. We developed a performance measurement system, known as the Service Quality Measures (SQM) initiative, to monitor the quality of treatment and assess efforts to improve quality of care. In 2014, the SQM system was implemented at six treatment sites to evaluate how implementation protocols could be improved in preparation for wider roll-out. This study aimed to describe providers' perceptions of the feasibility and acceptability of implementing the SQM system, including barriers to and facilitators of implementation. We conducted 15 in-depth interviews (IDIs) with treatment providers from six treatment sites (two sites in KwaZulu-Natal and four in the Western Cape). Providers were asked about their experiences in implementing the system, the perceived feasibility of the system, and barriers to implementation. All IDIs were audio-recorded and transcribed verbatim. A framework approach was used to analyse the data. Providers reported that the SQM system was feasible to implement and acceptable to patients and providers. Issues identified through the IDIs included a perceived lack of clarity about sequencing of key elements in the implementation of the SQM system, questions on integration of the system into clinical care pathways, difficulties in tracking patients through the system, and concerns about maximising patient participation in the process. Findings suggest that the SQM system is feasible to implement and acceptable to providers, but that some refinements to the implementation protocols are needed to maximise patient participation and the likelihood of sustained implementation.

  9. Assessment and Calibration of Ultrasonic Measurement Errors in Estimating Weathering Index of Stone Cultural Heritage

    Science.gov (United States)

    Lee, Y.; Keehm, Y.

    2011-12-01

    Estimating the degree of weathering in stone cultural heritage, such as pagodas and statues, is very important to plan conservation and restoration. Ultrasonic measurement is one of the commonly used techniques to evaluate the weathering index of stone cultural properties, since it is easy to use and non-destructive. Typically we use a portable ultrasonic device, PUNDIT, with exponential sensors. However, many factors can cause errors in the measurements, such as operators, sensor layouts or measurement directions. In this study, we carried out a variety of measurements with different operators (male and female), different sensor layouts (direct and indirect), and sensor directions (anisotropy). For operator bias, we found no significant differences by the operator's sex, while the pressure an operator exerts can create larger errors in the measurements. Calibrating with a standard sample for each operator is essential in this case. For the sensor layout, we found that the indirect measurement (commonly used for cultural properties, since the direct measurement is difficult in most cases) gives a lower velocity than the real one. We found that the correction coefficient is slightly different for different types of rocks: 1.50 for granite and sandstone and 1.46 for marble. From the sensor directions, we found that many rocks have slight anisotropy in their ultrasonic velocity measurement, though they are considered isotropic on the macroscopic scale. Thus averaging four directional measurements (0°, 45°, 90°, 135°) gives much smaller errors (the variance is 2-3 times smaller). In conclusion, we quantitatively reported the errors in ultrasonic measurement of stone cultural properties from various sources and suggested the amount of correction and procedures to calibrate the measurements. Acknowledgement: This study, which forms a part of the project, has been achieved with the support of a national R&D project.
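
    The correction and averaging procedure suggested above can be sketched directly from the reported numbers (correction factors of 1.50 for granite and sandstone, 1.46 for marble; averaging over the four measurement directions). The function name and example velocities are illustrative.

        CORRECTION = {"granite": 1.50, "sandstone": 1.50, "marble": 1.46}

        def corrected_velocity(indirect_velocities, rock_type):
            """Average indirect ultrasonic velocities measured at 0, 45, 90 and 135 degrees
            and scale by the rock-type correction factor reported in the study."""
            if len(indirect_velocities) != 4:
                raise ValueError("expected four directional measurements")
            mean_v = sum(indirect_velocities) / 4.0
            return CORRECTION[rock_type] * mean_v

        v = corrected_velocity([2150.0, 2230.0, 2080.0, 2190.0], "granite")  # m/s, illustrative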

  10. Estimating temperature reactivity coefficients by experimental procedures combined with isothermal temperature coefficient measurements and dynamic identification

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Aoki, Yukinori; Shimazu, Yoichiro; Yamasaki, Masatoshi; Hanayama, Yasushi

    2006-01-01

    A method to evaluate the moderator temperature coefficient (MTC) and the Doppler coefficient through experimental procedures performed during reactor physics tests of PWR power plants is proposed. This method combines isothermal temperature coefficient (ITC) measurement experiments and reactor power transient experiments at low power conditions for dynamic identification. In the dynamic identification, either one of the temperature coefficients can be determined in such a way that the frequency response characteristics of the reactivity change observed by a digital reactivity meter are reproduced from measured data of the neutron count rate and the average coolant temperature. The other unknown coefficient can also be determined by subtracting the coefficient obtained from the dynamic identification from the ITC. As the proposed method can directly estimate the Doppler coefficient, the applicability of the conventional core design codes to predict the Doppler coefficient can be verified for new types of fuels such as mixed oxide fuels. A digital simulation study was carried out to show the feasibility of the proposed method. The numerical analysis showed that the MTC and the Doppler coefficient can be estimated accurately and that, even if there are uncertainties in the parameters of the reactor kinetics model, the accuracy of the estimated values is not seriously impaired. (author)
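
    The final subtraction step described above is simple arithmetic, since the isothermal temperature coefficient is the sum of the moderator and fuel (Doppler) temperature coefficients. A minimal sketch with illustrative values in pcm/°C follows.

        def remaining_coefficient(itc, identified):
            """Given the isothermal temperature coefficient (ITC) and whichever coefficient
            (MTC or Doppler) was obtained from dynamic identification, return the other,
            using ITC = MTC + Doppler coefficient. Units: pcm per degree C."""
            return itc - identified

        # Illustrative numbers only: ITC of -15 pcm/C and an identified MTC of -12 pcm/C
        doppler = remaining_coefficient(itc=-15.0, identified=-12.0)   # -> -3.0 pcm/C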

  11. Measurement of the incorporation rates of four amino acids into proteins for estimating bacterial production.

    Science.gov (United States)

    Servais, P

    1995-03-01

    In aquatic ecosystems, [(3)H]thymidine incorporation into bacterial DNA and [(3)H]leucine incorporation into proteins are usually used to estimate bacterial production. The incorporation rates of four amino acids (leucine, tyrosine, lysine, alanine) into proteins of bacteria were measured in parallel on natural freshwater samples from the basin of the river Meuse (Belgium). Comparison of the incorporation into proteins and into the total macromolecular fraction showed that these different amino acids were incorporated at more than 90% into proteins. From incorporation measurements at four subsaturated concentrations (range, 2-77 nm), the maximum incorporation rates were determined. Strong correlations (r > 0.91 for all the calculated correlations) were found between the maximum incorporation rates of the different tested amino acids over a range of two orders of magnitude of bacterial activity. Bacterial production estimates were calculated using theoretical and experimental conversion factors. The productions calculated from the incorporation rates of the four amino acids were in good concordance, especially when the experimental conversion factors were used (slope range, 0.91-1.11, and r > 0.91). This study suggests that the incorporation of various amino acids into proteins can be used to estimate bacterial production.

  12. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  13. Development of a low-maintenance measurement approach to continuously estimate methane emissions: A case study.

    Science.gov (United States)

    Riddick, S N; Hancock, B R; Robinson, A D; Connors, S; Davies, S; Allen, G; Pitt, J; Harris, N R P

    2018-03-01

    The chemical breakdown of organic matter in landfills represents a significant source of methane gas (CH4). Current estimates suggest that landfills are responsible for between 3% and 19% of global anthropogenic emissions. The net CH4 emissions resulting from biogeochemical processes and their modulation by microbes in landfills are poorly constrained by imprecise knowledge of environmental constraints. The uncertainty in absolute CH4 emissions from landfills is therefore considerable. This study investigates a new method to estimate the temporal variability of CH4 emissions using meteorological and CH4 concentration measurements downwind of a landfill site in Suffolk, UK from July to September 2014, taking advantage of the statistics that such a measurement approach offers versus shorter-term, but more complex and instantaneously accurate, flux snapshots. Methane emissions were calculated from CH4 concentrations measured 700 m from the perimeter of the landfill, with observed concentrations ranging from background to 46.4 ppm. Using an atmospheric dispersion model, we estimate a mean emission flux of 709 μg m-2 s-1 over this period, with a maximum value of 6.21 mg m-2 s-1, reflecting the wide natural variability in biogeochemical and other environmental controls on net site emission. The emissions calculated suggest that meteorological conditions have an influence on the magnitude of CH4 emissions. We also investigate the factors responsible for the large variability observed in the estimated CH4 emissions, and suggest that the largest component arises from uncertainty in the spatial distribution of CH4 emissions within the landfill area. The results determined using the low-maintenance approach discussed in this paper suggest that a network of cheaper, less precise CH4 sensors could be used to measure a continuous CH4 emission time series from a landfill site, something that is not practical using far-field approaches such as tracer release methods.
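
    The inversion of a downwind concentration into an emission rate can be sketched with a ground-level Gaussian plume model. This is a generic stand-in, not necessarily the dispersion model used in the study, and the wind speed, plume-spread parameters and concentration below are illustrative assumptions (in practice the sigmas would come from a stability-class parameterization at the receptor distance).

        import math

        def emission_rate(conc_excess_g_m3, wind_speed, sigma_y, sigma_z, offset_y=0.0):
            """Invert a ground-level Gaussian plume for the source strength Q (g/s):
            C(x, y, 0) = Q / (pi * u * sigma_y * sigma_z) * exp(-y^2 / (2 * sigma_y^2)).

            conc_excess_g_m3: concentration above background at the receptor.
            sigma_y, sigma_z: plume spread parameters at the receptor distance (placeholders).
            """
            gaussian = math.exp(-offset_y ** 2 / (2.0 * sigma_y ** 2))
            return conc_excess_g_m3 * math.pi * wind_speed * sigma_y * sigma_z / gaussian

        # Illustrative: 2 ppm CH4 above background is roughly 1.3e-3 g/m3 at 20 C, 1 atm
        q = emission_rate(conc_excess_g_m3=1.3e-3, wind_speed=3.0, sigma_y=50.0, sigma_z=25.0)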

  14. Measurement-based perturbation theory and differential equation parameter estimation with applications to satellite gravimetry

    Science.gov (United States)

    Xu, Peiliang

    2018-06-01

    The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and German Research Center for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. Such Earth's gravitational products have found widest possible multidisciplinary applications in Earth Sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the conditions of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in the Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are incorrect mathematically and not permitted physically. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is groundless, mathematically and physically. Given the Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose the measurement-based perturbation theory and derive global uniformly convergent solutions to the Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models. Since the solutions are global uniformly convergent, theoretically speaking

  15. Estimation of the volatility distribution of organic aerosol combining thermodenuder and isothermal dilution measurements

    Science.gov (United States)

    Louvaris, Evangelos E.; Karnezi, Eleni; Kostenidou, Evangelia; Kaltsonoudis, Christos; Pandis, Spyros N.

    2017-10-01

    A method is developed following the work of Grieshop et al. (2009) for the determination of the organic aerosol (OA) volatility distribution combining thermodenuder (TD) and isothermal dilution measurements. The approach was tested in experiments that were conducted in a smog chamber using organic aerosol (OA) produced during meat charbroiling. A TD was operated at temperatures ranging from 25 to 250 °C with a 14 s centerline residence time coupled to a high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) and a scanning mobility particle sizer (SMPS). In parallel, a dilution chamber filled with clean air was used to dilute isothermally the aerosol of the larger chamber by approximately a factor of 10. The OA mass fraction remaining was measured as a function of temperature in the TD and as a function of time in the isothermal dilution chamber. These two sets of measurements were used together to estimate the volatility distribution of the OA and its effective vaporization enthalpy and accommodation coefficient. In the isothermal dilution experiments approximately 20 % of the OA evaporated within 15 min. Almost all the OA evaporated in the TD at approximately 200 °C. The resulting volatility distributions suggested that around 60-75 % of the cooking OA (COA) at concentrations around 500 µg m-3 consisted of low-volatility organic compounds (LVOCs), 20-30 % of semivolatile organic compounds (SVOCs), and around 10 % of intermediate-volatility organic compounds (IVOCs). The estimated effective vaporization enthalpy of COA was 100 ± 20 kJ mol-1 and the effective accommodation coefficient was 0.06-0.07. Addition of the dilution measurements to the TD data results in a lower uncertainty of the estimated vaporization enthalpy as well as the SVOC content of the OA.

  16. Estimation of the volatility distribution of organic aerosol combining thermodenuder and isothermal dilution measurements

    Directory of Open Access Journals (Sweden)

    E. E. Louvaris

    2017-10-01

    A method is developed following the work of Grieshop et al. (2009) for the determination of the organic aerosol (OA) volatility distribution combining thermodenuder (TD) and isothermal dilution measurements. The approach was tested in experiments that were conducted in a smog chamber using organic aerosol (OA) produced during meat charbroiling. A TD was operated at temperatures ranging from 25 to 250 °C with a 14 s centerline residence time coupled to a high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) and a scanning mobility particle sizer (SMPS). In parallel, a dilution chamber filled with clean air was used to dilute isothermally the aerosol of the larger chamber by approximately a factor of 10. The OA mass fraction remaining was measured as a function of temperature in the TD and as a function of time in the isothermal dilution chamber. These two sets of measurements were used together to estimate the volatility distribution of the OA and its effective vaporization enthalpy and accommodation coefficient. In the isothermal dilution experiments approximately 20 % of the OA evaporated within 15 min. Almost all the OA evaporated in the TD at approximately 200 °C. The resulting volatility distributions suggested that around 60–75 % of the cooking OA (COA) at concentrations around 500 µg m−3 consisted of low-volatility organic compounds (LVOCs), 20–30 % of semivolatile organic compounds (SVOCs), and around 10 % of intermediate-volatility organic compounds (IVOCs). The estimated effective vaporization enthalpy of COA was 100 ± 20 kJ mol−1 and the effective accommodation coefficient was 0.06–0.07. Addition of the dilution measurements to the TD data results in a lower uncertainty of the estimated vaporization enthalpy as well as the SVOC content of the OA.

  17. Associations between self-estimated and measured physical fitness among 40-year-old men and women.

    Science.gov (United States)

    Mikkelsson, L; Kaprio, J; Kautiainen, H; Kujala, U M; Nupponen, H

    2005-10-01

    The aim was to evaluate whether 40-year-old men and women are able to estimate their level of fitness compared with actual measured physical fitness. Twenty-nine men and 35 women first completed a questionnaire at home and then their physical fitness was measured in a laboratory. The index of self-estimated physical fitness was calculated by summing up the scores of self-estimated endurance, strength, speed and flexibility. The index of self-estimated endurance was calculated by summing up the scores of self-estimated endurance and those of the self-estimated distance they could run, cycle, ski and walk. The index of measured physical fitness was calculated by summing up the z-scores of a submaximal bicycle ergometer test, ergojump tests (counter-movement jump and jumping in 15 s), a 30-s sit-up test, hand-grip tests and a sit-and-reach test. The correlation (Spearman) between the indices of self-estimated and measured physical fitness was 0.54 for both sexes, and that between self-estimated endurance and measured endurance was 0.53 for both sexes. Maximal oxygen uptake, estimated from the submaximal ergometer test, was higher among those with a longer self-estimated distance of running, cycling, skiing and walking (significant linear trend for the distance subjects estimated they could run, cycle, ski or walk). However, in some individuals self-estimation of fitness is not in agreement with the results of fitness tests.

  18. A computationally inexpensive model for estimating dimensional measurement uncertainty due to x-ray computed tomography instrument misalignments

    Science.gov (United States)

    Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim

    2018-06-01

    The recent emergence of advanced manufacturing techniques such as additive manufacturing and an increased demand on the integrity of components have motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, there is a need for metrological research to accelerate the acceptance of CT as a measuring instrument. The accuracy in CT-based measurements is vulnerable to the instrument geometrical configuration during data acquisition, namely the relative position and orientation of x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of geometrical parameters. Quantification and propagation of uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.

  19. The variance of the locally measured Hubble parameter explained with different estimators

    DEFF Research Database (Denmark)

    Odderskov, Io Sandberg Hess; Hannestad, Steen; Brandbyge, Jacob

    2017-01-01

    We study the expected variance of measurements of the Hubble constant, H0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H0 from CMB measurements and the value measured in the local universe, these considerations are important in light of this discrepancy.
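
    Two common estimators of the local expansion rate that differ in how they weight sources are sketched below; whether these match the exact definitions contrasted in the paper is an assumption, but the least-squares form illustrates the point that fitting v = H0 r gives distant sources larger weight (each source effectively counts with weight proportional to r squared).

        import numpy as np

        def hubble_ratio_average(v, r):
            """Average of the individual velocity-to-distance ratios."""
            v, r = np.asarray(v, float), np.asarray(r, float)
            return np.mean(v / r)

        def hubble_least_squares(v, r):
            """Least-squares fit of v = H0 * r, which weights distant sources more heavily:
            H0_hat = sum(v_i * r_i) / sum(r_i ** 2)."""
            v, r = np.asarray(v, float), np.asarray(r, float)
            return np.sum(v * r) / np.sum(r ** 2)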

  20. Estimating intrinsic and extrinsic noise from single-cell gene expression measurements

    Science.gov (United States)

    Fu, Audrey Qiuyan; Pachter, Lior

    2017-01-01

    Gene expression is stochastic and displays variation (“noise”) both within and between cells. Intracellular (intrinsic) variance can be distinguished from extracellular (extrinsic) variance by applying the law of total variance to data from two-reporter assays that probe expression of identically regulated gene pairs in single cells. We examine established formulas [Elowitz, M. B., A. J. Levine, E. D. Siggia and P. S. Swain (2002): “Stochastic gene expression in a single cell,” Science, 297, 1183–1186.] for the estimation of intrinsic and extrinsic noise and provide interpretations of them in terms of a hierarchical model. This allows us to derive alternative estimators that minimize bias or mean squared error. We provide a geometric interpretation of these results that clarifies the interpretation in [Elowitz et al. (2002)]. We also demonstrate through simulation and re-analysis of published data that the distribution assumptions underlying the hierarchical model have to be satisfied for the estimators to produce sensible results, which highlights the importance of normalization. PMID:27875323
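
    The established two-reporter estimators examined in the paper (Elowitz et al., 2002) can be written down compactly. The sketch below implements those standard formulas on paired reporter measurements, not the alternative bias- or MSE-minimizing estimators derived by the authors, and the synthetic data are purely illustrative.

        import numpy as np

        def noise_decomposition(c, y):
            """Intrinsic/extrinsic noise estimators of Elowitz et al. (2002) from paired
            two-reporter measurements c and y (one pair per cell):
            eta_int^2 = <(c - y)^2> / (2 <c><y>),
            eta_ext^2 = (<c y> - <c><y>) / (<c><y>),
            eta_tot^2 = eta_int^2 + eta_ext^2."""
            c, y = np.asarray(c, float), np.asarray(y, float)
            mc, my = c.mean(), y.mean()
            eta_int2 = np.mean((c - y) ** 2) / (2.0 * mc * my)
            eta_ext2 = (np.mean(c * y) - mc * my) / (mc * my)
            return eta_int2, eta_ext2, eta_int2 + eta_ext2

        # Illustrative synthetic data with shared (extrinsic) and independent (intrinsic) variation
        rng = np.random.default_rng(0)
        ext = rng.lognormal(0.0, 0.3, size=1000)
        c = ext * rng.lognormal(0.0, 0.2, size=1000)
        y = ext * rng.lognormal(0.0, 0.2, size=1000)
        print(noise_decomposition(c, y))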

  1. Estimation of Body Weight from Body Size Measurements and Body Condition Scores in Dairy Cows

    DEFF Research Database (Denmark)

    Enevoldsen, Carsten; Kristensen, T.

    1997-01-01

    The objective of this study was to evaluate the use of hip height and width, body condition score, and relevant demographic information to predict body weight (BW) of dairy cows. Seven regression models were developed from data from 972 observations of 554 cows. Parity, hip height, hip width, and body condition score were consistently associated with BW. The coefficients of multiple determination varied from 80 to 89%. The number of significant terms and the parameter estimates of the models differed markedly among groups of cows. Apparently, these differences were due to breed and feeding regimen. Results from this study indicate that a reliable model for estimating BW of very different dairy cows maintained in a wide range of environments can be developed using body condition score, demographic information, and measurements of hip height and hip width. However, for management purposes...

  2. Comparison of Two Methods for Estimation of Work Limitation Scores from Health Status Measures

    DEFF Research Database (Denmark)

    Anatchkova, M; Fang, H; Kini, N

    2015-01-01

    Objectives To compare two methods for estimation of Work Limitations Questionnaire scores (WLQ, 8 items) from the Role Physical (RP, 4 items) and Role Emotional scales (RE, 3 items) of the SF-36 Health survey. These measures assess limitations in role performance attributed to health (emotional or physical); estimating WLQ scores from existing SF-36 data could inform future data collection strategies. Methods We used data from two independent cross-sectional panel samples (Sample1, n=1382, 51% female, 72% Caucasian, 49% with preselected chronic conditions, 15% with fair/poor health; Sample2, n=301, 45% female, 90% Caucasian, 47% with preselected chronic conditions, 21% with fair/poor health). Method 1 used previously developed and validated IRT-based calibration tables. Method 2 used regression models to develop aggregate imputation weights as described in the literature. We evaluated the agreement of observed and estimated WLQ scale scores from the two methods.

  3. Measurement and valuation of health providers' time for the management of childhood pneumonia in rural Malawi: an empirical study.

    Science.gov (United States)

    Bozzani, Fiammetta Maria; Arnold, Matthias; Colbourn, Timothy; Lufesi, Norman; Nambiar, Bejoy; Masache, Gibson; Skordis-Worrall, Jolene

    2016-07-28

    Human resources are a major cost driver in childhood pneumonia case management. Introduction of 13-valent pneumococcal conjugate vaccine (PCV-13) in Malawi can lead to savings on staff time and salaries due to reductions in pneumonia cases requiring admission. Reliable estimates of human resource costs are vital for use in economic evaluations of PCV-13 introduction. Twenty-eight severe and twenty-four very severe pneumonia inpatients under the age of five were tracked from admission to discharge by paediatric ward staff using self-administered timesheets at Mchinji District Hospital between June and August 2012. All activities performed and the time spent on each activity were recorded. A monetary value was assigned to the time by allocating a corresponding percentage of the health workers' salary. All costs are reported in 2012 US$. A total of 1,017 entries, grouped according to 22 different activity labels, were recorded during the observation period. On average, 99 min (standard deviation, SD = 46) were spent on each admission: 93 (SD = 38) for severe and 106 (SD = 55) for very severe cases. Approximately 40 % of activities involved monitoring and stabilization, including administering non-drug therapies such as oxygen. A further 35 % of the time was spent on injecting antibiotics. Nurses provided 60 % of the total time spent on pneumonia admissions, clinicians 25 % and support staff 15 %. Human resource costs were approximately US$ 2 per bed-day and, on average, US$ 29.5 per severe pneumonia admission and US$ 37.7 per very severe admission. Self-reporting was successfully used in this context to generate reliable estimates of human resource time and costs of childhood pneumonia treatment. Assuming vaccine efficacy of 41 % and 90 % coverage, PCV-13 introduction in Malawi can save over US$ 2 million per year in staff costs alone.

  4. Estimation of hydraulic conductivities of Yucca Mountain tuffs from sorptivity and water retention measurements

    International Nuclear Information System (INIS)

    Zimmerman, R.W.; Bodvarsson, G.S.

    1995-06-01

    The hydraulic conductivity functions of the matrix rocks at Yucca Mountain, Nevada, are among the most important data needed as input for the site-scale hydrological model of the unsaturated zone. The difficult and time-consuming nature of hydraulic conductivity measurements renders it infeasible to directly measure this property on large numbers of cores. Water retention and sorptivity measurements, however, can be made relatively rapidly. The sorptivity is, in principle, a unique functional of the conductivity and water retention functions. It therefore should be possible to invert sorptivity and water retention measurements in order to estimate the conductivity; the porosity is the only other parameter that is required for this inversion. In this report two methods of carrying out this inversion are presented, and are tested against a limited data set that has been collected by Flint et al. at the USGS on a set of Yucca Mountain tuffs. The absolute permeability is usually predicted by both methods to within an average error of about 0.5 - 1.0 orders of magnitude. The discrepancy appears to be due to the fact that the water retention curves have only been measured during drainage, whereas the imbibition water retention curve is the one that is relevant to sorptivity measurements. Although the inversion methods also yield predictions of the relative permeability function, there are yet no unsaturated hydraulic conductivity data against which to test these predictions

  5. Negative control exposure studies in the presence of measurement error: implications for attempted effect estimate calibration.

    Science.gov (United States)

    Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George

    2018-04-01

    Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present.
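
    A minimal Monte Carlo sketch of the setting described above, assuming a continuous exposure, negative control and outcome that share a normally distributed confounder, with classical measurement error added to the observed exposure and negative control. All coefficients, noise levels and variable names are illustrative assumptions, not the simulation design used in the paper; the output shows how measurement error attenuates both the naive exposure association and the negative-control association.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate(n=100_000, beta_xy=0.5, conf=0.8, me_x=0.5, me_z=0.5):
            """Continuous exposure X, negative control Z and outcome Y sharing a
            confounder U; classical measurement error is added to X and Z."""
            u = rng.normal(size=n)
            x = conf * u + rng.normal(size=n)
            z = conf * u + rng.normal(size=n)          # Z has no causal effect on Y
            y = beta_xy * x + u + rng.normal(size=n)
            x_obs = x + me_x * rng.normal(size=n)      # error-prone observed exposure
            z_obs = z + me_z * rng.normal(size=n)      # error-prone observed negative control
            slope = lambda a, b: np.cov(a, b)[0, 1] / np.var(a)
            return slope(x_obs, y), slope(z_obs, y)    # naive exposure and negative-control slopes

        print(simulate())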

  6. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    International Nuclear Information System (INIS)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-01-01

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  7. A new formula for estimation of standard liver volume using computed tomography-measured body thickness.

    Science.gov (United States)

    Ma, Ka Wing; Chok, Kenneth S H; Chan, Albert C Y; Tam, Henry S C; Dai, Wing Chiu; Cheung, Tan To; Fung, James Y Y; Lo, Chung Mau

    2017-09-01

    The objective of this article is to derive a more accurate and easy-to-use formula for finding estimated standard liver volume (ESLV) using novel computed tomography (CT) measurement parameters. New formulas for ESLV have been emerging that aim to improve the accuracy of estimation. However, many of these formulas contain body surface area measurements and logarithms in the equations that lead to a more complicated calculation. In addition, substantial errors in ESLV using these old formulas have been shown. An improved version of the formula for ESLV is needed. This is a retrospective cohort of consecutive living donor liver transplantations from 2005 to 2016. Donors were randomly assigned to either the formula derivation or validation groups. Total liver volume (TLV) measured by CT was used as the reference for a linear regression analysis against various patient factors. The derived formula was compared with the existing formulas. There were 722 patients (197 from the derivation group, 164 from the validation group, and 361 from the recipient group) involved in the study. The donor's body weight (odds ratio [OR], 10.42; 95% confidence interval [CI], 7.25-13.60) was identified as a significant predictor of TLV. Liver Transplantation 23:1113-1122, 2017. © 2017 by the American Association for the Study of Liver Diseases.

  8. Estimation of lean and fat composition of pork ham using image processing measurements

    Science.gov (United States)

    Jia, Jiancheng; Schinckel, Allan P.; Forrest, John C.

    1995-01-01

    This paper presents a method of estimating the lean and fat composition of pork ham from cross-sectional area measurements using image processing technology. The relationship of the ham lean and fat mass with the ham lean and fat areas was studied, and prediction equations for pork ham composition based on the ham cross-sectional area measurements were developed. The results show that ham lean weight was related to the ham lean area (r = .75), ham lean weight was highly related to the product of ham total weight times percentage ham lean area (r = .96), and ham fat weight was related to the product of ham total weight times percentage ham fat area (r = .88). The best combination of independent variables for estimating ham lean weight was trimmed wholesale ham weight and percentage ham fat area, with a coefficient of determination of 92%. The best combination of independent variables for estimating ham fat weight was trimmed wholesale ham weight and percentage ham fat area, with a coefficient of determination of 78%. Prediction equations with either two or three independent variables did not significantly increase the accuracy of prediction. The results of this study indicate that the weight of ham lean and fat could be predicted from ham cross-sectional area measurements using image analysis in combination with wholesale ham weight.

  9. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    Science.gov (United States)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-07-01

    An extension of the point kinetics model is developed to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. The spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  10. A Mixed WLS Power System State Estimation Method Integrating a Wide-Area Measurement System and SCADA Technology

    Directory of Open Access Journals (Sweden)

    Tao Jin

    2018-02-01

    To address the issue that the phasor measurement units (PMUs) of a wide area measurement system (WAMS) are not sufficient for static state estimation in most existing power systems, this paper proposes a mixed power system weighted least squares (WLS) state estimation method integrating a wide-area measurement system and supervisory control and data acquisition (SCADA) technology. The hybrid calculation model is established by incorporating phasor measurements (including the node voltage phasors and branch current phasors) and the results of the traditional state estimator in a post-processing estimator. The performance assessment is discussed through setting up mathematical models of the distribution network. Based on PMU placement optimization and bias analysis, the proposed method was shown to be accurate and reliable in simulations of different cases. Furthermore, the simulation calculations show that this method greatly improves the accuracy and stability of the state estimation solution, compared with the traditional WLS state estimation.
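
    A minimal numpy sketch of the weighted least-squares estimate that underlies such estimators, shown for a linear measurement model rather than the full hybrid SCADA/PMU formulation of the paper. The measurement matrix, measurement values and standard deviations are illustrative.

        import numpy as np

        def wls_estimate(H, z, sigma):
            """Weighted least-squares state estimate x_hat = (H^T W H)^-1 H^T W z,
            with W = diag(1 / sigma^2)."""
            W = np.diag(1.0 / np.asarray(sigma) ** 2)
            G = H.T @ W @ H                      # gain matrix
            return np.linalg.solve(G, H.T @ W @ z)

        # Illustrative 3-measurement, 2-state example
        H = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [1.0, -1.0]])
        z = np.array([1.02, 0.97, 0.06])
        sigma = np.array([0.01, 0.01, 0.02])
        x_hat = wls_estimate(H, z, sigma)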

  11. Estimating the dynamics of groundwater input into the coastal zone via continuous radon-222 measurements

    International Nuclear Information System (INIS)

    Burnett, William C.; Dulaiova, Henrieta

    2003-01-01

    Submarine groundwater discharge (SGD) into the coastal zone has received increased attention in the last few years as it is now recognized that this process represents an important pathway for material transport. Assessing these material fluxes is difficult, as there is no simple means to gauge the water flux. To meet this challenge, we have explored the use of a continuous radon monitor to measure radon concentrations in coastal zone waters over time periods from hours to days. Changes in the radon inventories over time can be converted to fluxes after one makes allowances for tidal effects, losses to the atmosphere, and mixing with offshore waters. If one assumes that advective flow of radon-enriched groundwater (pore waters) represents the main input of 222Rn to the coastal zone, the calculated radon fluxes may be converted to water fluxes by dividing by the estimated or measured 222Rn pore water activity. We have also used short-lived radium isotopes (223Ra and 224Ra) to assess mixing between near-shore and offshore waters, in the manner pioneered in earlier radium studies. During an experiment in the coastal Gulf of Mexico, we showed that the mixing loss derived from the 223Ra gradient agreed very favorably with the estimated range based on the calculated radon fluxes. This allowed an independent constraint on the mixing loss of radon--an important parameter in the mass balance approach. Groundwater discharge was also estimated independently by the radium isotopic approach and was within a factor of two of that determined by the continuous radon measurements and an automated seepage meter deployed at the same site.
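
    A simplified sketch of the mass-balance step: the change in the water-column 222Rn inventory over a time step, plus atmospheric evasion and mixing losses, gives the total radon flux, which is divided by a representative pore-water radon activity to yield a water (SGD) flux. All numerical inputs are illustrative, tidal corrections are not shown, and radioactive decay is neglected over the short time step, which are assumptions of this sketch rather than the paper's full procedure.

        def sgd_flux(inv_t0, inv_t1, dt_s, atm_loss, mixing_loss, porewater_rn):
            """Estimate submarine groundwater discharge from a 222Rn mass balance.

            inv_t0, inv_t1 : radon inventories (Bq/m2) at the start and end of the step
            atm_loss, mixing_loss : losses over the step expressed as fluxes (Bq/m2/s)
            porewater_rn : representative pore-water 222Rn activity (Bq/m3)
            Returns the water flux in m3 per m2 per second (i.e. m/s).
            """
            rn_flux = (inv_t1 - inv_t0) / dt_s + atm_loss + mixing_loss   # Bq/m2/s
            return rn_flux / porewater_rn

        # Illustrative numbers only
        q = sgd_flux(inv_t0=120.0, inv_t1=135.0, dt_s=3600.0,
                     atm_loss=0.002, mixing_loss=0.003, porewater_rn=8000.0)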

  12. A diode laser-based velocimeter providing point measurements in unseeded flows using modulated filtered Rayleigh scattering (MFRS)

    Science.gov (United States)

    Jagodzinski, Jeremy James

    2007-12-01

    The development to date of a diode-laser based velocimeter providing point velocity measurements in unseeded flows using molecular Rayleigh scattering is discussed. The velocimeter is based on modulated filtered Rayleigh scattering (MFRS), a novel variation of filtered Rayleigh scattering (FRS), utilizing modulated absorption spectroscopy techniques to detect a strong absorption of a relatively weak Rayleigh scattered signal. A rubidium (Rb) vapor filter is used to provide the relatively strong absorption; alkali metal vapors have a high optical depth at modest vapor pressures, and their narrow linewidth is ideally suited for high-resolution velocimetry. Semiconductor diode lasers are used to generate the relatively weak Rayleigh scattered signal; due to their compact, rugged construction diode lasers are ideally suited for the environmental extremes encountered in many experiments. The MFRS technique utilizes the frequency-tuning capability of diode lasers to implement a homodyne detection scheme using lock-in amplifiers. The optical frequency of the diode-based laser system used to interrogate the flow is rapidly modulated about a reference frequency in the D2-line of Rb. The frequency modulation is imposed on the Rayleigh scattered light that is collected from the probe volume in the flow under investigation. The collected frequency-modulated Rayleigh scattered light is transmitted through a Rb vapor filter before being detected. The detected modulated absorption signal is fed to two lock-in amplifiers synchronized with the modulation frequency of the source laser. High levels of background rejection are attained since the lock-ins are both frequency and phase selective. The two lock-in amplifiers extract different Fourier components of the detected modulated absorption signal, which are ratioed to provide an intensity normalized frequency dependent signal from a single detector. A Doppler frequency shift in the collected Rayleigh scattered light due to a change

  13. Measuring and managing the work environment of the mid-level provider – the neglected human resource

    Directory of Open Access Journals (Sweden)

    McAuliffe Eilish

    2009-02-01

    Full Text Available Abstract Background Much has been written in the past decade about the health workforce crisis that is crippling health service delivery in many middle-income and low-income countries. Countries having lost most of their highly qualified health care professionals to migration increasingly rely on mid-level providers as the mainstay for health services delivery. Mid-level providers are health workers who perform tasks conventionally associated with more highly trained and internationally mobile workers. Their training usually has lower entry requirements and is for shorter periods (usually two to four years. Our study aimed to explore a neglected but crucial aspect of human resources for health in Africa: the provision of a work environment that will promote motivation and performance of mid-level providers. This paper explores the work environment of mid-level providers in Malawi, and contributes to the validation of an instrument to measure the work environment of mid-level providers in low-income countries. Methods Three districts were purposively sampled from each of the three geographical regions in Malawi. A total of 34 health facilities from the three districts were included in the study. All staff in each of the facilities were included in the sampling frame. A total of 153 staff members consented to be interviewed. Participants completed measures of perceptions of work environment, burnout and job satisfaction. Findings The Healthcare Provider Work Index, derived through Principal Components Analysis and Rasch Analysis of our modification of an existing questionnaire, constituted four subscales, measuring: (1 levels of staffing and resources; (2 management support; (3 workplace relationships; and (4 control over practice. Multivariate analysis indicated that scores on the Work Index significantly predicted key variables concerning motivation and attrition such as emotional exhaustion, job satisfaction, satisfaction with the profession

  14. The assessment of Global Precipitation Measurement estimates over the Indian subcontinent

    Science.gov (United States)

    Murali Krishna, U. V.; Das, Subrata Kumar; Deshpande, Sachin M.; Doiphode, S. L.; Pandithurai, G.

    2017-08-01

    Accurate and real-time precipitation estimation is a challenging task for current and future spaceborne measurements, which is essential to understand the global hydrological cycle. Recently, the Global Precipitation Measurement (GPM) satellites were launched as a next-generation rainfall mission for observing the global precipitation characteristics. The purpose of the GPM is to enhance the spatiotemporal resolution of global precipitation. The main objective of the present study is to assess the rainfall products from the GPM, especially the Integrated Multi-satellitE Retrievals for the GPM (IMERG) data by comparing with the ground-based observations. The multitemporal scale evaluations of rainfall involving subdaily, diurnal, monthly, and seasonal scales were performed over the Indian subcontinent. The comparison shows that the IMERG performed better than the Tropical Rainfall Measuring Mission (TRMM)-3B42, although both rainfall products underestimated the observed rainfall compared to the ground-based measurements. The analyses also reveal that the TRMM-3B42 and IMERG data sets are able to represent the large-scale monsoon rainfall spatial features but are having region-specific biases. The IMERG shows significant improvement in low rainfall estimates compared to the TRMM-3B42 for selected regions. In the spatial distribution, the IMERG shows higher rain rates compared to the TRMM-3B42, due to its enhanced spatial and temporal resolutions. Apart from this, the characteristics of raindrop size distribution (DSD) obtained from the GPM mission dual-frequency precipitation radar is assessed over the complex mountain terrain site in the Western Ghats, India, using the DSD measured by a Joss-Waldvogel disdrometer.