WorldWideScience

Sample records for survey estimated means

  1. Optimum sample size to estimate mean parasite abundance in fish parasite surveys

    Directory of Open Access Journals (Sweden)

    Shvydka S.

    2018-03-01

    To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches to sample size determination. The sample size was estimated by applying a mathematical formula with a predetermined precision level and a parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at estimating the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundances of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across Azov-Black Sea localities were subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for finding the optimum sample size and understanding the expected precision level of the mean. Given the superior performance of the BLB relative to the formulae, and its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5, corresponding to 1.6 and 1 times the mean CI width; and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. For host sample sizes between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed toward low values; a sample size of 10 host individuals yielded unreliable estimates.
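
    The Bag of Little Bootstraps named in this record can be sketched as follows. This is a minimal plain-Python illustration, not the authors' implementation; the subset size n^0.6, the number of subsets, and the percentile interval are assumed typical defaults.

```python
import random
import statistics

def blb_mean_ci(data, n_subsets=5, gamma=0.6, n_boot=200, alpha=0.05, seed=1):
    """Bag of Little Bootstraps estimate of the mean and its CI.

    Each small subset of size b = n**gamma is resampled up to the
    full sample size n; per-subset percentile CIs are averaged.
    """
    rng = random.Random(seed)
    n = len(data)
    b = max(2, int(n ** gamma))
    lows, highs, centers = [], [], []
    for _ in range(n_subsets):
        subset = rng.sample(data, b)
        means = []
        for _ in range(n_boot):
            # Resample n points from the small subset (with replacement).
            resample = rng.choices(subset, k=n)
            means.append(statistics.fmean(resample))
        means.sort()
        lows.append(means[int(alpha / 2 * n_boot)])
        highs.append(means[int((1 - alpha / 2) * n_boot) - 1])
        centers.append(statistics.fmean(means))
    return (statistics.fmean(centers),
            statistics.fmean(lows),
            statistics.fmean(highs))

# Highly aggregated (overdispersed) counts, loosely mimicking the
# parasite abundance data; numbers are invented for illustration.
random.seed(7)
counts = [int(random.gammavariate(0.5, 20)) for _ in range(200)]
point, lo, hi = blb_mean_ci(counts)
```

    The CI width relative to the point estimate plays the role of the precision level discussed in the abstract.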

  2. Statistical properties of mean stand biomass estimators in a LIDAR-based double sampling forest survey design.

    Science.gov (United States)

    H.E. Anderson; J. Breidenbach

    2007-01-01

    Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...

  3. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
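
    The two catch-rate estimators compared in this record differ in how trip-level data are pooled; a minimal sketch with invented interview numbers, not the study's data:

```python
def ratio_of_means(catches, hours):
    """ROM: total catch divided by total effort."""
    return sum(catches) / sum(hours)

def mean_of_ratios(catches, hours, min_hours=0.0):
    """MOR: average of per-trip catch rates, optionally excluding
    short-duration trips (e.g. min_hours=0.5 as in the study)."""
    rates = [c / h for c, h in zip(catches, hours) if h > min_hours]
    return sum(rates) / len(rates)

# Four hypothetical angler interviews: (catch, hours fished).
catches = [2, 0, 1, 0]
hours = [4.0, 1.0, 2.0, 0.25]

rom = ratio_of_means(catches, hours)            # 3 / 7.25
mor = mean_of_ratios(catches, hours)
mor_excl = mean_of_ratios(catches, hours, 0.5)  # drops the 0.25 h trip
```

    ROM effectively weights trips by effort, which is why it behaves differently from MOR under length-of-stay bias and nonstationary catch rates.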

  4. Study on method of dose estimation for the Dual-moderated neutron survey meter

    International Nuclear Information System (INIS)

    Zhou, Bo; Li, Taosheng; Xu, Yuhai; Gong, Cunkui; Yan, Qiang; Li, Lei

    2013-01-01

    To study neutron dose measurement in high-energy radiation fields, a dual-moderated survey meter covering neutron spectra with mean energies from 1 keV to 300 MeV has been developed. Measurement results of survey meters depend on the neutron spectrum of the radiation field, so the response characteristics to various neutron spectra should be studied in order to obtain more reasonable dose estimates. In this paper the responses of the survey meter were calculated for different neutron spectra taken from IAEA Technical Reports Series No. 318 and other references, and a dose estimation method was determined. For this method, the estimated reading per H*(10) ranges from about 0.7 to 1.6 for neutron mean energies from 50 keV to 300 MeV. -- Highlights: • We studied a novel high-energy neutron survey meter. • Response characteristics of the survey meter were calculated using a series of neutron spectra. • One significant advantage of the survey meter is that it can provide the mean energy of the radiation field. • Dose estimate deviation can be corrected. • The range of the corrected reading per H*(10) is about 0.7–1.6 for neutron fluence mean energies from 0.05 MeV to 300 MeV

  5. System for estimation of mean active bone marrow dose

    International Nuclear Information System (INIS)

    Ellis, R.E.; Healy, M.J.R.; Shleien, B.; Tucker, T.

    1975-09-01

    The exposure measurements, model, and computer program for estimation of mean active bone marrow doses, formerly employed in the 1962 British survey of x-ray doses and proposed for application to x-ray exposure information obtained in the U.S. Public Health Service's X-Ray Exposure Studies (1966 and 1973), are described and evaluated. The method is feasible for determining the mean active bone marrow doses to adults for examinations having a source-to-skin distance (SSD) of 80 cm or less. For a greater SSD, as for example in chest x rays, a small correction to the calculated dose can be made

  6. Bayesian Simultaneous Estimation for Means in k Sample Problems

    OpenAIRE

    Imai, Ryo; Kubokawa, Tatsuya; Ghosh, Malay

    2017-01-01

    This paper is concerned with the simultaneous estimation of k population means when one suspects that the k means are nearly equal. As an alternative to the preliminary test estimator based on the test statistic for the hypothesis of equal means, we derive Bayesian and minimax estimators which shrink individual sample means toward a pooled mean estimator given under the hypothesis. Interestingly, it is shown that both the preliminary test estimator and the Bayesian minimax shrinkage esti...

  7. Mean density and two-point correlation function for the CfA redshift survey slices

    International Nuclear Information System (INIS)

    De Lapparent, V.; Geller, M.J.; Huchra, J.P.

    1988-01-01

    The effect of large-scale inhomogeneities on the determination of the mean number density and the two-point spatial correlation function were investigated for two complete slices of the extension of the Center for Astrophysics (CfA) redshift survey (de Lapparent et al., 1986). It was found that the mean galaxy number density for the two strips is uncertain by 25 percent, more so than previously estimated. The large uncertainty in the mean density introduces substantial uncertainty in the determination of the two-point correlation function, particularly at large scale; thus, for the 12-deg slice of the CfA redshift survey, the amplitude of the correlation function at intermediate scales is uncertain by a factor of 2. The large uncertainties in the correlation functions might reflect the lack of a fair sample. 45 references

  8. Variable selection and estimation for longitudinal survey data

    KAUST Repository

    Wang, Li

    2014-09-01

    There is wide interest in studying longitudinal surveys where sample subjects are observed successively over time. Longitudinal surveys are used in many areas today, for example in the health and social sciences, to explore relationships or to identify significant variables in regression settings. This paper develops a general strategy for the model selection problem in longitudinal sample surveys. A survey-weighted penalized estimating equation approach is proposed to select significant variables and estimate the coefficients simultaneously. The proposed estimators are design consistent and perform as well as the oracle procedure when the correct submodel is known. The estimating function bootstrap is applied to obtain the standard errors of the estimated parameters with good accuracy. A fast and efficient variable selection algorithm is developed to identify significant variables for complex longitudinal survey data. Simulated examples illustrate the usefulness of the proposed methodology under various model settings and sampling designs. © 2014 Elsevier Inc.

  9. A survey on OFDM channel estimation techniques based on denoising strategies

    Directory of Open Access Journals (Sweden)

    Pallaviram Sure

    2017-04-01

    Channel estimation forms the heart of any orthogonal frequency division multiplexing (OFDM) based wireless communication receiver. Frequency-domain pilot-aided channel estimation techniques are either least squares (LS) based or minimum mean square error (MMSE) based. LS-based techniques are computationally less complex and, unlike MMSE ones, do not require a priori knowledge of channel statistics (KCS). However, the mean square error (MSE) performance of channel estimators incorporating MMSE-based techniques is better than that obtained with LS-based techniques. To enhance the MSE performance of LS-based techniques, a variety of denoising strategies have been developed in the literature, which are applied to the LS-estimated channel impulse response (CIR). The advantage of denoising-threshold-based LS techniques is that they do not require KCS but still render near-optimal performance similar to MMSE-based techniques. In this paper, a detailed survey of various existing denoising strategies, with a comparative discussion of these strategies, is presented.
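
    A generic threshold-denoising step of the kind surveyed here can be sketched as follows. This is an illustrative simplification in plain Python (complex taps, a fixed noise-derived threshold rule); the surveyed papers derive their thresholds in various, more principled ways.

```python
import random

def denoise_cir(cir, noise_sigma, factor=2.0):
    """Zero out CIR taps whose magnitude falls below a
    noise-dependent threshold, keeping the significant taps.
    `factor * noise_sigma` is an assumed threshold rule.
    """
    threshold = factor * noise_sigma
    return [h if abs(h) >= threshold else 0j for h in cir]

# Sparse 8-tap channel plus complex Gaussian noise (sigma = 0.05),
# standing in for an LS-estimated CIR; numbers are invented.
rng = random.Random(3)
true_cir = [1.0 + 0j, 0j, 0.4 - 0.2j, 0j, 0j, 0.1j, 0j, 0j]
sigma = 0.05
noisy = [h + complex(rng.gauss(0, sigma), rng.gauss(0, sigma))
         for h in true_cir]
clean = denoise_cir(noisy, sigma)
```

    Zeroing the noise-only taps is what buys the MSE improvement over plain LS while still requiring no channel statistics.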

  10. Estimation of undernutrition and mean calorie intake in Africa: methodology, findings and implications.

    Science.gov (United States)

    van Wesenbeeck, Cornelia F A; Keyzer, Michiel A; Nubé, Maarten

    2009-06-27

    As poverty and hunger are basic yardsticks of underdevelopment and destitution, the need for reliable statistics in this domain is self-evident. While the measurement of poverty through surveys is relatively well documented in the literature, for hunger, information is much scarcer, particularly for adults, and very different methodologies are applied for children and adults. Our paper seeks to improve on this practice in two ways. One is that we estimate the prevalence of undernutrition in sub-Saharan Africa (SSA) for both children and adults based on anthropometric data available at province or district level, and secondly, we estimate the mean calorie intake and implied calorie gap for SSA, also using anthropometric data on the same geographical aggregation level. Our main results are, first, that we find a much lower prevalence of hunger than presented in the Millennium Development reports (17.3% against 27.8% for the continent as a whole). Secondly, we find that there is much less spread in mean calorie intake across the continent than reported by the Food and Agriculture Organization (FAO) in the State of Food and Agriculture, 2007, the only estimate that covers the whole of Africa. While FAO estimates for calorie availability vary from a low of 1760 Kcal/capita/day for Central Africa to a high of 2825 Kcal/capita/day for Southern Africa, our estimates lie in a range of 2245 Kcal/capita/day (Eastern Africa) to 2618 Kcal/capita/day for Southern Africa. Thirdly, we validate the main data sources used (the Demographic and Health Surveys) by comparing them over time and with other available data sources for various countries. We conclude that the picture of Africa that emerges from anthropometric data is much less negative than that usually presented. Especially for Eastern and Central Africa, the nutritional status is less critical than commonly assumed and mean calorie intake is higher, which implies that agricultural production and hence income must also ...

  11. Improving the Network Scale-Up Estimator: Incorporating Means of Sums, Recursive Back Estimation, and Sampling Weights.

    Directory of Open Access Journals (Sweden)

    Patrick Habecker

    Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population but lack the resources to conduct a more specialized hidden-population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys, by asking a representative sample to estimate the number of people they know who are members of such a "hidden" subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult-to-predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation ("trimming") to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights.
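
    The classic scale-up logic these advances build on can be sketched as follows; a minimal illustration of the basic unweighted estimator with invented numbers, not the authors' new estimator or their trimming procedure:

```python
def scale_up_estimate(known_counts, known_sizes, hidden_counts, population):
    """Basic network scale-up estimate of a hidden population.

    known_counts[i][k]: how many members of known group k respondent i
    reports knowing; known_sizes[k]: true size of known group k;
    hidden_counts[i]: respondent i's reported contacts in the hidden
    group. Personal network sizes (degrees) are estimated from the
    known groups, then hidden-group reports are scaled up.
    """
    # Estimated degree per respondent: d_i = N * sum_k(y_ik) / sum_k(N_k).
    degrees = [population * sum(row) / sum(known_sizes)
               for row in known_counts]
    # Classic estimator: N_hidden = N * sum_i(y_i) / sum_i(d_i).
    return population * sum(hidden_counts) / sum(degrees)

# Three respondents, two known groups of size 5,000 and 20,000 in a
# population of 1,000,000 (all numbers hypothetical).
known_counts = [[1, 4], [0, 2], [2, 6]]
known_sizes = [5_000, 20_000]
hidden_counts = [1, 0, 2]
est = scale_up_estimate(known_counts, known_sizes, hidden_counts, 1_000_000)
```

    The paper's critique targets how the known groups (the "scaling variables") enter this degree step; the sketch above gives each known group equal footing.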

  12. Estimation of population mean under systematic sampling

    Science.gov (United States)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of the generalized estimator, using different combinations of coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of the proposed estimators. Numerical illustration is included using three populations to support the results.
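
    For background, the classical ratio estimator that this record generalizes can be sketched under a systematic sample; the paper's contribution (non-response handling and coefficient-based variants) is not reproduced here, and the data are invented:

```python
def systematic_sample(values, k, start):
    """Every k-th unit beginning at index `start` (0 <= start < k)."""
    return values[start::k]

def ratio_estimate(y_sample, x_sample, x_pop_mean):
    """Classical ratio estimator of the population mean of y:
    ybar_R = ybar * (Xbar / xbar), using auxiliary variable x
    whose population mean Xbar is known."""
    ybar = sum(y_sample) / len(y_sample)
    xbar = sum(x_sample) / len(x_sample)
    return ybar * (x_pop_mean / xbar)

# Hypothetical population where y is roughly proportional to x.
x = [10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32]
y = [21, 25, 27, 33, 35, 41, 45, 47, 53, 55, 61, 63]
x_pop_mean = sum(x) / len(x)

ys = systematic_sample(y, k=4, start=1)
xs = systematic_sample(x, k=4, start=1)
est = ratio_estimate(ys, xs, x_pop_mean)
```

    The ratio adjustment pulls the sample mean of y toward the truth whenever the sampled x happens to under- or over-represent the population.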

  13. Estimation of unknown nuclear masses by means of the generalized mass relations. Pt. 3

    International Nuclear Information System (INIS)

    Popa, S.M.

    1980-01-01

    A survey of estimations of unknown nuclear masses by means of the generalized mass relations is presented. The new hypotheses supplementing the original Garvey-Kelson scheme are discussed, and the generalized mass relations and formulae are reviewed according to the present status of this formalism. A critical discussion is given of the reliability of these new Garvey-Kelson-type extrapolation procedures. (author)

  14. Estimating recreational harvest using interview-based recall survey: Implication of recalling in weight or numbers

    DEFF Research Database (Denmark)

    Sparrevohn, Claus Reedtz

    2013-01-01

    on interview-based surveys where fishers are asked to recall harvest within a given timeframe. However, the importance of whether fishers are requested to provide figures in weight or in numbers is unresolved. Therefore, a recall survey aiming at estimating recreational harvest was designed such that respondents...... could report harvest using either weight or numbers. It was found that: (1) a preference for reporting in numbers dominated; (2) the reported mean individual weight of fish caught differed between unit preferences; and (3) when an estimate of total harvest in weight is calculated, these differences could...

  15. Estimation of undernutrition and mean calorie intake in Africa: methodology, findings and implications

    Directory of Open Access Journals (Sweden)

    Nubé Maarten

    2009-06-01

    Abstract Background As poverty and hunger are basic yardsticks of underdevelopment and destitution, the need for reliable statistics in this domain is self-evident. While the measurement of poverty through surveys is relatively well documented in the literature, for hunger, information is much scarcer, particularly for adults, and very different methodologies are applied for children and adults. Our paper seeks to improve on this practice in two ways. One is that we estimate the prevalence of undernutrition in sub-Saharan Africa (SSA) for both children and adults based on anthropometric data available at province or district level, and secondly, we estimate the mean calorie intake and implied calorie gap for SSA, also using anthropometric data on the same geographical aggregation level. Results Our main results are, first, that we find a much lower prevalence of hunger than presented in the Millennium Development reports (17.3% against 27.8% for the continent as a whole). Secondly, we find that there is much less spread in mean calorie intake across the continent than reported by the Food and Agriculture Organization (FAO) in the State of Food and Agriculture, 2007, the only estimate that covers the whole of Africa. While FAO estimates for calorie availability vary from a low of 1760 Kcal/capita/day for Central Africa to a high of 2825 Kcal/capita/day for Southern Africa, our estimates lie in a range of 2245 Kcal/capita/day (Eastern Africa) to 2618 Kcal/capita/day for Southern Africa. Thirdly, we validate the main data sources used (the Demographic and Health Surveys) by comparing them over time and with other available data sources for various countries. Conclusion We conclude that the picture of Africa that emerges from anthropometric data is much less negative than that usually presented. Especially for Eastern and Central Africa, the nutritional status is less critical than commonly assumed and mean calorie intake is higher, which implies ...

  16. Stereological estimation of nuclear mean volume in invasive meningiomas

    DEFF Research Database (Denmark)

    Madsen, C; Schrøder, H D

    1996-01-01

    A stereological estimation of nuclear mean volume in bone and brain invasive meningiomas was made. For comparison the nuclear mean volume of benign meningiomas was estimated. The aim was to investigate whether this method could discriminate between these groups. We found that the nuclear mean...... volume in the bone and brain invasive meningiomas was larger than in the benign tumors. The difference was significant and moreover it was seen that there was no overlap between the two groups. In the bone invasive meningiomas the nuclear mean volume appeared to be larger inside than outside the bone....... No significant difference in nuclear mean volume was found between brain and bone invasive meningiomas. The results demonstrate that invasive meningiomas differ from benign meningiomas by an objective stereological estimation of nuclear mean volume (p

  17. Estimation of total catch of silver kob Argyrosomus inodorus by recreational shore-anglers in Namibia using a roving-roving creel survey

    DEFF Research Database (Denmark)

    Kirchner, C.H.; Beyer, Jan

    1999-01-01

    A statistical sampling method is described to estimate the annual catch of silver kob Argyrosomus inodorus by recreational shore-anglers in Namibia, using data taken during a survey from 1 October 1995 to 30 September 1996. The method is based on the theory of progressive counts and on-site roving interviews of anglers, with catch counts and measurements at interception...... Two different methods of estimating daily catch were tested by sampling the same population of anglers using a complete and an incomplete survey. The mean rate estimator, calculated by the ratio of the means with progressive...

  18. Estimation of unaltered daily mean streamflow at ungaged streams of New York, excluding Long Island, water years 1961-2010

    Science.gov (United States)

    Gazoorian, Christopher L.

    2015-01-01

    The lakes, rivers, and streams of New York State provide an essential water resource for the State. The information provided by time series hydrologic data is essential to understanding ways to promote healthy instream ecology and to strengthen the scientific basis for sound water management decision making in New York. The U.S. Geological Survey, in cooperation with The Nature Conservancy and the New York State Energy Research and Development Authority, has developed the New York Streamflow Estimation Tool to estimate a daily mean hydrograph for the period from October 1, 1960, to September 30, 2010, at ungaged locations across the State. The New York Streamflow Estimation Tool produces a complete estimated daily mean time series from which daily flow statistics can be estimated. In addition, the New York Streamflow Estimation Tool provides a means for quantitative flow assessments at ungaged locations that can be used to address the objectives of the Clean Water Act—to restore and maintain the chemical, physical, and biological integrity of the Nation’s waters.

  19. DEEBAR - A BASIC interactive computer programme for estimating mean resonance spacings

    International Nuclear Information System (INIS)

    Booth, M.; Pope, A.L.; Smith, R.W.; Story, J.S.

    1988-02-01

    DEEBAR is a BASIC interactive programme which uses the theories of Dyson and of Dyson and Mehta to compute estimates of the mean resonance spacings and associated uncertainty statistics from an input file of neutron resonance energies. In applying these theories the broad-scale energy dependence of D-bar, as predicted by the ordinary theory of level densities, is taken into account. The mean spacing D-bar ± δD-bar, referred to zero energy of the incident neutrons, is computed from the energies of the first k resonances, for k = 2,3...K in turn, as if no resonances are missing. The user is asked to survey this set of D-bar and δD-bar values and to form a judgement: up to what value of k is the set of resonances complete, and what value, in consequence, does the user adopt as the preferred value of D-bar? When the preferred values for k and D-bar have been input, the programme calculates revised values for the level density parameters, consistent with this value of D-bar and with other input information. Two short tables are printed, illustrating the energy variation and spin dependence of D-bar. Dyson's formula based on his Coulomb gas analogy is used for estimating the most likely energies of the topmost bound levels. Finally, the quasi-crystalline character of a single level series is exploited by means of a table in which the resonance energies are set alongside an energy ladder whose rungs are regularly spaced with spacing D-bar(E); this comparative table expedites the search for gaps where resonances may have been missed experimentally. Used in conjunction with the program LJPROB, which calculates neutron strengths and compares them against the expected Porter-Thomas distribution, estimates of the statistical parameters for use in the unresolved resonance region may be derived. (author)
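
    The core D-bar bookkeeping described above (a mean spacing estimate from the first k resonances, for increasing k) can be sketched as follows. This is a simplified plain-Python illustration with invented energies; it ignores the energy dependence of D-bar and the Dyson-Mehta statistics the programme actually uses.

```python
def mean_spacing_estimates(energies):
    """For each k, estimate the mean resonance spacing D-bar from the
    first k resonance energies, assuming none are missed:
    D-bar(k) = (E_k - E_1) / (k - 1).
    Returns a list of (k, D-bar) pairs for k = 2..K.
    """
    e = sorted(energies)
    return [(k, (e[k - 1] - e[0]) / (k - 1)) for k in range(2, len(e) + 1)]

# Hypothetical resonance energies (eV): roughly regular ~20 eV spacing,
# with one gap after 105 eV where a level may have been missed.
energies = [5.0, 24.0, 46.0, 63.0, 85.0, 105.0, 148.0, 166.0]
results = mean_spacing_estimates(energies)
```

    In the programme, the jump in D-bar once the gap is included (here at k = 7) is the kind of signal that prompts the user to restrict the preferred D-bar to the complete part of the series.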

  20. The influence of survey duration on estimates of food intakes and its relevance for public health nutrition and food safety issues.

    Science.gov (United States)

    Lambe, J; Kearney, J; Leclercq, C; Zunft, H F; De Henauw, S; Lamberg-Allardt, C J; Dunne, A; Gibney, M J

    2000-02-01

    To examine the influence of food consumption survey duration on estimates of percentage consumers, mean total population intakes, and intakes among consumers only, and to consider its relevance for public health nutrition and food safety issues. Prospective food consumption survey. A multicentre study in five centres in the European Union: Dublin, Ghent, Helsinki, Potsdam and Rome. Teenage subjects were recruited through schools; 948 (80%) out of 1180 subjects completed the survey. 14-day food diaries were used to collect the food consumption data. For mean total population intakes, 53% of the foods had slopes significantly different from 0 (P < 0.05); however, these differences were small, with 41% of foods having differences of < 1 g/day and a further 35% having differences of 1-5 g/day. Estimates of percentage consumers based on 3 days and 14 days were 1.9 and 3.6 times the 1-day estimate, respectively. For 72% of foods, at least 50% of non-consumers on day 1 became consumers over the subsequent 13 days. Estimates of mean consumer-only intakes based on 3 days and 14 days were 53% and 32% of the 1-day value. In practical terms, survey duration influences estimates of percentage consumers and intakes among consumers only, but not mean total population intakes. Awareness of this influence is important for improved interpretation of dietary data for epidemiological studies, development of food-based dietary guidelines and estimation of food chemical intakes. The Institute of European Food Studies, a non-profit research organization based in Trinity College Dublin. European Journal of Clinical Nutrition (2000) 54, 166-173

  1. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased for the true group means. We propose a new method to estimate the group mean consistently, with a corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
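
    The distinction drawn above (response at the mean covariate versus mean response over the covariate distribution) can be illustrated for a logistic model. This is a minimal plain-Python sketch with made-up coefficients and covariates; it is not the authors' estimator or its variance formula.

```python
import math

def logistic(eta):
    return 1.0 / (1.0 + math.exp(-eta))

def group_mean_at_mean_covariate(b0, b1, xs):
    """What most software reports: the response evaluated at xbar."""
    xbar = sum(xs) / len(xs)
    return logistic(b0 + b1 * xbar)

def group_mean_marginal(b0, b1, xs):
    """Mean response over the covariate distribution: average the
    per-subject predictions instead of predicting at the average."""
    return sum(logistic(b0 + b1 * x) for x in xs) / len(xs)

# Hypothetical treatment group with a skewed baseline covariate.
xs = [0.0, 0.0, 0.0, 1.0, 5.0]
b0, b1 = -1.0, 0.8

at_mean = group_mean_at_mean_covariate(b0, b1, xs)
marginal = group_mean_marginal(b0, b1, xs)
```

    The two quantities differ because the logistic link is nonlinear; for an identity link (a linear model) they would coincide, which is exactly the point the abstract makes.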

  2. Stated Preference Survey Estimating the Willingness to Pay ...

    Science.gov (United States)

    A national stated preference survey designed to elicit household willingness to pay for reductions in impinged and entrained fish at cooling water intake structures, intended to improve estimation of environmental benefits.

  3. Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.

    Science.gov (United States)

    Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya

    2018-05-05

    This paper proposes a novel filtering design, from the viewpoint of identification instead of the conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, a nonlinear perturbation is viewed or modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Second, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as UI), instead of directly computing the INPI as in NESs. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimation and its covariance. Third, provided that enough information is mined, SMCCF should outperform existing NESs and standard identification algorithms (which view the UI as a constant independent of the state and utilize only the identified UI mean to correct the state estimation, disregarding its covariance), since it further incorporates the useful covariance information in addition to the mean of the UI. Finally, our simulations demonstrate the superior performance of SMCCF via an orbit estimation example.

  4. A modified procedure for estimating the population mean in two ...

    African Journals Online (AJOL)

    A modified procedure for estimating the population mean in two-occasion successive samplings. Housila Prasad Singh, Suryal Kant Pal. Abstract. This paper addresses the problem of estimating the current population mean in two occasion successive sampling. Utilizing the readily available information on two auxiliary ...

  5. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
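
    The design comparison described above rests on a simple Monte Carlo loop; a toy version in plain Python (invented hourly effort counts, simple random versus systematic sampling of time intervals), not the study's simulation:

```python
import random

def srs_total(pop, n, rng):
    """Simple random sample of n intervals, scaled to a total."""
    sample = rng.sample(pop, n)
    return len(pop) * sum(sample) / n

def sys_total(pop, n, rng):
    """Systematic sample: every k-th interval from a random start."""
    k = len(pop) // n
    start = rng.randrange(k)
    sample = pop[start::k][:n]
    return len(pop) * sum(sample) / len(sample)

# Angler counts in 24 one-hour intervals with a midday peak
# (hypothetical numbers; a smooth effort trend favors SYS).
pop = [0, 0, 1, 2, 4, 7, 10, 14, 16, 18, 17, 15,
       14, 12, 10, 8, 6, 5, 3, 2, 1, 1, 0, 0]
true_total = sum(pop)

rng = random.Random(42)
n, reps = 6, 2000
mse_srs = sum((srs_total(pop, n, rng) - true_total) ** 2
              for _ in range(reps)) / reps
mse_sys = sum((sys_total(pop, n, rng) - true_total) ** 2
              for _ in range(reps)) / reps
```

    With a smooth within-day trend, systematic samples spread over the day and their MSE comes out far below the SRS MSE, consistent with the record's finding that SYS gave the most precise estimates.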

  6. Aerial Survey as a Tool to Estimate Abundance and Describe Distribution of a Carcharhinid Species, the Lemon Shark, Negaprion brevirostris

    Directory of Open Access Journals (Sweden)

    S. T. Kessel

    2013-01-01

    Aerial survey provides an important tool to assess the abundance of both terrestrial and marine vertebrates. To date, limited work has tested the effectiveness of this technique to estimate the abundance of smaller shark species. In Bimini, Bahamas, the lemon shark (Negaprion brevirostris) shows high site fidelity to a shallow sandy lagoon, providing an ideal test species to determine the effectiveness of localised aerial survey techniques for a carcharhinid species in shallow subtropical waters. Between September 2007 and September 2008, visual surveys were conducted from light aircraft following defined transects ranging in length between 4.4 and 8.8 km. Count results were corrected for “availability”, “perception”, and “survey intensity” to provide unbiased abundance estimates. The abundance of lemon sharks was greatest in the central area of the lagoon during high tide, with a change in abundance distribution to the east and western regions of the lagoon with low tide. Mean abundance of sharks was estimated at 49 (±8.6 individuals, and monthly abundance was significantly positively correlated with mean water temperature. The successful implementation of the aerial survey technique highlighted the potential of further employment for shark abundance assessments in shallow coastal marine environments.
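
The correction step described above amounts to dividing the raw count by the product of the correction factors. A minimal sketch, with hypothetical factor values (the abstract does not report them):

```python
def corrected_abundance(raw_count, p_available, p_perceived, survey_intensity):
    """Scale a raw aerial count by correction factors (values hypothetical).

    p_available: probability an animal is near enough to the surface to be seen
    p_perceived: probability an observer detects an available animal
    survey_intensity: fraction of the study area covered by the transects
    """
    return raw_count / (p_available * p_perceived * survey_intensity)

# e.g. 12 sharks counted, 60% available, 80% perceived, half the lagoon surveyed
print(round(corrected_abundance(12, 0.6, 0.8, 0.5)))
```

Each factor is a probability in (0, 1], so every correction inflates the raw count; the estimate is only as good as the factor estimates themselves.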

  7. An alternative procedure for estimating the population mean in simple random sampling

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2012-03-01

    This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. First, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than several known estimators, including the Gupta and Shabbir (2008) estimator.
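
The baseline that estimators of this family improve upon is the classical ratio estimator, which exploits a known population mean of an auxiliary variable x correlated with the study variable y. A minimal sketch:

```python
def ratio_estimator(y_sample, x_sample, x_pop_mean):
    """Classical ratio estimator of the population mean of y under SRS:
    ybar_R = (ybar / xbar) * Xbar, where Xbar is the known population mean
    of the auxiliary variable x."""
    ybar = sum(y_sample) / len(y_sample)
    xbar = sum(x_sample) / len(x_sample)
    return (ybar / xbar) * x_pop_mean

# toy data: y is roughly proportional to x, the case where the ratio
# estimator gains the most over the plain sample mean
y = [2.1, 3.9, 6.2, 8.0]
x = [1.0, 2.0, 3.0, 4.0]
print(ratio_estimator(y, x, x_pop_mean=2.5))
```

When y and x are strongly positively correlated, the ratio estimator typically has smaller mean squared error than the unadjusted sample mean, which is the property the modified estimators in the paper refine further.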

  8. Estimation and correction of visibility bias in aerial surveys of wintering ducks

    Science.gov (United States)

    Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.

    2008-01-01

    Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1-100 decoys), and interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36-42%, and associated standard errors increased 38-55%, depending on species or group estimated. We deemed our method successful for integrating correction of visibility bias in an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.
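
The two-part correction described above can be sketched as follows: a logistic model for the probability a group is detected at all, and a fixed count-bias factor (the paper's ~78%) for undercounting within detected groups. The logistic coefficients below are hypothetical; only the 0.78 count bias comes from the abstract.

```python
import math

def detection_probability(group_size, forested, b0=-0.5, b1=0.03, b2=-1.0):
    """Hypothetical logistic model for the probability a group is detected,
    as a function of group size and cover type (1 = forested, 0 = open)."""
    logit = b0 + b1 * group_size + b2 * forested
    return 1 / (1 + math.exp(-logit))

def bias_corrected_count(observed_count, group_size, forested, count_bias=0.78):
    """Adjust an observed count for incomplete detection and undercounting.

    count_bias = 0.78 reflects the paper's finding that observers counted
    ~78% of the decoys actually present; the logistic coefficients are
    illustrative, not the fitted values."""
    p = detection_probability(group_size, forested)
    return observed_count / (p * count_bias)

print(bias_corrected_count(40, group_size=50, forested=0))
```

Dividing by both probabilities is the Horvitz-Thompson-style weight adjustment the abstract refers to; in the paper the variance of the resulting estimates is obtained by bootstrapping transects.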

  9. Population-based absolute risk estimation with survey data

    Science.gov (United States)

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
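
The piecewise exponential formulation mentioned above leads to a simple discrete computation of absolute risk: step through intervals, multiply the probability of surviving all events so far by the probability that the cause-specific event occurs in the current interval. A minimal sketch with hypothetical hazards (not the NHANES fits):

```python
import math

def absolute_risk(h_cause, h_compete, dt):
    """Cumulative incidence of the cause-specific event in the presence of a
    competing event, with piecewise-constant hazards on intervals of width dt:
    accumulate (survival so far) x P(any event in interval) x (cause's share)."""
    surv, risk = 1.0, 0.0
    for h1, h2 in zip(h_cause, h_compete):
        p_any = 1 - math.exp(-(h1 + h2) * dt)   # P(some event this interval)
        risk += surv * p_any * h1 / (h1 + h2)   # fraction attributable to cause
        surv *= 1 - p_any                        # overall survival carries forward
    return risk

# hypothetical annual hazards over 10 years: cardiovascular death vs other causes
cvd = [0.010] * 10
other = [0.020] * 10
print(absolute_risk(cvd, other, dt=1.0))
```

With no competing hazard this reduces to the familiar 1 - exp(-cumulative hazard); with competing events the absolute risk is strictly smaller, which is the point of the competing-risks framework.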

  10. Comparative Study of Complex Survey Estimation Software in ONS

    Directory of Open Access Journals (Sweden)

    Andy Fallows

    2015-09-01

    Many official statistics across the UK Government Statistical Service (GSS are produced using data collected from sample surveys. These survey data are used to estimate population statistics through weighting and calibration techniques. For surveys with complex or unusual sample designs, the weighting can be fairly complicated. Even in more simple cases, appropriate software is required to implement survey weighting and estimation. As with other stages of the survey process, it is preferable to use a standard, generic calibration tool wherever possible. Standard tools allow for efficient use of resources and assist with the harmonisation of methods. In the case of calibration, the Office for National Statistics (ONS has experience of using the Statistics Canada Generalized Estimation System (GES across a range of business and social surveys. GES is a SAS-based system and so is only available in conjunction with an appropriate SAS licence. Given recent initiatives and encouragement to investigate open source solutions across government, it is appropriate to determine whether there are any open source calibration tools available that can provide the same service as GES. This study compares the use of GES with the calibration tool ‘R evolved Generalized software for sampling estimates and errors in surveys’ (ReGenesees available in R, an open source statistical programming language which is beginning to be used in many statistical offices. ReGenesees is a free R package which has been developed by the Italian statistics office (Istat and includes functionality to calibrate survey estimates using similar techniques to GES. This report describes analysis of the performance of ReGenesees in comparison to GES to calibrate a representative selection of ONS surveys. Section 1.1 provides a brief introduction to the current use of SAS and R in ONS. Section 2 describes GES and ReGenesees in more detail. Sections 3.1 and 3.2 consider methods for
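
The calibration idea underlying both GES and ReGenesees is to adjust design weights so that weighted sample totals of auxiliary variables match known population totals. The simplest instance, ratio calibration against a single auxiliary total, can be sketched as follows (both tools implement far more general calibration models than this):

```python
def ratio_calibrate(design_weights, x, x_total):
    """Simplest calibration: scale design weights by a single factor g so the
    weighted total of the auxiliary variable x matches its known population
    total. Equivalent to ratio estimation."""
    g = x_total / sum(d * xi for d, xi in zip(design_weights, x))
    return [d * g for d in design_weights]

d = [10.0, 10.0, 10.0, 10.0]   # design weights
x = [1.0, 2.0, 2.0, 3.0]       # auxiliary variable, e.g. household size
w = ratio_calibrate(d, x, x_total=100.0)
print(w)
print(sum(wi * xi for wi, xi in zip(w, x)))
```

After calibration the weighted auxiliary total reproduces the benchmark exactly; estimates of study variables then borrow strength from the auxiliary information.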

  11. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    Science.gov (United States)

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
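
The coordinates analysed above can be computed directly from Floater's formula: for a point x inside the polygon, the unnormalized weight of vertex v_i is (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - x|, where a_i is the angle at x in the triangle (x, v_i, v_{i+1}). A sketch:

```python
import math

def mean_value_coordinates(poly, x):
    """Mean value coordinates of a point x strictly inside a convex polygon
    (Floater's formula), normalized to sum to 1."""
    n = len(poly)
    r = [math.hypot(vx - x[0], vy - x[1]) for vx, vy in poly]
    angles = []
    for i in range(n):
        j = (i + 1) % n
        di = (poly[i][0] - x[0], poly[i][1] - x[1])
        dj = (poly[j][0] - x[0], poly[j][1] - x[1])
        cos_a = (di[0] * dj[0] + di[1] * dj[1]) / (r[i] * r[j])
        angles.append(math.acos(max(-1.0, min(1.0, cos_a))))
    # angles[i-1] wraps to the last angle for i = 0 via Python's negative indexing
    w = [(math.tan(angles[i - 1] / 2) + math.tan(angles[i] / 2)) / r[i]
         for i in range(n)]
    s = sum(w)
    return [wi / s for wi in w]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
lam = mean_value_coordinates(square, (0.3, 0.6))
# partition of unity and linear reproduction, the properties the error
# estimates in the paper build on
print(sum(lam))
print(sum(l * v[0] for l, v in zip(lam, square)),
      sum(l * v[1] for l, v in zip(lam, square)))
```

The two printed checks (weights summing to 1, and the weighted vertices reproducing x) are exactly the properties that make these coordinates usable as finite element basis functions.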

  12. A NEW MODIFIED RATIO ESTIMATOR FOR ESTIMATION OF POPULATION MEAN WHEN MEDIAN OF THE AUXILIARY VARIABLE IS KNOWN

    Directory of Open Access Journals (Sweden)

    Jambulingam Subramani

    2013-10-01

    The present paper deals with a modified ratio estimator for estimating the population mean of the study variable when the population median of the auxiliary variable is known. The bias and mean squared error of the proposed estimator are derived and compared with those of existing modified ratio estimators for certain known populations. Further, we derive the conditions under which the proposed estimator performs better than the existing modified ratio estimators. A numerical study also shows that the proposed modified ratio estimator performs better than the existing modified ratio estimators for certain known populations.
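
Median-adjusted ratio estimators of this family commonly take the form ybar_p = ybar (Xbar + M)/(xbar + M), with M the known population median of the auxiliary variable; the abstract does not give the exact functional form, so the version below is one common form assumed for illustration:

```python
def median_ratio_estimator(y_sample, x_sample, x_pop_mean, x_pop_median):
    """One common form of a median-adjusted modified ratio estimator
    (assumed here, not necessarily the paper's exact estimator):
    ybar_p = ybar * (Xbar + M) / (xbar + M)."""
    ybar = sum(y_sample) / len(y_sample)
    xbar = sum(x_sample) / len(x_sample)
    return ybar * (x_pop_mean + x_pop_median) / (xbar + x_pop_median)

y = [12.0, 15.0, 11.0, 14.0]
x = [3.0, 4.0, 2.9, 3.9]
print(median_ratio_estimator(y, x, x_pop_mean=3.5, x_pop_median=3.4))
```

Adding the median to both numerator and denominator dampens the adjustment relative to the plain ratio estimator, which is where the efficiency comparisons in the paper come from.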

  13. Comparison of administrative and survey data for estimating vitamin A supplementation and deworming coverage of children under five years of age in Sub-Saharan Africa.

    Science.gov (United States)

    Janmohamed, Amynah; Doledec, David

    2017-07-01

    To compare administrative coverage data with results from household coverage surveys for vitamin A supplementation (VAS) and deworming campaigns conducted during 2010-2015 in 12 African countries. Paired t-tests examined differences between administrative and survey coverage for 52 VAS and 34 deworming dyads. Independent t-tests measured VAS and deworming coverage differences between data sources for door-to-door and fixed-site delivery strategies, and VAS coverage differences between the 6- to 11-month and 12- to 59-month age groups. For VAS, administrative coverage was higher than survey estimates in 47 of 52 (90%) campaign rounds, with a mean difference of 16.1% (95% CI: 9.5-22.7; P < 0.001). For deworming, administrative coverage exceeded survey estimates in 31 of 34 (91%) comparisons, with a mean difference of 29.8% (95% CI: 16.9-42.6; P < 0.001). Mean ± SD differences in coverage between administrative and survey data were 12.2% ± 22.5% for the door-to-door delivery strategy and 25.9% ± 24.7% for the fixed-site model (P = 0.06). For deworming, mean ± SD differences in coverage between data sources were 28.1% ± 43.5% and 33.1% ± 17.9% for door-to-door and fixed-site distribution, respectively (P = 0.64). VAS administrative coverage was higher than survey estimates in 37 of 49 (76%) comparisons for the 6- to 11-month age group and 45 of 48 (94%) comparisons for the 12- to 59-month age group. Reliance on health facility data alone for calculating VAS and deworming coverage may mask low coverage and prevent measures to improve programmes. Countries should periodically validate administrative coverage estimates with population-based methods. © 2017 John Wiley & Sons Ltd.
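
The paired t-test at the core of the comparison above is straightforward to compute by hand: take the administrative-minus-survey differences, then divide their mean by its standard error. The coverage pairs below are hypothetical, not the study data:

```python
import math

def paired_t(admin, survey):
    """Paired t-statistic for administrative minus survey coverage (percent)."""
    diffs = [a - s for a, s in zip(admin, survey)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    se = math.sqrt(var / n)                               # standard error of mean
    return mean, mean / se

# hypothetical coverage pairs (%): administrative records vs household survey
admin = [95, 102, 88, 99, 93, 97]   # administrative figures can exceed 100%
survey = [80, 85, 75, 84, 78, 82]
mean_diff, t_stat = paired_t(admin, survey)
print(mean_diff, t_stat)
```

A large positive t-statistic with n - 1 degrees of freedom is what underlies the reported P < 0.001 conclusions that administrative coverage systematically exceeds survey coverage.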

  14. Using Intelligent Techniques in Construction Project Cost Estimation: 10-Year Survey

    Directory of Open Access Journals (Sweden)

    Abdelrahman Osman Elfaki

    2014-01-01

    Cost estimation is the most important preliminary process in any construction project. Therefore, construction cost estimation has the lion’s share of the research effort in construction management. In this paper, we have analysed and studied proposals for construction cost estimation from the last 10 years. To implement this survey, we have proposed and applied a methodology that consists of two parts. The first part concerns data collection, for which we have chosen specialist journals as sources for the surveyed proposals. The second part concerns the analysis of the proposals. To analyse each proposal, the following four questions have been set: Which intelligent technique is used? How have data been collected? How are the results validated? And which construction cost estimation factors have been used? From the results of this survey, two main contributions have been produced. The first contribution is the identification of the research gap in this area, which has not been fully covered by previous proposals for construction cost estimation. The second contribution of this survey is the proposal and highlighting of future directions for forthcoming proposals, aimed ultimately at finding the optimal construction cost estimation. Moreover, we consider the second part of our methodology as one of our contributions in this paper. This methodology has been proposed as a standard benchmark for construction cost estimation proposals.

  15. Results and evaluation of a survey to estimate Pacific walrus population size, 2006

    Science.gov (United States)

    Speckman, Suzann G.; Chernook, Vladimir I.; Burn, Douglas M.; Udevitz, Mark S.; Kochnev, Anatoly A.; Vasilev, Alexander; Jay, Chadwick V.; Lisovsky, Alexander; Fischbach, Anthony S.; Benter, R. Bradley

    2011-01-01

    In spring 2006, we conducted a collaborative U.S.-Russia survey to estimate abundance of the Pacific walrus (Odobenus rosmarus divergens). The Bering Sea was partitioned into survey blocks, and a systematic random sample of transects within a subset of the blocks was surveyed with airborne thermal scanners using standard strip-transect methodology. Counts of walruses in photographed groups were used to model the relation between thermal signatures and the number of walruses in groups, which was used to estimate the number of walruses in groups that were detected by the scanner but not photographed. We also modeled the probability of thermally detecting various-sized walrus groups to estimate the number of walruses in groups undetected by the scanner. We used data from radio-tagged walruses to adjust on-ice estimates to account for walruses in the water during the survey. The estimated area of available habitat averaged 668,000 km2 and the area of surveyed blocks was 318,204 km2. The number of Pacific walruses within the surveyed area was estimated at 129,000 with 95% confidence limits of 55,000 to 507,000 individuals. This value can be used by managers as a minimum estimate of the total population size.
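
The design-based core of the estimate above is a strip-transect expansion: density in the surveyed strips scaled up to the area of available habitat. The counts and strip areas below are hypothetical; only the 318,204 km² surveyed-block area comes from the abstract (the full analysis adds the thermal-detection and haul-out corrections described there).

```python
def strip_transect_estimate(counts, strip_areas_km2, total_area_km2):
    """Design-based expansion for strip transects: estimated density in the
    surveyed strips, scaled up to the total area."""
    density = sum(counts) / sum(strip_areas_km2)   # animals per km^2 surveyed
    return density * total_area_km2

# hypothetical strips within the 318,204 km^2 of surveyed blocks
counts = [120, 40, 300, 0, 95]
areas = [500.0, 450.0, 520.0, 480.0, 510.0]
print(round(strip_transect_estimate(counts, areas, total_area_km2=318204)))
```

The very wide confidence limits reported (55,000 to 507,000) reflect the extreme clustering of walrus groups, which makes the between-transect variance of this kind of expansion large.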

  16. Robust estimators based on generalization of trimmed mean

    Czech Academy of Sciences Publication Activity Database

    Adam, Lukáš; Bejda, P.

    (2018) ISSN 0361-0918 Institutional support: RVO:67985556 Keywords : Breakdown point * Estimators * Geometric median * Location * Trimmed mean Subject RIV: BA - General Mathematics Impact factor: 0.457, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/adam-0481224.pdf

  17. Methods for estimating flow-duration and annual mean-flow statistics for ungaged streams in Oklahoma

    Science.gov (United States)

    Esralew, Rachel A.; Smith, S. Jerrod

    2010-01-01

    Flow statistics can be used to provide decision makers with surface-water information needed for activities such as water-supply permitting, flow regulation, and other water rights issues. Flow statistics could be needed at any location along a stream. Most often, streamflow statistics are needed at ungaged sites, where no flow data are available to compute the statistics. Methods are presented in this report for estimating flow-duration and annual mean-flow statistics for ungaged streams in Oklahoma. Flow statistics included the (1) annual (period of record), (2) seasonal (summer-autumn and winter-spring), and (3) 12 monthly duration statistics, including the 20th, 50th, 80th, 90th, and 95th percentile flow exceedances, and the annual mean-flow (mean of daily flows for the period of record). Flow statistics were calculated from daily streamflow information collected from 235 streamflow-gaging stations throughout Oklahoma and areas in adjacent states. A drainage-area ratio method is the preferred method for estimating flow statistics at an ungaged location that is on a stream near a gage. The method generally is reliable only if the drainage-area ratio of the two sites is between 0.5 and 1.5. Regression equations that relate flow statistics to drainage-basin characteristics were developed for the purpose of estimating selected flow-duration and annual mean-flow statistics for ungaged streams that are not near gaging stations on the same stream. Regression equations were developed from flow statistics and drainage-basin characteristics for 113 unregulated gaging stations. Separate regression equations were developed by using U.S. Geological Survey streamflow-gaging stations in regions with similar drainage-basin characteristics. These equations can increase the accuracy of regression equations used for estimating flow-duration and annual mean-flow statistics at ungaged stream locations in Oklahoma. Streamflow-gaging stations were grouped by selected drainage
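
The drainage-area ratio method described above scales a gaged flow statistic by the ratio of drainage areas, with the abstract's 0.5-1.5 ratio restriction as a validity check. A minimal sketch with hypothetical values:

```python
def drainage_area_ratio(q_gaged, area_gaged_km2, area_ungaged_km2):
    """Transfer a flow statistic from a gage to a nearby ungaged site on the
    same stream by scaling with the drainage-area ratio. Per the report, the
    method is generally reliable only when the ratio is between 0.5 and 1.5."""
    ratio = area_ungaged_km2 / area_gaged_km2
    if not 0.5 <= ratio <= 1.5:
        raise ValueError(f"area ratio {ratio:.2f} outside the 0.5-1.5 range")
    return q_gaged * ratio

# e.g. an annual mean flow of 12.0 m^3/s at a gage draining 850 km^2,
# transferred to an ungaged site draining 1000 km^2 (hypothetical numbers)
print(drainage_area_ratio(12.0, 850.0, 1000.0))
```

Outside the 0.5-1.5 band the report falls back on the regional regression equations, which predict the statistics from drainage-basin characteristics instead.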

  18. Agnostic Estimation of Mean and Covariance

    OpenAIRE

    Lai, Kevin A.; Rao, Anup B.; Vempala, Santosh

    2016-01-01

    We consider the problem of estimating the mean and covariance of a distribution from iid samples in $\\mathbb{R}^n$, in the presence of an $\\eta$ fraction of malicious noise; this is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type. The agnostic problem includes many interesting special cases, e.g., learning the parameters of a single Gaussian (or finding the best-fit Gaussian) when $\\eta$ fraction of data is adversarially corrupted, agn...

  19. Estimation of mean-reverting oil prices: a laboratory approach

    International Nuclear Information System (INIS)

    Bjerksund, P.; Stensland, G.

    1993-12-01

    Many economic decision support tools developed for the oil industry are based on the future oil price dynamics being represented by some specified stochastic process. To meet the demand for necessary data, much effort is allocated to parameter estimation based on historical oil price time series. The approach in this paper is to implement a complex future oil market model, and to condense the information from the model to parameter estimates for the future oil price. In particular, we use the Lensberg and Rasmussen stochastic dynamic oil market model to generate a large set of possible future oil price paths. Given the hypothesis that the future oil price is generated by a mean-reverting Ornstein-Uhlenbeck process, we obtain parameter estimates by a maximum likelihood procedure. We find a substantial degree of mean-reversion in the future oil price, which in some of our decision examples leads to an almost negligible value of flexibility. 12 refs., 2 figs., 3 tabs
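
A mean-reverting Ornstein-Uhlenbeck price sampled at interval dt follows an exact AR(1) recursion, so its parameters can be estimated by regressing each observation on its predecessor, a simpler stand-in for the maximum-likelihood procedure the paper applies to simulated price paths. All simulation parameters below are illustrative:

```python
import math
import random

def fit_ou(path, dt):
    """Estimate the mean-reversion speed kappa and long-run level mu of an
    Ornstein-Uhlenbeck process from a sampled path, via the AR(1) form
    X_{t+1} = a X_t + b + noise, where a = exp(-kappa * dt) and
    mu = b / (1 - a)."""
    x, y = path[:-1], path[1:]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    a = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
         / sum((xi - xm) ** 2 for xi in x))
    b = ym - a * xm
    return -math.log(a) / dt, b / (1 - a)

# simulate a strongly mean-reverting "price" around mu = 20 (Euler scheme),
# then recover the parameters
rng = random.Random(7)
dt, kappa, mu, sigma = 0.01, 2.0, 20.0, 0.5
x = [mu]
for _ in range(20000):
    x.append(x[-1] + kappa * (mu - x[-1]) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1))
kappa_hat, mu_hat = fit_ou(x, dt)
print(kappa_hat, mu_hat)
```

Strong estimated mean reversion is exactly the finding the abstract reports for the future oil price, and it is what drives the near-zero value of flexibility in their decision examples.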

  20. How should we best estimate the mean recency duration for the BED method?

    Directory of Open Access Journals (Sweden)

    John Hargrove

    BED estimates of HIV incidence from cross-sectional surveys are obtained by restricting, to a fixed time T, the period over which incidence is estimated. The appropriate mean recency duration (Ω(T)) then refers to the time during which BED optical density (OD) is less than a pre-set cut-off C, given the patient has been HIV positive for at most time T. Five methods, tested using data for postpartum women in Zimbabwe, provided similar estimates of Ω(T) for C = 0.8: (i) the ratio (r/s) of the number of BED-recent infections to all seroconversions over T = 365 days: 192 days [95% CI 168-216]; (ii) linear mixed modelling (LMM): 191 days [95% CI 174-208]; (iii) non-linear mixed modelling (NLMM): 196 days [95% CrI 188-204]; (iv) survival analysis (SA): 192 days [95% CI 168-216]; and (v) graphical analysis: 193 days. NLMM estimates of Ω(T), based on a biologically more appropriate functional relationship than LMM, resulted in the best fits to the OD data, the smallest variance in estimates of Ω(T), and the best correspondence between BED and follow-up estimates of HIV incidence for the same subjects over the same time period. SA and NLMM produced very similar estimates of Ω(T), but the coefficient of variation of the former was about 3 times as high. The r/s method requires uniformly distributed seroconversion events but is useful if data are available only from a single follow-up. The graphical method produces the most variable results, involves unsound methodology and should not be used to provide estimates of Ω(T). False-recent rates increased as a quadratic function of C: for incidence estimation, C should thus be chosen as small as possible, consistent with an adequate resultant number of recent cases and accurate estimation of Ω(T). Inaccuracies in the estimation of Ω(T) should not now provide an impediment to incidence estimation.
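
The simplest of the five methods, the r/s ratio, needs only one follow-up round: the fraction of seroconverters over the window who still test BED-recent, scaled by the window length. A sketch with hypothetical counts chosen to land on the ~192-day scale the abstract reports:

```python
def mean_recency_duration(n_recent, n_seroconverters, window_days=365):
    """The r/s method: estimate the mean recency duration Omega(T) as
    T * r / s, where r of the s seroconversions within the window T still
    test BED-recent. Assumes seroconversion events are uniform over T."""
    return window_days * n_recent / n_seroconverters

# e.g. 42 of 80 seroconverters in the last 365 days still below the OD cut-off
print(mean_recency_duration(42, 80))
```

The uniform-seroconversion assumption is the method's weak point, which is why the abstract prefers the non-linear mixed model when longitudinal OD data are available.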

  1. Mourning dove population trend estimates from Call-Count and North American Breeding Bird Surveys

    Science.gov (United States)

    Sauer, J.R.; Dolton, D.D.; Droege, S.

    1994-01-01

    The mourning dove (Zenaida macroura) Callcount Survey and the North American Breeding Bird Survey provide information on population trends of mourning doves throughout the continental United States. Because surveys are an integral part of the development of hunting regulations, a need exists to determine which survey provides precise information. We estimated population trends from 1966 to 1988 by state and dove management unit, and assessed the relative efficiency of each survey. Estimates of population trend differ (P < 0.05) between surveys in 11 of 48 states; 9 of 11 states with divergent results occur in the Eastern Management Unit. Differences were probably a consequence of smaller sample sizes in the Callcount Survey. The Breeding Bird Survey generally provided trend estimates with smaller variances than did the Callcount Survey. Although the Callcount Survey probably provides more within-route accuracy because of survey methods and timing, the Breeding Bird Survey has a larger sample size of survey routes and greater consistency of coverage in the Eastern Unit.

  2. Grizzly Bear Noninvasive Genetic Tagging Surveys: Estimating the Magnitude of Missed Detections.

    Directory of Open Access Journals (Sweden)

    Jason T Fisher

    Sound wildlife conservation decisions require sound information, and scientists increasingly rely on remotely collected data over large spatial scales, such as noninvasive genetic tagging (NGT). Grizzly bears (Ursus arctos), for example, are difficult to study at population scales except with noninvasive data, and NGT via hair trapping informs management over much of grizzly bears' range. Considerable statistical effort has gone into estimating sources of heterogeneity, but detection error, arising when a visiting bear fails to leave a hair sample, has not been independently estimated. We used camera traps to survey grizzly bear occurrence at fixed hair traps and multi-method hierarchical occupancy models to estimate the probability that a visiting bear actually leaves a hair sample with viable DNA. We surveyed grizzly bears via hair trapping and camera trapping for 8 monthly surveys at 50 (2012) and 76 (2013) sites in the Rocky Mountains of Alberta, Canada. We used multi-method occupancy models to estimate site occupancy, probability of detection, and conditional occupancy at a hair trap. We tested the prediction that detection error in NGT studies could be induced by temporal variability within season, leading to underestimation of occupancy. NGT via hair trapping consistently underestimated grizzly bear occupancy at a site when compared to camera trapping. At best, occupancy was underestimated by 50%; at worst, by 95%. Probability of false absence was reduced through successive surveys, but this mainly accounts for error imparted by movement among repeated surveys, not necessarily missed detections by extant bears. The implications of missed detections and biased occupancy estimates for density estimation, which form the crux of management plans, require consideration. We suggest hair-trap NGT studies should estimate and correct detection error using independent survey methods such as cameras, to ensure the reliability of the data upon which species

  3. Grizzly Bear Noninvasive Genetic Tagging Surveys: Estimating the Magnitude of Missed Detections.

    Science.gov (United States)

    Fisher, Jason T; Heim, Nicole; Code, Sandra; Paczkowski, John

    2016-01-01

    Sound wildlife conservation decisions require sound information, and scientists increasingly rely on remotely collected data over large spatial scales, such as noninvasive genetic tagging (NGT). Grizzly bears (Ursus arctos), for example, are difficult to study at population scales except with noninvasive data, and NGT via hair trapping informs management over much of grizzly bears' range. Considerable statistical effort has gone into estimating sources of heterogeneity, but detection error-arising when a visiting bear fails to leave a hair sample-has not been independently estimated. We used camera traps to survey grizzly bear occurrence at fixed hair traps and multi-method hierarchical occupancy models to estimate the probability that a visiting bear actually leaves a hair sample with viable DNA. We surveyed grizzly bears via hair trapping and camera trapping for 8 monthly surveys at 50 (2012) and 76 (2013) sites in the Rocky Mountains of Alberta, Canada. We used multi-method occupancy models to estimate site occupancy, probability of detection, and conditional occupancy at a hair trap. We tested the prediction that detection error in NGT studies could be induced by temporal variability within season, leading to underestimation of occupancy. NGT via hair trapping consistently underestimated grizzly bear occupancy at a site when compared to camera trapping. At best occupancy was underestimated by 50%; at worst, by 95%. Probability of false absence was reduced through successive surveys, but this mainly accounts for error imparted by movement among repeated surveys, not necessarily missed detections by extant bears. The implications of missed detections and biased occupancy estimates for density estimation-which form the crux of management plans-require consideration. We suggest hair-trap NGT studies should estimate and correct detection error using independent survey methods such as cameras, to ensure the reliability of the data upon which species management and

  4. Mean size estimation yields left-side bias: Role of attention on perceptual averaging.

    Science.gov (United States)

    Li, Kuei-An; Yeh, Su-Ling

    2017-11-01

    The human visual system can estimate mean size of a set of items effectively; however, little is known about whether information on each visual field contributes equally to the mean size estimation. In this study, we examined whether a left-side bias (LSB)-perceptual judgment tends to depend more heavily on left visual field's inputs-affects mean size estimation. Participants were instructed to estimate the mean size of 16 spots. In half of the trials, the mean size of the spots on the left side was larger than that on the right side (the left-larger condition) and vice versa (the right-larger condition). Our results illustrated an LSB: A larger estimated mean size was found in the left-larger condition than in the right-larger condition (Experiment 1), and the LSB vanished when participants' attention was effectively cued to the right side (Experiment 2b). Furthermore, the magnitude of LSB increased with stimulus-onset asynchrony (SOA), when spots on the left side were presented earlier than the right side. In contrast, the LSB vanished and then induced a reversed effect with SOA when spots on the right side were presented earlier (Experiment 3). This study offers the first piece of evidence suggesting that LSB does have a significant influence on mean size estimation of a group of items, which is induced by a leftward attentional bias that enhances the prior entry effect on the left side.

  5. Generating Health Estimates by Zip Code: A Semiparametric Small Area Estimation Approach Using the California Health Interview Survey.

    Science.gov (United States)

    Wang, Yueyan; Ponce, Ninez A; Wang, Pan; Opsomer, Jean D; Yu, Hongjian

    2015-12-01

    We propose a method to meet challenges in generating health estimates for granular geographic areas in which the survey sample size is extremely small. Our generalized linear mixed model predicts health outcomes using both individual-level and neighborhood-level predictors. The model's feature of nonparametric smoothing function on neighborhood-level variables better captures the association between neighborhood environment and the outcome. Using 2011 to 2012 data from the California Health Interview Survey, we demonstrate an empirical application of this method to estimate the fraction of residents without health insurance for Zip Code Tabulation Areas (ZCTAs). Our method generated stable estimates of uninsurance for 1519 of 1765 ZCTAs (86%) in California. For some areas with great socioeconomic diversity across adjacent neighborhoods, such as Los Angeles County, the modeled uninsured estimates revealed much heterogeneity among geographically adjacent ZCTAs. The proposed method can increase the value of health surveys by providing modeled estimates for health data at a granular geographic level. It can account for variations in health outcomes at the neighborhood level as a result of both socioeconomic characteristics and geographic locations.

  6. The application of mean field theory to image motion estimation.

    Science.gov (United States)

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.

  7. Estimating mean change in population salt intake using spot urine samples.

    Science.gov (United States)

    Petersen, Kristina S; Wu, Jason H Y; Webster, Jacqui; Grimes, Carley; Woodward, Mark; Nowson, Caryl A; Neal, Bruce

    2017-10-01

    Spot urine samples are easier to collect than 24-h urine samples and have been used with estimating equations to derive the mean daily salt intake of a population. Whether equations using data from spot urine samples can also be used to estimate change in mean daily population salt intake over time is unknown. We compared estimates of change in mean daily population salt intake based upon 24-h urine collections with estimates derived using equations based on spot urine samples. Paired and unpaired 24-h urine samples and spot urine samples were collected from individuals in two Australian populations, in 2011 and 2014. Estimates of change in daily mean population salt intake between 2011 and 2014 were obtained directly from the 24-h urine samples and by applying established estimating equations (Kawasaki, Tanaka, Mage, Toft, INTERSALT) to the data from spot urine samples. Differences between 2011 and 2014 were calculated using mixed models. A total of 1000 participants provided a 24-h urine sample and a spot urine sample in 2011, and 1012 did so in 2014 (paired samples n = 870; unpaired samples n = 1142). The participants were community-dwelling individuals living in the State of Victoria or the town of Lithgow in the State of New South Wales, Australia, with a mean age of 55 years in 2011. The mean (95% confidence interval) difference in population salt intake between 2011 and 2014 determined from the 24-h urine samples was -0.48 g/day (-0.74 to -0.21). The corresponding difference estimated from the spot urine samples was -0.24 g/day (-0.42 to -0.06; P = 0.01) using the Tanaka equation, -0.42 g/day (-0.70 to -0.13; P = 0.004) using the Kawasaki equation, -0.51 g/day (-1.00 to -0.01; P = 0.046) using the Mage equation, -0.26 g/day (-0.42 to -0.10; P = 0.001) using the Toft equation, -0.20 g/day (-0.32 to -0.09; P = 0.001) using the INTERSALT equation, and -0.27 g/day (-0.39 to -0.15). Separate analysis of the unpaired and paired data showed that detection of

  8. Comparison of NIS and NHIS/NIPRCS vaccination coverage estimates. National Immunization Survey. National Health Interview Survey/National Immunization Provider Record Check Study.

    Science.gov (United States)

    Bartlett, D L; Ezzati-Rice, T M; Stokley, S; Zhao, Z

    2001-05-01

The National Immunization Survey (NIS) and the National Health Interview Survey (NHIS) produce national coverage estimates for children aged 19 months to 35 months. The NIS is a cost-effective, random-digit-dialing telephone survey that produces national and state-level vaccination coverage estimates. The National Immunization Provider Record Check Study (NIPRCS) is conducted in conjunction with the annual NHIS, which is a face-to-face household survey. Because the NIS is a telephone survey, it is subject to potential coverage bias: the survey excludes children living in nontelephone households. To assess the validity of estimates of vaccine coverage from the NIS, we compared 1995 and 1996 NIS national estimates with results from the NHIS/NIPRCS for the same years. Both the NIS and the NHIS/NIPRCS produce similar results. The NHIS/NIPRCS supports the findings of the NIS.

  9. Mean Field Games Models-A Brief Survey

    KAUST Repository

    Gomes, Diogo A.

    2013-11-20

    The mean-field framework was developed to study systems with an infinite number of rational agents in competition, which arise naturally in many applications. The systematic study of these problems was started, in the mathematical community by Lasry and Lions, and independently around the same time in the engineering community by P. Caines, Minyi Huang, and Roland Malhamé. Since these seminal contributions, the research in mean-field games has grown exponentially, and in this paper we present a brief survey of mean-field models as well as recent results and techniques. In the first part of this paper, we study reduced mean-field games, that is, mean-field games, which are written as a system of a Hamilton-Jacobi equation and a transport or Fokker-Planck equation. We start by the derivation of the models and by describing some of the existence results available in the literature. Then we discuss the uniqueness of a solution and propose a definition of relaxed solution for mean-field games that allows to establish uniqueness under minimal regularity hypothesis. A special class of mean-field games that we discuss in some detail is equivalent to the Euler-Lagrange equation of suitable functionals. We present in detail various additional examples, including extensions to population dynamics models. This section ends with a brief overview of the random variables point of view as well as some applications to extended mean-field games models. These extended models arise in problems where the costs incurred by the agents depend not only on the distribution of the other agents, but also on their actions. The second part of the paper concerns mean-field games in master form. These mean-field games can be modeled as a partial differential equation in an infinite dimensional space. We discuss both deterministic models as well as problems where the agents are correlated. We end the paper with a mean-field model for price impact. 
© 2013 Springer Science+Business Media New York.
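
The reduced system surveyed above couples a Hamilton-Jacobi equation for the value function u of a representative agent with a Fokker-Planck (transport) equation for the agent distribution m. One common second-order, time-dependent form is the following sketch (sign and smoothing conventions vary across the literature; this is not the authors' exact formulation):

```latex
\[
\begin{cases}
  -\partial_t u - \Delta u + H(x, Du) = g(m), & \text{(Hamilton--Jacobi, backward in time)}\\[2pt]
  \phantom{-}\partial_t m - \Delta m - \operatorname{div}\!\bigl(m\, D_p H(x, Du)\bigr) = 0, & \text{(Fokker--Planck, forward in time)}\\[2pt]
  \phantom{-}u(x,T) = u_T(x), \qquad m(x,0) = m_0(x).
\end{cases}
\]
```

The coupling runs both ways: the distribution m enters the agent's cost through g(m), while the optimal feedback drift D_p H(x, Du) transports m.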

  10. Mean Field Games Models-A Brief Survey

    KAUST Repository

Gomes, Diogo A.; Saúde, João

    2013-01-01

    The mean-field framework was developed to study systems with an infinite number of rational agents in competition, which arise naturally in many applications. The systematic study of these problems was started, in the mathematical community by Lasry and Lions, and independently around the same time in the engineering community by P. Caines, Minyi Huang, and Roland Malhamé. Since these seminal contributions, the research in mean-field games has grown exponentially, and in this paper we present a brief survey of mean-field models as well as recent results and techniques. In the first part of this paper, we study reduced mean-field games, that is, mean-field games, which are written as a system of a Hamilton-Jacobi equation and a transport or Fokker-Planck equation. We start by the derivation of the models and by describing some of the existence results available in the literature. Then we discuss the uniqueness of a solution and propose a definition of relaxed solution for mean-field games that allows to establish uniqueness under minimal regularity hypothesis. A special class of mean-field games that we discuss in some detail is equivalent to the Euler-Lagrange equation of suitable functionals. We present in detail various additional examples, including extensions to population dynamics models. This section ends with a brief overview of the random variables point of view as well as some applications to extended mean-field games models. These extended models arise in problems where the costs incurred by the agents depend not only on the distribution of the other agents, but also on their actions. The second part of the paper concerns mean-field games in master form. These mean-field games can be modeled as a partial differential equation in an infinite dimensional space. We discuss both deterministic models as well as problems where the agents are correlated. We end the paper with a mean-field model for price impact. 
© 2013 Springer Science+Business Media New York.

  11. Estimating the Spatial Distribution of Groundwater Age Using Synoptic Surveys of Environmental Tracers in Streams

    Science.gov (United States)

    Gardner, W. P.

    2017-12-01

A model that simulates tracer concentration in surface water as a function of the age distribution of groundwater discharge is used to characterize groundwater flow systems at a variety of spatial scales. We develop the theory behind the model and demonstrate its application in several groundwater systems of local to regional scale. A 1-D stream transport model, which includes advection, dispersion, gas exchange, first-order decay and groundwater inflow, is coupled to a lumped parameter model that calculates the concentration of environmental tracers in discharging groundwater as a function of the groundwater residence time distribution. The lumped parameters, which describe the residence time distribution, are allowed to vary spatially, and multiple environmental tracers can be simulated. This model allows us to calculate the longitudinal profile of tracer concentration in streams as a function of the spatially variable groundwater age distribution. By fitting model results to observations of stream chemistry and discharge, we can then estimate the spatial distribution of groundwater age. The volume of groundwater discharge to streams can be estimated using a subset of environmental tracers, applied tracers, synoptic stream gauging or other methods, and the age of groundwater then estimated using the previously calculated groundwater discharge and observed environmental tracer concentrations. Synoptic surveys of SF6, CFCs, 3H and 222Rn, along with measured stream discharge, are used to estimate the groundwater inflow distribution and mean age for regional scale surveys of the Berland River in west-central Alberta. We find that groundwater entering the Berland has observable age, and that the age estimated using our stream survey is of similar order to limited samples from groundwater wells in the region. Our results show that the stream can be used as an easily accessible location to constrain the regional scale spatial distribution of groundwater age.
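
The longitudinal tracer balance described above can be sketched numerically. The snippet below integrates a steady-state 1-D stream 222Rn profile with lateral groundwater inflow and gas exchange; the function name and all parameter values are hypothetical, chosen only to illustrate the structure of such a model:

```python
import numpy as np

def radon_profile(x, Q0, q_lat, c_gw, k_gas, width, c0=0.0):
    """Forward-Euler integration of the steady-state stream tracer balance
        dC/dx = (q/Q) * (c_gw - C) - (k_gas * width / Q) * C,
    where lateral groundwater inflow q_lat (m^2/s) adds tracer and gas
    exchange (piston velocity k_gas, m/s) removes it. Q grows downstream
    with the cumulative lateral inflow."""
    c = np.empty_like(x, dtype=float)
    c[0] = c0
    for i in range(1, len(x)):
        dx = x[i] - x[i - 1]
        Q = Q0 + q_lat * x[i - 1]                    # cumulative stream discharge
        dcdx = (q_lat / Q) * (c_gw - c[i - 1]) - (k_gas * width / Q) * c[i - 1]
        c[i] = c[i - 1] + dcdx * dx
    return c

# Illustrative 5 km reach: radon-free at the top, radon-rich groundwater inflow.
x = np.linspace(0.0, 5000.0, 501)                    # 10 m steps
profile = radon_profile(x, Q0=2.0, q_lat=1e-4, c_gw=5000.0, k_gas=2e-5, width=10.0)
```

Fitting a model of this shape to an observed synoptic profile constrains q_lat; the groundwater age then follows from the lumped-parameter relation between residence time and tracer concentration in the inflow.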

  12. Estimates of mean consequences and confidence bounds on the mean associated with low-probability seismic events in total system performance assessments

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James

    2007-01-01

    An approach is described to estimate mean consequences and confidence bounds on the mean of seismic events with low probability of breaching components of the engineered barrier system. The approach is aimed at complementing total system performance assessment models used to understand consequences of scenarios leading to radionuclide releases in geologic nuclear waste repository systems. The objective is to develop an efficient approach to estimate mean consequences associated with seismic events of low probability, employing data from a performance assessment model with a modest number of Monte Carlo realizations. The derived equations and formulas were tested with results from a specific performance assessment model. The derived equations appear to be one method to estimate mean consequences without having to use a large number of realizations. (authors)
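
The central trick, estimating a mean consequence without brute-force sampling of a rare event, can be illustrated with conditional Monte Carlo: run realizations conditional on the seismic event occurring and weight the conditional mean by the event probability. This is a generic sketch, not the authors' derived equations; the function name and values are hypothetical:

```python
import statistics

def mean_consequence(p_event, conditional_consequences, z=1.96):
    """Estimate E[C] = p * E[C | event] from realizations run conditional on
    the rare event, with an approximate upper confidence bound on the mean.
    Far fewer realizations are needed than when sampling the event directly."""
    n = len(conditional_consequences)
    m = statistics.mean(conditional_consequences)
    se = statistics.stdev(conditional_consequences) / n ** 0.5
    return p_event * m, p_event * (m + z * se)   # (mean, upper confidence bound)

# Hypothetical conditional doses from 5 realizations, event probability 1e-4:
mean_c, upper_c = mean_consequence(1e-4, [2.0, 3.0, 2.5, 3.5, 2.0])
```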

  13. Estimation of Areal Mean Rainfall in Remote Areas Using B-SHADE Model

    Directory of Open Access Journals (Sweden)

    Tao Zhang

    2016-01-01

This study presented a method to estimate areal mean rainfall (AMR) using a Biased Sentinel Hospital Based Area Disease Estimation (B-SHADE) model, together with biased rain gauge observations and Tropical Rainfall Measuring Mission (TRMM) data, for remote areas with a sparse and uneven distribution of rain gauges. Based on the B-SHADE model, the best linear unbiased estimation of AMR could be obtained. A case study was conducted for the Three-River Headwaters region in the Tibetan Plateau of China, and its performance was compared with traditional methods. The results indicated that B-SHADE obtained the smallest estimation biases, with a mean error and root mean square error of −0.63 and 3.48 mm, respectively. For the traditional methods, including arithmetic average, Thiessen polygon, and ordinary kriging, the mean errors were 7.11, −1.43, and 2.89 mm, which were up to 1027.1%, 127.0%, and 358.3%, respectively, greater than for the B-SHADE model. The root mean square errors were 10.31, 4.02, and 6.27 mm, which were up to 196.1%, 15.5%, and 80.0%, respectively, higher than for the B-SHADE model. The proposed technique can be used to extend the AMR record to the pre-satellite observation period, when only gauge data are available.
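
The mean error and root mean square error figures quoted above are standard skill metrics; for reference, a minimal sketch of how they are computed (the rainfall values below are made up for illustration):

```python
import numpy as np

def mean_error(est, obs):
    """Mean error (bias): positive values indicate overestimation."""
    return float(np.mean(np.asarray(est) - np.asarray(obs)))

def rmse(est, obs):
    """Root mean square error: overall magnitude of the estimation errors."""
    return float(np.sqrt(np.mean((np.asarray(est) - np.asarray(obs)) ** 2)))

# Hypothetical monthly areal-mean rainfall (mm): estimates vs. a reference
est = [12.0, 30.5, 55.0, 80.0]
obs = [10.0, 32.0, 54.0, 83.0]
```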

  14. Distance estimation experiment for aerial minke whale surveys

    Directory of Open Access Journals (Sweden)

    Lars Witting

    2009-09-01

A comparative study between aerial cue-counting and digital photography surveys for minke whales, conducted in Faxaflói Bay in September 2003, is used to check the perpendicular distances estimated by the cue-counting observers. The study involved 2 aircraft, with the photo plane at 1,700 feet flying above the cue-counting plane at 750 feet. The observer-based distance estimates were calculated from head angles estimated by angle-boards and declination angles estimated by declinometers. These distances were checked against image-based estimates of the perpendicular distance to the same whale. The 2 independent distance estimates were obtained for 21 sightings of minke whale, and there was good agreement between the 2 types of estimates. The relative absolute deviations between the 2 estimates were on average 23% (se: 6%), with the errors in the observer-based distance estimates resembling a log-normal distribution. The linear regression of the observer-based estimates (Obs) on the image-based estimates (Img) was Obs = 1.1 Img (R² = 0.85) with an intercept fixed at zero. There was no evidence of a distance estimation bias that could generate a positive bias in the absolute abundance estimated by cue-counting.
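
The observer-based perpendicular distances in such surveys follow from standard aerial-survey geometry: the declination angle gives the horizontal range from the aircraft altitude, and the head angle projects that range perpendicular to the trackline. A small sketch (the function name is ours; angles in degrees):

```python
import math

def perpendicular_distance(altitude_m, declination_deg, head_angle_deg):
    """Perpendicular distance of a sighting from the trackline.

    altitude_m: aircraft altitude above the sea surface (m)
    declination_deg: angle below the horizontal to the sighting
    head_angle_deg: horizontal angle between trackline and sighting
    """
    horizontal_range = altitude_m / math.tan(math.radians(declination_deg))
    return horizontal_range * math.sin(math.radians(head_angle_deg))

# The cue-counting plane flew at 750 feet (about 229 m):
d = perpendicular_distance(229.0, 20.0, 60.0)
```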

  15. Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality.

    Science.gov (United States)

    Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel

    2015-12-01

    Many people living in low and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data including many household sample surveys are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under five years child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA).

  16. Estimated rate of agricultural injury: the Korean Farmers’ Occupational Disease and Injury Survey

    OpenAIRE

Chae, Hyeseon; Min, Kyungdoo; Youn, Kanwoo; Park, Jinwoo; Kim, Kyungran; Kim, Hyocher; Lee, Kyungsuk

    2014-01-01

    Objectives This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. Methods The first Korean Farmers’ Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. ...

  17. Estimating Mean and Variance Through Quantiles : An Experimental Comparison of Different Methods

    NARCIS (Netherlands)

    Moors, J.J.A.; Strijbosch, L.W.G.; van Groenendaal, W.J.H.

    2002-01-01

If estimates of mean and variance are needed and only experts' opinions are available, the literature agrees that it is wise to ask only for their (subjective) estimates of quantiles: from these, estimates of the desired parameters are calculated. Quite a number of methods have been

  18. New aerial survey and hierarchical model to estimate manatee abundance

    Science.gov (United States)

Langtimm, Catherine A.; Dorazio, Robert M.; Stith, Bradley M.; Doyle, Terry J.

    2011-01-01

    Monitoring the response of endangered and protected species to hydrological restoration is a major component of the adaptive management framework of the Comprehensive Everglades Restoration Plan. The endangered Florida manatee (Trichechus manatus latirostris) lives at the marine-freshwater interface in southwest Florida and is likely to be affected by hydrologic restoration. To provide managers with prerestoration information on distribution and abundance for postrestoration comparison, we developed and implemented a new aerial survey design and hierarchical statistical model to estimate and map abundance of manatees as a function of patch-specific habitat characteristics, indicative of manatee requirements for offshore forage (seagrass), inland fresh drinking water, and warm-water winter refuge. We estimated the number of groups of manatees from dual-observer counts and estimated the number of individuals within groups by removal sampling. Our model is unique in that we jointly analyzed group and individual counts using assumptions that allow probabilities of group detection to depend on group size. Ours is the first analysis of manatee aerial surveys to model spatial and temporal abundance of manatees in association with habitat type while accounting for imperfect detection. We conducted the study in the Ten Thousand Islands area of southwestern Florida, USA, which was expected to be affected by the Picayune Strand Restoration Project to restore hydrology altered for a failed real-estate development. We conducted 11 surveys in 2006, spanning the cold, dry season and warm, wet season. To examine short-term and seasonal changes in distribution we flew paired surveys 1–2 days apart within a given month during the year. Manatees were sparsely distributed across the landscape in small groups. Probability of detection of a group increased with group size; the magnitude of the relationship between group size and detection probability varied among surveys. Probability

  19. Means of surveying contaminated areas resulting from overseas nuclear accidents

    International Nuclear Information System (INIS)

    Looney, J.H.H.; Thorne, M.C.; Dickson, D.M.J.

    1989-09-01

The Chernobyl accident is briefly reviewed as a useful basis to examine some of the considerations related to the design of surveys. The plans and procedures of key European and North American countries are reviewed, as well as the plans and capabilities of UK facilities and government agencies. The survey design incorporates the concepts of land use category, topography, climate, etc. and discusses the spatial and temporal scale requirements. Use of a Geographic Information System is recommended to co-ordinate the data. Models address the requirement to detect an annual effective dose equivalent of 0.5 mSv to an individual in the first year following the accident. The equipment requirements are based on transit-type vans, each, preferably, with one or two gamma spectrometers, MCAs and ancillary equipment, with three teams of two men. This unit could survey about 150 km² within a larger area in 3 days. The cost per survey team is estimated to be £60,000-£80,000 in the first year, with annual costs of £20,000-23,000. (author)

  20. Comparing two survey methods for estimating maternal and perinatal mortality in rural Cambodia.

    Science.gov (United States)

    Chandy, Hoeuy; Heng, Yang Van; Samol, Ha; Husum, Hans

    2008-03-01

We need solid estimates of maternal mortality rates (MMR) to monitor the impact of maternal care programs. Cambodian health authorities and WHO report the MMR in Cambodia at 450 per 100,000 live births. The figure is drawn from surveys where information is obtained by interviewing respondents about the survival of all their adult sisters (sisterhood method). The estimate is statistically imprecise, with 95% confidence intervals ranging from 260 to 620/100,000. The MMR estimate is also uncertain due to under-reporting; where 80-90% of women deliver at home, maternal fatalities may go undetected, especially where mortality is highest, in remote rural areas. The aim of this study was to attain more reliable MMR estimates by using survey methods other than the sisterhood method prior to an intervention targeting rural obstetric emergencies. The study was carried out in rural Northwestern Cambodia, where access to health services is poor and poverty, endemic diseases, and land mines are widespread. Two survey methods were applied in two separate sectors: a community-based survey gathering data from public sources and a household survey gathering data directly from primary sources. There was no statistically significant difference between the two survey results for maternal deaths; both types of survey reported mortality rates around the official figure. The household survey reported a significantly higher perinatal mortality rate than the community-based survey, 8.6% versus 5.0%. The household survey also gave qualitative data important for a better understanding of the many problems faced by mothers giving birth in the remote villages. There are detection failures in both surveys; the failure rate may be as high as 30-40%. PRINCIPAL CONCLUSION: Both survey methods are inaccurate and therefore inappropriate for evaluation of short-term changes in mortality rates. 
Surveys based on primary informants yield qualitative information about mothers' hardships important for the design

  1. Eigenvalue estimates for submanifolds with bounded f-mean curvature

    Indian Academy of Sciences (India)

    GUANGYUE HUANG

College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007 ... submanifolds in a hyperbolic space with the norm of their mean curvature vector bounded above by a constant. ... [2] Batista M, Cavalcante M P and Pyo J, Some isoperimetric inequalities and eigenvalue estimates in ...

  2. Aerial surveys adjusted by ground surveys to estimate area occupied by black-tailed prairie dog colonies

    Science.gov (United States)

    Sidle, John G.; Augustine, David J.; Johnson, Douglas H.; Miller, Sterling D.; Cully, Jack F.; Reading, Richard P.

    2012-01-01

Aerial surveys using line-intercept methods are one approach to estimate the extent of prairie dog colonies in a large geographic area. Although black-tailed prairie dogs (Cynomys ludovicianus) construct conspicuous mounds at burrow openings, aerial observers have difficulty discriminating between areas with burrows occupied by prairie dogs (colonies) versus areas of uninhabited burrows (uninhabited colony sites). Consequently, aerial line-intercept surveys may overestimate prairie dog colony extent unless adjusted by an on-the-ground inspection of a sample of intercepts. We compared aerial line-intercept surveys conducted over 2 National Grasslands in Colorado, USA, with independent ground-mapping of known black-tailed prairie dog colonies. Aerial line-intercepts adjusted by ground surveys using a single activity category adjustment overestimated colonies by ≥94% on the Comanche National Grassland and ≥58% on the Pawnee National Grassland. We present a ground-survey technique that involves 1) visiting on the ground a subset of aerial intercepts classified as occupied colonies plus a subset of intercepts classified as uninhabited colony sites, and 2) based on these ground observations, recording the proportion of each aerial intercept that intersects a colony and the proportion that intersects an uninhabited colony site. Where line-intercept techniques are applied to aerial surveys or remotely sensed imagery, this method can provide more accurate estimates of black-tailed prairie dog abundance and trends.

  3. Mean value estimates of the error terms of Lehmer problem

    Indian Academy of Sciences (India)

Mean value estimates of the error terms of Lehmer problem. DONGMEI REN1 and YAMING ... For further properties of N(a, p) in [6], he studied the mean square value of the error term E(a, p) = N(a, p) − (1/2)(p − 1). ... [1] Apostol Tom M, Introduction to Analytic Number Theory (New York: Springer-Verlag) (1976). [2] Guy R K ...

  4. The Design Model of Multilevel Estimation Means for Students’ Competence Assessment at Technical Higher School

    Directory of Open Access Journals (Sweden)

    O. F. Shikhova

    2012-01-01

The paper considers research findings aimed at developing a new quality testing technique for student assessment at technical higher schools. A model of multilevel estimation means is provided for diagnosing the level of general cultural and professional competences of students taking a bachelor degree in technological fields. The model implies the integrative character of specialist training: the combination of both the psycho-pedagogic (invariable) and engineering (variable) components, as well as a qualimetric approach substantiating the system of student competence estimation and providing the most adequate assessment means. The principles of designing the multilevel estimation means are defined, along with the methodological approaches to their implementation. For a reasoned selection of estimation means, a system of quality criteria is proposed by the authors, based on group expert assessment. The research findings can be used for designing competence-oriented estimation means.

  5. Estimates of the abundance of minke whales (Balaenoptera acutorostrata) from Faroese and Icelandic NASS shipboard surveys

    Directory of Open Access Journals (Sweden)

    Daniel G Pike

    2009-09-01

North Atlantic Sightings Surveys for cetaceans were carried out in the Northeast and Central Atlantic in 1987, 1989, 1995 and 2001. Here we provide estimates of density and abundance for minke whales from the Faroese and Icelandic ship surveys. The estimates are not corrected for availability or perception biases. Double platform data collected in 2001 indicate that perception bias is likely considerable for this species. However, comparison of corrected estimates of density from aerial surveys with a ship survey estimate from the same area suggests that ship surveys can be nearly unbiased under optimal survey conditions with high searching effort. There were some regional changes in density over the period but no overall changes in density and abundance. Given the recent catch history for minke whales in this area, we would not expect to see changes in abundance due to exploitation that would be detectable with these surveys.

  6. Stereological estimation of the mean and variance of nuclear volume from vertical sections

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1991-01-01

The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear vV, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform ... probability in a physical disector and Cavalieri's direct estimator of volume, the unbiased, number-weighted mean nuclear volume, nuclear vN, of the same benign and malignant nuclear populations is also estimated. Having obtained estimates of nuclear volume in both the volume- and number distribution ... to the larger malignant nuclei. Finally, the variance in the volume distribution of nuclear volume is estimated by shape-independent estimates of the volume-weighted second moment of the nuclear volume, vV2, using both a manual and a computer-assisted approach. The working procedure for the description of 3-D

  7. Nano-hardness estimation by means of Ar+ ion etching

    International Nuclear Information System (INIS)

    Bartali, R.; Micheli, V.; Gottardi, G.; Vaccari, A.; Safeen, M.K.; Laidani, N.

    2015-01-01

When coatings are at the nanoscale, their mechanical properties cannot easily be estimated by conventional methods due to tip shape, instrument resolution, roughness, and substrate effects. In this paper, we propose a semi-empirical method to evaluate the mechanical properties of thin films based on the sputtering rate induced by bombardment with Ar+ ions. The Ar+ ion bombardment was produced by the ion gun of an Auger electron spectroscopy (AES) system. This procedure was applied to a series of coatings with different structures (carbon films) and a series of coatings with different densities (ZnO thin films). The coatings were deposited on silicon substrates by RF sputtering plasma. The results show that, as predicted by Insepov et al., there is a correlation between hardness and sputtering rate. Using reference materials and a simple power law equation, estimation of the nano-hardness using an Ar+ beam is possible. - Highlights: • ZnO films and carbon films were grown on silicon using PVD. • The growth temperature was room temperature. • The hardness of the coatings was estimated by means of nanoindentation. • Evaluation of the resistance of materials to mechanical damage induced by an Ar+ ion gun (AES). • The hardness has been studied and a power law with the erosion rate has been found.

  8. Estimating mountain basin-mean precipitation from streamflow using Bayesian inference

    Science.gov (United States)

    Henn, Brian; Clark, Martyn P.; Kavetski, Dmitri; Lundquist, Jessica D.

    2015-10-01

    Estimating basin-mean precipitation in complex terrain is difficult due to uncertainty in the topographical representativeness of precipitation gauges relative to the basin. To address this issue, we use Bayesian methodology coupled with a multimodel framework to infer basin-mean precipitation from streamflow observations, and we apply this approach to snow-dominated basins in the Sierra Nevada of California. Using streamflow observations, forcing data from lower-elevation stations, the Bayesian Total Error Analysis (BATEA) methodology and the Framework for Understanding Structural Errors (FUSE), we infer basin-mean precipitation, and compare it to basin-mean precipitation estimated using topographically informed interpolation from gauges (PRISM, the Parameter-elevation Regression on Independent Slopes Model). The BATEA-inferred spatial patterns of precipitation show agreement with PRISM in terms of the rank of basins from wet to dry but differ in absolute values. In some of the basins, these differences may reflect biases in PRISM, because some implied PRISM runoff ratios may be inconsistent with the regional climate. We also infer annual time series of basin precipitation using a two-step calibration approach. Assessment of the precision and robustness of the BATEA approach suggests that uncertainty in the BATEA-inferred precipitation is primarily related to uncertainties in hydrologic model structure. Despite these limitations, time series of inferred annual precipitation under different model and parameter assumptions are strongly correlated with one another, suggesting that this approach is capable of resolving year-to-year variability in basin-mean precipitation.

  9. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim is to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
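
The estimator at the heart of the multiplier method is N = M / P. A delta-method sketch of the size estimate and an approximate confidence interval, with the design effect entering through the variance of P (the function name and all numbers are illustrative):

```python
import math

def multiplier_estimate(M, p_hat, n, deff=2.0):
    """Multiplier-method population size estimate N = M / P with an
    approximate 95% CI. Var(P) is inflated by an assumed design effect
    (deff) for the respondent-driven sampling survey."""
    N = M / p_hat
    var_p = deff * p_hat * (1.0 - p_hat) / n
    se_N = M * math.sqrt(var_p) / p_hat ** 2   # delta method: |dN/dP| * se(P)
    return N, (N - 1.96 * se_N, N + 1.96 * se_N)

# e.g. M = 1000 unique objects distributed; 50% of 400 respondents report receipt
N, (lo, hi) = multiplier_estimate(1000, 0.5, 400, deff=1.0)
```

Note how the squared p_hat in the denominator of se_N makes the interval blow up when P is small, which is exactly the regime the abstract warns about.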

  10. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    Science.gov (United States)

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean when using small samples, enabling researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
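A normal-approximation version of this idea is easy to state: since Var(X̄) = σ²/n, the probability that the sample mean lies within f·σ of the true mean is 2Φ(f√n) − 1. The sketch below assumes normality and a known σ; the authors' exact procedure may differ.

```python
import math

def prob_within_fraction(n, f):
    """P(|sample mean - true mean| <= f * sigma), normal approximation.

    Since Var(X-bar) = sigma^2 / n, the event is |Z| <= f * sqrt(n)
    for a standard normal Z, giving 2 * Phi(f * sqrt(n)) - 1.
    """
    z = f * math.sqrt(n)
    phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    return 2 * phi - 1

# Even n = 10 gives a fairly high probability of landing within 0.5 sigma:
p = prob_within_fraction(10, 0.5)
```

This mirrors the paper's conclusion that even very small samples can yield a meaningful approximation of the population mean, with the probability rising quickly in n.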

  11. A Class of Estimators for Finite Population Mean in Double Sampling under Nonresponse Using Fractional Raw Moments

    Directory of Open Access Journals (Sweden)

    Manzoor Khan

    2014-01-01

    This paper presents new classes of estimators for estimating the finite population mean under double sampling in the presence of nonresponse, using information on fractional raw moments. The expressions for the mean square error of the proposed classes of estimators are derived up to the first degree of approximation. It is shown that a proposed class of estimators performs better than the usual mean estimator, ratio-type estimators, and the Singh and Kumar (2009) estimator. An empirical study is carried out to demonstrate the performance of a proposed class of estimators.

  12. Estimation of average causal effect using the restricted mean residual lifetime as effect measure

    DEFF Research Database (Denmark)

    Mansourvar, Zahra; Martinussen, Torben

    2017-01-01

    Although mean residual lifetime is often of interest in biomedical studies, restricted mean residual lifetime must be considered in order to accommodate censoring. Differences in the restricted mean residual lifetime can be used as an appropriate quantity for comparing different treatment groups with respect to their survival times. In observational studies where the factor of interest is not randomized, covariate adjustment is needed to take into account imbalances in confounding factors. In this article, we develop an estimator for the average causal treatment difference using the restricted mean residual lifetime as target parameter, accounting for confounding factors using the Aalen additive hazards model. The large sample property of the proposed estimator is established, and simulation studies are conducted in order to assess the small sample performance of the resulting estimator. The method is also…

  13. Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys

    Science.gov (United States)

    Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik

    2011-01-01

    The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...

  14. Comparing cancer screening estimates: Behavioral Risk Factor Surveillance System and National Health Interview Survey.

    Science.gov (United States)

    Sauer, Ann Goding; Liu, Benmei; Siegel, Rebecca L; Jemal, Ahmedin; Fedewa, Stacey A

    2018-01-01

    Cancer screening prevalence from the Behavioral Risk Factor Surveillance System (BRFSS), designed to provide state-level estimates, and the National Health Interview Survey (NHIS), designed to provide national estimates, are used to measure progress in cancer control. A detailed description of the extent to which recent cancer screening estimates vary by key demographic characteristics has not been previously described. We examined national prevalence estimates for recommended breast, cervical, and colorectal cancer screening using data from the 2012 and 2014 BRFSS and the 2010 and 2013 NHIS. Treating the NHIS estimates as the reference, direct differences (DD) were calculated by subtracting NHIS estimates from BRFSS estimates. Relative differences were computed by dividing the DD by the NHIS estimates. Two-sample t-tests (2-tailed) were performed to test for statistically significant differences. BRFSS screening estimates were higher than those from the NHIS, for example for breast cancer screening (78.4% versus 72.5%; DD=5.9%). Despite these differences, each survey has a unique and important role in providing information to track cancer screening utilization among various populations. Awareness of these differences and their potential causes is important when comparing the surveys and determining the best application for each data source. Copyright © 2017 Elsevier Inc. All rights reserved.
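The direct and relative differences reduce to simple arithmetic; using the breast cancer screening figures quoted in the abstract:

```python
# Breast cancer screening prevalence (%), as reported in the abstract:
brfss, nhis = 78.4, 72.5
dd = brfss - nhis        # direct difference: BRFSS minus the NHIS reference
rd = dd / nhis * 100     # relative difference, as a percent of the NHIS estimate
```

Here `dd` is 5.9 percentage points and `rd` is roughly 8% of the NHIS reference value.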

  15. Estimating international interindustry linkages : Non-survey simulations of the Asian-Pacific economy

    NARCIS (Netherlands)

    Oosterhaven, J.; Stelder, T.M.

    2008-01-01

    This paper evaluates a recently published semi-survey international input-output table for nine East-Asian countries and the USA with four non-survey estimation alternatives. A new generalized RAS procedure is used with stepwise increasing information from both import and export statistics as

  16. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    Science.gov (United States)

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
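A toy ABC rejection sampler conveys the idea: propose (mean, sd) pairs from crude priors, simulate normal samples, and keep the proposals whose simulated median/min/max best match the reported summaries. The priors, distance function, and acceptance fraction below are illustrative assumptions, far simpler than the published method.

```python
import random
import statistics

def abc_mean_sd(obs_median, obs_min, obs_max, n, draws=20000, keep=0.01):
    """Toy ABC rejection sampler: recover (mean, sd) of a normal sample
    from its reported median, minimum and maximum.

    The uniform priors and L1 distance here are crude illustrative choices.
    """
    spread = obs_max - obs_min
    scored = []
    for _ in range(draws):
        mu = random.uniform(obs_min, obs_max)    # prior for the mean
        sigma = random.uniform(1e-6, spread)     # prior for the sd
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        dist = (abs(statistics.median(sample) - obs_median)
                + abs(min(sample) - obs_min)
                + abs(max(sample) - obs_max)) / spread
        scored.append((dist, mu, sigma))
    scored.sort()                                # closest simulations first
    top = scored[: max(1, int(keep * draws))]
    return (statistics.mean(m for _, m, _ in top),
            statistics.mean(s for _, _, s in top))

random.seed(1)
mu_hat, sd_hat = abc_mean_sd(obs_median=10.0, obs_min=4.0, obs_max=16.0, n=50)
```

For a normal sample of size 50 the range spans roughly 4.5 standard deviations, so with a range of 12 the accepted sd values cluster near 2.7 and the mean near the reported median.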

  17. Stereological estimation of the mean and variance of nuclear volume from vertical sections

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1991-01-01

    The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear vV, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform probability in a physical disector and Cavalieri's direct estimator of volume, the unbiased, number-weighted mean nuclear volume, nuclear vN, of the same benign and malignant nuclear populations is also estimated. Having obtained estimates of nuclear volume in both the volume- and number distribution of volume, a detailed investigation of nuclear size variability is possible. Benign and malignant nuclear populations show approximately the same relative variability with regard to nuclear volume, and the presented data are compatible with a simple size transformation from the smaller benign nuclei…

  18. Using survey data on inflation expectations in the estimation of learning and rational expectations models

    NARCIS (Netherlands)

    Ormeño, A.

    2012-01-01

    Do survey data on inflation expectations contain useful information for estimating macroeconomic models? I address this question by using survey data in the New Keynesian model by Smets and Wouters (2007) to estimate and compare its performance when solved under the assumptions of Rational

  19. Utility Estimation for Pediatric Vesicoureteral Reflux: Methodological Considerations Using an Online Survey Platform.

    Science.gov (United States)

    Tejwani, Rohit; Wang, Hsin-Hsiao S; Lloyd, Jessica C; Kokorowski, Paul J; Nelson, Caleb P; Routh, Jonathan C

    2017-03-01

    The advent of online task distribution has opened a new avenue for efficiently gathering the community perspectives needed for utility estimation. Methodological consensus for estimating pediatric utilities is lacking, with disagreement over whom to sample, what perspective to use (patient vs parent) and whether instrument-induced anchoring bias is significant. We evaluated which methodological factors potentially impact utility estimates for vesicoureteral reflux. Cross-sectional surveys using a time trade-off instrument were conducted via the Amazon Mechanical Turk® (https://www.mturk.com) online interface. Respondents were randomized to answer questions from child, parent or dyad perspectives on the utility of a vesicoureteral reflux health state and 1 of 3 "warm-up" scenarios (paralysis, common cold, none) before a vesicoureteral reflux scenario. Utility estimates and potential predictors were fitted to a generalized linear model to determine which factors most impacted utilities. A total of 1,627 responses were obtained. Mean respondent age was 34.9 years. Of the respondents 48% were female, 38% were married and 44% had children. Utility values were uninfluenced by child/personal vesicoureteral reflux/urinary tract infection history, income or race. Utilities were affected by perspective and were higher in the child group (34% lower in the parent versus the child perspective), a factor to consider when estimating utilities for pediatric conditions. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  20. Adjusting forest density estimates for surveyor bias in historical tree surveys

    Science.gov (United States)

    Brice B. Hanberry; Jian Yang; John M. Kabrick; Hong S. He

    2012-01-01

    The U.S. General Land Office surveys, conducted between the late 1700s to early 1900s, provide records of trees prior to widespread European and American colonial settlement. However, potential and documented surveyor bias raises questions about the reliability of historical tree density estimates and other metrics based on density estimated from these records. In this...

  1. Self-reported physical activity among blacks: estimates from national surveys.

    Science.gov (United States)

    Whitt-Glover, Melicia C; Taylor, Wendell C; Heath, Gregory W; Macera, Caroline A

    2007-11-01

    National surveillance data provide population-level estimates of physical activity participation, but generally do not include detailed subgroup analyses, which could provide a better understanding of physical activity among subgroups. This paper presents a descriptive analysis of self-reported regular physical activity among black adults using data from the 2003 Behavioral Risk Factor Surveillance System (n=19,189), the 2004 National Health Interview Survey (n=4263), and the 1999-2004 National Health and Nutrition Examination Survey (n=3407). Analyses were conducted between January and March 2006. Datasets were analyzed separately to estimate the proportion of black adults meeting national physical activity recommendations overall and stratified by gender and other demographic subgroups. The proportion of black adults reporting regular PA ranged from 24% to 36%. Regular physical activity was highest among men; younger age groups; highest education and income groups; those who were employed and married; overweight, but not obese, men; and normal-weight women. This pattern was consistent across surveys. The observed physical activity patterns were consistent with national trends. The data suggest that older black adults and those with low education and income levels are at greatest risk for inactive lifestyles and may require additional attention in efforts to increase physical activity in black adults. The variability across datasets reinforces the need for objective measures in national surveys.

  2. Estimation of mean grain size of seafloor sediments using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    De, C.; Chakraborty, B.

    The feasibility of an artificial neural network based approach is investigated to estimate the values of mean grain size of seafloor sediments using four dominant echo features, extracted from acoustic backscatter data. The acoustic backscatter data...

  3. Mean total arsenic concentrations in chicken 1989-2000 and estimated exposures for consumers of chicken.

    OpenAIRE

    Lasky, Tamar; Sun, Wenyu; Kadry, Abdel; Hoffman, Michael K

    2004-01-01

    The purpose of this study was to estimate mean concentrations of total arsenic in chicken liver tissue and then estimate total and inorganic arsenic ingested by humans through chicken consumption. We used national monitoring data from the Food Safety and Inspection Service National Residue Program to estimate mean arsenic concentrations for 1994-2000. Incorporating assumptions about the concentrations of arsenic in liver and muscle tissues as well as the proportions of inorganic and organic a...

  4. Nano-hardness estimation by means of Ar{sup +} ion etching

    Energy Technology Data Exchange (ETDEWEB)

    Bartali, R., E-mail: bartali@fbk.eu; Micheli, V.; Gottardi, G.; Vaccari, A.; Safeen, M.K.; Laidani, N.

    2015-08-31

    When coatings are at the nanoscale, their mechanical properties cannot be easily estimated by conventional methods, due to tip shape, instrument resolution, roughness, and substrate effects. In this paper, we propose a semi-empirical method to evaluate the mechanical properties of thin films based on the sputtering rate induced by bombardment with Ar{sup +} ions. The Ar{sup +} ion bombardment was produced by the ion gun of an Auger electron spectroscopy (AES) instrument. This procedure was applied to a series of coatings with different structures (carbon films) and a series of coatings with different densities (ZnO thin films). The coatings were deposited on silicon substrates by RF plasma sputtering. The results show that, as predicted by Insepov et al., there is a correlation between hardness and sputtering rate. Using reference materials and a simple power-law equation, estimation of nano-hardness using an Ar{sup +} beam is possible. - Highlights: • ZnO films and carbon films were grown on silicon using PVD. • The growth temperature was room temperature. • The hardness of the coatings was estimated by nanoindentation. • The resistance of the materials to mechanical damage induced by an Ar{sup +} ion gun (AES) was evaluated. • A power-law relation between hardness and erosion rate was found.

  5. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  6. Insights from Machine Learning for Evaluating Production Function Estimators on Manufacturing Survey Data

    OpenAIRE

    Arreola, José Luis Preciado; Johnson, Andrew L.

    2016-01-01

    Organizations like census bureaus rely on non-exhaustive surveys to estimate industry population-level production functions. In this paper we propose selecting an estimator based on a weighting of its in-sample and predictive performance on actual application datasets. We compare Cobb-Douglas functional assumptions to existing nonparametric shape-constrained estimators and a newly proposed estimator presented in this paper. For simulated data, we find that our proposed estimator has the lowest…

  7. Estimation of Social Exclusion Indicators from Complex Surveys: The R Package laeken

    Directory of Open Access Journals (Sweden)

    Andreas Alfons

    2013-09-01

    Units sampled from finite populations typically come with different inclusion probabilities. Together with additional preprocessing steps of the raw data, this yields unequal sampling weights for the observations. Whenever indicators are estimated from such complex samples, the corresponding sampling weights have to be taken into account. In addition, many indicators suffer from a strong influence of outliers, which are a common problem in real-world data. The R package laeken is an object-oriented toolkit for the estimation of indicators from complex survey samples via standard or robust methods. In particular, the most widely used social exclusion and poverty indicators are implemented in the package. A general calibrated bootstrap method to estimate the variance of indicators for common survey designs is included as well. Furthermore, the package contains synthetically generated close-to-reality data for the European Union Statistics on Income and Living Conditions and the Structure of Earnings Survey, which are used in the code examples throughout the paper. Even though the paper is focused on showing the functionality of package laeken, it also provides a brief mathematical description of the implemented indicator methodology.

  8. New a priori estimates for mean-field games with congestion

    KAUST Repository

    Evangelista, David; Gomes, Diogo A.

    2016-01-01

    We present recent developments in crowd dynamics models (e.g. pedestrian flow problems). Our formulation is given by a mean-field game (MFG) with congestion. We start by reviewing earlier models and results. Next, we develop our model. We establish new a priori estimates that give partial regularity of the solutions. Finally, we discuss numerical results.

  9. New a priori estimates for mean-field games with congestion

    KAUST Repository

    Evangelista, David

    2016-01-06

    We present recent developments in crowd dynamics models (e.g. pedestrian flow problems). Our formulation is given by a mean-field game (MFG) with congestion. We start by reviewing earlier models and results. Next, we develop our model. We establish new a priori estimates that give partial regularity of the solutions. Finally, we discuss numerical results.

  10. Faculty Prayer in Catholic Schools: A Survey of Practices and Meaning

    Science.gov (United States)

    Mayotte, Gail

    2010-01-01

    This article presents a research study that utilized a web-based survey to gather data about the communal prayer experiences of faculty members in Catholic elementary and secondary schools in the United States and the meaning that such prayer holds to its participants. Key findings show that faculty prayer experiences take place readily, though…

  11. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    Science.gov (United States)

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
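The snapshot logic behind such surveys — the count of people in the early biomarker stage, divided by the uninfected count, scaled by the mean duration of that stage — also yields a rough sample size rule when the early-stage count is treated as Poisson. Both functions below are hedged approximations that ignore uncertainty in the stage duration, which the paper shows can matter; the function names and example numbers are illustrative.

```python
import math

def incidence_snapshot(n_recent, n_negative, mean_window_years):
    """Snapshot incidence estimate: people found in a transient 'recently
    infected' biomarker stage, divided by those uninfected, scaled by the
    mean time spent in that stage."""
    return (n_recent / n_negative) / mean_window_years

def n_negative_for_cv(incidence, mean_window_years, target_cv):
    """Approximate number of HIV-negative participants needed so that the
    expected count of 'recent' cases achieves the target coefficient of
    variation, treating that count as Poisson and ignoring
    window-duration uncertainty."""
    expected_recent = 1.0 / target_cv ** 2  # CV of a Poisson count is 1/sqrt(mean)
    return math.ceil(expected_recent / (incidence * mean_window_years))

# e.g. 1% annual incidence, a 130-day mean window, and a 25% target CV:
n = n_negative_for_cv(0.01, 130 / 365, 0.25)
```

The rule makes the paper's point concrete: precise incidence estimation at low incidence requires surveying thousands of uninfected individuals, before even accounting for duration uncertainty.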

  12. Estimating health expectancies from two cross-sectional surveys: The intercensal method

    Directory of Open Access Journals (Sweden)

    Michel Guillot

    2009-10-01

    Health expectancies are key indicators for monitoring the health of populations, as well as for informing debates about compression or expansion of morbidity. However, current methodologies for estimating them are not entirely satisfactory. They are either of limited applicability because of high data requirements (the multistate method) or based on questionable assumptions (the Sullivan method). This paper proposes a new method, called the "intercensal" method, which relies on the multistate framework but uses widely available data. The method uses age-specific proportions "healthy" at two successive, independent cross-sectional health surveys and, together with information on general mortality, solves for the set of transition probabilities that produces the observed sequence of proportions healthy. The system is solved by making realistic parametric assumptions about the age patterns of transition probabilities. Using data from the Health and Retirement Study (HRS) and from the National Health Interview Survey (NHIS), the method is tested against both the multistate method and the Sullivan method. We conclude that the intercensal approach is a promising framework for the indirect estimation of health expectancies.

  13. The impact of the mode of survey administration on estimates of daily smoking for mobile phone only users

    Directory of Open Access Journals (Sweden)

    Joseph Hanna

    2017-04-01

    Background: Over the past decade, there have been substantial changes in landline and mobile phone ownership, with a substantial increase in the proportion of mobile-only households. Estimates of daily smoking rates for the mobile phone only (MPO) population have been found to be substantially higher than for the rest of the population, and telephone surveys that use a dual sampling frame (landline and mobile phones) are now considered best practice. Smoking is seen as an undesirable behaviour; measuring such behaviours using an interviewer may lead to lower estimates in telephone-based surveys compared to self-administered approaches. This study aims to assess whether the higher daily smoking estimates observed for the mobile phone only population can be explained by administrative features of surveys, after accounting for differences between the phone ownership groups. Methods: Data on New South Wales (NSW) residents aged 18 years or older from the NSW Population Health Survey (PHS), a telephone survey, and the National Drug Strategy Household Survey (NDSHS), a self-administered survey, were combined, with weights adjusted to match the 2013 population. Design-adjusted prevalence estimates and odds ratios were calculated using survey analysis procedures available in SAS 9.4. Results: Both the PHS and NDSHS gave the same estimate for daily smoking (12%) and similar estimates for MPO users (20% and 18%, respectively). Pooled data showed that daily smoking was 19% for MPO users, compared to 10% for dual phone owners and 12% for landline phone only users. Prevalence estimates for MPO users across both surveys were consistently higher than for other phone ownership groups. Differences in estimates for the MPO population compared to other phone ownership groups persisted even after adjustment for the mode of collection and demographic factors.
Conclusions: Daily smoking rates were consistently higher for the mobile phone only population and this was…

  14. Using cost-effectiveness estimates from survey data to guide commissioning: an application to home care.

    Science.gov (United States)

    Forder, Julien; Malley, Juliette; Towers, Ann-Marie; Netten, Ann

    2014-08-01

    The aim is to describe and trial a pragmatic method to produce estimates of the incremental cost-effectiveness of care services from survey data. The main challenge is in estimating the counterfactual; that is, what the patient's quality of life would be if they did not receive that level of service. A production function method is presented, which seeks to distinguish the variation in care-related quality of life in the data that is due to service use as opposed to other factors. A problem is that relevant need factors also affect the amount of service used, and therefore any missing factors could create endogeneity bias. Instrumental variable estimation can mitigate this problem. This method was applied to a survey of older people using home care as a proof of concept. In the analysis, we were able to estimate a quality-of-life production function from survey data, with the expected form and robust estimation diagnostics. The practical advantages of this method are clear, but there are limitations. It is computationally complex, and there is a risk of misspecification and biased results, particularly with IV estimation. One strategy would be to use this method to produce preliminary estimates, with a full trial conducted thereafter, if indicated. Copyright © 2013 John Wiley & Sons, Ltd.

  15. On the mean squared error of the ridge estimator of the covariance and precision matrix

    NARCIS (Netherlands)

    van Wieringen, Wessel N.

    2017-01-01

    For a suitably chosen ridge penalty parameter, the ridge regression estimator uniformly dominates the maximum likelihood regression estimator in terms of the mean squared error. Analogous results for the ridge maximum likelihood estimators of covariance and precision matrix are presented.

  16. Estimating trends in alligator populations from nightlight survey data

    Science.gov (United States)

    Fujisaki, Ikuko; Mazzotti, Frank J.; Dorazio, Robert M.; Rice, Kenneth G.; Cherkiss, Michael; Jeffery, Brian

    2011-01-01

    Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals showed either an increasing or an unclear trend. For the small and large size classes, estimated detection probability declined as water depth increased, and detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.

  17. Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion

    Directory of Open Access Journals (Sweden)

    David Shilane

    2013-01-01

    The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.
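A toy version of the direct zero-removal variant can be sketched as follows; the number of zeros to drop and the example data are illustrative assumptions, not the paper's calibrated procedure.

```python
import math
import statistics

def growth_ci(data, k=1, z=1.96):
    """Toy 'growth estimator' interval for a heavily zero-inflated sample:
    drop up to k zeros from the data before forming the usual
    normal-style confidence interval for the mean.
    (A simplified reading of the direct zero-removal variant.)"""
    trimmed = list(data)
    for _ in range(k):
        if 0 in trimmed:
            trimmed.remove(0)
    m = statistics.mean(trimmed)
    se = statistics.stdev(trimmed) / math.sqrt(len(trimmed))
    return m, (m - z * se, m + z * se)

# Zero-inflated example: 30 zeros and a handful of positive counts.
data = [0] * 30 + [1, 2, 5, 9, 14]
m, (lo, hi) = growth_ci(data, k=2)
```

Removing a predetermined number of zeros nudges the point estimate upward, which is the deliberate bias the abstract alludes to; no dispersion parameter is estimated anywhere.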

  18. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  19. Statistical properties of the anomalous scaling exponent estimator based on time-averaged mean-square displacement

    Science.gov (United States)

    Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis

    2017-08-01

    The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.
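The estimator itself is just an ordinary least-squares slope at log-log scale. The sketch below checks it on ordinary Brownian motion (the H = 0.5 special case of FBM, for which exact simulation is a cumulative sum of Gaussian steps), where the anomalous exponent should come out near 2H = 1; exact FBM simulation for other H is more involved and is not attempted here.

```python
import math
import random

def tamsd(traj, lag):
    """Time-averaged mean-square displacement of a 1D trajectory at one lag."""
    n = len(traj) - lag
    return sum((traj[i + lag] - traj[i]) ** 2 for i in range(n)) / n

def scaling_exponent(traj, max_lag=10):
    """Least-squares slope of log TA-MSD versus log lag."""
    xs = [math.log(k) for k in range(1, max_lag + 1)]
    ys = [math.log(tamsd(traj, k)) for k in range(1, max_lag + 1)]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Ordinary Brownian motion: cumulative sum of i.i.d. Gaussian increments.
random.seed(0)
traj = [0.0]
for _ in range(100_000):
    traj.append(traj[-1] + random.gauss(0, 1))
alpha = scaling_exponent(traj)
```

For a trajectory this long the slope estimate is tightly concentrated around 1, illustrating the asymptotic unbiasedness and vanishing variance discussed in the abstract.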

  20. Estimating survival probabilities by exposure levels: utilizing vital statistics and complex survey data with mortality follow-up.

    Science.gov (United States)

    Landsman, V; Lou, W Y W; Graubard, B I

    2015-05-20

    We present a two-step approach for estimating hazard rates and, consequently, survival probabilities, by levels of general categorical exposure. The resulting estimator utilizes three sources of data: vital statistics data and census data are used at the first step to estimate the overall hazard rate for a given combination of gender and age group, and cohort data constructed from a nationally representative complex survey with linked mortality records, are used at the second step to divide the overall hazard rate by exposure levels. We present an explicit expression for the resulting estimator and consider two methods for variance estimation that account for complex multistage sample design: (1) the leaving-one-out jackknife method, and (2) the Taylor linearization method, which provides an analytic formula for the variance estimator. The methods are illustrated with smoking and all-cause mortality data from the US National Health Interview Survey Linked Mortality Files, and the proposed estimator is compared with a previously studied crude hazard rate estimator that uses survey data only. The advantages of a two-step approach and possible extensions of the proposed estimator are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
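The division of an overall hazard rate across exposure levels can be illustrated with a toy allocation that preserves the prevalence-weighted average. This is only a sketch of the idea, not the paper's estimator, which derives the level-specific rates from complex survey data with linked mortality follow-up; the prevalences and relative rates below are hypothetical.

```python
def split_hazard(overall_hazard, exposure_prev, relative_rates):
    """Allocate an overall hazard across exposure levels so that the
    prevalence-weighted average of the level-specific hazards equals
    the overall hazard (illustrative form only)."""
    baseline = overall_hazard / sum(
        p * r for p, r in zip(exposure_prev, relative_rates)
    )
    return [baseline * r for r in relative_rates]

# Overall hazard 0.010/yr; 60% never, 25% former, 15% current smokers,
# with hypothetical relative rates 1.0, 1.3 and 2.0
prev = [0.60, 0.25, 0.15]
rates = split_hazard(0.010, prev, [1.0, 1.3, 2.0])
```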

  1. Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2015-01-01

    In this work we consider the problem of feature enhancement for noise-robust automatic speech recognition (ASR). We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features, which is based on a minimum number of well-established, theoretically consistent......-of-the-art MFCC feature enhancement algorithms within this class of algorithms, while theoretically suboptimal or based on theoretically inconsistent assumptions, perform close to optimally in the MMSE sense....

  2. Using cross-sectional surveys to estimate the number of severely malnourished children needing to be enrolled in specific treatment programmes

    DEFF Research Database (Denmark)

    Dale, Nancy M; Myatt, Mark; Prudhon, Claudine

    2017-01-01

    OBJECTIVE: When planning severe acute malnutrition (SAM) treatment services, estimates of the number of children requiring treatment are needed. Prevalence surveys, used with population estimates, can directly estimate the number of prevalent cases but not the number of subsequent incident cases...... in different contexts. DESIGN: Observational study, with J estimated by correlating expected numbers of children to be treated, based on prevalence surveys, population estimates and assumed coverage, with the observed numbers of SAM patients treated. SETTING: Survey and programme data from six African...

  3. Age synthesis and estimation via faces: a survey.

    Science.gov (United States)

    Fu, Yun; Guo, Guodong; Huang, Thomas S

    2010-11-01

Human age, as an important personal trait, can be directly inferred from distinct patterns emerging in the facial appearance. Driven by rapid advances in computer graphics and machine vision, computer-based age synthesis and estimation via faces have become particularly prevalent topics recently because of their explosively emerging real-world applications, such as forensic art, electronic customer relationship management, security control and surveillance monitoring, biometrics, entertainment, and cosmetology. Age synthesis is defined as rerendering a face image aesthetically with natural aging and rejuvenating effects on the individual face. Age estimation is defined as labeling a face image automatically with the exact age (year) or the age group (year range) of the individual face. Because of their particularity and complexity, both problems are attractive yet challenging to computer-based application system designers. Considerable effort from both academia and industry has been devoted to them over the last few decades. In this paper, we survey the complete state-of-the-art techniques in the face image-based age synthesis and estimation topics. Existing models, popular algorithms, system performances, technical difficulties, popular face aging databases, evaluation protocols, and promising future directions are also provided with systematic discussions.

  4. A technical survey on tire-road friction estimation

    Institute of Scientific and Technical Information of China (English)

    Seyedmeysam KHALEGHIAN; Anahita EMAMI; Saied TAHERI

    2017-01-01

Lack of drivers' knowledge about abrupt changes in pavement friction and poor performance of the vehicle's stability, traction, and ABS controllers on low-friction surfaces are the most important factors in car crashes. Due to its direct relation to vehicle stability, accurate estimation of tire-road friction is of interest to all vehicle and tire companies. Many studies have been conducted in this field, and researchers have used different tools and proposed different algorithms. This literature survey introduces the different approaches that have been widely used to estimate friction or other related parameters, and covers the recent literature that contains these methodologies. The emphasis of this review paper is on the algorithms and studies that are more popular and have been repeated several times. The focus is divided into two main groups: experiment-based and model-based approaches. Each of these main groups has several sub-categories, which are explained in the next few sections. Several summary tables are provided in which the overall features of each approach are reviewed, giving the reader a general picture of the different algorithms widely used in friction estimation studies.

  5. Integrating national surveys to estimate small area variations in poor health and limiting long-term illness in Great Britain.

    Science.gov (United States)

    Moon, Graham; Aitken, Grant; Taylor, Joanna; Twigg, Liz

    2017-08-28

    This study aims to address, for the first time, the challenges of constructing small area estimates of health status using linked national surveys. The study also seeks to assess the concordance of these small area estimates with data from national censuses. Population level health status in England, Scotland and Wales. A linked integrated dataset of 23 374 survey respondents (16+ years) from the 2011 waves of the Health Survey for England (n=8603), the Scottish Health Survey (n=7537) and the Welsh Health Survey (n=7234). Population prevalence of poorer self-rated health and limiting long-term illness. A multilevel small area estimation modelling approach was used to estimate prevalence of these outcomes for middle super output areas in England and Wales and intermediate zones in Scotland. The estimates were then compared with matched measures from the contemporaneous 2011 UK Census. There was a strong positive association between the small area estimates and matched census measures for all three countries for both poorer self-rated health (r=0.828, 95% CI 0.821 to 0.834) and limiting long-term illness (r=0.831, 95% CI 0.824 to 0.837), although systematic differences were evident, and small area estimation tended to indicate higher prevalences than census data. Despite strong concordance, variations in the small area prevalences of poorer self-rated health and limiting long-term illness evident in census data cannot be replicated perfectly using small area estimation with linked national surveys. This reflects a lack of harmonisation between surveys over question wording and design. The nature of small area estimates as 'expected values' also needs to be better understood. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  6. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
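The two most common scenarios discussed above translate directly into code. The sketch below implements the {min, median, max} and {quartile} settings using the sample-size-aware normal-quantile denominators proposed in the paper; treat the exact constants as an assumption and verify against the authors' summary spreadsheet before use in a real meta-analysis.

```python
from statistics import NormalDist

def mean_sd_from_range(a, m, b, n):
    """Estimate (mean, SD) from min a, median m, max b and sample size n."""
    mean = (a + 2 * m + b) / 4
    z = NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    return mean, (b - a) / (2 * z)

def mean_sd_from_iqr(q1, m, q3, n):
    """Estimate (mean, SD) from quartiles q1, q3, median m and sample size n."""
    mean = (q1 + m + q3) / 3
    z = NormalDist().inv_cdf((0.75 * n - 0.125) / (n + 0.25))
    return mean, (q3 - q1) / (2 * z)

# Sanity check: quartiles of a standard normal sample, n = 1000,
# should recover mean ~0 and SD ~1
mean, sd = mean_sd_from_iqr(-0.6745, 0.0, 0.6745, 1000)
```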

  7. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  8. Mean atmospheric temperature model estimation for GNSS meteorology using AIRS and AMSU data

    Directory of Open Access Journals (Sweden)

    Rata Suwantong

    2017-03-01

Full Text Available In this paper, the problem of modeling the relationship between the mean atmospheric and air surface temperatures is addressed. Particularly, the major goal is to estimate the model parameters at a regional scale in Thailand. To formulate the relationship between the mean atmospheric and air surface temperatures, a triply modulated cosine function was adopted to model the surface temperature as a periodic function. The surface temperature was then converted to mean atmospheric temperature using a linear function. The parameters of the model were estimated using an extended Kalman filter. Traditionally, radiosonde data are used. In this paper, satellite data from the Atmospheric Infrared Sounder (AIRS) and Advanced Microwave Sounding Unit (AMSU) sensors were used because they are open-source data with global coverage and high temporal resolution. The performance of the proposed model was tested against that of a global model via an accuracy assessment of the computed GNSS-derived PWV.

  9. Small area estimation for estimating the number of infant mortality in West Java, Indonesia

    Science.gov (United States)

    Anggreyani, Arie; Indahwati, Kurnia, Anang

    2016-02-01

Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey that provides information regarding birth rates, mortality rates, family planning and health. DHSI was conducted by BPS in cooperation with the National Population and Family Planning Institution (BKKBN), the Indonesia Ministry of Health (KEMENKES) and USAID. Based on the publication of DHSI 2012, the infant mortality rate for the five-year period before the survey was conducted is 32 per 1000 live births. In this paper, Small Area Estimation (SAE) is used to estimate the number of infant mortality cases in districts of West Java. SAE is a special model of Generalized Linear Mixed Models (GLMM). In this case, the incidence of infant mortality follows a Poisson distribution, which carries an equidispersion assumption. The methods to handle overdispersion are the negative binomial and quasi-likelihood models. Based on the results of the analysis, the quasi-likelihood model is the best model to overcome the overdispersion problem. The small area estimation used the basic area-level model. Mean square error (MSE) based on a resampling method is used to measure the accuracy of the small area estimates.

  10. A global mean ocean circulation estimation using goce gravity models - the DTU12MDT mean dynamic topography model

    DEFF Research Database (Denmark)

    Knudsen, Per; Andersen, Ole Baltazar

    2012-01-01

The Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite mission measures the Earth's gravity field with unprecedented accuracy, leading to substantial improvements in the modelling of ocean circulation and transport. In this study of the performance of GOCE, a newer gravity model has been...... combined with the DTU10MSS mean sea surface model to construct a global mean dynamic topography model named DTU10MDT. The results of preliminary analyses using preliminary GOCE gravity models clearly demonstrated the potential of the GOCE mission. Both the resolution and the estimation of the surface currents...... have been improved significantly compared to results obtained using pre-GOCE gravity field models. The results of this study show that geostrophic surface currents associated with the mean circulation have been further improved and that currents having speeds down to 5 cm/s have been recovered....

  11. An Improved Weise’s Rule for Efficient Estimation of Stand Quadratic Mean Diameter

    Directory of Open Access Journals (Sweden)

    Róbert Sedmák

    2015-07-01

Full Text Available The main objective of this study was to explore the accuracy of Weise’s rule of thumb applied to an estimation of the quadratic mean diameter of a forest stand. Virtual stands of European beech (Fagus sylvatica L.) across a range of structure types were stochastically generated and random sampling was simulated. We compared the bias and accuracy of stand quadratic mean diameter estimates, employing different ranks of measured stems from a set of the 10 trees nearest to the sampling point. We proposed several modifications of the original Weise’s rule based on the measurement and averaging of two different ranks centered on a target rank. In accordance with the original formulation of the empirical rule, we recommend the application of the measurement of the 6th stem in rank, corresponding to the 55% sample percentile of the diameter distribution, irrespective of mean diameter size and degree of diameter dispersion. The study also revealed that the application of appropriate two-measurement modifications of Weise’s method, the 4th and 8th ranks or 3rd and 9th ranks averaged to the 6th central rank, should be preferred over the classic one-measurement estimation. The modified versions are characterised by improved accuracy (about 25%) without statistically significant bias and with measurement costs comparable to the classic Weise method.
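The rule itself is simple enough to state in a few lines. Below is a sketch of the classic one-measurement rule and the two-measurement variant (function names are ours, and the sample diameters are invented):

```python
import math

def quadratic_mean_diameter(diameters):
    # True stand QMD: the diameter of the tree of mean basal area
    return math.sqrt(sum(d * d for d in diameters) / len(diameters))

def weise_estimate(ten_nearest_diameters, ranks=(6,)):
    """Estimate QMD from the 10 trees nearest to a sampling point.
    ranks=(6,) is the classic rule (6th stem in ascending order, ~55th
    percentile); ranks=(4, 8) or (3, 9) are the two-measurement
    variants averaged to the 6th central rank."""
    ordered = sorted(ten_nearest_diameters)
    return sum(ordered[r - 1] for r in ranks) / len(ranks)

sample = [18.2, 25.1, 30.4, 21.7, 27.9, 33.0, 24.5, 29.3, 35.6, 26.8]
classic = weise_estimate(sample)            # 6th-ranked stem
two_point = weise_estimate(sample, (4, 8))  # mean of 4th and 8th ranks
```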

  12. Assessment of sampling strategies for estimation of site mean concentrations of stormwater pollutants.

    Science.gov (United States)

    McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana

    2018-02-01

The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments were used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data were used to calculate the Event Mean Concentrations (EMCs) for each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter, at each site. 17 different sampling strategies, including random and fixed strategies, were tested to estimate SMCs, which were compared with the measured SMCs. The ratios of estimated/measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that the random sampling strategies were the most promising method for reproducing SMCs for TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences in taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in investing in the collection of more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments is needed for characterising SMCs of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated with the 90% confidence interval ratio of the estimated/measured SMCs (R² = 0.49), which can help determine the sampling frequency needed to accurately estimate SMCs of pollutants.
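The two averaging steps, first flow-weighting within an event and then volume-weighting across events, look like this in a minimal sketch (all numbers are made up):

```python
def event_mean_concentration(concentrations, flows):
    # EMC: flow-weighted average of the samples taken within one event
    return sum(c * q for c, q in zip(concentrations, flows)) / sum(flows)

def site_mean_concentration(emcs, event_volumes):
    # SMC: volume-weighted average of the per-event EMCs
    return sum(e * v for e, v in zip(emcs, event_volumes)) / sum(event_volumes)

# Two hypothetical events: TSS in mg/L, flow in L/s, event volume in m^3
emc1 = event_mean_concentration([120.0, 80.0, 40.0], [5.0, 10.0, 5.0])
emc2 = event_mean_concentration([60.0, 20.0], [2.0, 2.0])
smc = site_mean_concentration([emc1, emc2], [400.0, 100.0])
```

The sampling-strategy question studied in the paper amounts to asking how few of the `concentrations`/`flows` samples, and how few events, still give an `smc` close to the value computed from the full record.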

  13. How social processes distort measurement: the impact of survey nonresponse on estimates of volunteer work in the United States.

    Science.gov (United States)

    Abraham, Katharine G; Presser, Stanley; Helms, Sara

    2009-01-01

    The authors argue that both the large variability in survey estimates of volunteering and the fact that survey estimates do not show the secular decline common to other social capital measures are caused by the greater propensity of those who do volunteer work to respond to surveys. Analyses of the American Time Use Survey (ATUS)--the sample for which is drawn from the Current Population Survey (CPS)--together with the CPS volunteering supplement show that CPS respondents who become ATUS respondents report much more volunteering in the CPS than those who become ATUS nonrespondents. This difference is replicated within subgroups. Consequently, conventional adjustments for nonresponse cannot correct the bias. Although nonresponse leads to estimates of volunteer activity that are too high, it generally does not affect inferences about the characteristics of volunteers.

  14. Estimation of global solar radiation by means of sunshine duration

    Energy Technology Data Exchange (ETDEWEB)

    Luis, Mazorra Aguiar; Felipe, Diaz Reyes [Electrical Engineering Dept., Las Palmas de Gran Canaria Univ. (U.L.P.G.C.), Campus Univ. Tafira (Spain); Pilar, Navarro Rivero [Canary Islands Technological Inst. (I.T.C.), Gran Canaria (Spain)

    2008-07-01

This paper analyses the relationship between global solar irradiation and sunshine duration using different estimation models for the island of Gran Canaria (Spain). These parameters were taken from six measurement stations around the island, selected for their reliability and the long period of time they covered. All data used in this paper were provided by the Canary Islands Technological Institute (I.T.C.). As a first approach, the Angstrom linear model was studied. In order to improve knowledge of the solar resource, a Typical Meteorological Year (TMY) was created from all daily data. The TMY shows differences between southern and northern locations, where the Trade Winds generate clouds during the summer months. The TMY summarises a data bank much longer than a year, generating a characteristic one-year series for each location, for both irradiation and sunshine duration. To create the TMY, weighted means were used to smooth high or low values. At first, the Angstrom linear model was used to estimate global solar irradiation from sunshine duration values, using the TMY. However, the linear model did not produce satisfactory results when used to obtain global solar radiation from all daily sunshine duration data. For this reason, different models based on both parameters were used. The parameter estimation for these models was carried out both from the TMY daily and monthly series and from all daily data for every location. Because of the stability of the weather throughout the year on the island, most of the daily data are concentrated in a narrow range, causing a deviation in the linear equations. To avoid this deviation, it was proposed to impose a limit condition on the data, taking into account values outside the main cloud of data. Additionally, different models were proposed (quadratic, cubic, logarithmic and exponential) to perform a regression from all daily data. The best results were obtained with the exponential model proposed in this paper.

  15. The estimation of local marine dispersion of radionuclides from hydrographic survey data

    International Nuclear Information System (INIS)

    Maul, P.R.

    1985-05-01

    One of the most important stages in the assessment of the radiological impact of routine discharges of activity to the sea is the estimation of the local dispersion characteristics. Existing methods for defining the parameters required by the computer program CODAR2 are expanded to take into account the significance of the turbulence generated by the discharge, the effect of a shelving sea bed and the variation with time of the lateral dispersion coefficient. These methods also enable the importance of the timing of discharges and the variation of radionuclide concentrations along the coast to be considered. Calculations of local marine dispersion depend directly upon the information that is available from hydrographic surveys. Detailed consideration is given to the definition of model parameter values from data that are generally available from such surveys. The uncertainties involved in mathematical modelling and parameter specification suggest that the long term average radionuclide concentration in the vicinity of the release can be estimated to within a factor of 2 or 3, with estimates more likely to be greater than, rather than less than the actual value. This uncertainty will contribute to the net uncertainty in any radiological assessment of critical group exposure. (author)

  16. Estimating family planning coverage from contraceptive prevalence using national household surveys.

    Science.gov (United States)

    Barros, Aluisio J D; Boerma, Ties; Hosseinpoor, Ahmad R; Restrepo-Méndez, María C; Wong, Kerry L M; Victora, Cesar G

    2015-01-01

    Contraception is one of the most important health interventions currently available and yet, many women and couples still do not have reliable access to modern contraceptives. The best indicator for monitoring family planning is the proportion of women using contraception among those who need it. This indicator is frequently called demand for family planning satisfied and we argue that it should be called family planning coverage (FPC). This indicator is complex to calculate and requires a considerable number of questions to be included in a household survey. We propose a model that can predict FPC from a much simpler indicator - contraceptive use prevalence - for situations where it cannot be derived directly. Using 197 Multiple Indicator Cluster Surveys and Demographic and Health Surveys from 82 countries, we explored least-squares regression models that could be used to predict FPC. Non-linearity was expected in this situation and we used a fractional polynomial approach to find the best fitting model. We also explored the effect of calendar time and of wealth on the models explored. Given the high correlation between the variables involved in FPC, we managed to derive a relatively simple model that depends only on contraceptive use prevalence but explains 95% of the variability of the outcome, with high precision for the estimated regression line. We also show that the relationship between the two variables has not changed with time. A concordance analysis showed agreement between observed and fitted results within a range of ±9 percentage points. We show that it is possible to obtain fairly good estimates of FPC using only contraceptive prevalence as a predictor, a strategy that is useful in situations where it is not possible to estimate FPC directly.
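A degree-2 fractional polynomial of the kind used here can be fitted by ordinary least squares once the powers are fixed. The sketch below is illustrative only: the powers 0.5 and 2 are our own choice, whereas the fractional-polynomial approach in the paper searches a small grid of power pairs and keeps the best-fitting one, and the coefficients below are synthetic, not the published model.

```python
import numpy as np

def fit_fp2(x, y, p1=0.5, p2=2.0):
    """Least-squares fit of a degree-2 fractional polynomial
    y ~ b0 + b1 * x**p1 + b2 * x**p2 for fixed powers (p1, p2)."""
    X = np.column_stack([np.ones_like(x), x ** p1, x ** p2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic check: recover known coefficients from noise-free data
x = np.linspace(0.05, 0.75, 50)            # contraceptive use prevalence
y = 0.10 + 0.9 * x ** 0.5 + 0.2 * x ** 2   # hypothetical FPC curve
beta = fit_fp2(x, y)
```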

  17. Uncertainties estimation in surveying measurands: application to lengths, perimeters and areas

    Science.gov (United States)

    Covián, E.; Puente, V.; Casero, M.

    2017-10-01

The present paper develops a series of methods for the estimation of uncertainty when measuring certain measurands of interest in surveying practice, such as point elevations at a given planimetric position within a triangle mesh, 2D and 3D lengths (including perimeters of enclosures), 2D areas (horizontal surfaces) and 3D areas (natural surfaces). The basis for the proposed methodology is the law of propagation of variance-covariance, which, applied to the corresponding model for each measurand, allows calculating the resulting uncertainty from known measurement errors. The methods are tested first in a small example, with a limited number of measurement points, and then in two real-life measurements. In addition, the proposed methods have been incorporated into commercial software used in the field of surveying engineering and focused on the creation of digital terrain models. The aim of this evolution is, firstly, to comply with the guidelines of the BIPM (Bureau International des Poids et Mesures), the international reference agency in the field of metrology, in relation to the determination and expression of uncertainty; and secondly, to improve the quality of the measurement by indicating the uncertainty associated with a given level of confidence. The conceptual and mathematical developments for the uncertainty estimation in the aforementioned cases were conducted by researchers from the AssIST group at the University of Oviedo, eventually resulting in several different mathematical algorithms implemented in the form of MATLAB code. Based on these prototypes, technicians incorporated the referred functionality into commercial software, developed in C++. As a result of this collaboration, in early 2016 a new version of this commercial software was made available, which will be the first, as far as the authors are aware, that incorporates the possibility of estimating the uncertainty for a given level of confidence when computing the aforementioned surveying
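As a concrete instance of variance-covariance propagation, the sketch below propagates independent per-coordinate errors into the uncertainty of a 2D distance. This is our own minimal example of the law being applied, not the authors' MATLAB or C++ implementation, which handles correlated coordinates and more complex measurands.

```python
import math

def length_sigma(p1, p2, sigma_xy):
    """1-sigma uncertainty of the 2D distance between two surveyed points,
    assuming each coordinate has independent standard error sigma_xy.
    Applies var(L) = J * Sigma * J^T with a diagonal covariance matrix."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    L = math.hypot(dx, dy)
    jacobian = (-dx / L, -dy / L, dx / L, dy / L)  # dL/d(x1, y1, x2, y2)
    var_L = sigma_xy ** 2 * sum(j * j for j in jacobian)
    return math.sqrt(var_L)

# 1 cm standard error per coordinate on both endpoints
s = length_sigma((100.0, 200.0), (103.0, 204.0), 0.010)
```

For this simple measurand the result reduces to sigma times the square root of 2 regardless of geometry; correlated coordinates, or derived quantities such as perimeters and areas, require the full covariance matrix and a larger Jacobian.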

  18. INCLUSION RATIO BASED ESTIMATOR FOR THE MEAN LENGTH OF THE BOOLEAN LINE SEGMENT MODEL WITH AN APPLICATION TO NANOCRYSTALLINE CELLULOSE

    Directory of Open Access Journals (Sweden)

    Mikko Niilo-Rämä

    2014-06-01

Full Text Available A novel estimator for estimating the mean length of fibres is proposed for censored data observed in square-shaped windows. Instead of observing the fibre lengths, we observe the ratio between the intensity estimates of minus-sampling and plus-sampling. It is well known that both intensity estimators are biased. In the current work, we derive the ratio of these biases as a function of the mean length, assuming a Boolean line segment model with exponentially distributed lengths and uniformly distributed directions. Given the observed ratio of the intensity estimators, the inverse of the derived function is suggested as a new estimator for the mean length. For this estimator, an approximation of its variance is derived. The accuracies of the approximations are evaluated by means of simulation experiments. The novel method is compared to other methods and applied to real-world industrial data on nanocrystalline cellulose.

  19. Estimation of unsteady lift on a pitching airfoil from wake velocity surveys

    Science.gov (United States)

    Zaman, K. B. M. Q.; Panda, J.; Rumsey, C. L.

    1993-01-01

    The results of a joint experimental and computational study on the flowfield over a periodically pitched NACA0012 airfoil, and the resultant lift variation, are reported in this paper. The lift variation over a cycle of oscillation, and hence the lift hysteresis loop, is estimated from the velocity distribution in the wake measured or computed for successive phases of the cycle. Experimentally, the estimated lift hysteresis loops are compared with available data from the literature as well as with limited force balance measurements. Computationally, the estimated lift variations are compared with the corresponding variation obtained from the surface pressure distribution. Four analytical formulations for the lift estimation from wake surveys are considered and relative successes of the four are discussed.

  20. Estimation of monthly-mean daily global solar radiation based on MODIS and TRMM products

    International Nuclear Information System (INIS)

    Qin, Jun; Chen, Zhuoqi; Yang, Kun; Liang, Shunlin; Tang, Wenjun

    2011-01-01

    Global solar radiation (GSR) is required in a large number of fields. Many parameterization schemes are developed to estimate it using routinely measured meteorological variables, since GSR is directly measured at a limited number of stations. Even so, meteorological stations are sparse, especially, in remote areas. Satellite signals (radiance at the top of atmosphere in most cases) can be used to estimate continuous GSR in space. However, many existing remote sensing products have a relatively coarse spatial resolution and these inversion algorithms are too complicated to be mastered by experts in other research fields. In this study, the artificial neural network (ANN) is utilized to build the mathematical relationship between measured monthly-mean daily GSR and several high-level remote sensing products available for the public, including Moderate Resolution Imaging Spectroradiometer (MODIS) monthly averaged land surface temperature (LST), the number of days in which the LST retrieval is performed in 1 month, MODIS enhanced vegetation index, Tropical Rainfall Measuring Mission satellite (TRMM) monthly precipitation. After training, GSR estimates from this ANN are verified against ground measurements at 12 radiation stations. Then, comparisons are performed among three GSR estimates, including the one presented in this study, a surface data-based estimate, and a remote sensing product by Japan Aerospace Exploration Agency (JAXA). Validation results indicate that the ANN-based method presented in this study can estimate monthly-mean daily GSR at a spatial resolution of about 5 km with high accuracy.

  1. Should total landings be used to correct estimated catch in numbers or mean-weight-at-age?

    DEFF Research Database (Denmark)

    Lewy, Peter; Lassen, H.

    1997-01-01

    Many ICES fish stock assessment working groups have practised Sum Of Products, SOP, correction. This correction stems from a comparison of total weights of the known landings and the SOP over age of catch in number and mean weight-at-age, which ideally should be identical. In case of SOP...... discrepancies some countries correct catch in numbers while others correct mean weight-at-age by a common factor, the ratio between landing and SOP. The paper shows that for three sampling schemes the SOP corrections are statistically incorrect and should not be made since the SOP is an unbiased estimate...... of the total landings. Calculation of the bias of estimated catch in numbers and mean weight-at-age shows that SOP corrections of either of these estimates may increase the bias. Furthermore, for five demersal and one pelagic North Sea species it is shown that SOP discrepancies greater than 2% from...
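The SOP bookkeeping and the correction practice being criticised can be written down in a few lines; this is a sketch of the arithmetic only, not of the paper's bias analysis, and the age-group numbers are invented.

```python
def sum_of_products(catch_numbers, mean_weights):
    # SOP over age groups; ideally this equals total reported landings
    return sum(n * w for n, w in zip(catch_numbers, mean_weights))

def correct_catch_numbers(catch_numbers, mean_weights, landings):
    """Scale catch-in-numbers by the ratio landings/SOP, the correction
    some working groups apply (and which the paper argues is statistically
    unwarranted, the SOP itself being an unbiased estimate of landings)."""
    factor = landings / sum_of_products(catch_numbers, mean_weights)
    return [n * factor for n in catch_numbers]

numbers = [1200.0, 800.0, 300.0]   # catch in numbers per age group
weights = [0.25, 0.60, 1.10]       # mean weight-at-age
sop = sum_of_products(numbers, weights)
adjusted = correct_catch_numbers(numbers, weights, landings=1140.0)
```

After the correction, the SOP of `adjusted` matches the reported landings by construction; the paper's point is that forcing this match can increase, not reduce, the bias of the estimates.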

  2. Estimating Horizontal Displacement between DEMs by Means of Particle Image Velocimetry Techniques

    Directory of Open Access Journals (Sweden)

    Juan F. Reinoso

    2015-12-01

    Full Text Available To date, digital terrain model (DTM) accuracy has been studied almost exclusively by computing its height variable. However, the largely ignored horizontal component bears a great influence on the positional accuracy of certain linear features, e.g., hydrological features. In an effort to fill this gap, we propose a means of measurement different from the geomatic approach, involving fluid mechanics (water and air flows) or aerodynamics. The particle image velocimetry (PIV) algorithm is proposed as an estimator of horizontal differences between digital elevation models (DEM) in grid format. After applying a scale factor to the displacement estimated by the PIV algorithm, the mean error predicted is around one-seventh of the cell size of the DEM with the greatest spatial resolution, and around one-nineteenth of the cell size of the DEM with the least spatial resolution. Our methodology allows all kinds of DTMs to be compared once they are transformed into DEM format, while also allowing comparison of data from diverse capture methods, i.e., LiDAR versus photogrammetric data sources.
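    The core idea behind PIV-style matching can be sketched in a few lines: estimate the horizontal shift between two DEM grids from the peak of their cross-correlation, computed via FFT. A real PIV implementation works on interrogation windows with sub-pixel peak fitting; this toy operates on the whole grid at integer-cell precision, with a synthetic DEM.

```python
import numpy as np

rng = np.random.default_rng(1)
dem_a = rng.normal(0.0, 1.0, (64, 64))               # synthetic DEM heights
dem_b = np.roll(dem_a, shift=(3, -2), axis=(0, 1))   # dem_a displaced by (3, -2) cells

# Cross-correlation via the FFT; its peak sits at the displacement
corr = np.fft.ifft2(np.fft.fft2(dem_a).conj() * np.fft.fft2(dem_b)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
# wrap indices above N/2 back to negative shifts
dy = dy - 64 if dy > 32 else dy
dx = dx - 64 if dx > 32 else dx
print(dy, dx)  # → 3 -2, recovering the imposed displacement
```

Multiplying the recovered cell shift by the grid spacing gives the horizontal displacement in ground units, analogous to the scale factor mentioned in the abstract.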

  3. Estimating micro area behavioural risk factor prevalence from large population-based surveys: a full Bayesian approach

    Directory of Open Access Journals (Sweden)

    L. Seliske

    2016-06-01

    Full Text Available Abstract Background An important public health goal is to decrease the prevalence of key behavioural risk factors, such as tobacco use and obesity. Survey information is often available at the regional level, but heterogeneity within large geographic regions cannot be assessed. Advanced spatial analysis techniques are demonstrated to produce sensible micro area estimates of behavioural risk factors that enable identification of areas with high prevalence. Methods A spatial Bayesian hierarchical model was used to estimate the micro area prevalence of current smoking and excess bodyweight for the Erie-St. Clair region in southwestern Ontario. Estimates were mapped for male and female respondents of five cycles of the Canadian Community Health Survey (CCHS). The micro areas were 2006 Census Dissemination Areas, with an average population of 400–700 people. Two individual-level models were specified: one controlled for survey cycle and age group (model 1), and one controlled for survey cycle, age group and micro area median household income (model 2). Post-stratification was used to derive micro area behavioural risk factor estimates weighted to the population structure. SaTScan analyses were conducted on the granular, postal-code level CCHS data to corroborate findings of elevated prevalence. Results Current smoking was elevated in two urban areas for both sexes (Sarnia and Windsor), and an additional small community (Chatham) for males only. Areas of excess bodyweight were prevalent in an urban core (Windsor) among males, but not females. Precision of the posterior post-stratified current smoking estimates was improved in model 2, as indicated by narrower credible intervals and a lower coefficient of variation. For excess bodyweight, both models had similar precision. Aggregation of the micro area estimates to CCHS design-based estimates validated the findings. Conclusions This is among the first studies to apply a full Bayesian model to complex
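    The post-stratification step mentioned in the Methods reduces to a population-weighted average of the model's group-specific estimates. The age groups, counts, and prevalences below are invented purely to illustrate the arithmetic, not taken from the study.

```python
# Hypothetical micro-area population counts by age group
population = {"20-39": 250, "40-59": 300, "60+": 150}
# Hypothetical model-based smoking prevalence estimates for those groups
prevalence = {"20-39": 0.28, "40-59": 0.22, "60+": 0.15}

total = sum(population.values())
# Post-stratified estimate: weight each group's prevalence by its share
# of the micro-area population
post_stratified = sum(population[g] * prevalence[g] for g in population) / total
```

In the full Bayesian workflow this weighting is applied to each posterior draw, so the post-stratified prevalence comes with its own credible interval.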

  4. Estimation of population doses from diagnostic medical examinations in Japan, 1974. III. Per caput mean marrow dose and leukemia significant dose

    Energy Technology Data Exchange (ETDEWEB)

    Hashizume, T; Maruyama, T; Kumamoto, Y [National Inst. of Radiological Sciences, Chiba (Japan)

    1976-03-01

    The mean per capita marrow dose and leukemia-significant dose from radiographic and fluoroscopic examinations in Japan have been estimated based on a 1974 nationwide survey of randomly sampled hospitals and clinics. To determine the mean marrow dose to an individual from a certain exposure of a given type of examination, the active marrow in the whole body was divided into 119 parts for an adult and 103 for a child. Dosimetric points on which the individual marrow doses were determined were set up in the center of each marrow part. The individual marrow doses at the dosimetric points in the beams of practical diagnostic x-rays were calculated on the basis of the exposure data on the patients selected in the nationwide survey, using depth dose curves experimentally determined for diagnostic x-rays. The mean individual marrow dose was averaged over the active marrow by summing, for each dosimetric point, the product of the fraction of active marrow exposed and the individual marrow dose at the dosimetric point. The leukemia-significant dose was calculated by adopting a weighting factor, that is, a leukemia-significant factor. The factor was determined from the shape of the time-incidence curve for radiation-induced leukemia from the Hiroshima A-bomb and from the survival statistics for the average population. The resultant mean per capita marrow dose from radiographic and fluoroscopic examination was 37.0 and 70.0 mrad/person/year, respectively, with a total of 107.05 mrad/person/year. The leukemia-significant dose was 32.1 mrad/person/year for radiographic examination and 61.2 mrad/person/year for fluoroscopic examination, with a total of 93.3 mrad/person/year. These values were compared with those of 1960 and 1969.
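    The averaging rule described above is a simple weighted sum over dosimetric points: fraction of active marrow at each point times the dose at that point. Three points with made-up fractions and doses stand in here for the 119 adult marrow parts.

```python
# Hypothetical fractions of active marrow at each dosimetric point (sum to 1)
marrow_fraction = [0.40, 0.35, 0.25]
# Hypothetical individual marrow dose at each point, mrad
point_dose_mrad = [120.0, 40.0, 10.0]

# Mean individual marrow dose: sum of (fraction exposed) x (dose at point)
mean_marrow_dose = sum(f * d for f, d in zip(marrow_fraction, point_dose_mrad))
```

The leukemia-significant dose would then multiply this mean by the leukemia-significant weighting factor derived from the time-incidence curve.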

  5. OPTIMAL SHRINKAGE ESTIMATION OF MEAN PARAMETERS IN FAMILY OF DISTRIBUTIONS WITH QUADRATIC VARIANCE.

    Science.gov (United States)

    Xie, Xianchao; Kou, S C; Brown, Lawrence

    2016-03-01

    This paper discusses the simultaneous inference of mean parameters in a family of distributions with quadratic variance function. We first introduce a class of semi-parametric/parametric shrinkage estimators and establish their asymptotic optimality properties. Two specific cases, the location-scale family and the natural exponential family with quadratic variance function, are then studied in detail. We conduct a comprehensive simulation study to compare the performance of the proposed methods with existing shrinkage estimators. We also apply the method to real data and obtain encouraging results.
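    The paper's semiparametric estimators generalize classical shrinkage; as a concrete point of reference, this sketch implements the positive-part James-Stein estimator for several normal means with known unit variance, a special case of the location family the paper studies. Data are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = np.linspace(-1.0, 1.0, 20)        # true means
x = theta + rng.normal(0.0, 1.0, 20)      # one N(theta_i, 1) observation per mean

p = len(x)
# Positive-part James-Stein: shrink the raw estimates toward zero
shrink = max(0.0, 1.0 - (p - 2) / float(np.sum(x ** 2)))
theta_js = shrink * x

# Compare realized squared-error loss of the two estimators
mse_mle = float(np.mean((x - theta) ** 2))        # unshrunken (MLE)
mse_js = float(np.mean((theta_js - theta) ** 2))  # shrunken
```

In expectation the James-Stein risk is strictly smaller than the MLE risk whenever p ≥ 3, though any single realization can go either way; the paper's estimators extend this idea beyond the normal location case.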

  6. Pareto-optimal estimates that constrain mean California precipitation change

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Coupled Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
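    The Pareto-optimality criterion used to screen subensembles can be stated compactly: a candidate is kept unless some other candidate is at least as good on every error measure and strictly better on at least one. The subensemble names and error scores below are invented (lower is better); the study itself uses an evolutionary algorithm over many more candidates.

```python
# Hypothetical subensembles scored on the three fields named in the abstract:
# (SST error, zonal-wind error, precipitation error), lower = better
candidates = {
    "sub-A": (0.30, 0.50, 0.20),
    "sub-B": (0.25, 0.60, 0.25),
    "sub-C": (0.35, 0.55, 0.30),   # dominated by sub-A on all three measures
    "sub-D": (0.40, 0.40, 0.35),
}

def dominates(a, b):
    """True if score vector a is no worse than b everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = [name for name, s in candidates.items()
          if not any(dominates(t, s) for m, t in candidates.items() if m != name)]
print(pareto)  # → ['sub-A', 'sub-B', 'sub-D']
```

Projected precipitation change would then be summarized over the surviving Pareto set rather than over the full ensemble.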

  7. A Theoretically Consistent Method for Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2014-01-01

    We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others....... The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCC's), cepstral mean-subtracted MFCC's (CMS-MFCC's), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account other compressive non-linearities than...... the logarithmic which is usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non...

  8. Testing a statistical method of global mean paleotemperature estimations in a long climate simulation

    Energy Technology Data Exchange (ETDEWEB)

    Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2001-07-01

    Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level-pressure) and proxy-climate data (tree-ring chronologies, ice-cores isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods the global mean temperature of the last 600 years has recently been estimated. In this work this method of reconstruction is tested using data from a very long simulation with a climate model. This testing makes it possible to estimate the errors of the reconstructions as a function of the number of proxy records, and the time scales at which the estimations are likely to be reliable. (orig.)
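    The calibrate-then-reconstruct scheme described above can be sketched with synthetic series: regress temperature on proxies over an "instrumental" calibration window, apply the fit to the full proxy record, and score the reconstruction error outside calibration, exactly the kind of error assessment a long model simulation makes possible. All series here are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
n_total, n_calib = 600, 150                        # 600 "years", last 150 instrumental
temp = np.cumsum(rng.normal(0.0, 0.1, n_total))    # synthetic global-mean temperature
# Five synthetic proxies: temperature plus independent noise
proxies = np.column_stack([temp + rng.normal(0.0, 0.5, n_total) for _ in range(5)])

# Calibrate a linear model on the instrumental period by least squares
A = np.column_stack([proxies[-n_calib:], np.ones(n_calib)])
coef, *_ = np.linalg.lstsq(A, temp[-n_calib:], rcond=None)

# Reconstruct the full period and evaluate the pre-instrumental error,
# which is knowable here only because the "truth" is simulated
recon = np.column_stack([proxies, np.ones(n_total)]) @ coef
rmse_pre = float(np.sqrt(np.mean((recon[:-n_calib] - temp[:-n_calib]) ** 2)))
```

Repeating this with fewer proxies or noisier proxies shows how the reconstruction error grows, which is the dependence the paper quantifies.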

  9. ESTIMATING PHOTOMETRIC REDSHIFTS OF QUASARS VIA THE k-NEAREST NEIGHBOR APPROACH BASED ON LARGE SURVEY DATABASES

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yanxia; Ma He; Peng Nanbo; Zhao Yongheng [Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, 100012 Beijing (China); Wu Xuebing, E-mail: zyx@bao.ac.cn [Department of Astronomy, Peking University, 100871 Beijing (China)

    2013-08-01

    We apply one of the lazy learning methods, the k-nearest neighbor (kNN) algorithm, to estimate the photometric redshifts of quasars based on various data sets from the Sloan Digital Sky Survey (SDSS), the UKIRT Infrared Deep Sky Survey (UKIDSS), and the Wide-field Infrared Survey Explorer (WISE; the SDSS sample, the SDSS-UKIDSS sample, the SDSS-WISE sample, and the SDSS-UKIDSS-WISE sample). The influence of the k value and different input patterns on the performance of kNN is discussed. kNN performs best when the value of k and the input pattern are chosen separately for each data set. The best result belongs to the SDSS-UKIDSS-WISE sample. The experimental results generally show that the more bands that contribute information, the better the performance of photometric redshift estimation with kNN. The results also demonstrate that kNN using multiband data can effectively solve the catastrophic failure of photometric redshift estimation encountered by many machine learning methods. Compared with the performance of various other methods of estimating the photometric redshifts of quasars, kNN based on KD-Tree shows superiority, exhibiting the best accuracy.
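    A minimal version of the lazy-learning idea: a photometric redshift is predicted as the mean redshift of the k nearest training quasars in magnitude space. The six "magnitudes" and redshifts below are synthetic stand-ins for SDSS/UKIDSS/WISE photometry, and the brute-force distance scan stands in for the KD-Tree used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
X_train = rng.uniform(0.0, 1.0, (500, 6))               # 6 synthetic "magnitudes"
z_train = 2.0 * X_train[:, 0] + rng.normal(0.0, 0.05, 500)  # synthetic redshifts

def knn_redshift(x, k=5):
    """Mean redshift of the k nearest training objects (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return float(z_train[nearest].mean())

x_query = np.full(6, 0.5)        # a query object mid-range in every band
z_hat = knn_redshift(x_query, k=5)
```

Adding informative bands tightens the neighborhoods in feature space, which is the mechanism behind the "more bands, better performance" finding.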

  10. ESTIMATING PHOTOMETRIC REDSHIFTS OF QUASARS VIA THE k-NEAREST NEIGHBOR APPROACH BASED ON LARGE SURVEY DATABASES

    International Nuclear Information System (INIS)

    Zhang Yanxia; Ma He; Peng Nanbo; Zhao Yongheng; Wu Xuebing

    2013-01-01

    We apply one of the lazy learning methods, the k-nearest neighbor (kNN) algorithm, to estimate the photometric redshifts of quasars based on various data sets from the Sloan Digital Sky Survey (SDSS), the UKIRT Infrared Deep Sky Survey (UKIDSS), and the Wide-field Infrared Survey Explorer (WISE; the SDSS sample, the SDSS-UKIDSS sample, the SDSS-WISE sample, and the SDSS-UKIDSS-WISE sample). The influence of the k value and different input patterns on the performance of kNN is discussed. kNN performs best when the value of k and the input pattern are chosen separately for each data set. The best result belongs to the SDSS-UKIDSS-WISE sample. The experimental results generally show that the more bands that contribute information, the better the performance of photometric redshift estimation with kNN. The results also demonstrate that kNN using multiband data can effectively solve the catastrophic failure of photometric redshift estimation encountered by many machine learning methods. Compared with the performance of various other methods of estimating the photometric redshifts of quasars, kNN based on KD-Tree shows superiority, exhibiting the best accuracy.

  11. The Impact of Survey and Response Modes on Current Smoking Prevalence Estimates Using TUS-CPS: 1992-2003

    Directory of Open Access Journals (Sweden)

    Julia Soulakova

    2009-12-01

    Full Text Available This study identified whether survey administration mode (telephone or in-person) and respondent type (self or proxy) result in discrepant prevalence of current smoking in the adult U.S. population, while controlling for key sociodemographic characteristics and longitudinal changes of smoking prevalence over the 11-year period from 1992-2003. We used a multiple logistic regression analysis with replicate weights to model the current smoking status logit as a function of a number of covariates. The final model included individual- and family-level sociodemographic characteristics, survey attributes, and multiple two-way interactions of survey mode and respondent type with other covariates. The respondent type is a significant predictor of current smoking prevalence and the magnitude of the difference depends on the age, sex, and education of the person whose smoking status is being reported. Furthermore, the survey mode has significant interactions with survey year, sex, and age. We conclude that using an overall unadjusted estimate of the current smoking prevalence may result in underestimating the current smoking rate when conducting proxy or telephone interviews especially for some sub-populations, such as young adults. We propose that estimates could be improved if more detailed information regarding the respondent type and survey administration mode characteristics were considered in addition to commonly used survey year and sociodemographic characteristics. This information is critical given that future surveillance is moving toward more complex designs. Thus, adjustment of estimates should be contemplated when comparing current smoking prevalence results within a given survey series with major changes in methodology over time and between different surveys using various modes and respondent types.

  12. The transition to early fatherhood: National estimates based on multiple surveys

    Directory of Open Access Journals (Sweden)

    H. Elizabeth Peters

    2008-04-01

    Full Text Available This study provides systematic information about the prevalence of early male fertility and the relationship between family background characteristics and early parenthood across three widely used data sources: the 1979 and 1997 National Longitudinal Surveys of Youth and the 2002 National Survey of Family Growth. We provide descriptive statistics on early fertility by age, sex, race, cohort, and data set. Because each data set includes birth cohorts with varying early fertility rates, prevalence estimates for early male fertility are relatively similar across data sets. Associations between background characteristics and early fertility in regression models are less consistent across data sets. We discuss the implications of these findings for scholars doing research on early male fertility.

  13. Methods to estimate annual mean spring discharge to the Snake River between Milner Dam and King Hill, Idaho

    Science.gov (United States)

    Kjelstrom, L.C.

    1995-01-01

    Many individual springs and groups of springs discharge water from volcanic rocks that form the north canyon wall of the Snake River between Milner Dam and King Hill. Previous estimates of annual mean discharge from these springs have been used to understand the hydrology of the eastern part of the Snake River Plain. Four methods that were used in previous studies or developed to estimate annual mean discharge since 1902 were (1) water-budget analysis of the Snake River; (2) correlation of water-budget estimates with discharge from 10 index springs; (3) determination of the combined discharge from individual springs or groups of springs by using annual discharge measurements of 8 springs, gaging-station records of 4 springs and 3 sites on the Malad River, and regression equations developed from 5 of the measured springs; and (4) a single regression equation that correlates gaging-station records of 2 springs with historical water-budget estimates. Comparisons made among the four methods of estimating annual mean spring discharges from 1951 to 1959 and 1963 to 1980 indicated that differences were about equivalent to a measurement error of 2 to 3 percent. The method that best demonstrates the response of annual mean spring discharge to changes in ground-water recharge and discharge is method 3, which combines the measurements and regression estimates of discharge from individual springs.

  14. Simple method to estimate mean heart dose from Hodgkin lymphoma radiation therapy according to simulation X-rays.

    Science.gov (United States)

    van Nimwegen, Frederika A; Cutter, David J; Schaapveld, Michael; Rutten, Annemarieke; Kooijman, Karen; Krol, Augustinus D G; Janus, Cécile P M; Darby, Sarah C; van Leeuwen, Flora E; Aleman, Berthe M P

    2015-05-01

    To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case-control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor-intensive representative CT-based method. This simpler method may produce a
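    The estimation rule described above is a one-line calculation: the fraction of the delineated cardiac contour inside the radiation field, times the prescribed mediastinal dose, divided by the CT-derived correction factor. The values below are illustrative only, not taken from the study.

```python
# Hypothetical inputs for one patient
fraction_in_field = 0.85     # 85% of the cardiac contour lies inside the field
prescribed_dose_gy = 36.0    # prescribed mediastinal dose, Gy (study range 25-42 Gy)
correction_factor = 1.02     # from comparison with individual CT-based dosimetry

# Mean heart dose per the simulation X-ray method
mean_heart_dose_gy = fraction_in_field * prescribed_dose_gy / correction_factor
```

With several raters outlining the contour independently, agreement between their resulting doses can be summarized with an intraclass correlation coefficient, as the study does.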

  15. Simple Method to Estimate Mean Heart Dose From Hodgkin Lymphoma Radiation Therapy According to Simulation X-Rays

    Energy Technology Data Exchange (ETDEWEB)

    Nimwegen, Frederika A. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Cutter, David J. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Oxford Cancer Centre, Oxford University Hospitals NHS Trust, Oxford (United Kingdom); Schaapveld, Michael [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Rutten, Annemarieke [Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands); Kooijman, Karen [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Krol, Augustinus D.G. [Department of Radiation Oncology, Leiden University Medical Center, Leiden (Netherlands); Janus, Cécile P.M. [Department of Radiation Oncology, Erasmus MC Cancer Center, Rotterdam (Netherlands); Darby, Sarah C. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Leeuwen, Flora E. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Aleman, Berthe M.P., E-mail: b.aleman@nki.nl [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam (Netherlands)

    2015-05-01

    Purpose: To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Methods and Materials: Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case–control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. Results: According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Conclusion: Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor

  16. Statistical Estimators Using Jointly Administrative and Survey Data to Produce French Structural Business Statistics

    Directory of Open Access Journals (Sweden)

    Brion Philippe

    2015-12-01

    Full Text Available Using as much administrative data as possible is a general trend among most national statistical institutes. Different kinds of administrative sources, from tax authorities or other administrative bodies, are very helpful material in the production of business statistics. However, these sources often have to be completed by information collected through statistical surveys. This article describes the way Insee has implemented such a strategy in order to produce French structural business statistics. The originality of the French procedure is that administrative and survey variables are used jointly for the same enterprises, unlike the majority of multisource systems, in which the two kinds of sources generally complement each other for different categories of units. The idea is to use, as much as possible, the richness of the administrative sources combined with the timeliness of a survey, even if the latter is conducted only on a sample of enterprises. One main issue is the classification of enterprises within the NACE nomenclature, which is a cornerstone variable in producing the breakdown of the results by industry. At a given date, two values of the corresponding code may coexist: the value of the register, not necessarily up to date, and the value resulting from the data collected via the survey, but only from a sample of enterprises. Using all this information together requires the implementation of specific statistical estimators combining some properties of the difference estimators with calibration techniques. This article presents these estimators, as well as their statistical properties, and compares them with those of other methods.
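    A simple difference estimator conveys the flavor of combining an administrative value known for every enterprise with survey observations available only for a sample: start from the administrative total and correct it by the expanded sample sum of (survey minus administrative) differences. Numbers are invented; the article's actual estimators add calibration on top of this.

```python
import random

random.seed(6)
N = 1000
# Administrative value known for all N enterprises (e.g., tax turnover)
admin = [random.gauss(100.0, 20.0) for _ in range(N)]
# Survey-measured value; in reality observed only on the sample
true = [a + random.gauss(5.0, 2.0) for a in admin]

n = 100
sample = random.sample(range(N), n)
# Difference estimator of the survey-variable total:
# administrative total + expansion of the sampled differences
t_diff = sum(admin) + (N / n) * sum(true[i] - admin[i] for i in sample)
```

Because the administrative variable is strongly correlated with the survey variable, the estimator's variance depends only on the differences, which is why it is far more precise than expanding the survey values alone.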

  17. On the real-time estimation of the wheel-rail contact force by means of a new nonlinear estimator design model

    Science.gov (United States)

    Strano, Salvatore; Terzo, Mario

    2018-05-01

    The dynamics of railway vehicles is strongly influenced by the interaction between the wheel and the rail. This kind of contact is affected by several conditioning factors, such as vehicle speed, wear and adhesion level, and, moreover, it is nonlinear. As a consequence, modelling and observing this kind of phenomenon are complex tasks but, at the same time, they constitute a fundamental step for the estimation of the adhesion level or for vehicle condition monitoring. This paper presents a novel technique for the real-time estimation of the wheel-rail contact forces, based on an estimator design model that accounts for the nonlinearities of the interaction by means of a fitting model able to reproduce the contact mechanics over a wide range of slip and to be easily integrated into a complete model-based estimator for railway vehicles.

  18. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown.
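    The zero-variance idea for a single tally can be demonstrated on a toy discrete problem: if each state is sampled with probability proportional to its true probability times its score, the importance weight times the score is the same constant for every sampled history, so the estimator has zero variance. The three-state problem below is invented for illustration.

```python
import random

p     = [0.2, 0.5, 0.3]    # true probabilities of three states
score = [4.0, 1.0, 2.0]    # tally contribution of each state
mean = sum(pi * si for pi, si in zip(p, score))      # true mean = 1.9

# Zero-variance biased density: q proportional to p * score
q = [pi * si / mean for pi, si in zip(p, score)]

random.seed(0)
samples = random.choices(range(3), weights=q, k=1000)
# Each history scores weight * score = (p/q) * score = mean, every time
estimates = [(p[i] / q[i]) * score[i] for i in samples]
```

Since each tally has its own importance function, a `q` that is zero-variance for this score vector generally is not zero-variance for a different one, which is the tension the paper addresses for correlated tallies.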

  19. A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions

    Science.gov (United States)

    Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver

    2016-01-01

    Satellite constellations present unique capabilities and opportunities to Earth orbiting and near-Earth scientific and communications missions, but also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the limitations of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one-at-a-time, but much of the complexity in constellation and DSM cost modeling arises from constellation systems level considerations that have not yet been examined. This paper constitutes a survey of the current state-of-theart in cost estimating techniques with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority shortcomings within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications. To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities

  20. Comparisons of Means for Estimating Sea States from an Advancing Large Container Ship

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Andersen, Ingrid Marie Vincent; Koning, Jos

    2013-01-01

    to ship-wave interactions in a seaway. In the paper, sea state estimates are produced by three means: the wave buoy analogy, relying on shipboard response measurements, a wave radar system, and a system providing the instantaneous wave height. The presented results show that for the given data, recorded...

  1. ARK: Aggregation of Reads by K-Means for Estimation of Bacterial Community Composition.

    Science.gov (United States)

    Koslicki, David; Chatterjee, Saikat; Shahrivar, Damon; Walker, Alan W; Francis, Suzanna C; Fraser, Louise J; Vehkaperä, Mikko; Lan, Yueheng; Corander, Jukka

    2015-01-01

    Estimation of bacterial community composition from high-throughput sequenced 16S rRNA gene amplicons is a key task in microbial ecology. Since the sequence data from each sample typically consist of a large number of reads and are adversely impacted by different levels of biological and technical noise, accurate analysis of such large datasets is challenging. There has been a recent surge of interest in using compressed sensing inspired and convex-optimization based methods to solve the estimation problem for bacterial community composition. These methods typically rely on summarizing the sequence data by frequencies of low-order k-mers and matching this information statistically with a taxonomically structured database. Here we show that the accuracy of the resulting community composition estimates can be substantially improved by aggregating the reads from a sample with an unsupervised machine learning approach prior to the estimation phase. The aggregation of reads is a pre-processing approach in which a standard K-means clustering algorithm partitions a large set of reads into subsets at reasonable computational cost, providing several vectors of first-order statistics instead of a single statistical summary in terms of k-mer frequencies. The output of the clustering is then processed further to obtain the final estimate for each sample. The resulting method is called Aggregation of Reads by K-means (ARK), and it is based on a statistical argument via mixture density formulation. ARK is found to improve the fidelity and robustness of several recently introduced methods, with only a modest increase in computational complexity. An open source, platform-independent implementation of the method in the Julia programming language is freely available at https://github.com/dkoslicki/ARK. A Matlab implementation is available at http://www.ee.kth.se/ctsoftware.
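    The pre-processing step can be sketched with Lloyd's algorithm, the standard K-means iteration: reads summarized as k-mer frequency vectors are partitioned into clusters, and each cluster contributes its own frequency summary to the downstream estimator instead of one summary for the whole sample. The data below are synthetic; real ARK operates on 16S rRNA amplicon reads.

```python
import numpy as np

rng = np.random.default_rng(5)
# Two artificial read groups in a 16-dimensional "k-mer frequency" space
reads = np.vstack([rng.normal(0.2, 0.02, (50, 16)),
                   rng.normal(0.8, 0.02, (50, 16))])

K = 2
centers = reads[rng.choice(len(reads), K, replace=False)]  # init from data points
for _ in range(20):                                        # Lloyd iterations
    # assign each read to its nearest center
    dists = ((reads[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = dists.argmin(1)
    # move each center to the mean of its (non-empty) cluster
    for j in range(K):
        if np.any(labels == j):
            centers[j] = reads[labels == j].mean(0)

cluster_summaries = centers   # one k-mer frequency summary per cluster
```

Each row of `cluster_summaries` then plays the role of one first-order-statistics vector in the mixture-density formulation described above.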

  2. Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers and Improve Data-Limited Stock Assessments. This biosampling project...

  3. The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals.

    Science.gov (United States)

    Brand, Andrew; Bradley, Michael T

    2016-02-01

    Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving precision of effect size estimation.
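
    The CI widths surveyed can be reproduced from a reported d and the two group sizes using the common large-sample approximation for the variance of Cohen's d (a sketch; the surveys' automated computation may differ in detail):

```python
import math
from statistics import NormalDist

def cohens_d_ci(d, n1, n2, conf=0.95):
    """Approximate CI for Cohen's d using the standard large-sample
    variance approximation Var(d) ~ (n1+n2)/(n1*n2) + d^2/(2*(n1+n2))."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return d - z * se, d + z * se

lo, hi = cohens_d_ci(d=0.5, n1=20, n2=20)
width = hi - lo
```

    With 20 participants per group and a medium effect, the width already exceeds 1 — consistent with the "embarrassingly large" intervals the surveys report.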

  4. Effect of Antihypertensive Therapy on SCORE-Estimated Total Cardiovascular Risk: Results from an Open-Label, Multinational Investigation—The POWER Survey

    Directory of Open Access Journals (Sweden)

    Guy De Backer

    2013-01-01

    Background. High blood pressure is a substantial risk factor for cardiovascular disease. Design & Methods. The Physicians' Observational Work on patient Education according to their vascular Risk (POWER) survey was an open-label investigation of eprosartan-based therapy (EBT) for control of high blood pressure in primary care centers in 16 countries. A prespecified element of this research was appraisal of the impact of EBT on estimated 10-year risk of a fatal cardiovascular event as determined by the Systematic Coronary Risk Evaluation (SCORE) model. Results. SCORE estimates of CVD risk were obtained at baseline from 12,718 patients in 15 countries (6504 men) and from 9577 patients at 6 months. During EBT mean (±SD) systolic/diastolic blood pressures declined from 160.2 ± 13.7/94.1 ± 9.1 mmHg to 134.5 ± 11.2/81.4 ± 7.4 mmHg. This was accompanied by a 38% reduction in mean SCORE-estimated CVD risk and an improvement in SCORE risk classification of one category or more in 3506 patients (36.6%). Conclusion. Experience in POWER affirms that (a) effective pharmacological control of blood pressure is feasible in the primary care setting and is accompanied by a reduction in total CVD risk and (b) the SCORE instrument is effective in this setting for the monitoring of total CVD risk.

  5. Effect of Antihypertensive Therapy on SCORE-Estimated Total Cardiovascular Risk: Results from an Open-Label, Multinational Investigation—The POWER Survey

    Science.gov (United States)

    De Backer, Guy; Petrella, Robert J.; Goudev, Assen R.; Radaideh, Ghazi Ahmad; Rynkiewicz, Andrzej; Pathak, Atul

    2013-01-01

    Background. High blood pressure is a substantial risk factor for cardiovascular disease. Design & Methods. The Physicians' Observational Work on patient Education according to their vascular Risk (POWER) survey was an open-label investigation of eprosartan-based therapy (EBT) for control of high blood pressure in primary care centers in 16 countries. A prespecified element of this research was appraisal of the impact of EBT on estimated 10-year risk of a fatal cardiovascular event as determined by the Systematic Coronary Risk Evaluation (SCORE) model. Results. SCORE estimates of CVD risk were obtained at baseline from 12,718 patients in 15 countries (6504 men) and from 9577 patients at 6 months. During EBT mean (±SD) systolic/diastolic blood pressures declined from 160.2 ± 13.7/94.1 ± 9.1 mmHg to 134.5 ± 11.2/81.4 ± 7.4 mmHg. This was accompanied by a 38% reduction in mean SCORE-estimated CVD risk and an improvement in SCORE risk classification of one category or more in 3506 patients (36.6%). Conclusion. Experience in POWER affirms that (a) effective pharmacological control of blood pressure is feasible in the primary care setting and is accompanied by a reduction in total CVD risk and (b) the SCORE instrument is effective in this setting for the monitoring of total CVD risk. PMID:23997946

  6. On the choice of statistical models for estimating occurrence and extinction from animal surveys

    Science.gov (United States)

    Dorazio, R.M.

    2007-01-01

    In surveys of natural animal populations the number of animals that are present and available to be detected at a sample location is often low, resulting in few or no detections. Low detection frequencies are especially common in surveys of imperiled species; however, the choice of sampling method and protocol also may influence the size of the population that is vulnerable to detection. In these circumstances, probabilities of animal occurrence and extinction will generally be estimated more accurately if the models used in data analysis account for differences in abundance among sample locations and for the dependence between site-specific abundance and detection. Simulation experiments are used to illustrate conditions wherein these types of models can be expected to outperform alternative estimators of population site occupancy and extinction. © 2007 by the Ecological Society of America.
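
    The dependence between site-specific abundance and detection described above can be simulated with the Royle–Nichols-style link p_site = 1 − (1 − r)^N, under which sites holding more animals are detected more often. This is an illustrative sketch with made-up parameter values, not the paper's exact simulation design:

```python
import math
import random

random.seed(1)

lam, r = 0.5, 0.3        # mean abundance per site; per-individual detection prob.
n_sites, n_visits = 5000, 3

detected_at_occupied = 0
occupied = 0
for _ in range(n_sites):
    # Draw site abundance N ~ Poisson(lam) by inversion (stdlib-only).
    N, pmf, u = 0, math.exp(-lam), random.random()
    cum = pmf
    while u > cum:
        N += 1
        pmf *= lam / N
        cum += pmf
    if N == 0:
        continue
    occupied += 1
    # Site-level detection probability depends on abundance.
    p_site = 1 - (1 - r) ** N
    if any(random.random() < p_site for _ in range(n_visits)):
        detected_at_occupied += 1

true_psi = 1 - math.exp(-lam)   # occupancy implied by Poisson abundance
```

    Ignoring the abundance–detection link (e.g., assuming one constant detection probability for all occupied sites) is what biases the simpler occupancy estimators the abstract compares against.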

  7. Studying dark energy with galaxy cluster surveys

    International Nuclear Information System (INIS)

    Mohr, Joseph J.; O'Shea, Brian; Evrard, August E.; Bialek, John; Haiman, Zoltan

    2003-01-01

    Galaxy cluster surveys provide a powerful means of studying the density and nature of the dark energy. The redshift distribution of detected clusters in a deep, large solid angle SZE or X-ray survey is highly sensitive to the dark energy equation of state. Accurate constraints at the 5% level on the dark energy equation of state require that systematic biases in the mass estimators be controlled at better than the ∼10% level. Observed regularity in the cluster population and the availability of multiple, independent mass estimators suggest these precise measurements are possible. Using hydrodynamical simulations that include preheating, we show that the level of preheating required to explain local galaxy cluster structure has a dramatic effect on X-ray cluster surveys, but only a mild effect on SZE surveys. This suggests that SZE surveys may be optimal for cosmology while X-ray surveys are well suited for studies of the thermal history of the intracluster medium.

  8. Wind energy survey in Ethiopia

    Energy Technology Data Exchange (ETDEWEB)

    Wolde-Ghiorgis, W.

    1988-01-01

    The results are presented of a wind energy survey made for one country in Eastern Africa (Ethiopia) using mean wind speed data obtained from meteorological observations. The paper also presents reasons for expecting the calculated energy estimates to be potentially useful around most of the sites considered in the study.
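
    A common way to turn mean wind speed observations into energy estimates (not necessarily the exact procedure used in this survey) is to assume Rayleigh-distributed speeds, for which the mean cube speed is E[v³] = (6/π)·v̄³:

```python
import math

def rayleigh_power_density(v_mean, rho=1.225):
    """Mean wind power density (W/m^2) assuming Rayleigh-distributed speeds:
    E[v^3] = (6/pi) * v_mean^3, so P = 0.5 * rho * (6/pi) * v_mean^3.
    rho is air density at sea level; high-altitude sites need a lower value."""
    return 0.5 * rho * (6 / math.pi) * v_mean ** 3

pd = rayleigh_power_density(5.0)   # ~146 W/m^2 at a 5 m/s mean wind speed
```

    Because power scales with the cube of speed, modest differences in mean wind speed between sites translate into large differences in the energy estimate.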

  9. Water quality of storm runoff and comparison of procedures for estimating storm-runoff loads, volume, event-mean concentrations, and the mean load for a storm for selected properties and constituents for Colorado Springs, southeastern Colorado, 1992

    Science.gov (United States)

    Von Guerard, Paul; Weiss, W.B.

    1995-01-01

    The U.S. Environmental Protection Agency requires that municipalities that have a population of 100,000 or greater obtain National Pollutant Discharge Elimination System permits to characterize the quality of their storm runoff. In 1992, the U.S. Geological Survey, in cooperation with the Colorado Springs City Engineering Division, began a study to characterize the water quality of storm runoff and to evaluate procedures for the estimation of storm-runoff loads, volume, and event-mean concentrations for selected properties and constituents. Precipitation, streamflow, and water-quality data were collected during 1992 at five sites in Colorado Springs. Thirty-five samples were collected, seven at each of the five sites. At each site, three samples were collected for permitting purposes; two of the samples were collected during rainfall runoff, and one sample was collected during snowmelt runoff. Four additional samples were collected at each site to obtain a large enough sample size to estimate storm-runoff loads, volume, and event-mean concentrations for selected properties and constituents using linear-regression procedures developed using data from the Nationwide Urban Runoff Program (NURP). Storm-water samples were analyzed for as many as 186 properties and constituents. The constituents measured include total-recoverable metals, volatile-organic compounds, acid-base/neutral organic compounds, and pesticides. Storm runoff sampled had large concentrations of chemical oxygen demand and 5-day biochemical oxygen demand. Chemical oxygen demand ranged from 100 to 830 milligrams per liter, and 5-day biochemical oxygen demand ranged from 14 to 260 milligrams per liter. Total-organic carbon concentrations ranged from 18 to 240 milligrams per liter. Of the total-recoverable metals analyzed, lead and zinc had the largest concentrations.
Concentrations of lead ranged from 23 to 350 micrograms per liter, and concentrations of zinc ranged from 110
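
    Linear-regression procedures of the kind mentioned above (NURP-style) typically relate storm-runoff load to storm characteristics in log space. A minimal single-predictor sketch; the data pairs and the rainfall-only model are invented for illustration and are not the actual NURP equations, which use several explanatory variables:

```python
import math

# Hypothetical (rainfall in mm, load in kg) pairs from monitored storms.
storms = [(4.1, 1.5), (7.8, 2.9), (3.0, 1.1), (13.5, 5.2), (6.2, 2.3)]

# Fit log(load) = b0 + b1*log(rainfall) by ordinary least squares.
xs = [math.log(rain) for rain, _ in storms]
ys = [math.log(load) for _, load in storms]
n = len(storms)
xbar, ybar = sum(xs) / n, sum(ys) / n
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
     sum((x - xbar) ** 2 for x in xs)
b0 = ybar - b1 * xbar

def predict_load(rainfall_mm):
    """Back-transformed load estimate (retransformation bias correction,
    which NURP-style procedures apply, is omitted here for brevity)."""
    return math.exp(b0 + b1 * math.log(rainfall_mm))
```

    Fitting in log space keeps predictions positive and stabilizes the variance of loads that span orders of magnitude across storms.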

  10. Surveying Drifting Icebergs and Ice Islands: Deterioration Detection and Mass Estimation with Aerial Photogrammetry and Laser Scanning

    Directory of Open Access Journals (Sweden)

    Anna J. Crawford

    2018-04-01

    Icebergs and ice islands (large, tabular icebergs) are challenging targets to survey due to their size, mobility, remote locations, and potentially difficult environmental conditions. Here, we assess the precision and utility of aerial photography surveying with structure-from-motion multi-view stereo photogrammetry processing (SfM) and vessel-based terrestrial laser scanning (TLS) for iceberg deterioration detection and mass estimation. For both techniques, we determine the minimum amount of change required to reliably resolve iceberg deterioration, the deterioration detection threshold (DDT), using triplicate surveys of two iceberg survey targets. We also calculate their relative uncertainties for iceberg mass estimation. The quality of deployed Global Positioning System (GPS) units that were used for drift correction and scale assignment was a major determinant of point cloud precision. When dual-frequency GPS receivers were deployed, DDT values of 2.5 and 0.40 m were calculated for the TLS and SfM point clouds, respectively. In contrast, values of 6.6 and 3.4 m were calculated when tracking beacons with lower-quality GPS were used. The SfM dataset was also more precise when used for iceberg mass estimation, and we recommend further development of this technique for iceberg-related end-uses.
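
    Once a survey point cloud yields the above-water (sail) volume, mass estimation for a freely floating iceberg follows from hydrostatic equilibrium. A sketch with illustrative density values; the study's exact densities and volume pipeline may differ:

```python
def iceberg_mass(v_above_m3, rho_ice=900.0, rho_water=1025.0):
    """Total mass (kg) of a freely floating iceberg from its above-water
    volume. Buoyancy gives rho_ice*V_total = rho_water*V_submerged, so
    V_total = V_above * rho_water / (rho_water - rho_ice)."""
    v_total = v_above_m3 * rho_water / (rho_water - rho_ice)
    return rho_ice * v_total

m = iceberg_mass(1.0e4)   # 10,000 m^3 of sail volume -> ~7.4e7 kg
```

    The steep leverage of the density ratio (roughly 8:1 submerged-to-sail volume) is why small errors in the surveyed above-water volume propagate strongly into the mass estimate.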

  11. Prevalence estimates of chronic kidney disease in Canada: results of a nationally representative survey

    Science.gov (United States)

    Arora, Paul; Vasa, Priya; Brenner, Darren; Iglar, Karl; McFarlane, Phil; Morrison, Howard; Badawi, Alaa

    2013-01-01

    Background: Chronic kidney disease is an important risk factor for death and cardiovascular-related morbidity, but estimates to date of its prevalence in Canada have generally been extrapolated from the prevalence of end-stage renal disease. We used direct measures of kidney function collected from a nationally representative survey population to estimate the prevalence of chronic kidney disease among Canadian adults. Methods: We examined data for 3689 adult participants of cycle 1 of the Canadian Health Measures Survey (2007–2009) for the presence of chronic kidney disease. We also calculated the age-standardized prevalence of cardiovascular risk factors by chronic kidney disease group. We cross-tabulated the estimated glomerular filtration rate (eGFR) with albuminuria status. Results: The prevalence of chronic kidney disease during the period 2007–2009 was 12.5%, representing about 3 million Canadian adults. The estimated prevalence of stage 3–5 disease was 3.1% (0.73 million adults) and albuminuria 10.3% (2.4 million adults). The prevalences of diabetes, hypertension and hypertriglyceridemia were all significantly higher among adults with chronic kidney disease than among those without it. The prevalence of albuminuria was high, even among those whose eGFR was 90 mL/min per 1.73 m2 or greater (10.1%) and those without diabetes or hypertension (9.3%). Awareness of kidney dysfunction among adults with stage 3–5 chronic kidney disease was low (12.0%). Interpretation: The prevalence of kidney dysfunction was substantial in the survey population, including individuals without hypertension or diabetes, conditions most likely to prompt screening for kidney dysfunction. These findings highlight the potential for missed opportunities for early intervention and secondary prevention of chronic kidney disease. PMID:23649413
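
    The cross-tabulation of eGFR with albuminuria status described in the Methods can be sketched as follows. The records, the coarse eGFR bands, and the simplified operational CKD definition are illustrative, not the study's data or exact staging:

```python
def egfr_category(egfr):
    """Coarse eGFR bands (mL/min per 1.73 m^2) for a simple cross-tabulation."""
    if egfr >= 90:
        return ">=90"
    if egfr >= 60:
        return "60-89"
    return "<60"

# Hypothetical (eGFR, albuminuria present?) records.
records = [(102, True), (95, False), (71, False), (55, True), (48, False), (88, True)]

table = {}
for egfr, alb in records:
    key = (egfr_category(egfr), alb)
    table[key] = table.get(key, 0) + 1

# Simplified operational definition: CKD = eGFR < 60 OR albuminuria.
ckd = sum(1 for egfr, alb in records if egfr < 60 or alb)
prevalence = ckd / len(records)
```

    The table makes the study's key observation visible: albuminuria can be present even in the highest eGFR band, so screening on eGFR alone misses cases.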

  12. Estimating trends in the global mean temperature record

    Science.gov (United States)

    Poppick, Andrew; Moyer, Elisabeth J.; Stein, Michael L.

    2017-06-01

    Given uncertainties in physical theory and numerical climate simulations, the historical temperature record is often used as a source of empirical information about climate change. Many historical trend analyses appear to de-emphasize physical and statistical assumptions: examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for internal variability in nonparametric rather than parametric ways. However, given a limited data record and the presence of internal variability, estimating radiatively forced temperature trends in the historical record necessarily requires some assumptions. Ostensibly empirical methods can also involve an inherent conflict in assumptions: they require data records that are short enough for naive trend models to be applicable, but long enough for long-timescale internal variability to be accounted for. In the context of global mean temperatures, empirical methods that appear to de-emphasize assumptions can therefore produce misleading inferences, because the trend over the twentieth century is complex and the scale of temporal correlation is long relative to the length of the data record. We illustrate here how a simple but physically motivated trend model can provide better-fitting and more broadly applicable trend estimates and can allow for a wider array of questions to be addressed. In particular, the model allows one to distinguish, within a single statistical framework, between uncertainties in the shorter-term vs. longer-term response to radiative forcing, with implications not only for historical trends but also for uncertainties in future projections. We also investigate how the choice of statistical description of internal variability affects inferred uncertainties. While nonparametric methods may seem to avoid making explicit assumptions, we demonstrate how even misspecified parametric statistical methods, if attuned to the
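
    The abstract's central contrast — time versus radiative forcing as the regression covariate — can be illustrated numerically. The data below are synthetic and noise-free, constructed purely to show that when forcing grows nonlinearly in time and temperature responds to forcing, a linear-in-time trend model misfits while a forcing-covariate model does not:

```python
# Synthetic illustration: forcing accelerates in time (made-up functional
# form), and temperature here responds linearly to forcing.
years = list(range(40))
forcing = [0.002 * t ** 2 for t in years]
temp = [0.6 * f for f in forcing]

def ols_slope(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    return sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / \
           sum((a - xbar) ** 2 for a in x)

def r_squared(x, y):
    b1 = ols_slope(x, y)
    ybar = sum(y) / len(y)
    b0 = ybar - b1 * (sum(x) / len(x))
    ss_res = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

r2_forcing = r_squared(forcing, temp)   # exact fit
r2_time = r_squared(years, temp)        # linear-in-time model leaves structure
```

    In real records the residual structure left by the time-covariate model is easily confounded with internal variability, which is the inferential hazard the paper describes.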

  13. Estimation of geographic variation in human papillomavirus vaccine uptake in men and women: an online survey using facebook recruitment.

    Science.gov (United States)

    Nelson, Erik J; Hughes, John; Oakes, J Michael; Pankow, James S; Kulasingam, Shalini L

    2014-09-01

    Federally funded surveys of human papillomavirus (HPV) vaccine uptake are important for pinpointing geographically based health disparities. Although national and state level data are available, local (ie, county and postal code level) data are not due to small sample sizes, confidentiality concerns, and cost. Local level HPV vaccine uptake data may be feasible to obtain by targeting specific geographic areas through social media advertising and recruitment strategies, in combination with online surveys. Our goal was to use Facebook-based recruitment and online surveys to estimate local variation in HPV vaccine uptake among young men and women in Minnesota. From November 2012 to January 2013, men and women were recruited via a targeted Facebook advertisement campaign to complete an online survey about HPV vaccination practices. The Facebook advertisements were targeted to recruit men and women by location (25 mile radius of Minneapolis, Minnesota, United States), age (18-30 years), and language (English). Of the 2079 men and women who responded to the Facebook advertisements and visited the study website, 1003 (48.2%) enrolled in the study and completed the survey. The average advertising cost per completed survey was US $1.36. Among those who reported their postal code, 90.6% (881/972) of the participants lived within the previously defined geographic study area. Receipt of 1 dose or more of HPV vaccine was reported by 65.6% (351/535) of women and 13.0% (45/347) of men. These results differ from previously reported Minnesota state level estimates (53.8% for young women and 20.8% for young men) and from national estimates (34.5% for women and 2.3% for men). This study shows that recruiting a representative sample of young men and women based on county and postal code location to complete a survey on HPV vaccination uptake via the Internet is a cost-effective and feasible strategy.
This study also highlights the need for local estimates to assess the variation in HPV

  14. Estimation of Geographic Variation in Human Papillomavirus Vaccine Uptake in Men and Women: An Online Survey Using Facebook Recruitment

    Science.gov (United States)

    Hughes, John; Oakes, J Michael; Pankow, James S; Kulasingam, Shalini L

    2014-01-01

    Background Federally funded surveys of human papillomavirus (HPV) vaccine uptake are important for pinpointing geographically based health disparities. Although national and state level data are available, local (ie, county and postal code level) data are not due to small sample sizes, confidentiality concerns, and cost. Local level HPV vaccine uptake data may be feasible to obtain by targeting specific geographic areas through social media advertising and recruitment strategies, in combination with online surveys. Objective Our goal was to use Facebook-based recruitment and online surveys to estimate local variation in HPV vaccine uptake among young men and women in Minnesota. Methods From November 2012 to January 2013, men and women were recruited via a targeted Facebook advertisement campaign to complete an online survey about HPV vaccination practices. The Facebook advertisements were targeted to recruit men and women by location (25 mile radius of Minneapolis, Minnesota, United States), age (18-30 years), and language (English). Results Of the 2079 men and women who responded to the Facebook advertisements and visited the study website, 1003 (48.2%) enrolled in the study and completed the survey. The average advertising cost per completed survey was US $1.36. Among those who reported their postal code, 90.6% (881/972) of the participants lived within the previously defined geographic study area. Receipt of 1 dose or more of HPV vaccine was reported by 65.6% (351/535) of women and 13.0% (45/347) of men. These results differ from previously reported Minnesota state level estimates (53.8% for young women and 20.8% for young men) and from national estimates (34.5% for women and 2.3% for men). Conclusions This study shows that recruiting a representative sample of young men and women based on county and postal code location to complete a survey on HPV vaccination uptake via the Internet is a cost-effective and feasible strategy. This study also highlights the need

  15. A survey of kernel-type estimators for copula and their applications

    Science.gov (United States)

    Sumarjaya, I. W.

    2017-10-01

    Copulas have been widely used to model nonlinear dependence structures. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall modeling, to name but a few. The flexibility of copulas allows researchers to model dependence structures beyond the Gaussian distribution. Basically, a copula is a function that couples a multivariate distribution function to its one-dimensional marginal distribution functions. In general, there are three methods to estimate a copula: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas, such as the mirror reflection kernel, the beta kernel, the transformation method, and the local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, despite variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
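
    The mirror-reflection estimator mentioned above addresses the boundary bias of standard kernels on the unit square by reflecting each rank-based pseudo-observation about the edges of [0,1]². A compact stdlib-only sketch with a made-up positively dependent sample (the bandwidth choice and toy data are illustrative):

```python
import math

def pseudo_obs(data):
    """Rank-based pseudo-observations in (0,1): R_i / (n+1)."""
    n = len(data)
    order = sorted(range(n), key=lambda i: data[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return [r / (n + 1) for r in ranks]

def gauss(x, h):
    return math.exp(-0.5 * (x / h) ** 2) / (h * math.sqrt(2 * math.pi))

def mirror_copula_density(u, v, us, vs, h=0.1):
    """Mirror-reflection kernel estimate of the copula density at (u, v):
    each pseudo-observation is reflected about the boundaries of [0,1]^2."""
    total = 0.0
    for ui, vi in zip(us, vs):
        for ru in (ui, -ui, 2 - ui):          # reflections in u
            for rv in (vi, -vi, 2 - vi):      # reflections in v
                total += gauss(u - ru, h) * gauss(v - rv, h)
    return total / len(us)

# Toy positively dependent sample (e.g., returns of two stock indexes).
x = [0.1, 0.4, 0.2, 0.9, 0.7, 0.5, 0.3, 0.8]
y = [0.2, 0.5, 0.1, 0.8, 0.9, 0.4, 0.35, 0.7]
us, vs = pseudo_obs(x), pseudo_obs(y)
d_diag = mirror_copula_density(0.5, 0.5, us, vs)
d_off = mirror_copula_density(0.9, 0.1, us, vs)
```

    For positively dependent data, the estimated density concentrates along the diagonal of the unit square (d_diag well above d_off here), which is exactly the structure a Gaussian-margin analysis cannot separate from its marginals.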

  16. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information.

    Science.gov (United States)

    Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-04-30

    Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.
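
    The paper's two variance-estimation methods are not reproduced here, but the weighting they feed can be sketched under a simpler working assumption: if all arms share a common variance σ², then Var(MD_i) ∝ 1/n1i + 1/n2i, so the inverse-variance weight reduces to n1·n2/(n1+n2) and the unknown σ² cancels in the pooled mean difference. Study values below are invented:

```python
# Each study reports only (mean_arm1, n1, mean_arm2, n2).
studies = [(120.0, 30, 100.0, 28), (115.0, 50, 98.0, 45), (130.0, 22, 105.0, 25)]

# Under a common-variance working assumption, Var(MD_i) is proportional to
# 1/n1 + 1/n2, so the inverse-variance weight is w_i = n1*n2/(n1+n2).
num = den = 0.0
for m1, n1, m2, n2 in studies:
    w = n1 * n2 / (n1 + n2)
    num += w * (m1 - m2)
    den += w

pooled_md = num / den
```

    The point estimate is computable without any variance information, but its standard error is not — which is precisely the gap the paper's study-specific variance estimators are designed to fill.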

  17. Application of airborne gamma spectrometric survey data to estimating terrestrial gamma-ray dose rates: An example in California

    International Nuclear Information System (INIS)

    Wollenberg, H.A.; Revzan, K.L.; Smith, A.R.

    1992-01-01

    The authors examine the applicability of radioelement data from the National Aerial Radiometric Reconnaissance (NARR) to estimate terrestrial gamma-ray absorbed dose rates, by comparing dose rates calculated from aeroradiometric surveys of U, Th, and K concentrations in 1 x 2 degree quadrangles with dose rates calculated from a radiogeologic data base and the distribution of lithologies in California. Gamma-ray dose rates increase generally from north to south following lithological trends. Low values of 25-30 nGy/h occur in the northernmost quadrangles where low-radioactivity basaltic and ultramafic rocks predominate. Dose rates then increase southward due to the preponderance of clastic sediments and basic volcanics of the Franciscan Formation and Sierran metamorphics in north central and central California, and to increasing exposure southward of the Sierra Nevada batholith, Tertiary marine sedimentary rocks, intermediate to acidic volcanics, and granitic rocks of the Coast Ranges. High values, up to 100 nGy/h, occur in southeastern California, due primarily to the presence of high-radioactivity Precambrian and pre-Cenozoic metamorphic rocks. Lithologic-based estimates of mean dose rates in the quadrangles generally match those from aeroradiometric data, with statewide means of 63 and 60 nGy/h, respectively. These are intermediate between a population-weighted global average of 51 nGy/h and a weighted continental average of 70 nGy/h, based on the global distribution of rock types. The concurrence of lithologically- and aeroradiometrically-determined dose rates in California, with its varied geology and topography encompassing settings representative of the continents, indicates that the NARR data are applicable to estimates of terrestrial absorbed dose rates from natural gamma emitters.
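
    Converting U, Th, and K concentrations to absorbed dose rate in air conventionally uses the UNSCEAR conversion coefficients per unit activity concentration; the paper's exact coefficients (and its ppm-to-Bq/kg conversions) may differ slightly from this sketch:

```python
def dose_rate_ngy_h(c_u, c_th, c_k):
    """Terrestrial absorbed dose rate in air (nGy/h) from activity
    concentrations (Bq/kg) of the U-238 series, Th-232 series, and K-40,
    using the widely cited UNSCEAR conversion coefficients."""
    return 0.462 * c_u + 0.604 * c_th + 0.0417 * c_k

# Roughly worldwide-average crustal activities (illustrative values).
d = dose_rate_ngy_h(35.0, 30.0, 400.0)   # ~51 nGy/h
```

    With near-average crustal activities the formula returns about 51 nGy/h, consistent with the population-weighted global average quoted in the abstract.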

  18. Photometric redshifts for the next generation of deep radio continuum surveys - II. Gaussian processes and hybrid estimates

    Science.gov (United States)

    Duncan, Kenneth J.; Jarvis, Matt J.; Brown, Michael J. I.; Röttgering, Huub J. A.

    2018-04-01

    Building on the first paper in this series (Duncan et al. 2018), we present a study investigating the performance of Gaussian process photometric redshift (photo-z) estimates for galaxies and active galactic nuclei detected in deep radio continuum surveys. A Gaussian process redshift code is used to produce photo-z estimates targeting specific subsets of both the AGN population - infrared, X-ray and optically selected AGN - and the general galaxy population. The new estimates for the AGN population are found to perform significantly better at z > 1 than the template-based photo-z estimates presented in our previous study. Our new photo-z estimates are then combined with template estimates through hierarchical Bayesian combination to produce a hybrid consensus estimate that outperforms both of the individual methods across all source types. Photo-z estimates for radio sources that are X-ray sources or optical/IR AGN are significantly improved in comparison to previous template-only estimates - with outlier fractions and robust scatter reduced by up to a factor of ˜4. The ability of our method to combine the strengths of the two input photo-z techniques and the large improvements we observe illustrate its potential for enabling future exploitation of deep radio continuum surveys for both the study of galaxy and black hole co-evolution and for cosmological studies.

  19. Mechanisms Controlling Global Mean Sea Surface Temperature Determined From a State Estimate

    Science.gov (United States)

    Ponte, R. M.; Piecuch, C. G.

    2018-04-01

    Global mean sea surface temperature (T¯) is a variable of primary interest in studies of climate variability and change. The temporal evolution of T¯ can be influenced by surface heat fluxes (F¯) and by diffusion (D¯) and advection (A¯) processes internal to the ocean, but quantifying the contribution of these different factors from data alone is prone to substantial uncertainties. Here we derive a closed T¯ budget for the period 1993-2015 based on a global ocean state estimate, which is an exact solution of a general circulation model constrained to most extant ocean observations through advanced optimization methods. The estimated average temperature of the top (10-m thick) level in the model, taken to represent T¯, shows relatively small variability at most time scales compared to F¯, D¯, or A¯, reflecting the tendency for largely balancing effects from all the latter terms. The seasonal cycle in T¯ is mostly determined by small imbalances between F¯ and D¯, with negligible contributions from A¯. While D¯ seems to simply damp F¯ at the annual period, a different dynamical role for D¯ at semiannual period is suggested by it being larger than F¯. At periods longer than annual, A¯ contributes importantly to T¯ variability, pointing to the direct influence of the variable ocean circulation on T¯ and mean surface climate.
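
    The closed budget described above can be written as a tendency equation in the abstract's notation, with each right-hand term diagnosed from the state estimate:

```latex
\frac{d\bar{T}}{dt} = \bar{F} + \bar{D} + \bar{A}
```

    where \bar{F} is the surface heat flux contribution and \bar{D} and \bar{A} are the diffusive and advective contributions internal to the ocean. "Closed" means the three terms sum exactly to the tendency with no residual, which is what the dynamically consistent state estimate provides and what purely data-based budgets struggle to achieve.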

  20. Estimating the abundance of the Southern Hudson Bay polar bear subpopulation with aerial surveys

    Science.gov (United States)

    Obbard, Martyn E.; Stapleton, Seth P.; Middel, Kevin R.; Thibault, Isabelle; Brodeur, Vincent; Jutras, Charles

    2015-01-01

    The Southern Hudson Bay (SH) polar bear subpopulation occurs at the southern extent of the species’ range. Although capture–recapture studies indicate abundance was likely unchanged between 1986 and 2005, declines in body condition and survival occurred during the period, possibly foreshadowing a future decrease in abundance. To obtain a current estimate of abundance, we conducted a comprehensive line transect aerial survey of SH during 2011–2012. We stratified the study site by anticipated densities and flew coastal contour transects and systematically spaced inland transects in Ontario and on Akimiski Island and large offshore islands in 2011. Data were collected with double-observer and distance sampling protocols. We surveyed small islands in James Bay and eastern Hudson Bay and flew a comprehensive transect along the Québec coastline in 2012. We observed 667 bears in Ontario and on Akimiski Island and nearby islands in 2011, and we sighted 80 bears on offshore islands during 2012. Mark–recapture distance sampling and sight–resight models yielded an estimate of 860 (SE = 174) for the 2011 study area. Our estimate of abundance for the entire SH subpopulation (943; SE = 174) suggests that abundance is unlikely to have changed significantly since 1986. However, this result should be interpreted cautiously because of the methodological differences between historical studies (physical capture–recapture) and this survey. A conservative management approach is warranted given previous increases in duration of the ice-free season, which are predicted to continue in the future, and previously documented declines in body condition and vital rates.

  1. Estimated rate of agricultural injury: the Korean Farmers’ Occupational Disease and Injury Survey

    Science.gov (United States)

    2014-01-01

    Objectives This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. Methods The first Korean Farmers' Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. Logistic regression was performed to identify the relationship between the prevalence of agricultural injuries and the general characteristics of the study population. Results We estimated that 3.2% (±0.00) of Korean farmers suffered agricultural injuries that required an absence of more than 4 days. The injury rates among orchard farmers (5.4 ± 0.00) were higher than those of all non-orchard farmers. The odds ratio (OR) for agricultural injuries was significantly lower in females (OR: 0.45, 95% CI = 0.45–0.45) compared to males. However, the odds of injury among farmers aged 50–59 (OR: 1.53, 95% CI = 1.46–1.60), 60–69 (OR: 1.45, 95% CI = 1.39–1.51), and ≥70 (OR: 1.94, 95% CI = 1.86–2.02) were significantly higher compared to those younger than 50. In addition, the total number of years farmed, average number of months per year of farming, and average hours per day of farming were significantly associated with agricultural injuries. Conclusions Agricultural injury rates in this study were higher than rates reported by the existing compensation insurance data. Males and older farmers were at a greater risk of agricultural injuries; therefore, the prevention and management of agricultural injuries in this population is required. PMID:24808945

  2. Estimated rate of agricultural injury: the Korean Farmers' Occupational Disease and Injury Survey.

    Science.gov (United States)

    Chae, Hyeseon; Min, Kyungdoo; Youn, Kanwoo; Park, Jinwoo; Kim, Kyungran; Kim, Hyocher; Lee, Kyungsuk

    2014-01-01

    This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. The first Korean Farmers' Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. Logistic regression was performed to identify the relationship between the prevalence of agricultural injuries and the general characteristics of the study population. We estimated that 3.2% (±0.00) of Korean farmers suffered agricultural injuries that required an absence of more than 4 days. The injury rates among orchard farmers (5.4 ± 0.00) were higher than those of all non-orchard farmers. The odds ratio (OR) for agricultural injuries was significantly lower in females (OR: 0.45, 95% CI = 0.45-0.45) compared to males. However, the odds of injury among farmers aged 50-59 (OR: 1.53, 95% CI = 1.46-1.60), 60-69 (OR: 1.45, 95% CI = 1.39-1.51), and ≥70 (OR: 1.94, 95% CI = 1.86-2.02) were significantly higher compared to those younger than 50. In addition, the total number of years farmed, average number of months per year of farming, and average hours per day of farming were significantly associated with agricultural injuries. Agricultural injury rates in this study were higher than rates reported by the existing compensation insurance data. Males and older farmers were at a greater risk of agricultural injuries; therefore, the prevention and management of agricultural injuries in this population is required.

  3. Population estimates of extended family structure and size.

    Science.gov (United States)

    Garceau, Anne; Wideroff, Louise; McNeel, Timothy; Dunn, Marsha; Graubard, Barry I

    2008-01-01

    Population-based estimates of biological family size can be useful for planning genetic studies, assessing how distributions of relatives affect disease associations with family history and estimating prevalence of potential family support. Mean family size per person is estimated from a population-based telephone survey (n = 1,019). After multivariate adjustment for demographic variables, older and non-White respondents reported greater mean numbers of total, first- and second-degree relatives. Females reported more total and first-degree relatives, while less educated respondents reported more second-degree relatives. Demographic differences in family size have implications for genetic research. Therefore, periodic collection of family structure data in representative populations would be useful. Copyright 2008 S. Karger AG, Basel.

  4. The mean intensity of radiation at 2 microns in the solar neighborhood

    International Nuclear Information System (INIS)

    Jura, M.

    1979-01-01

    Consideration is given to the value of the mean intensity at 2 microns in the solar neighborhood, and it is found that it is likely to be a factor of four greater than previously estimated on theoretical grounds. It is noted, however, that the estimate does agree with a reasonable extrapolation of the results of the survey of the Galactic plane by the Japanese group. It is concluded that the mean intensity in the solar neighborhood therefore probably peaks somewhat longward of 1 micron, and that this result is important for understanding the temperature of interstellar dust and the intensity of the far infrared background. This means specifically that dark clouds probably emit significantly more far infrared radiation than previously predicted.

  5. Testing Black Market vs. Official PPP: A Pooled Mean Group Estimation Approach

    OpenAIRE

    Goswami, Gour Gobinda; Hossain, Mohammad Zariab

    2013-01-01

    Testing purchasing power parity (PPP) using black market exchange rate data has gained popularity in recent times. It is claimed that black market exchange rate data more often support the PPP than the official exchange rate data. In this study, to assess both the long run stability of exchange rate and the short run dynamics, we employ Pooled Mean Group (PMG) Estimation developed by Pesaran et al. (1999) on eight groups of countries based on different criteria. Using the famous Reinhart and ...

  6. Improved sampling for airborne surveys to estimate wildlife population parameters in the African Savannah

    NARCIS (Netherlands)

    Khaemba, W.; Stein, A.

    2002-01-01

    Parameter estimates, obtained from airborne surveys of wildlife populations, often have large bias and large standard errors. Sampling error is one of the major causes of this imprecision and the occurrence of many animals in herds violates the common assumptions in traditional sampling designs like

  7. On the generalization of linear least mean squares estimation to quantum systems with non-commutative outputs

    Energy Technology Data Exchange (ETDEWEB)

    Amini, Nina H. [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); CNRS, Laboratoire des Signaux et Systemes (L2S) CentraleSupelec, Gif-sur-Yvette (France); Miao, Zibo; Pan, Yu; James, Matthew R. [Australian National University, ARC Centre for Quantum Computation and Communication Technology, Research School of Engineering, Canberra, ACT (Australia); Mabuchi, Hideo [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States)

    2015-12-15

    The purpose of this paper is to study the problem of generalizing the Belavkin-Kalman filter to the case where the classical measurement signal is replaced by a fully quantum non-commutative output signal. We formulate a least mean squares estimation problem that involves a non-commutative system as the filter processing the non-commutative output signal. We solve this estimation problem within the framework of non-commutative probability. Also, we find the necessary and sufficient conditions which make these non-commutative estimators physically realizable. These conditions are restrictive in practice. (orig.)

  8. Address-based versus random-digit-dial surveys: comparison of key health and risk indicators.

    Science.gov (United States)

    Link, Michael W; Battaglia, Michael P; Frankel, Martin R; Osborn, Larry; Mokdad, Ali H

    2006-11-15

    Use of random-digit dialing (RDD) for conducting health surveys is increasingly problematic because of declining participation rates and eroding frame coverage. Alternative survey modes and sampling frames may improve response rates and increase the validity of survey estimates. In a 2005 pilot study conducted in six states as part of the Behavioral Risk Factor Surveillance System, the authors administered a mail survey to selected household members sampled from addresses in a US Postal Service database. The authors compared estimates based on data from the completed mail surveys (n = 3,010) with those from the Behavioral Risk Factor Surveillance System telephone surveys (n = 18,780). The mail survey data appeared reasonably complete, and estimates based on data from the two survey modes were largely equivalent. Differences found, such as differences in the estimated prevalences of binge drinking (mail = 20.3%, telephone = 13.1%) or behaviors linked to human immunodeficiency virus transmission (mail = 7.1%, telephone = 4.2%), were consistent with previous research showing that, for questions about sensitive behaviors, self-administered surveys generally produce higher estimates than interviewer-administered surveys. The mail survey also provided access to cell-phone-only households and households without telephones, which cannot be reached by means of standard RDD surveys.

  9. BED estimates of HIV incidence: resolving the differences, making things simpler.

    Directory of Open Access Journals (Sweden)

    John Hargrove

    Full Text Available Develop a simple method for optimal estimation of HIV incidence using the BED capture enzyme immunoassay. Use existing BED data to estimate mean recency duration, false recency rates and HIV incidence with reference to a fixed time period, T. Compare BED and cohort estimates of incidence referring to identical time frames. Generalize this approach to suggest a method for estimating HIV incidence from any cross-sectional survey. Follow-up and BED analyses of the same, initially HIV negative, cases followed over the same set time period T, produce estimates of the same HIV incidence, permitting the estimation of the BED mean recency period for cases who have been HIV positive for less than T. Follow-up of HIV positive cases over T, similarly, provides estimates of the false-recent rate appropriate for T. Knowledge of these two parameters for a given population allows the estimation of HIV incidence during T by applying the BED method to samples from cross-sectional surveys. An algorithm is derived for providing these estimates, adjusted for the false-recent rate. The resulting estimator is identical to one derived independently using a more formal mathematical analysis. Adjustments improve the accuracy of HIV incidence estimates. Negative incidence estimates result from the use of inappropriate estimates of the false-recent rate and/or from sampling error, not from any error in the adjustment procedure. Referring all estimates of mean recency periods, false-recent rates and incidence estimates to a fixed period T simplifies estimation procedures and allows the development of a consistent method for producing adjusted estimates of HIV incidence of improved accuracy. Unadjusted BED estimates of incidence, based on life-time recency periods, would be both extremely difficult to produce and of doubtful value.
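
As an illustration of the kind of adjusted cross-sectional estimator described above, the sketch below implements one simplified form in which the observed count of "recent" results is corrected by the false-recent rate before dividing by susceptible person-years. The exact adjustment, the symbol names, and the input numbers are assumptions for illustration, not the paper's algorithm:

```python
def adjusted_incidence(n_tested, n_positive, n_recent, omega_years, false_recent_rate):
    """Annualized incidence from one cross-sectional survey (assumed simplified form).

    Subtract the expected number of false-recent results from the recent count,
    then divide by susceptible person-years accrued over the mean recency
    duration omega_years.
    """
    true_recent = n_recent - false_recent_rate * n_positive
    person_years = omega_years * (n_tested - n_positive)
    return true_recent / person_years

# Hypothetical survey: 1,000 tested, 200 HIV positive, 30 testing "recent",
# mean recency duration ~0.5 years, false-recent rate 5%.
incidence = adjusted_incidence(1000, 200, 30, 0.5, 0.05)  # (30 - 10) / 400 = 0.05
```

Note how an overestimated false-recent rate drives `true_recent`, and hence the incidence estimate, negative, matching the abstract's explanation of negative estimates.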

  10. Mobile Phone Surveys for Collecting Population-Level Estimates in Low- and Middle-Income Countries: A Literature Review.

    Science.gov (United States)

    Gibson, Dustin G; Pereira, Amanda; Farrenkopf, Brooke A; Labrique, Alain B; Pariyo, George W; Hyder, Adnan A

    2017-05-05

    National and subnational level surveys are important for monitoring disease burden, prioritizing resource allocation, and evaluating public health policies. As mobile phone access and ownership become more common globally, mobile phone surveys (MPSs) offer an opportunity to supplement traditional public health household surveys. The objective of this study was to systematically review the current landscape of MPSs to collect population-level estimates in low- and middle-income countries (LMICs). Primary and gray literature from 7 online databases were systematically searched for studies that deployed MPSs to collect population-level estimates. Titles and abstracts were screened on primary inclusion and exclusion criteria by two research assistants. Articles that met primary screening requirements were read in full and screened for secondary eligibility criteria. Articles included in review were grouped into the following three categories by their survey modality: (1) interactive voice response (IVR), (2) short message service (SMS), and (3) human operator or computer-assisted telephone interviews (CATI). Data were abstracted by two research assistants. The conduct and reporting of the review conformed to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. A total of 6625 articles were identified through the literature review. Overall, 11 articles were identified that contained 19 MPS (CATI, IVR, or SMS) surveys to collect population-level estimates across a range of topics. MPSs were used in Latin America (n=8), the Middle East (n=1), South Asia (n=2), and sub-Saharan Africa (n=8). Nine articles presented results for 10 CATI surveys (10/19, 53%). Two articles discussed the findings of 6 IVR surveys (6/19, 32%). Three SMS surveys were identified from 2 articles (3/19, 16%). Approximately 63% (12/19) of MPS were delivered to mobile phone numbers collected from previously administered household surveys. The majority of MPS (11

  11. Estimates of the mean alcohol concentration of the spirits, wine, and beer sold in the United States and per capita consumption: 1950 to 2002.

    Science.gov (United States)

    Kerr, William C; Greenfield, Thomas K; Tujague, Jennifer

    2006-09-01

    Estimates of per capita consumption of alcohol in the United States require estimates of the mean alcohol content by volume (%ABV) of the beer, wine, and spirits sold to convert beverage volume to gallons of pure alcohol. The mean %ABV of spirits is estimated for each year from 1950 to 2002 and for each state using the %ABV of major brands and sales of sprits types. The mean %ABV of beer and wine is extrapolated to cover this period based on previous estimates. These mean %ABVs are then applied to alcohol sales figures to calculate new yearly estimates of per capita consumption of beer, wine, spirits, and total alcohol for the United States population aged 15 and older. The mean %ABV for spirits is found to be lower than previous estimates and to vary considerably over time and across states. Resultant per capita consumption estimates indicate that more alcohol was consumed from beer and less from wine and spirits than found in previous estimates. Empirically based calculation of mean %ABV for beer, wine, and spirits sold in the United States results in different and presumably more accurate per capita consumption estimates than heretofore available. Utilization of the new estimates in aggregate time-series and cross-sectional models of alcohol consumption and related outcomes may improve the accuracy and precision of such models.
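
The conversion at the heart of this kind of estimate is simple: multiply each beverage class's sales volume by its mean %ABV and divide the summed pure alcohol by the population aged 15 and older. The volumes, ABV values, and population below are hypothetical round numbers, not the paper's figures:

```python
# Hypothetical annual sales (gallons of beverage) and mean alcohol content by volume.
sales = {
    "beer":    (5.0e9, 0.045),
    "wine":    (7.0e8, 0.115),
    "spirits": (4.0e8, 0.370),
}
population_15_plus = 2.3e8

# Gallons of pure ethanol across all beverage classes.
pure_alcohol_gallons = sum(volume * abv for volume, abv in sales.values())
per_capita_gallons = pure_alcohol_gallons / population_15_plus
```

Because spirits carry a much higher %ABV weight, a small revision to the spirits mean %ABV shifts the total noticeably, which is why the paper's empirically based %ABV series changes the per capita estimates.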

  12. MEAN OF MEDIAN ABSOLUTE DERIVATION TECHNIQUE MEAN ...

    African Journals Online (AJOL)

    eobe

    development of mean of median absolute derivation technique based on .... of noise mean to estimate the speckle noise variance. Noise mean property ..... Foraging Optimization,” International Journal of Advanced ...

  13. Measuring the difference in mean willingness to pay when dichotomous choice contingent valuation responses are not independent

    Science.gov (United States)

    Gregory L. Poe; Michael P. Welsh; Patricia A. Champ

    1997-01-01

    Dichotomous choice contingent valuation surveys frequently elicit multiple values in a single questionnaire. If individual responses are correlated across scenarios, the standard approach of estimating willingness to pay (WTP) functions independently for each scenario may result in biased estimates of the significance of the difference in mean WTP values. This paper...

  14. Five Year Mean Surface Chlorophyll Estimates in the Northern Gulf of Mexico for 2005 through 2009

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These images were created by combining the mean surface chlorophyll estimates to produce seasonal representations for winter, spring, summer and fall. Winter...

  15. Pairing call-response surveys and distance sampling for a mammalian carnivore

    Science.gov (United States)

    Hansen, Sara J. K.; Frair, Jacqueline L.; Underwood, Harold B.; Gibbs, James P.

    2015-01-01

    Density estimates accounting for differential animal detectability are difficult to acquire for wide-ranging and elusive species such as mammalian carnivores. Pairing distance sampling with call-response surveys may provide an efficient means of tracking changes in populations of coyotes (Canis latrans), a species of particular interest in the eastern United States. Blind field trials in rural New York State indicated 119-m linear error for triangulated coyote calls, and a 1.8-km distance threshold for call detectability, which was sufficient to estimate a detection function with precision using distance sampling. We conducted statewide road-based surveys with sampling locations spaced ≥6 km apart from June to August 2010. Each detected call (whether a single animal or a group) counted as a single object, representing 1 territorial pair, because of uncertainty in the number of vocalizing animals. From 524 survey points and 75 detections, we estimated the probability of detecting a calling coyote to be 0.17 ± 0.02 SE, yielding a detection-corrected index of 0.75 pairs/10 km2 (95% CI: 0.52–1.1, 18.5% CV) for a minimum of 8,133 pairs across rural New York State. Importantly, we consider this an index rather than a true estimate of abundance given the unknown probability of coyote availability for detection during our surveys. Even so, pairing distance sampling with call-response surveys provided a novel, efficient, and noninvasive means of monitoring populations of wide-ranging and elusive, albeit reliably vocal, mammalian carnivores. Our approach offers an effective new means of tracking species like coyotes, one that is readily extendable to other species and geographic extents, provided key assumptions of distance sampling are met.
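
The scaling behind a detection-corrected index can be reproduced from the numbers in the abstract: detections divided by the detection probability times the effectively surveyed area. Treating each survey point as a circle of radius equal to the 1.8-km detectability threshold is an assumption here; the study itself fit a formal detection function with mark-recapture distance sampling:

```python
import math

n_points = 524        # road-based survey points (from the abstract)
n_detections = 75     # detected calling singles/groups, each counted as one pair
p_detect = 0.17       # estimated probability of detecting a calling coyote
radius_km = 1.8       # distance threshold for call detectability

# Total effectively surveyed area, assuming non-overlapping circular plots.
surveyed_km2 = n_points * math.pi * radius_km ** 2
pairs_per_10km2 = 10 * n_detections / (p_detect * surveyed_km2)
```

This back-of-envelope value lands near the reported 0.75 pairs/10 km2, though the formal analysis also models the distance-dependent decline in detectability rather than a single flat probability.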

  16. Estimation of urban residential electricity demand in China using household survey data

    International Nuclear Information System (INIS)

    Zhou, Shaojie; Teng, Fei

    2013-01-01

    This paper uses annual urban household survey data of Sichuan Province from 2007 to 2009 to estimate the income and price elasticities of residential electricity demand, along with the effects of lifestyle-related variables. The empirical results show that in the urban area of Sichuan province, the residential electricity demand is price- and income-inelastic, with price and income elasticities ranging from −0.35 to −0.50 and from 0.14 to 0.33, respectively. Such lifestyle-related variables as demographic variables, dwelling size, and holdings of home appliances are also important determinants of residential electricity demand, especially the latter. These results are robust to a variety of sensitivity tests. The research findings imply that urban residential electricity demand continues to increase with the growth of income. The empirical results have important policy implications for the Multistep Electricity Price, which has been adopted in some cities and is expected to be promoted nationwide through the installation of energy-efficient home appliances. - Highlights: • We estimate price and income elasticities in China using household survey data. • The current study is the first such study in China at this level. • Both price and income are inelastic. • Behavioral factors have an important impact on electricity consumption
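
An elasticity in such studies is the slope of a demand regression in logs; in the simplest bivariate case the OLS slope of log consumption on log price is the price elasticity. The observations below are invented for illustration, and the actual study additionally controls for income and lifestyle variables:

```python
import math

# Hypothetical household observations: (electricity price, annual kWh consumed).
data = [(0.50, 2100), (0.55, 2040), (0.60, 1980), (0.65, 1950), (0.70, 1900)]

log_p = [math.log(p) for p, _ in data]
log_q = [math.log(q) for _, q in data]
n = len(data)
mp, mq = sum(log_p) / n, sum(log_q) / n

# OLS slope in logs = price elasticity of demand (negative, and inelastic if |.| < 1).
num = sum((x - mp) * (y - mq) for x, y in zip(log_p, log_q))
den = sum((x - mp) ** 2 for x in log_p)
elasticity = num / den
```

With these invented numbers the slope comes out around −0.3, i.e. a 1% price increase reduces consumption by roughly 0.3%, the kind of inelastic response the abstract reports.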

  17. Regional scale net radiation estimation by means of Landsat and TERRA/AQUA imagery and GIS modeling

    Science.gov (United States)

    Cristóbal, J.; Ninyerola, M.; Pons, X.; Llorens, P.; Poyatos, R.

    2009-04-01

    Net radiation (Rn) is one of the most important variables for the estimation of the surface energy budget and is used for various applications including agricultural meteorology, climate monitoring and weather prediction. Moreover, net radiation is an essential input variable for potential as well as actual evapotranspiration modeling. Nowadays, radiometric measurements provided by Remote Sensing and GIS analysis are the technologies used to compute net radiation at regional scales in a feasible way. In this study we present a regional scale estimation of the daily Rn on clear days (Catalonia, NE of the Iberian Peninsula), using a set of 22 Landsat images (17 Landsat-5 TM and 5 Landsat-7 ETM+) and 171 TERRA/AQUA MODIS images from the 2000 to 2007 period. TERRA/AQUA MODIS images have been downloaded by means of the EOS Gateway. We have selected three different types of products which contain the remote sensing data used to model daily Rn: the daily LST product, the daily calibrated reflectances product and the daily atmospheric water vapour product. Landsat-5 TM images have been corrected by means of conventional techniques based on first order polynomials taking into account the effect of land surface relief using a Digital Elevation Model, obtaining an RMS less than 30 m. Radiometric correction of Landsat non-thermal bands has been done following the methodology proposed by Pons and Solé (1994), which makes it possible to reduce the number of undesired artifacts due to the effects of the atmosphere or to differential illumination, which is, in turn, due to the time of day, the location on the Earth and the relief (zones more illuminated than others, shadows, etc.). Atmospheric correction of the Landsat thermal band has been carried out by means of a single-channel algorithm improvement developed by Cristóbal et al. (2009), and the land surface emissivity computed by means of the methodology proposed by Sobrino and Raissouni (2000). Rn has been estimated through the

  18. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    The methods of the sample mean's interval estimation, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the Empirical Characteristic distribution function are described. Numerical calculation on the samples' mean intervals is carried out where the numbers of the samples are 4, 5, 6 respectively. The results indicate the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than others in small sample situation. (authors)
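
A minimal percentile-bootstrap interval for the mean of a small sample, in the spirit of the Bootstrap method compared above (the data values and resample count are arbitrary illustration):

```python
import random

def bootstrap_mean_ci(sample, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the sample mean."""
    rng = random.Random(seed)   # fixed seed for a reproducible interval
    n = len(sample)
    # Resample with replacement n_boot times and collect the resample means.
    means = sorted(
        sum(rng.choice(sample) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

sample = [4.1, 5.0, 4.6, 5.3, 4.8]      # a small sample, n = 5
lo, hi = bootstrap_mean_ci(sample)
```

Unlike the classical t-interval, this requires no normality assumption, which is part of why resampling methods fare well in the small-sample comparison above.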

  19. An algorithm for the estimation of road traffic space mean speeds from double loop detector data

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Diaz, M.; Perez Perez, I.

    2016-07-01

    Most algorithms trying to analyze or forecast road traffic rely on many inputs, but in practice, calculations are usually limited by the available data and measurement equipment. Generally, some of these inputs are substituted by raw or even inappropriate estimations, which in some cases come into conflict with the fundamentals of traffic flow theory. This paper refers to one common example of these bad practices. Many traffic management centres depend on the data provided by double loop detectors, which supply, among others, vehicle speeds. The common data treatment is to compute the arithmetic mean of these speeds over different aggregation periods (i.e. the time mean speeds). Time mean speed is not consistent with Edie’s generalized definitions of traffic variables, and therefore it is not the average speed which relates flow to density. This means that current practice begins with an error that can have negative effects in later studies and applications. The algorithm introduced in this paper enables easily the estimation of space mean speeds from the data provided by the loops. It is based on two key hypotheses: stationarity of traffic and log-normal distribution of the individual speeds in each time interval of aggregation. It could also be used in case of transient traffic as a part of any data fusion methodology. (Author)
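
The distinction the abstract draws can be shown in a few lines: the space mean speed is the harmonic mean of the spot speeds, and under a log-normal assumption it can also be recovered from the time mean and the variance of the log speeds (for an exact log-normal, harmonic mean = arithmetic mean × exp(−σ²)). The speeds below are invented, and the paper's full estimator may differ in detail:

```python
import math

# Spot speeds from one aggregation interval of a double loop detector (km/h).
speeds = [82.0, 95.0, 60.0, 110.0, 74.0, 88.0]
n = len(speeds)

time_mean = sum(speeds) / n                   # what controllers usually report
space_mean = n / sum(1 / v for v in speeds)   # harmonic mean: Edie-consistent

# Log-normal shortcut: estimate the log-speed variance s2, then
# space mean ≈ time mean * exp(-s2).
logs = [math.log(v) for v in speeds]
mu = sum(logs) / n
s2 = sum((x - mu) ** 2 for x in logs) / n
space_mean_lognormal = time_mean * math.exp(-s2)
```

The time mean always exceeds the space mean when speeds vary, so feeding time mean speeds into flow–density relations biases them, which is the "bad practice" the paper targets.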

  20. A comparison of prevalence estimates for selected health indicators and chronic diseases or conditions from the Behavioral Risk Factor Surveillance System, the National Health Interview Survey, and the National Health and Nutrition Examination Survey, 2007-2008.

    Science.gov (United States)

    Li, Chaoyang; Balluz, Lina S; Ford, Earl S; Okoro, Catherine A; Zhao, Guixiang; Pierannunzi, Carol

    2012-06-01

    To compare the prevalence estimates of selected health indicators and chronic diseases or conditions among three national health surveys in the United States. Data from adults aged 18 years or older who participated in the Behavioral Risk Factor Surveillance System (BRFSS) in 2007 and 2008 (n=807,524), the National Health Interview Survey (NHIS) in 2007 and 2008 (n=44,262), and the National Health and Nutrition Examination Survey (NHANES) during 2007 and 2008 (n=5871) were analyzed. The prevalence estimates of current smoking, obesity, hypertension, and no health insurance were similar across the three surveys, with absolute differences ranging from 0.7% to 3.9% (relative differences: 2.3% to 20.2%). The prevalence estimate of poor or fair health from BRFSS was similar to that from NHANES, but higher than that from NHIS. The prevalence estimates of diabetes, coronary heart disease, and stroke were similar across the three surveys, with absolute differences ranging from 0.0% to 0.8% (relative differences: 0.2% to 17.1%). While the BRFSS continues to provide invaluable health information at state and local level, it is reassuring to observe consistency in the prevalence estimates of key health indicators of similar caliber between BRFSS and other national surveys. Published by Elsevier Inc.

  1. Statistical estimates of parameters of the theory of the Universe structure formation

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1989-01-01

    The mean distance between superclusters is estimated by the analysis of galaxies deep surveys. The mean distance between pancakes and mean density of maxima of smaller eigenvalue of the deformation tensor are numerically calculated. In the framework of the unstable dark matter (UDM) model the unstable particle mass is estimated by confrontation of observational and calculated data. The rate of pancakes birth is calculated in the UDM model. It is shown that for the situation with nonrelativistic products of decay, the pancakes formation begins at the red shift z=5-6

  2. Estimating the Impact of Means-tested Subsidies under Treatment Externalities with Application to Anti-Malarial Bednets

    DEFF Research Database (Denmark)

    Bhattacharya, Debopam; Dupas, Pascaline; Kanaya, Shin

    and its neighbors. Using experimental data from Kenya where subsidies were randomized, coupled with GPS-based location information, we show how to estimate aggregate ITN use resulting from means-tested subsidies in the presence of such spatial spillovers. Accounting for spillovers introduces infinite-dimensional estimated regressors corresponding to continuously distributed location coordinates and makes the inference problem novel. We show that even if individual ITN use unambiguously increases with increasing incidence of subsidy in the neighborhood, ignoring spillovers may over- or under-predict overall ITN use resulting from a specific targeting rule, depending on the resulting aggregate incidence of subsidy. Applying our method to the Kenyan data, we find that (i) individual ITN use rises with neighborhood subsidy-rates, (ii) under means-testing, predicted ITN use is a convex increasing function of the subsidy...

  3. Approximate median regression for complex survey data with skewed response.

    Science.gov (United States)

    Fraser, Raphael André; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett M; Pan, Yi

    2016-12-01

    The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this article, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution, and has much smaller mean square error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. © 2016, The International Biometric Society.

  4. Estimating mean long-term hydrologic budget components for watersheds and counties: An application to the commonwealth of Virginia, USA

    Science.gov (United States)

    Sanford, Ward E.; Nelms, David L.; Pope, Jason P.; Selnick, David L.

    2015-01-01

    Mean long-term hydrologic budget components, such as recharge and base flow, are often difficult to estimate because they can vary substantially in space and time. Mean long-term fluxes were calculated in this study for precipitation, surface runoff, infiltration, total evapotranspiration (ET), riparian ET, recharge, base flow (or groundwater discharge) and net total outflow using long-term estimates of mean ET and precipitation and the assumption that the relative change in storage over that 30-year period is small compared to the total ET or precipitation. Fluxes of these components were first estimated on a number of real-time-gaged watersheds across Virginia. Specific conductance was used to distinguish and separate surface runoff from base flow. Specific-conductance (SC) data were collected every 15 minutes at 75 real-time gages for approximately 18 months between March 2007 and August 2008. Precipitation was estimated for 1971-2000 using PRISM climate data. Precipitation and temperature from the PRISM data were used to develop a regression-based relation to estimate total ET. The proportion of watershed precipitation that becomes surface runoff was related to physiographic province and rock type in a runoff regression equation. A new approach to estimate riparian ET using seasonal SC data gave results consistent with those from other methods. Component flux estimates from the watersheds were transferred to flux estimates for counties and independent cities using the ET and runoff regression equations. Only 48 of the 75 watersheds yielded sufficient data, and data from these 48 were used in the final runoff regression equation. Final results for the study are presented as component flux estimates for all counties and independent cities in Virginia. The method has the potential to be applied in many other states in the U.S. or in other regions or countries of the world where climate and stream flow data are plentiful.
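
The accounting described above reduces to a simple long-term balance once the change in storage is taken as negligible: recharge is what remains of precipitation after total ET and surface runoff, and base flow is recharge less riparian ET. The fluxes below are hypothetical round numbers for a single watershed, not values from the study:

```python
# Hypothetical long-term mean fluxes for one watershed (mm/yr); storage change ~0.
precip = 1100.0
total_et = 650.0
runoff_fraction = 0.12       # share of precipitation leaving as surface runoff
riparian_et = 40.0           # ET drawn from the shallow riparian zone

surface_runoff = runoff_fraction * precip        # runoff regression output
infiltration = precip - surface_runoff           # water entering the subsurface
recharge = precip - total_et - surface_runoff    # supply to the groundwater system
base_flow = recharge - riparian_et               # groundwater discharge to streams
net_outflow = surface_runoff + base_flow         # total streamflow leaving the basin
```

A useful self-check is that precipitation minus total ET must equal surface runoff plus base flow plus riparian ET, so an error in any one regression-based component shows up as an imbalance.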

  5. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    Science.gov (United States)

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  6. Assessment of distribution and abundance estimates for Mariana swiftlets (Aerodramus bartschi) via examination of survey methods

    Science.gov (United States)

    Johnson, Nathan C.; Haig, Susan M.; Mosher, Stephen M.

    2018-01-01

    We described past and present distribution and abundance data to evaluate the status of the endangered Mariana Swiftlet (Aerodramus bartschi), a little-known echolocating cave swiftlet that currently inhabits 3 of 5 formerly occupied islands in the Mariana archipelago. We then evaluated the survey methods used to attain these estimates via fieldwork carried out on an introduced population of Mariana Swiftlets on the island of O'ahu, Hawaiian Islands, to derive better methods for future surveys. We estimate the range-wide population of Mariana Swiftlets to be 5,704 individuals occurring in 15 caves on Saipan, Aguiguan, and Guam in the Marianas; and 142 individuals occupying one tunnel on O'ahu. We further confirm that swiftlets have been extirpated from Rota and Tinian and have declined on Aguiguan. Swiftlets have remained relatively stable on Guam and Saipan in recent years. Our assessment of survey methods used for Mariana Swiftlets suggests overestimates depending on the technique used. We suggest the use of night vision technology and other changes to more accurately reflect their distribution, abundance, and status.

  7. Estimation of mean and median pO2 values for a composite EPR spectrum.

    Science.gov (United States)

    Ahmad, Rizwan; Vikram, Deepti S; Potter, Lee C; Kuppusamy, Periannan

    2008-06-01

    Electron paramagnetic resonance (EPR)-based oximetry is capable of quantifying oxygen content in samples. However, for a heterogeneous environment with multiple pO2 values, peak-to-peak linewidth of the composite EPR lineshape does not provide a reliable estimate of the overall pO2 in the sample. The estimate, depending on the heterogeneity, can be severely biased towards narrow components. To address this issue, we suggest a postprocessing method to recover the linewidth histogram which can be used in estimating meaningful parameters, such as the mean and median pO2 values. This information, although not as comprehensive as obtained by EPR spectral-spatial imaging, goes beyond what can be generally achieved with conventional EPR spectroscopy. Substantially shorter acquisition times, in comparison to EPR imaging, may prompt its use in clinically relevant models. For validation, simulation and EPR experiment data are presented.

  8. Estimating Classification Errors under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    NARCIS (Netherlands)

    Boeschoten, Laura; Oberski, Daniel; De Waal, Ton

    2017-01-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible

  9. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali.

    Science.gov (United States)

    Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco

    2012-10-12

    Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard error of VC and ICC estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
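    The bootstrapping exercise described above — resampling clusters to see how the standard error of the coverage estimate behaves as the number of clusters shrinks — can be sketched as follows. The cluster counts are invented for illustration; this is not the Mali data or the authors' code.

```python
import random

# Hypothetical cluster-level results: (vaccinated, sampled) per cluster.
clusters = [(14, 15), (12, 15), (15, 15), (9, 15), (13, 15),
            (11, 15), (14, 15), (10, 15), (15, 15), (12, 15)]

def coverage(sample):
    """Vaccination coverage pooled over a set of clusters."""
    vacc = sum(v for v, _ in sample)
    total = sum(n for _, n in sample)
    return vacc / total

def bootstrap_se(clusters, n_clusters, reps=2000, seed=1):
    """SE of the coverage estimate when only n_clusters clusters are
    resampled (with replacement) in each bootstrap replicate."""
    rng = random.Random(seed)
    estimates = [coverage([rng.choice(clusters) for _ in range(n_clusters)])
                 for _ in range(reps)]
    mean = sum(estimates) / reps
    var = sum((e - mean) ** 2 for e in estimates) / (reps - 1)
    return var ** 0.5

se_10 = bootstrap_se(clusters, 10)  # full 10-cluster design
se_3 = bootstrap_se(clusters, 3)    # reduced, 10 x 3-style design
```

    As expected, the standard error grows as fewer clusters are retained, which is the instability the authors report for the 10 × 3 design.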

  10. Reliability of Nationwide Prevalence Estimates of Dementia: A Critical Appraisal Based on Brazilian Surveys.

    Directory of Open Access Journals (Sweden)

    Flávio Chaimowicz

    Full Text Available The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia surveys, ascertained their risk of bias, and present the best estimate of occurrence of dementia in Brazil. We carried out an electronic search of PubMed, Latin-American databases, and a Brazilian thesis database for surveys focusing on dementia prevalence in Brazil. The systematic review was registered at PROSPERO (CRD42014008815). Among the 35 studies found, 15 analyzed population-based random samples. However, most of them utilized inadequate criteria for diagnostics. Six studies without these limitations were further analyzed to assess the risk of selection, attrition, outcome and population bias as well as several statistical issues. All the studies presented moderate or high risk of bias in at least two domains due to the following features: high non-response, inaccurate cut-offs, and doubtful accuracy of the examiners. Two studies had limited external validity due to high rates of illiteracy or low income. The three studies with adequate generalizability and the lowest risk of bias presented a prevalence of dementia between 7.1% and 8.3% among subjects aged 65 years and older. However, after adjustment for accuracy of screening, the best available evidence points towards a figure between 15.2% and 16.3%. The risk of bias may strongly limit the generalizability of dementia prevalence estimates in developing countries. Extrapolations that have already been made for Brazil and Latin America were based on a prevalence that should have been adjusted for screening accuracy or not used at all due to severe bias. Similar evaluations regarding other developing countries are needed in order to verify the scope of these limitations.

  11. A synthesis of convenience survey and other data to estimate undiagnosed HIV infection among men who have sex with men in England and Wales.

    Science.gov (United States)

    Walker, Kate; Seaman, Shaun R; De Angelis, Daniela; Presanis, Anne M; Dodds, Julie P; Johnson, Anne M; Mercey, Danielle; Gill, O Noel; Copas, Andrew J

    2011-10-01

    Hard-to-reach population subgroups are typically investigated using convenience sampling, which may give biased estimates. Combining information from such surveys, a probability survey, and clinic surveillance can potentially minimize the bias. We developed a methodology to estimate the prevalence of undiagnosed HIV infection among men who have sex with men (MSM) in England and Wales aged 16-44 years in 2003, making fuller use of the available data than earlier work. We performed a synthesis of three data sources: genitourinary medicine clinic surveillance (11 380 tests), a venue-based convenience survey including anonymous HIV testing (3702 MSM) and a general population sexual behaviour survey (134 MSM). A logistic regression model to predict undiagnosed infection was fitted to the convenience survey data and then applied to the MSM in the population survey to estimate the prevalence of undiagnosed infection in the general MSM population. This estimate was corrected for selection biases in the convenience survey using clinic surveillance data. A sensitivity analysis addressed uncertainty in our assumptions. The estimated prevalence of undiagnosed HIV in MSM was 2.4% (95% confidence interval (CI) 1.7-3.0%), and between 1.6% (95% CI 1.1-2.0%) and 3.3% (95% CI 2.4-4.1%) depending on assumptions; corresponding to 5500 (3390-7180), 3610 (2180-4740) and 7570 (4790-9840) men, and undiagnosed fractions of 33, 24 and 40%, respectively. Our estimates are consistent with earlier work that did not make full use of data sources. Reconciling data from multiple sources, including probability-, clinic- and venue-based convenience samples, can reduce bias in estimates. This methodology could be applied in other settings to take full advantage of multiple imperfect data sources.

  12. More recent robust methods for the estimation of mean and standard deviation of data

    International Nuclear Information System (INIS)

    Kanisch, G.

    2003-01-01

    Outliers in a data set result in biased values of mean and standard deviation. One way to improve the estimation of a mean is to apply tests to identify outliers and to exclude them from the calculations. Tests according to Grubbs or to Dixon, which are frequently used in practice, especially within laboratory intercomparisons, are not very efficient in identifying outliers. For more than ten years now, so-called robust methods have been used more and more; these determine mean and standard deviation by iteration, down-weighting values far from the mean and thereby diminishing the impact of outliers. In 1989 the Analytical Methods Committee of the Royal Society of Chemistry published such a robust method. In 1993 the US Environmental Protection Agency published a more efficient and quite versatile method. Mean and standard deviation are calculated by iteration and application of a special weight function for down-weighting outlier candidates. In 2000, W. Cofino et al. published a very efficient robust method which works quite differently from the others. It applies methods taken from the basics of quantum mechanics, such as "wave functions" associated with each laboratory mean value, and matrix algebra (solving eigenvalue problems). In contrast to the other ones, this method includes the individual measurement uncertainties. (orig.)
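    The iterate-and-down-weight idea can be sketched as a Huber-type M-estimator with a fixed MAD-based scale. This is a generic illustration of the approach, not a reproduction of any of the published algorithms mentioned above.

```python
def robust_mean(data, c=1.5, iters=50):
    """Iteratively reweighted (Huber-type) mean: points farther than c*scale
    from the current mean get weight proportional to 1/distance."""
    mu = sorted(data)[len(data) // 2]          # start from the median
    # fixed scale from the median absolute deviation (MAD)
    s = 1.4826 * sorted(abs(x - mu) for x in data)[len(data) // 2]
    for _ in range(iters):
        weights = [1.0 if abs(x - mu) <= c * s else (c * s) / abs(x - mu)
                   for x in data]
        mu = sum(w * x for w, x in zip(weights, data)) / sum(weights)
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 25.0]   # one gross outlier
plain = sum(data) / len(data)    # pulled up toward the outlier
robust = robust_mean(data)       # stays near the bulk of the data
```

    The arithmetic mean of this sample is above 12, while the robust estimate stays close to 10, illustrating why such methods are preferred over simple outlier rejection in intercomparisons.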

  13. An optimally weighted estimator of the linear power spectrum disentangling the growth of density perturbations across galaxy surveys

    International Nuclear Information System (INIS)

    Sorini, D.

    2017-01-01

    Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of data. In particular, signals coming from different distances encode information from different epochs. This is known as the "light-cone effect" and is going to have a higher impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method by Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking into account the light-cone effect. An analytic expression for the estimator is provided, which is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator presented recovers the fiducial linear power spectrum at present time within 5% accuracy up to k ∼ 0.80 h Mpc⁻¹ and within 10% up to k ∼ 0.94 h Mpc⁻¹, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of the data from future large-scale surveys, like Euclid.

  14. Estimating Power Outage Cost based on a Survey for Industrial Customers

    Science.gov (United States)

    Yoshida, Yoshikuni; Matsuhashi, Ryuji

    A survey was conducted on power outage cost for industrial customers. 5,139 factories, which are designated energy management factories in Japan, reported their power consumption and the loss of production value caused by a one-hour power outage on a summer weekday. The median unit cost of power outage across all sectors is estimated at 672 yen/kWh. The sector of services for amusement and hobbies and the sector of manufacture of information and communication electronics equipment have relatively high unit costs of power outage. Direct damage cost from power outage across all sectors reaches 77 billion yen. Using input-output analysis, we then estimated the indirect damage cost caused by the repercussions of production halts. Indirect damage cost across all sectors reaches 91 billion yen. The sector of wholesale and retail trade has the largest direct damage cost. The sector of manufacture of transportation equipment has the largest indirect damage cost.
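    The indirect-damage step can be illustrated with a toy Leontief input-output model: if A is the technical-coefficient matrix, the output consistent with final demand d is x = (I − A)⁻¹ d, so a direct production loss is amplified through inter-industry linkages, and the excess over the direct loss is the indirect damage. The 2-sector numbers below are invented, not the study's tables.

```python
# Toy technical-coefficient matrix A and direct damage by sector.
A = [[0.2, 0.3],
     [0.1, 0.4]]
direct = [10.0, 5.0]   # direct damage (e.g. billion yen) by sector

def solve2(M, b):
    """Solve a 2x2 linear system M x = b by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    x0 = (b[0] * M[1][1] - M[0][1] * b[1]) / det
    x1 = (M[0][0] * b[1] - b[0] * M[1][0]) / det
    return [x0, x1]

# Total loss x satisfies (I - A) x = direct; indirect = total - direct.
I_minus_A = [[1 - A[0][0], -A[0][1]],
             [-A[1][0], 1 - A[1][1]]]
total = solve2(I_minus_A, direct)
indirect = [t - d for t, d in zip(total, direct)]
```

    In this toy economy each sector's total loss exceeds its direct loss, mirroring the study's finding that propagation through supply chains adds a sizeable indirect cost on top of the direct one.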

  15. Improving Standard Poststratification Techniques For Random-Digit-Dialing Telephone Surveys

    Directory of Open Access Journals (Sweden)

    Michael P. Battaglia

    2008-03-01

    Full Text Available Random-digit-dialing surveys in the United States such as the Behavioral Risk Factor Surveillance System (BRFSS typically poststratify on age, gender and race/ethnicity using control totals from an appropriate source such as the 2000 Census, the Current Population Survey, or the American Community Survey. Using logistic regression and interaction detection software we identified key "main effect" socio-demographic variables and important two-factor interactions associated with several health risk factor outcomes measured in the BRFSS, one of the largest annual RDD surveys in the United States. A procedure was developed to construct control totals, which were consistent with estimates of age, gender, and race/ethnicity obtained from a commercial source and distributions of other demographic variables from the Current Population Survey. Raking was used to incorporate main effects and two-factor interaction margins into the weighting of the BRFSS survey data. The resulting risk factor estimates were then compared with those based on the current BRFSS weighting methodology and mean squared error estimates were developed. The research demonstrates that by identifying socio-demographic variables associated with key outcome variables and including these variables in the weighting methodology, nonresponse bias can be substantially reduced.
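    Raking (iterative proportional fitting of survey weights to control totals) as used above can be sketched in a few lines. The variables, levels, and control totals below are invented for illustration; they are not the BRFSS margins or weighting code.

```python
def rake(units, margins, iters=25):
    """units: dicts with weight 'w' plus categorical keys.
    margins: {variable: {level: control_total}}.
    Repeatedly rescale weights so each weighted margin hits its control total."""
    for _ in range(iters):
        for var, targets in margins.items():
            for level, target in targets.items():
                cur = sum(u['w'] for u in units if u[var] == level)
                factor = target / cur
                for u in units:
                    if u[var] == level:
                        u['w'] *= factor
    return units

# Toy sample of 4 respondents with equal starting weights.
units = [{'w': 1.0, 'sex': 'm', 'age': 'young'},
         {'w': 1.0, 'sex': 'm', 'age': 'old'},
         {'w': 1.0, 'sex': 'f', 'age': 'young'},
         {'w': 1.0, 'sex': 'f', 'age': 'old'}]
margins = {'sex': {'m': 40, 'f': 60}, 'age': {'young': 70, 'old': 30}}
units = rake(units, margins)
```

    Adding interaction margins, as the article proposes, amounts to raking on a combined variable (e.g. sex × age cells) alongside the main-effect margins.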

  16. Basin Visual Estimation Technique (BVET) and Representative Reach Approaches to Wadeable Stream Surveys: Methodological Limitations and Future Directions

    Science.gov (United States)

    Lance R. Williams; Melvin L. Warren; Susan B. Adams; Joseph L. Arvai; Christopher M. Taylor

    2004-01-01

    Basin Visual Estimation Techniques (BVET) are used to estimate abundance for fish populations in small streams. With BVET, independent samples are drawn from natural habitat units in the stream rather than sampling "representative reaches." This sampling protocol provides an alternative to traditional reach-level surveys, which are criticized for their lack...

  17. Can i just check...? Effects of edit check questions on measurement error and survey estimates

    NARCIS (Netherlands)

    Lugtig, Peter; Jäckle, Annette

    2014-01-01

    Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to

  18. Improving estimates of numbers of children with severe acute malnutrition using cohort and survey data

    DEFF Research Database (Denmark)

    Isanaka, Sheila; Boundy, Ellen O neal; Grais, Rebecca F

    2016-01-01

    Severe acute malnutrition (SAM) is reported to affect 19 million children worldwide. However, this estimate is based on prevalence data from cross-sectional surveys and can be expected to miss some children affected by an acute condition such as SAM. The burden of acute conditions is more...

  19. AUTOMATED UNSUPERVISED CLASSIFICATION OF THE SLOAN DIGITAL SKY SURVEY STELLAR SPECTRA USING k-MEANS CLUSTERING

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Almeida, J.; Allende Prieto, C., E-mail: jos@iac.es, E-mail: callende@iac.es [Instituto de Astrofisica de Canarias, E-38205 La Laguna, Tenerife (Spain)

    2013-01-20

    Large spectroscopic surveys require automated methods of analysis. This paper explores the use of k-means clustering as a tool for automated unsupervised classification of massive stellar spectral catalogs. The classification criteria are defined by the data and the algorithm, with no prior physical framework. We work with a representative set of stellar spectra associated with the Sloan Digital Sky Survey (SDSS) SEGUE and SEGUE-2 programs, which consists of 173,390 spectra from 3800 to 9200 Å sampled on 3849 wavelengths. We classify the original spectra as well as the spectra with the continuum removed. The second set only contains spectral lines, and it is less dependent on uncertainties of the flux calibration. The classification of the spectra with continuum renders 16 major classes. Roughly speaking, stars are split according to their colors, with enough finesse to distinguish dwarfs from giants of the same effective temperature, but with difficulties to separate stars with different metallicities. There are classes corresponding to particular MK types, intrinsically blue stars, dust-reddened, stellar systems, and also classes collecting faulty spectra. Overall, there is no one-to-one correspondence between the classes we derive and the MK types. The classification of spectra without continuum renders 13 classes, the color separation is not so sharp, but it distinguishes stars of the same effective temperature and different metallicities. Some classes thus obtained present a fairly small range of physical parameters (200 K in effective temperature, 0.25 dex in surface gravity, and 0.35 dex in metallicity), so that the classification can be used to estimate the main physical parameters of some stars at a minimum computational cost. We also analyze the outliers of the classification. Most of them turn out to be failures of the reduction pipeline, but there are also high redshift QSOs, multiple stellar systems, dust-reddened stars, galaxies, and, finally, odd
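    A minimal, dependency-free sketch of the k-means idea (Lloyd's algorithm) on toy two-band "spectra". Real pipelines operate on thousands of flux values per spectrum, and the deterministic farthest-point seeding used here is a simplification for illustration, not the authors' setup.

```python
def dist2(p, q):
    """Squared Euclidean distance between two equal-length flux vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50):
    """Lloyd's algorithm with deterministic farthest-point initialization."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two obvious toy "spectral classes": blue-dominated vs red-dominated fluxes.
blue = [[1.0 + 0.01 * i, 0.2] for i in range(5)]
red = [[0.2, 1.0 + 0.01 * i] for i in range(5)]
centers, groups = kmeans(blue + red, k=2)
```

    On this well-separated toy data the algorithm recovers the two classes exactly; on real spectra the class boundaries are data-driven, which is why the paper finds no one-to-one correspondence with MK types.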

  20. A method to combine non-probability sample data with probability sample data in estimating spatial means of environmental variables

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    2003-01-01

    In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the π estimator that uses the probability sample can be

  1. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    Directory of Open Access Journals (Sweden)

    Minetti Andrea

    2012-10-01

    Full Text Available Abstract Background Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: (i) health areas not requiring supplemental activities; (ii) health areas requiring additional vaccination; (iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard error of VC and ICC estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.

  2. Errors of Mean Dynamic Topography and Geostrophic Current Estimates in China's Marginal Seas from GOCE and Satellite Altimetry

    DEFF Research Database (Denmark)

    Jin, Shuanggen; Feng, Guiping; Andersen, Ole Baltazar

    2014-01-01

    The Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) and satellite altimetry can provide very detailed and accurate estimates of the mean dynamic topography (MDT) and geostrophic currents in China's marginal seas, using, for example, the newest high-resolution GOCE gravity field model GO-CONS-GCF-2-TIM-R4 and the new Centre National d'Etudes Spatiales mean sea surface model MSS_CNES_CLS_11 from satellite altimetry. However, errors and uncertainties of MDT and geostrophic current estimates from satellite observations are not generally quantified. In this paper, errors and uncertainties of MDT and geostrophic current estimates from satellite gravimetry and altimetry are investigated and evaluated in China's marginal seas. The cumulative error in MDT from GOCE is reduced from 22.75 to 9.89 cm when compared to the Gravity Recovery and Climate Experiment (GRACE) gravity field model ITG-Grace2010 results.

  3. Validation of the Maslach Burnout Inventory-Human Services Survey for Estimating Burnout in Dental Students.

    Science.gov (United States)

    Montiel-Company, José María; Subirats-Roig, Cristian; Flores-Martí, Pau; Bellot-Arcís, Carlos; Almerich-Silla, José Manuel

    2016-11-01

    The aim of this study was to examine the validity and reliability of the Maslach Burnout Inventory-Human Services Survey (MBI-HSS) as a tool for assessing the prevalence and level of burnout in dental students in Spanish universities. The survey was adapted from English to Spanish. A sample of 533 dental students from 15 Spanish universities and a control group of 188 medical students self-administered the survey online, using the Google Drive service. The test-retest reliability or reproducibility showed an Intraclass Correlation Coefficient of 0.95. The internal consistency of the survey was 0.922. Testing the construct validity showed two components with an eigenvalue greater than 1.5, which explained 51.2% of the total variance. Factor I (36.6% of the variance) comprised the items that estimated emotional exhaustion and depersonalization. Factor II (14.6% of the variance) contained the items that estimated personal accomplishment. The cut-off point for the existence of burnout achieved a sensitivity of 92.2%, a specificity of 92.1%, and an area under the curve of 0.96. Comparison of the total dental students sample and the control group of medical students showed significantly higher burnout levels for the dental students (50.3% vs. 40.4%). In this study, the MBI-HSS was found to be viable, valid, and reliable for measuring burnout in dental students. Since the study also found that the dental students suffered from high levels of this syndrome, these results suggest the need for preventive burnout control programs.

  4. Empirical Estimates in Optimization Problems: Survey with Special Regard to Heavy Tails and Dependent Data

    Czech Academy of Sciences Publication Activity Database

    Kaňková, Vlasta

    2012-01-01

    Roč. 19, č. 30 (2012), s. 92-111 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150; GA ČR GAP402/10/1610 Institutional support: RVO:67985556 Keywords : Stochastic optimization * empirical estimates * thin and heavy tails * independent and weak dependent random samples Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/kankova-empirical estimates in optimization problems survey with special regard to heavy tails and dependent data.pdf

  5. Density surface fitting to estimate the abundance of humpback whales based on the NASS-95 and NASS-2001 aerial and shipboard surveys

    Directory of Open Access Journals (Sweden)

    Charles GM Paxton

    2009-09-01

    The estimated humpback whale abundance for the region covered by the aerial and shipboard surveys in 1995 was 10,521 (95% CI: 3,716–24,636) using all available data and 7,625 (3,641–22,424) if survey blocks with 0 sightings around the Faroes and south of 60° N, where no humpback whales were detected, were excluded from the analysis. The estimate for the total survey region in 2001 was 14,662 (9,441–29,879). The high upper bounds of the confidence intervals were thought to be caused by a paucity of effort over wide areas of the survey leading to interpolation. Overall, the uncertainty associated with these abundance estimates was approximately equal to, or greater than, that associated with a stratified distance analysis. Given these wide CIs the evidence for a substantial difference in abundance between years was equivocal. However there was evidence to suggest that humpback whales congregated in shallower waters between 6 and 8 °C.

  6. Bayes allocation of the sample for estimation of the mean when each stratum has a Poisson distribution

    International Nuclear Information System (INIS)

    Wright, T.

    1983-01-01

    Consider a stratified population with L strata, so that a Poisson random variable is associated with each stratum. The parameter associated with the hth stratum is θ_h, h = 1, 2, ..., L. Let ω_h be the known proportion of the population in the hth stratum, h = 1, 2, ..., L. The authors want to estimate the parameter θ = Σ_{h=1}^{L} ω_h θ_h. We assume that prior information is available on θ_h and that it can be expressed in terms of a gamma distribution with parameters α_h and β_h, h = 1, 2, ..., L. We also assume that the prior distributions are independent. Using a squared error loss function, a Bayes allocation of total sample size with a cost constraint is given. The Bayes estimate using the Bayes allocation is shown to have an adjusted mean square error which is strictly less than the adjusted mean square error of the classical estimate using the classical allocation.
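    Under this gamma-Poisson setup, the posterior for each θ_h is again gamma (the gamma prior is conjugate to the Poisson likelihood), so the Bayes estimate of θ is a weighted sum of posterior means. The sketch below uses invented numbers and shows only the estimation step; the paper's actual contribution, the optimal sample allocation under a cost constraint, is not reproduced here.

```python
# With a Gamma(alpha_h, beta_h) prior (rate parametrization) and n_h Poisson
# observations in stratum h summing to x_h, the posterior is
# Gamma(alpha_h + x_h, beta_h + n_h), with posterior mean (alpha_h+x_h)/(beta_h+n_h).
strata = [
    # (omega_h, alpha_h, beta_h, x_h_total, n_h) -- illustrative values only
    (0.5, 2.0, 1.0, 30, 10),
    (0.3, 1.0, 2.0, 12, 8),
    (0.2, 3.0, 1.5, 9, 5),
]

def bayes_estimate(strata):
    """Bayes estimate of theta = sum_h omega_h * theta_h (squared error loss)."""
    return sum(w * (a + x) / (b + n) for w, a, b, x, n in strata)

theta_hat = bayes_estimate(strata)
```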

  7. Percentile estimation using the normal and lognormal probability distribution

    International Nuclear Information System (INIS)

    Bement, T.R.

    1980-01-01

    Implicitly or explicitly, percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles which are surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when proper distributional assumptions are met. Monte Carlo results are presented in this paper which show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution
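    The first kind of Monte Carlo comparison described above — the bias from assuming normality when the data are actually lognormal — can be sketched as follows. This is a generic illustration, not the paper's simulation design.

```python
import math
import random
import statistics

rng = random.Random(42)
# Draw a large sample from a standard lognormal distribution.
data = [rng.lognormvariate(0.0, 1.0) for _ in range(20000)]

# 97.5th percentile estimated under a (wrong) normality assumption:
mean = sum(data) / len(data)
normal_q = mean + 1.96 * statistics.stdev(data)

# True 97.5th percentile of a standard lognormal: exp(1.96) ~ 7.1.
true_q = math.exp(1.96)
```

    Because the lognormal is right-skewed, the mean-plus-1.96-SD rule lands well below the true upper percentile, the kind of systematic error the Monte Carlo study quantifies.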

  8. Estimating Single and Multiple Target Locations Using K-Means Clustering with Radio Tomographic Imaging in Wireless Sensor Networks

    Science.gov (United States)

    2015-03-26

    K-means clustering is an algorithm that has been used in data mining applications such as machine learning, pattern recognition, and hyper-spectral imagery…

  9. Estimating consumer willingness to pay a price premium for Alaska secondary wood products.

    Science.gov (United States)

    Geoffrey H. Donovan; David L. Nicholls

    2003-01-01

    Dichotomous choice contingent valuation survey techniques were used to estimate mean willingness to pay (WTP) a price premium for made-in-Alaska secondary wood products. Respondents were asked to compare two superficially identical end tables, one made in China and one made in Alaska. The surveys were administered at home shows in Anchorage, Fairbanks, and Sitka in...
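    In dichotomous-choice contingent valuation, WTP is typically recovered from a binary logit on the bid amount: with a linear logit P(yes) = 1/(1 + exp(−(a − b·t))), the bid at which acceptance is 50% is a/b (the median WTP, which equals the mean under a symmetric, unrestricted logit). The coefficients below are invented for illustration, not the Alaska study's estimates.

```python
import math

# Hypothetical fitted logit coefficients (illustrative only):
a, b = 2.0, 0.5

def p_yes(bid):
    """Probability a respondent accepts paying `bid` (premium in dollars)."""
    return 1.0 / (1.0 + math.exp(-(a - b * bid)))

median_wtp = a / b   # bid at which exactly half of respondents say yes
```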

  10. Estimate of the global-scale joule heating rates in the thermosphere due to time mean currents

    International Nuclear Information System (INIS)

    Roble, R.G.; Matsushita, S.

    1975-01-01

    An estimate of the global-scale joule heating rates in the thermosphere is made based on derived global equivalent overhead electric current systems in the dynamo region during geomagnetically quiet and disturbed periods. The equivalent total electric field distribution is calculated from Ohm's law. The global-scale joule heating rates are calculated for various monthly average periods in 1965. The calculated joule heating rates maximize at high latitudes in the early evening and postmidnight sectors. During geomagnetically quiet times the daytime joule heating rates are considerably lower than heating by solar EUV radiation. However, during geomagnetically disturbed periods the estimated joule heating rates increase by an order of magnitude and can locally exceed the solar EUV heating rates. The results show that joule heating is an important and at times the dominant energy source at high latitudes. However, the global mean joule heating rates calculated near solar minimum are generally small compared to the global mean solar EUV heating rates. (auth)
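    For reference, the standard height-integrated form of the Joule heating rate behind such estimates (not spelled out in the abstract) relates the Pedersen conductance Σ_P to the electric field via Ohm's law:

```latex
q_J = \vec{J} \cdot \vec{E} = \Sigma_P \, E^{2}
```

    which is why an order-of-magnitude increase in the disturbed-time electric field translates into a sharp increase in the estimated heating.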

  11. [Estimating child mortality using the previous child technique, with data from health centers and household surveys: methodological aspects].

    Science.gov (United States)

    Aguirre, A; Hill, A G

    1988-01-01

    2 trials of the previous child or preceding birth technique in Bamako, Mali, and Lima, Peru, gave very promising results for measurement of infant and early child mortality using data on survivorship of the 2 most recent births. In the Peruvian study, another technique was tested in which each woman was asked about her last 3 births. The preceding birth technique described by Brass and Macrae has rapidly been adopted as a simple means of estimating recent trends in early childhood mortality. The questions formulated and the analysis of results are direct when the mothers are visited at the time of birth or soon after. Several technical aspects of the method believed to introduce unforeseen biases have now been studied and found to be relatively unimportant. But the problems arising when the data come from a nonrepresentative fraction of the total fertile-aged population have not been resolved. The analysis based on data from 5 maternity centers, including 1 hospital, in Bamako, Mali, indicated some practical problems, and the information obtained showed the kinds of subtle biases that can result from the effects of selection. The study in Lima tested 2 abbreviated methods for obtaining recent early childhood mortality estimates in countries with deficient vital registration. The basic idea was that a few simple questions added to household surveys on, for example, immunization or diarrheal disease control could produce improved child mortality estimates. The mortality estimates in Peru were based on 2 distinct sources of information in the questionnaire. All women were asked their total number of live born children and the number still alive at the time of the interview. The proportion of deaths was converted into a measure of child survival using a life table. Then each woman was asked for a brief history of the 3 most recent live births. Dates of birth and death were noted in month and year of occurrence. The interviews took only slightly longer than the basic survey

  12. Estimating abundance of the Southern Hudson Bay polar bear subpopulation using aerial surveys, 2011 and 2012

    Science.gov (United States)

    Obbard, Martyn E.; Middel, Kevin R.; Stapleton, Seth P.; Thibault, Isabelle; Brodeur, Vincent; Jutras, Charles

    2013-01-01

    The Southern Hudson Bay (SH) polar bear subpopulation occurs at the southern extent of the species’ range. Although capture-recapture studies indicate that abundance remained stable between 1986 and 2005, declines in body condition and survival were documented during the period, possibly foreshadowing a future decrease in abundance. To obtain a current estimate of abundance, we conducted a comprehensive line transect aerial survey of SH during 2011–2012. We stratified the study site by anticipated densities and flew coastal contour transects and systematically spaced inland transects in Ontario and on Akimiski Island and large offshore islands in 2011. Data were collected with double observer and distance sampling protocols. We also surveyed small islands in Hudson Bay and James Bay and flew a comprehensive transect along the Québec coastline in 2012. We observed 667 bears in Ontario and on Akimiski Island and nearby islands in 2011, and we sighted 80 bears on offshore islands during 2012. Mark-recapture distance sampling and sight-resight models yielded a model-averaged estimate of 868 (SE: 177) for the 2011 study area. Our estimate of abundance for the entire SH subpopulation (951; SE: 177) suggests that abundance has remained unchanged. However, this result should be interpreted cautiously because of the methodological differences between historical studies (physical capture) and this survey. A conservative management approach is warranted given the previous increases in the duration of the ice-free season, which are predicted to continue in the future, and previously documented declines in body condition and vital rates.

  13. Doses from Hiroshima mass radiologic gastric surveys

    Energy Technology Data Exchange (ETDEWEB)

    Antoku, S; Sawada, S; Russell, W J [Radiation Effects Research Foundation, Hiroshima (Japan)

    1980-05-01

    Doses to examinees from mass radiologic surveys of the stomach in Hiroshima Prefecture were estimated by surveying the frequency of the examinations and the technical factors used in them, and by phantom dosimetry. The average surface, active bone marrow, and male and female gonad doses per examination were 5.73 rad, 231 mrad, and 20.6 and 140 mrad, respectively. These data will be used in estimating doses from medical X-rays among atomic bomb survivors. By applying them to the Hiroshima population, the genetically significant, per caput mean marrow, and leukemia significant doses were 0.14, 8.6 and 7.4 mrad, respectively. There was a benefit-to-risk ratio of about 50 for mass gastric surveys performed in 1976. However, the calculated risk was greater than the benefit for examinees under 29 years of age because of the lower incidence of gastric cancer in that age group.

  14. Estimating factors influencing the detection probability of semiaquatic freshwater snails using quadrat survey methods

    Science.gov (United States)

    Roesler, Elizabeth L.; Grabowski, Timothy B.

    2018-01-01

    Developing effective monitoring methods for elusive, rare, or patchily distributed species requires extra considerations, such as imperfect detection. Although detection is frequently modeled, the opportunity to assess it empirically is rare, particularly for imperiled species. We used Pecos assiminea (Assiminea pecos), an endangered semiaquatic snail, as a case study to test detection and accuracy issues surrounding quadrat searches. Quadrats (9 × 20 cm; n = 12) were placed in suitable Pecos assiminea habitat and randomly assigned a treatment, defined as the number of empty snail shells (0, 3, 6, or 9). Ten observers rotated through each quadrat, conducting 5-min visual searches for shells. The probability of detecting a shell when present was 67.4 ± 3.0%, but it decreased with increasing litter depth and with fewer shells present. The mean (± SE) observer accuracy was 25.5 ± 4.3%. Accuracy was positively correlated with the number of shells in the quadrat and negatively correlated with the number of times a quadrat was searched. The results indicate that quadrat surveys likely underrepresent true abundance but accurately determine presence or absence. Understanding detection and accuracy of elusive, rare, or imperiled species improves density estimates and aids in monitoring and conservation efforts.

  15. Efficient Topology Estimation for Large Scale Optical Mapping

    CERN Document Server

    Elibol, Armagan; Garcia, Rafael

    2013-01-01

    Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state-of-the-art in large area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...

  16. Estimating relative demand for wildlife: Conservation activity indicators

    Science.gov (United States)

    Gray, Gary G.; Larson, Joseph S.

    1982-09-01

    An alternative method of estimating relative demand among nonconsumptive uses of wildlife and among wildlife species is proposed. A demand intensity score (DIS), derived from the relative extent of an individual's involvement in outdoor recreation and conservation activities, is used as a weighting device to adjust the importance of preference rankings for wildlife uses and wildlife species relative to other members of a survey population. These adjusted preference rankings were considered to reflect relative demand levels (RDLs) for wildlife uses and for species by the survey population. This technique may be useful where it is not possible or desirable to estimate demand using traditional economic means. In one of the findings from a survey of municipal conservation commission members in Massachusetts, presented as an illustration of this methodology, poisonous snakes were ranked third in preference among five groups of reptiles. The relative demand level for poisonous snakes, however, was last among the five groups.

  17. State-of-charge inconsistency estimation of lithium-ion battery pack using mean-difference model and extended Kalman filter

    Science.gov (United States)

    Zheng, Yuejiu; Gao, Wenkai; Ouyang, Minggao; Lu, Languang; Zhou, Long; Han, Xuebing

    2018-04-01

    State-of-charge (SOC) inconsistency impacts the power, durability and safety of the battery pack. Therefore, it is necessary to measure the SOC inconsistency of the battery pack with good accuracy. We explore a novel method for modeling and estimating the SOC inconsistency of a lithium-ion (Li-ion) battery pack with low computational effort. In this method, a second-order RC model is selected as the cell mean model (CMM) to represent the overall performance of the battery pack. A hypothetical Rint model is employed as the cell difference model (CDM) to evaluate the SOC difference. The parameters of the mean-difference model (MDM) are identified with particle swarm optimization (PSO). Subsequently, the mean SOC and the cell SOC differences are estimated by using an extended Kalman filter (EKF). Finally, we conduct an experiment on a small Li-ion battery pack with twelve cells connected in series. The results show that the estimated SOC difference tracks the actual value after a quick convergence.
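
    The mean-SOC tracking step described in the record above can be sketched with a one-state Kalman filter. The cell model, OCV curve, and all parameter values below are illustrative assumptions, not the paper's identified MDM parameters; with a linear OCV the extended Kalman filter reduces to an ordinary Kalman filter.

```python
# Minimal sketch of mean-SOC estimation with a one-state (E)KF.
# Process model: coulomb counting; measurement model: terminal voltage
# from an assumed linear OCV(SOC) minus an ohmic drop. All parameters
# (capacity, r0, OCV slope, noise covariances) are hypothetical.

def ekf_soc(currents, voltages, dt=1.0, capacity=7200.0,
            r0=0.05, soc0=0.5, p0=0.1, q=1e-7, r=1e-3):
    """Track mean SOC from current (A, discharge positive) and voltage (V)."""
    ocv = lambda s: 3.0 + 1.2 * s   # assumed linear open-circuit voltage
    docv = 1.2                      # dOCV/dSOC, the measurement Jacobian H
    soc, p = soc0, p0
    estimates = []
    for i_k, v_k in zip(currents, voltages):
        # predict: coulomb counting advances the SOC state
        soc = soc - i_k * dt / capacity
        p = p + q
        # update: correct SOC using the voltage innovation
        v_pred = ocv(soc) - r0 * i_k
        k = p * docv / (docv * p * docv + r)
        soc = soc + k * (v_k - v_pred)
        p = (1 - k * docv) * p
        estimates.append(soc)
    return estimates
```

    For example, at rest with a measured voltage consistent with SOC = 0.8, the filter converges from a poor initial guess of 0.5 within a few iterations.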

  18. Estimating usual intakes mainly affects the micronutrient distribution among infants, toddlers and pre-schoolers from the 2012 Mexican National Health and Nutrition Survey.

    Science.gov (United States)

    Piernas, Carmen; Miles, Donna R; Deming, Denise M; Reidy, Kathleen C; Popkin, Barry M

    2016-04-01

    To compare estimates from one day with usual intake estimates to evaluate how the adjustment for within-person variability affected nutrient intake and adequacy in Mexican children. In order to obtain usual nutrient intakes, the National Cancer Institute's method was used to correct the first 24 h dietary recall collected in the entire sample (n 2045) with a second 24 h recall collected in a sub-sample (n 178). We computed estimates of one-day and usual intakes of total energy, fat, Fe, Zn and Na. 2012 Mexican National Health and Nutrition Survey. A total of 2045 children were included: 0-5·9 months old (n 182), 6-11·9 months old (n 228), 12-23·9 months old (n 537) and 24-47·9 months old (n 1098). From these, 178 provided an additional dietary recall. Although we found small or no differences in energy intake (kJ/d and kcal/d) between one-day v. usual intake means, the prevalence of inadequate and excessive energy intake decreased somewhat when using measures of usual intake relative to one day. Mean fat intake (g/d) was not different between one-day and usual intake among children >6 months old, but the prevalence of inadequate and excessive fat intake was overestimated among toddlers and pre-schoolers when using one-day intake (P < 0.05 among children >6 months). There was overall low variability in energy and fat intakes, but higher variability for micronutrients. Because the usual intake distributions are narrower, the prevalence of inadequate/excessive intakes may be biased when estimating nutrient adequacy if one day of data is used.

  19. Intelligence in Bali--A Case Study on Estimating Mean IQ for a Population Using Various Corrections Based on Theory and Empirical Findings

    Science.gov (United States)

    Rindermann, Heiner; te Nijenhuis, Jan

    2012-01-01

    A high-quality estimate of the mean IQ of a country requires giving a well-validated test to a nationally representative sample, which usually is not feasible in developing countries. So, we used a convenience sample and four corrections based on theory and empirical findings to arrive at a good-quality estimate of the mean IQ in Bali. Our study…

  20. Toward accurate and precise estimates of lion density.

    Science.gov (United States)

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km², and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability.
We therefore call for a unified framework to assess lion numbers in key populations to improve management and

  1. Contributions to the genetic and mean bone-marrow doses of the Australian population from radiological procedures

    International Nuclear Information System (INIS)

    Swindon, T.N.; Morris, N.D.

    1980-06-01

    The results of a national survey of radiological procedures used for diagnosis and therapy in medicine, dentistry and chiropractic are reviewed. Statistical data for the distribution and frequency of various procedures in Australian hospitals and practices are summarised, together with their associated radiation doses. Annual genetically significant and mean bone-marrow doses to the Australian population arising from these procedures are derived for the survey year of 1970. Values of 176 microgray and 651 microgray for the annual (per capita) genetic and mean bone-marrow doses, respectively, are reported. These compare closely with corresponding estimates in other countries with medical practices similar to those in Australia.

  2. Estimation of Genetic Effects from Generation Means in Maize (Zea mays L.)

    International Nuclear Information System (INIS)

    Ligeyo, D.O.; Ayiecho, P.O.

    1999-01-01

    Estimates of mean, additive, dominance, additive * additive, additive * dominance and dominance * dominance genetic effects were obtained for six crosses from four inbred lines of maize for grain yield. All the genetic effects contributed to the inheritance of yield; however, not all genetic effects were present in all crosses at all locations. Both additive and dominance genetic effects were responsible for the manifestation of variability in grain yield, though the dominance genetic effect was preponderant in all cases. In most cases, additive * additive and additive * dominance effects were more important contributors to inheritance than dominance * dominance gene effects at all locations. In all cases, the manifestation of the various genetic effects varied according to crosses and experimental sites.

  3. Using interview-based recall surveys to estimate cod Gadus morhua and eel Anguilla anguilla harvest in Danish recreational fishing

    DEFF Research Database (Denmark)

    Sparrevohn, Claus Reedtz; Storr-Paulsen, Marie

    2012-01-01

    Using interview-based recall surveys to estimate cod Gadus morhua and eel Anguilla anguilla harvest in Danish recreational fishing. – ICES Journal of Marine Science, 69: 323–330.Marine recreational fishing is a popular outdoor activity in Denmark, practised by both anglers and passive gear fishers....... However, the impact on the targeted stocks is unknown, so to estimate the 2009 harvest of cod Gadus morhua and eel Anguilla anguilla, two separate interview-based surveys were initiated and carried out in 2009/2010. The first recall survey exclusively targeted fishers who had been issued......, in certain areas, the recreational harvest of cod accounted for more than 30% of the total yield. The majority (81%) of the recreational cod harvest was taken by anglers. Eels, however, are almost exclusively caught with passive gear (fykenets) and a total of 104 t year−1 was harvested, which corresponds...

  4. Coded-Wire Tag Expansion Factors for Chinook Salmon Carcass Surveys in California: Estimating the Numbers and Proportions of Hatchery-Origin Fish

    Directory of Open Access Journals (Sweden)

    Michael S. Mohr

    2013-12-01

    Recovery of fish with adipose fin clips (adc) and coded-wire tags (cwt) in escapement surveys allows calculation of expansion factors used in estimation of the total number of fish from each adc,cwt release group, allowing escapement to be resolved by age and stock of origin. Expanded recoveries are used to derive important estimates such as the total number and proportion of hatchery-origin fish present. The standard estimation scheme assumes accurate visual classification of adc status, which can be problematic for decomposing carcasses. Failure to account for this potential misclassification can lead to significant estimation bias. We reviewed sample expansion factors used for the California Central Valley Chinook salmon 2010 carcass surveys in this context. For upper Sacramento River fall-run and late fall-run carcass surveys, the estimated proportions of adc,cwt fish for fresh and non-fresh carcasses differed substantially, likely from the under-recognition of adc fish in non-fresh carcasses. The resulting estimated proportions of hatchery-origin fish in the upper Sacramento River fall-run and late fall-run carcass surveys were 2.33 to 2.89 times higher if only fresh carcasses were considered. Similar biases can be avoided by considering only fresh carcasses, for which determination of adc status is relatively straightforward; however, restricting the analysis entirely to fresh carcasses may limit precision because of reduced sample size, and is only possible if protocols for sampling and recording data ensure that the sample data and results for fresh carcasses can be extracted. Thus we recommend sampling protocols that are clearly documented and separately track fresh versus non-fresh carcasses, either collecting only definitively adc fish or carefully tracking non-fresh carcasses that are definitively adc versus those that are possibly adc. This would allow judicious use of non-fresh carcass data when sample sizes are otherwise

  5. An estimate of the veteran population in England: based on data from the 2007 Adult Psychiatric Morbidity Survey.

    Science.gov (United States)

    Woodhead, Charlotte; Sloggett, Andy; Bray, Issy; Bradbury, Jason; McManus, Sally; Meltzer, Howard; Brugha, Terry; Jenkins, Rachel; Greenberg, Neil; Wessely, Simon; Fear, Nicola

    2009-01-01

    The health and well-being of military veterans has recently generated much media and political interest. Estimating the current and future size of the veteran population is important to the planning and allocation of veteran support services. Data from a 2007 nationally representative residential survey of England (the Adult Psychiatric Morbidity Survey) were extrapolated to the whole population to estimate the number of veterans currently residing in private households in England. This population was projected forward in two ten-year blocks up to 2027 using a current life table. It was estimated that in 2007, 3,771,534 (95% CI: 2,986,315-4,910,205) veterans were living in residential households in England. By 2027, this figure was predicted to decline by 50.4 per cent, mainly due to large reductions in the number of veterans in the older age groups (65-74 and 75+ years). Approximately three to five million veterans are currently estimated to be living in the community in England. As the proportion of National Service veterans reduces with time, the veteran population is expected to halve over the next 20 years.
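
    The projection step in the record above, carrying an age distribution forward in ten-year blocks using life-table survival, can be sketched as a simple cohort-survival update. The age bands and survival probabilities below are illustrative assumptions, not the survey's life-table values.

```python
# Hedged sketch of a cohort-survival projection over one decade: each
# 10-year age band is thinned by a decade survival probability and moved
# up one band; survivors of the open-ended last band stay in it.

def project_decade(pop, surv):
    """Advance a population in 10-year age bands by one decade.

    pop[i]  -- people in band i (last band is open-ended, e.g. 75+)
    surv[i] -- assumed probability of surviving the decade from band i
    """
    out = [0.0] * len(pop)
    for i, (p, s) in enumerate(zip(pop, surv)):
        j = min(i + 1, len(pop) - 1)
        out[j] += p * s
    return out
```

    Applying the function twice projects twenty years forward, as in the article's 2007-to-2027 projection; with no new entrants (as for National Service veterans), total numbers can only decline.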

  6. Mean consumption, poverty and inequality in rural India in the 60th round of the National Sample Survey.

    Science.gov (United States)

    Jha, Raghbendra; Gaiha, Raghav; Sharma, Anurag

    2010-01-01

    This article reports on mean consumption, poverty (all three FGT measures) and inequality during 2004 for rural India, using National Sample Survey (NSS) data for the 60th Round. Mean consumption at the national level is much higher than the poverty line. However, the Gini coefficient is higher than in recent earlier rounds. The headcount ratio is 22.9 per cent. Mean consumption, all three measures of poverty and the Gini coefficient are computed at the level of 20 states and 63 agro-climatic zones in these 20 states. It is surmised that, despite impressive growth rates, deprivation is pervasive, pockets of severe poverty persist, and inequality is rampant.
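
    The three FGT poverty measures and the Gini coefficient named in the record above are straightforward to compute; a minimal sketch (the consumption figures in the usage note are made up, not NSS data):

```python
# Foster-Greer-Thorbecke poverty indices and the Gini coefficient.

def fgt(consumption, z, alpha):
    """FGT_alpha with poverty line z.

    alpha = 0: headcount ratio; alpha = 1: poverty gap; alpha = 2: squared gap.
    """
    n = len(consumption)
    return sum(((z - y) / z) ** alpha for y in consumption if y < z) / n

def gini(consumption):
    """Gini coefficient via the sorted-rank formula 2*sum(i*y_i)/(n*sum(y)) - (n+1)/n."""
    ys = sorted(consumption)
    n = len(ys)
    return 2 * sum((i + 1) * y for i, y in enumerate(ys)) / (n * sum(ys)) - (n + 1) / n
```

    For example, with consumption levels [5, 10, 20, 40] and a poverty line of 20, `fgt(..., 20, 0)` gives a headcount ratio of 0.5, since two of the four observations fall strictly below the line.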

  7. The mean active bone marrow dose to the adult population of the United States from diagnostic radiology

    International Nuclear Information System (INIS)

    Shleien, B.; Tucker, T.T.; Johnson, D.W.

    1977-01-01

    Estimates, based on an empirical model and computer program, have been calculated and are presented on the mean active bone marrow dose to adults from diagnostic radiography, fluoroscopy, and dental radiography as practiced in the United States in 1970. The annual per capita mean active bone marrow dose in 1970 to adults from the above practices is estimated to be 103 mrad: 77 percent, 20 percent, and 3 percent from radiographic, fluoroscopic and dental examinations, respectively. Examinations of the upper and lower abdomen contribute approximately 39 percent each to the total mean active bone marrow dose for adults; those of the pelvis 4 percent; the thorax 12 percent; and head and neck examinations (including dental) about 6 percent. The per capita mean active bone marrow dose for various age groups is discussed. Contributions to the dose within a given age group from different examinations indicate that in the 15-34-year-old age group lumbar and lumbosacral spine examinations contribute most to the mean active bone marrow dose. Thereafter, upper GI series and barium enemas are the highest contributors. Comparisons are made with results of the 1964 U.S. X-ray survey and similar surveys from other nations.

  8. Estimation of breeding values for mean and dispersion, their variance and correlation using double hierarchical generalized linear models.

    Science.gov (United States)

    Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L

    2012-12-01

    The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iterative reweighted least square (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. The IRWLS is applied on the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0·52 for IRWLS and -0·62 in Sorensen & Waagepetersen (2003).

  9. Simple algorithm to estimate mean-field effects from minor differential permeability curves based on the Preisach model

    International Nuclear Information System (INIS)

    Perevertov, Oleksiy

    2003-01-01

    The classical Preisach model (PM) of magnetic hysteresis requires that any minor differential permeability curve lies under minor curves with larger field amplitude. Measurements of ferromagnetic materials show that very often this is not true. By applying the classical PM formalism to measured minor curves one can discover that it leads to an oval-shaped region on each half of the Preisach plane where the calculations produce negative values in the Preisach function. Introducing an effective field, which differs from the applied one by a mean-field term proportional to the magnetization, usually solves this problem. Complex techniques exist to estimate the minimum necessary proportionality constant (the moving parameter). In this paper we propose a simpler way to estimate the mean-field effects for use in nondestructive testing, which is based on experience from the measurements of industrial steels. A new parameter (parameter of shift) is introduced, which monitors the mean-field effects. The relation between the shift parameter and the moving one was studied for a number of steels. From preliminary experiments no correlation was found between the shift parameter and the classical magnetic ones such as the coercive field, maximum differential permeability and remanent magnetization

  10. A citizen science based survey method for estimating the density of urban carnivores

    Science.gov (United States)

    Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W.; Mill, Aileen C.; Smith, Graham C.; Tolhurst, Bryony A.

    2018-01-01

    Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS to determine relative density of two urban carnivores in England, Great Britain. We determined the density of red fox (Vulpes vulpes) social groups in 14 approximately 1-km² suburban areas in 8 different towns and cities, and of Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km⁻², which was double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating unreliability of the national data to determine actual densities or to extrapolate a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km⁻²). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density; however, publicly derived national datasets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However this transferability is contingent on

  11. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range

    OpenAIRE

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-01-01

    Background In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. Methods In this paper, we propose to improve the existing literature in ...
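
    The record above is the widely cited Wan et al. (2014) method. A sketch of its range-based scenario (minimum, median, maximum, and sample size reported), assuming approximately normal data; the formulas below are the ones commonly quoted for this method, so treat this as an approximation rather than a verified transcription of the paper:

```python
from statistics import NormalDist

# Estimate a sample's mean and SD from its minimum a, median m, maximum b,
# and size n, assuming roughly normal data (range-based scenario of the
# commonly quoted Wan et al. 2014 formulas).

def mean_sd_from_median_range(a, m, b, n):
    """Return (mean, sd) estimated from (min, median, max, n)."""
    mean = (a + 2 * m + b) / 4.0
    # expected half-width of the standardized range of n normal draws,
    # via the inverse normal CDF (stdlib NormalDist, Python 3.8+)
    z = NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    sd = (b - a) / (2 * z)
    return mean, sd
```

    Note how the divisor grows with n: the range of a larger sample spans more of the distribution, so the same range implies a smaller SD.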

  12. Importance of Survey Design for Studying the Epidemiology of Emerging Tobacco Product Use Among Youth.

    Science.gov (United States)

    Delnevo, Cristine D; Gundersen, Daniel A; Manderski, Michelle T B; Giovenco, Daniel P; Giovino, Gary A

    2017-08-15

    Accurate surveillance is critical for monitoring the epidemiology of emerging tobacco products in the United States, and survey science suggests that survey response format can impact prevalence estimates. We utilized data from the 2014 New Jersey Youth Tobacco Survey (n = 3,909) to compare estimates of the prevalence of 4 behaviors (ever hookah use, current hookah use, ever e-cigarette use, and current e-cigarette use) among New Jersey high school students, as assessed using "check-all-that-apply" questions, with estimates measured by means of "forced-choice" questions. Measurement discrepancies were apparent for all 4 outcomes, with the forced-choice questions yielding prevalence estimates approximately twice those of the check-all-that-apply questions, and agreement was fair to moderate. The sensitivity of the check-all-that-apply questions, treating the forced-choice format as the "gold standard," ranged from 38.1% (current hookah use) to 58.3% (ever e-cigarette use), indicating substantial false-negative rates. These findings highlight the impact of question response format on prevalence estimates of emerging tobacco products among youth and suggest that estimates generated by means of check-all-that-apply questions may be biased downward. Alternative survey designs should be considered to avoid check-all-that-apply response formats, and researchers should use caution when interpreting tobacco use data obtained from check-all-that-apply formats.

  13. A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, Robert J.; Kuhlman, Kristopher L

    2016-05-01

    We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the number of simulations needed to achieve an acceptable estimate.
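
    The control-variate idea behind the report above can be shown on a toy problem: reduce the variance of a Monte Carlo estimate of E[Y] by exploiting a correlated variable X whose mean is known exactly. The integrand below is illustrative, not the report's porous-media transport model.

```python
import random

# Control-variate Monte Carlo: for Y = U^2 with U ~ Uniform(0, 1), use the
# control X = U (known mean 1/2) to adjust the sample mean of Y toward E[Y] = 1/3.

def control_variate_mean(n, seed=0):
    rng = random.Random(seed)
    ys, xs = [], []
    for _ in range(n):
        u = rng.random()
        ys.append(u * u)   # quantity of interest, true mean 1/3
        xs.append(u)       # control variate, known mean 1/2
    ybar = sum(ys) / n
    xbar = sum(xs) / n
    # near-optimal coefficient c = -Cov(Y, X) / Var(X), estimated from the sample
    cov = sum((y - ybar) * (x - xbar) for y, x in zip(ys, xs)) / (n - 1)
    var = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    c = -cov / var
    return ybar + c * (xbar - 0.5)   # adjusted estimator of E[Y]
```

    Because Y and X are strongly correlated here, the adjusted estimator has far smaller variance than the plain sample mean of Y at the same number of simulations, which is the report's point about reducing the required simulation count.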

  14. Methods for estimating the occurrence of polypharmacy by means of a prescription database

    DEFF Research Database (Denmark)

    Bjerrum, L; Rosholm, J U; Hallas, J

    1997-01-01

    to equal the amount of drug purchased, as measured in defined daily doses (DDD), thereby assuming a daily intake of one DDD. PP was defined as overlapping periods of consumption for different drugs. A Venn diagram was used to illustrate and compare this estimator of PP with two other indicators of multiple......-drug use: the number of drugs purchased in 3 months and the mean number of drugs used in 1 year. A receiver operating curve (ROC) was used to evaluate the possibility of predicting episodes of PP from the number of drugs purchased in 3 months. RESULTS: The proposed estimator of PP was robust towards...... for the first time in 1994 stabilised after approximately 6 months, resulting in an incidence of major PP of 0.2% and of minor PP of 1.2% per month. For individuals exposed to PP, the median number of days of exposure was 61 and 10.5% were exposed for more than 350 days of the year. Purchase of five or more...

  15. Chronic disease prevalence from Italian administrative databases in the VALORE project: a validation through comparison of population estimates with general practice databases and national survey

    Science.gov (United States)

    2013-01-01

    Background Administrative databases are widely available and have been extensively used to provide estimates of chronic disease prevalence for the purpose of surveillance of both geographical and temporal trends. There are, however, other sources of data available, such as medical records from primary care and national surveys. In this paper we compare disease prevalence estimates obtained from these three different data sources. Methods Data from general practitioners (GP) and administrative transactions for health services were collected from five Italian regions (Veneto, Emilia Romagna, Tuscany, Marche and Sicily) belonging to all the three macroareas of the country (North, Center, South). Crude prevalence estimates were calculated by data source and region for diabetes, ischaemic heart disease, heart failure and chronic obstructive pulmonary disease (COPD). For diabetes and COPD, prevalence estimates were also obtained from a national health survey. When necessary, estimates were adjusted for completeness of data ascertainment. Results Crude prevalence estimates of diabetes in administrative databases (range: from 4.8% to 7.1%) were lower than corresponding GP (6.2%-8.5%) and survey-based estimates (5.1%-7.5%). Geographical trends were similar in the three sources and estimates based on treatment were the same, while estimates adjusted for completeness of ascertainment (6.1%-8.8%) were slightly higher. For ischaemic heart disease administrative and GP data sources were fairly consistent, with prevalence ranging from 3.7% to 4.7% and from 3.3% to 4.9%, respectively. In the case of heart failure administrative estimates were consistently higher than GPs’ estimates in all five regions, the highest difference being 1.4% vs 1.1%. For COPD the estimates from administrative data, ranging from 3.1% to 5.2%, fell into the confidence interval of the Survey estimates in four regions, but failed to detect the higher prevalence in the most Southern region (4.0% in

  16. Survey of engineering computational methods and experimental programs for estimating supersonic missile aerodynamic characteristics

    Science.gov (United States)

    Sawyer, W. C.; Allen, J. M.; Hernandez, G.; Dillenius, M. F. E.; Hemsch, M. J.

    1982-01-01

    This paper presents a survey of engineering computational methods and experimental programs used for estimating the aerodynamic characteristics of missile configurations. Emphasis is placed on those methods which are suitable for preliminary design of conventional and advanced concepts. An analysis of the technical approaches of the various methods is made in order to assess their suitability to estimate longitudinal and/or lateral-directional characteristics for different classes of missile configurations. Some comparisons between the predicted characteristics and experimental data are presented. These comparisons are made for a large variation in flow conditions and model attitude parameters. The paper also presents known experimental research programs developed for the specific purpose of validating analytical methods and extending the capability of data-base programs.

  17. The association of estimated salt intake with blood pressure in a Viet Nam national survey.

    Directory of Open Access Journals (Sweden)

    Paul N Jensen

    Full Text Available To evaluate the association of salt consumption with blood pressure in Viet Nam, a developing country with a high level of salt consumption. Analysis of a nationally representative sample of Vietnamese adults 25-65 years of age who were surveyed using the World Health Organization STEPwise approach to Surveillance protocol. Participants who reported acute illness, pregnancy, or current use of antihypertensive medications were excluded. Daily salt consumption was estimated from fasting mid-morning spot urine samples. Associations of salt consumption with systolic blood pressure and prevalent hypertension were assessed using adjusted linear and generalized linear models. Interaction terms were tested to assess differences by age, smoking, alcohol consumption, and rural/urban status. The analysis included 2,333 participants (mean age: 37 years, 46% male, 33% urban). The average estimated salt consumption was 10g/day. No associations of salt consumption with blood pressure or prevalent hypertension were observed at a national scale in men or women. The associations did not differ in subgroups defined by age, smoking, or alcohol consumption; however, associations differed between urban and rural participants (p-value for interaction of urban/rural status with salt consumption, p = 0.02), suggesting that higher salt consumption may be associated with higher systolic blood pressure in urban residents but lower systolic blood pressure in rural residents. Although there was no evidence of an association at a national level, associations of salt consumption with blood pressure differed between urban and rural residents in Viet Nam. The reasons for this differential association are not clear, and given the large rate of rural to urban migration experienced in Viet Nam, this topic warrants further investigation.

  18. The association of estimated salt intake with blood pressure in a Viet Nam national survey.

    Science.gov (United States)

    Jensen, Paul N; Bao, Tran Quoc; Huong, Tran Thi Thanh; Heckbert, Susan R; Fitzpatrick, Annette L; LoGerfo, James P; Ngoc, Truong Le Van; Mokdad, Ali H

    2018-01-01

    To evaluate the association of salt consumption with blood pressure in Viet Nam, a developing country with a high level of salt consumption. Analysis of a nationally representative sample of Vietnamese adults 25-65 years of age who were surveyed using the World Health Organization STEPwise approach to Surveillance protocol. Participants who reported acute illness, pregnancy, or current use of antihypertensive medications were excluded. Daily salt consumption was estimated from fasting mid-morning spot urine samples. Associations of salt consumption with systolic blood pressure and prevalent hypertension were assessed using adjusted linear and generalized linear models. Interaction terms were tested to assess differences by age, smoking, alcohol consumption, and rural/urban status. The analysis included 2,333 participants (mean age: 37 years, 46% male, 33% urban). The average estimated salt consumption was 10g/day. No associations of salt consumption with blood pressure or prevalent hypertension were observed at a national scale in men or women. The associations did not differ in subgroups defined by age, smoking, or alcohol consumption; however, associations differed between urban and rural participants (p-value for interaction of urban/rural status with salt consumption, p = 0.02), suggesting that higher salt consumption may be associated with higher systolic blood pressure in urban residents but lower systolic blood pressure in rural residents. Although there was no evidence of an association at a national level, associations of salt consumption with blood pressure differed between urban and rural residents in Viet Nam. The reasons for this differential association are not clear, and given the large rate of rural to urban migration experienced in Viet Nam, this topic warrants further investigation.

  19. Changes in mean serum lipids among adults in Germany: results from National Health Surveys 1997-99 and 2008-11

    Directory of Open Access Journals (Sweden)

    Julia Truthmann

    2016-03-01

    Full Text Available Abstract Background Monitoring of serum lipid concentrations at the population level is an important public health tool to describe progress in cardiovascular disease risk control and prevention. Using data from two nationally representative health surveys of adults 18–79 years, this study identified changes in mean serum total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), and triglycerides (TG) in relation to changes in potential determinants of serum lipids between 1997–99 and 2008–11 in Germany. Methods Sex-specific multivariable linear regression analyses were performed with serum lipids as dependent variables and survey wave as independent variable and adjusted for the following covariables: age, fasting duration, educational status, lifestyle, and use of medication. Results Mean TC declined between the two survey periods by 13 % (5.97 mmol/l vs. 5.19 mmol/l) among men and by 12 % (6.03 mmol/l vs. 5.30 mmol/l) among women. Geometric mean TG decreased by 14 % (1.66 mmol/l vs. 1.42 mmol/l) among men and by 8 % (1.20 mmol/l vs. 1.10 mmol/l) among women. Mean HDL-C remained unchanged among men (1.29 mmol/l vs. 1.27 mmol/l), but decreased by 5 % among women (1.66 mmol/l vs. 1.58 mmol/l). Sports activity and coffee consumption increased, while smoking and high alcohol consumption decreased only in men. Processed food consumption increased and wholegrain bread consumption decreased in both sexes, and obesity increased among men. The use of lipid-lowering medication, in particular statins, nearly doubled over time in both sexes. Among women, hormonal contraceptive use increased and postmenopausal hormone therapy halved over time. The changes in lipid levels between surveys remained significant after adjusting for covariables. Conclusion Serum TC and TG considerably declined over one decade in Germany, which can be partly explained by increased use of lipid-lowering medication and improved lifestyle among men. The

  20. Estimating leptospirosis incidence using hospital-based surveillance and a population-based health care utilization survey in Tanzania.

    Directory of Open Access Journals (Sweden)

    Holly M Biggs

    Full Text Available The incidence of leptospirosis, a neglected zoonotic disease, is uncertain in Tanzania and much of sub-Saharan Africa, resulting in scarce data on which to prioritize resources for public health interventions and disease control. In this study, we estimate the incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania. We conducted a population-based household health care utilization survey in two districts in the Kilimanjaro Region of Tanzania and identified leptospirosis cases at two hospital-based fever sentinel surveillance sites in the Kilimanjaro Region. We used multipliers derived from the health care utilization survey and case numbers from hospital-based surveillance to calculate the incidence of leptospirosis. A total of 810 households were enrolled in the health care utilization survey and multipliers were derived based on responses to questions about health care seeking in the event of febrile illness. Of patients enrolled in fever surveillance over a 1 year period and residing in the 2 districts, 42 (7.14%) of 588 met the case definition for confirmed or probable leptospirosis. After applying multipliers to account for hospital selection, test sensitivity, and study enrollment, we estimated the overall incidence of leptospirosis ranges from 75-102 cases per 100,000 persons annually. We calculated a high incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania, where leptospirosis incidence was previously unknown. Multiplier methods, such as used in this study, may be a feasible method of improving availability of incidence estimates for neglected diseases, such as leptospirosis, in resource constrained settings.
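
    The multiplier adjustment described in the abstract reduces to simple arithmetic: scale the observed case count by factors correcting for each source of under-ascertainment, then divide by the catchment population. In the sketch below only the 42 confirmed/probable cases come from the abstract; the three multipliers and the catchment population are invented placeholders, not the study's values.

```python
# Observed confirmed/probable cases over one year (from the abstract).
cases = 42

# Multipliers correcting for under-ascertainment (all assumed values):
m_hospital = 3.0       # febrile patients who never reach a sentinel hospital
m_sensitivity = 1.25   # imperfect diagnostic test sensitivity
m_enrollment = 1.1     # eligible patients who were not enrolled

population = 250_000   # assumed catchment population of the two districts

adjusted_cases = cases * m_hospital * m_sensitivity * m_enrollment
incidence_per_100k = adjusted_cases * 100_000 / population
# e.g. 42 * 3.0 * 1.25 * 1.1 = 173.25 adjusted cases -> 69.3 per 100,000
```

In the study itself the multipliers were derived from the household health care utilization survey rather than assumed.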

  1. Estimating Leptospirosis Incidence Using Hospital-Based Surveillance and a Population-Based Health Care Utilization Survey in Tanzania

    Science.gov (United States)

    Biggs, Holly M.; Hertz, Julian T.; Munishi, O. Michael; Galloway, Renee L.; Marks, Florian; Saganda, Wilbrod; Maro, Venance P.; Crump, John A.

    2013-01-01

    Background The incidence of leptospirosis, a neglected zoonotic disease, is uncertain in Tanzania and much of sub-Saharan Africa, resulting in scarce data on which to prioritize resources for public health interventions and disease control. In this study, we estimate the incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania. Methodology/Principal Findings We conducted a population-based household health care utilization survey in two districts in the Kilimanjaro Region of Tanzania and identified leptospirosis cases at two hospital-based fever sentinel surveillance sites in the Kilimanjaro Region. We used multipliers derived from the health care utilization survey and case numbers from hospital-based surveillance to calculate the incidence of leptospirosis. A total of 810 households were enrolled in the health care utilization survey and multipliers were derived based on responses to questions about health care seeking in the event of febrile illness. Of patients enrolled in fever surveillance over a 1 year period and residing in the 2 districts, 42 (7.14%) of 588 met the case definition for confirmed or probable leptospirosis. After applying multipliers to account for hospital selection, test sensitivity, and study enrollment, we estimated the overall incidence of leptospirosis ranges from 75–102 cases per 100,000 persons annually. Conclusions/Significance We calculated a high incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania, where leptospirosis incidence was previously unknown. Multiplier methods, such as used in this study, may be a feasible method of improving availability of incidence estimates for neglected diseases, such as leptospirosis, in resource constrained settings. PMID:24340122

  2. Modeling Site Heterogeneity with Posterior Mean Site Frequency Profiles Accelerates Accurate Phylogenomic Estimation.

    Science.gov (United States)

    Wang, Huai-Chun; Minh, Bui Quang; Susko, Edward; Roger, Andrew J

    2018-03-01

    Proteins have distinct structural and functional constraints at different sites that lead to site-specific preferences for particular amino acid residues as the sequences evolve. Heterogeneity in the amino acid substitution process between sites is not modeled by commonly used empirical amino acid exchange matrices. Such model misspecification can lead to artefacts in phylogenetic estimation such as long-branch attraction. Although sophisticated site-heterogeneous mixture models have been developed to address this problem in both Bayesian and maximum likelihood (ML) frameworks, their formidable computational time and memory usage severely limits their use in large phylogenomic analyses. Here we propose a posterior mean site frequency (PMSF) method as a rapid and efficient approximation to full empirical profile mixture models for ML analysis. The PMSF approach assigns a conditional mean amino acid frequency profile to each site calculated based on a mixture model fitted to the data using a preliminary guide tree. These PMSF profiles can then be used for in-depth tree-searching in place of the full mixture model. Compared with widely used empirical mixture models with $k$ classes, our implementation of PMSF in IQ-TREE (http://www.iqtree.org) speeds up the computation by approximately $k$/1.5-fold and requires a small fraction of the RAM. Furthermore, this speedup allows, for the first time, full nonparametric bootstrap analyses to be conducted under complex site-heterogeneous models on large concatenated data matrices. Our simulations and empirical data analyses demonstrate that PMSF can effectively ameliorate long-branch attraction artefacts. In some empirical and simulation settings PMSF provided more accurate estimates of phylogenies than the mixture models from which they derive.

  3. A Parametric k-Means Algorithm

    Science.gov (United States)

    Tarpey, Thaddeus

    2007-01-01

    Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
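
    The three-step recipe in the abstract (fit the parametric model by maximum likelihood, simulate a very large sample from the fitted model, run k-means on the simulation) can be sketched for a normal model. The data here are synthetic, and the 1-D k-means is a plain Lloyd's algorithm rather than the paper's implementation; for a normal distribution the two principal points are known to be mu ± sigma·sqrt(2/pi), which gives a check on the output.

```python
import numpy as np

def kmeans_1d(x, k, iters=50, seed=0):
    """Plain Lloyd's algorithm for 1-D data."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return np.sort(centers)

rng = np.random.default_rng(1)
sample = rng.normal(loc=5.0, scale=2.0, size=500)   # observed data (synthetic)

# Step 1: maximum likelihood estimates of the normal model parameters.
mu_hat, sigma_hat = sample.mean(), sample.std()

# Step 2: simulate a very large data set from the fitted distribution.
big = rng.normal(mu_hat, sigma_hat, size=200_000)

# Step 3: k-means on the simulated data -> parametric principal points.
pp = kmeans_1d(big, k=2)
# For k = 2, pp should be close to mu_hat +/- sigma_hat * sqrt(2/pi).
```

The nonparametric alternative would run k-means directly on the 500 observations; the parametric route exploits the fitted model to reduce sampling noise in the principal-point estimates.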

  4. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, effects of prior mean and variance are determined as a function of the amount of test data available.

  5. Comparative analysis of old-age mortality estimations in Africa.

    Directory of Open Access Journals (Sweden)

    Eran Bendavid

    Full Text Available Survival to old ages is increasing in many African countries. While demographic tools for estimating mortality up to age 60 have improved greatly, mortality patterns above age 60 rely on models based on little or no demographic data. These estimates are important for social planning and demographic projections. We provide direct estimations of older-age mortality using survey data. Since 2005, nationally representative household surveys in ten sub-Saharan countries record counts of living and recently deceased household members: Burkina Faso, Côte d'Ivoire, Ethiopia, Namibia, Nigeria, Swaziland, Tanzania, Uganda, Zambia, and Zimbabwe. After accounting for age heaping using multiple imputation, we use this information to estimate the probability of death in 5-year intervals (5qx). We then compare our 5qx estimates to those provided by the World Health Organization (WHO) and the United Nations Population Division (UNPD) to estimate the differences in mortality estimates, especially among individuals older than 60 years old. We obtained information on 505,827 individuals (18.4% over age 60, 1.64% deceased). WHO and UNPD mortality models match our estimates closely up to age 60 (mean difference in probability of death -1.1%). However, mortality probabilities above age 60 are lower using our estimations than either WHO or UNPD. The mean difference between our sample and the WHO is 5.9% (95% CI 3.8-7.9%) and between our sample and UNPD is 13.5% (95% CI 11.6-15.5%). Regardless of the comparator, the difference in mortality estimations rises monotonically above age 60. Mortality estimations above age 60 in ten African countries exhibit large variations depending on the method of estimation. The observed patterns suggest the possibility that survival in some African countries among adults older than age 60 is better than previously thought. Improving the quality and coverage of vital information in developing countries will become increasingly important with

  6. Off-road sampling reveals a different grassland bird community than roadside sampling: implications for survey design and estimates to guide conservation

    Directory of Open Access Journals (Sweden)

    Troy I. Wellicome

    2014-06-01

    Full Text Available Grassland bird species continue to decline steeply across North America. Road-based surveys such as the North American Breeding Bird Survey (BBS) are often used to estimate trends and population sizes and to build species distribution models for grassland birds, although roadside survey counts may introduce bias in estimates because of differences in habitats along roadsides and in off-road surveys. We tested for differences in land cover composition and in the avian community on 21 roadside-based survey routes and in an equal number of adjacent off-road walking routes in the grasslands of southern Alberta, Canada. Off-road routes (n = 225 point counts) had more native grassland and short shrubs and less fallow land and road area than the roadside routes (n = 225 point counts). Consequently, 17 of the 39 bird species differed between the two route types in frequency of occurrence and relative abundance, measured using an indicator species analysis. Six species, including five obligate grassland species, were more prevalent at off-road sites; they included four species listed under the Canadian federal Species At Risk Act or listed by the Committee on the Status of Endangered Wildlife in Canada: Sprague's Pipit (Anthus spragueii), Baird's Sparrow (Ammodramus bairdii), the Chestnut-collared Longspur (Calcarius ornatus), and McCown's Longspur (Rhynchophanes mccownii). The six species were as much as four times more abundant on off-road sites. Species more prevalent along roadside routes included common species and those typical of farmland and other human-modified habitats, e.g., the European Starling (Sturnus vulgaris), the Black-billed Magpie (Pica hudsonia), and the House Sparrow (Passer domesticus). Differences in avian community composition between roadside and off-road surveys suggest that the use of BBS data when generating population estimates or distribution models may overestimate certain common species and underestimate others of conservation

  7. Surveying wolves without snow: a critical review of the methods used in Spain

    OpenAIRE

    Blanco, Juan Carlos; Cortés, Yolanda

    2011-01-01

    Wolves (Canis lupus) are difficult to survey, and in most countries, snow is used for identifying the species, counting individuals, recording movements and determining social position. However, in the Iberian peninsula and other southern regions of its global range, snow is very scarce in winter, so wolves must be surveyed without snow. In Spain and Portugal, wolves are surveyed by estimating the number of wolf packs in summer by means of locating litters of pups when they are at rendezvous ...

  8. A statistical model for estimation of fish density including correlation in size, space, time and between species from research survey data

    DEFF Research Database (Denmark)

    Nielsen, J. Rasmus; Kristensen, Kasper; Lewy, Peter

    2014-01-01

    Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes...

  9. Estimating the mean and variance of measurements from serial radioactive decay schemes with emphasis on 222Rn and its short-lived progeny

    International Nuclear Information System (INIS)

    Inkret, W.C.; Borak, T.B.; Boes, D.C.

    1990-01-01

    Classically, the mean and variance of radioactivity measurements are estimated from Poisson distributions. However, the random distribution of observed events is not Poisson when the half-life is short compared with the interval of observation or when more than one event can be associated with a single initial atom. Procedures were developed to estimate the mean and variance of single measurements of serial radioactive processes. Results revealed that observations from the three consecutive alpha emissions beginning with 222Rn are positively correlated. Since the Poisson estimator ignores covariance terms, it underestimates the true variance of the measurement. The reverse is true for mixtures of radon daughters only. (author)
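
    The variance inflation described above can be reproduced with a toy simulation. Each initial atom can contribute up to three correlated alpha counts within the counting window; the chain probabilities below are assumed values for illustration, not measured 222Rn branching or timing data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed probability that each successive decay in the chain
# (222Rn -> 218Po -> ... ) also falls inside the counting interval,
# given that the previous one did.
p1, p2, p3 = 0.6, 0.8, 0.7
n_atoms, n_trials = 500, 5_000

d1 = rng.random((n_trials, n_atoms)) < p1           # first alpha observed
d2 = d1 & (rng.random((n_trials, n_atoms)) < p2)    # second, given first
d3 = d2 & (rng.random((n_trials, n_atoms)) < p3)    # third, given second

counts = (d1.astype(int) + d2 + d3).sum(axis=1)     # total alphas per trial

m, v = counts.mean(), counts.var()
# A Poisson process would give v ~= m.  Here the three emissions from one
# atom are positively correlated, so v > m and the Poisson estimator
# understates the true variance of the measurement.
```

Mean count per atom is p1 + p1·p2 + p1·p2·p3 = 1.416, while the per-atom variance works out to about 1.72, a variance-to-mean ratio of roughly 1.2 in this toy setting.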

  10. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James and Stein theory of estimating three or more parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
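
    The James-Stein result invoked above is easy to demonstrate numerically. This sketch uses an arbitrary 10-dimensional mean vector and the standard positive-part James-Stein estimator; it illustrates the shrinkage phenomenon only, not the paper's model averaging scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

p = 10                            # dimension; dominance requires p >= 3
theta = np.linspace(-1, 1, p)     # true mean vector (chosen arbitrarily)
n_rep = 5_000

mle_err = js_err = 0.0
for _ in range(n_rep):
    x = theta + rng.standard_normal(p)               # X ~ N(theta, I); MLE is X
    shrink = max(0.0, 1.0 - (p - 2) / np.dot(x, x))  # positive-part JS factor
    js = shrink * x                                  # shrink toward the origin
    mle_err += np.sum((x - theta) ** 2)
    js_err += np.sum((js - theta) ** 2)

# For p >= 3 the James-Stein estimator dominates the MLE:
# its accumulated squared error is strictly smaller.
```

The same shrink-toward-a-common-point logic is what makes averaging over candidate models competitive with picking a single "best" model.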

  11. Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities

    International Nuclear Information System (INIS)

    Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.; Salas, E.; Martin, N.

    2008-01-01

    The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that combines information on the physical inventory of the whole plant with radiological survey data. The radiological inventory of all the components and civil structures of the plant can be estimated with mathematical models based on a statistical approach. A computer application has been developed to obtain the radiological inventory automatically. Results: A computer application has been developed that estimates the radiological inventory from the radiological measurements or the characterization program. This application includes the statistical functions needed to estimate central tendency and variability, e.g. mean, median, variance, confidence intervals, variation coefficients, etc. It is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful aid to decision-making in future sampling surveys

  12. Robust w-Estimators for Cryo-EM Class Means

    Science.gov (United States)

    Huang, Chenxi; Tagare, Hemant D.

    2016-01-01

    A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the “class mean”, improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised because of outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a “w-estimator” of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
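
    The flavor of a threshold-free, weight-based robust average can be sketched with a generic IRLS w-estimator using Cauchy weights. This is an illustrative stand-in, not the specific estimator or influence function proposed in the paper: images are flattened to 1-D vectors, the template is synthetic, and the outliers mimic ice contamination.

```python
import numpy as np

rng = np.random.default_rng(4)

# 100 noisy copies of a "particle image" plus 10 outlier images.
template = np.linspace(0.0, 1.0, 64)                 # stand-in for a 2-D image
inliers = template + 0.1 * rng.standard_normal((100, 64))
outliers = 5.0 + rng.standard_normal((10, 64))       # e.g. ice contamination
images = np.vstack([inliers, outliers])

def w_estimate(x, c=1.0, iters=20):
    """Iteratively reweighted mean with Cauchy weights w = 1/(1 + (r/c)^2).

    Outlying images get small weights automatically -- no hard threshold.
    """
    mu = x.mean(axis=0)                              # start from the plain mean
    for _ in range(iters):
        r = np.linalg.norm(x - mu, axis=1)           # residual per image
        w = 1.0 / (1.0 + (r / c) ** 2)
        mu = (w[:, None] * x).sum(axis=0) / w.sum()
    return mu

plain = images.mean(axis=0)      # dragged toward the outliers
robust = w_estimate(images)      # stays close to the template
```

Because the weights decay smoothly with residual size, the estimator needs no manually tuned rejection threshold, which is the practical advantage the abstract emphasizes over cross-correlation cutoffs.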

  13. SU-F-T-244: Radiotherapy Risk Estimation Based On Expert Group Survey

    International Nuclear Information System (INIS)

    Koo, J; Yoon, M; Chung, W; Chung, M; Kim, D

    2016-01-01

    Purpose: To evaluate the reliability of the RPN (Risk Priority Number) decided by an expert group and to provide preliminary data for adapting FMEA in Korea. Methods: 1163 incidents reported in ROSIS over 11 years were used as real data for comparison and were categorized into 146 items. The questionnaire was composed of the 146 items, and respondents had to evaluate the ‘occurrence (O)’, ‘severity (S)’, and ‘detectability (D)’ of each item on a scale from 1 to 10 according to the proposed AAPM TG-100 rating scales. 19 medical physicists from 19 different organizations in Korea participated in the survey. Because the number of ROSIS items was not spread evenly enough to be classified into 10 grades, a 1–5 scale was chosen instead of 1–10, and the survey results were also fit to 5 grades for comparison. Results: The average O, S, D were 1.77, 3.50, 2.13, respectively, and the item with the highest RPN (32) was ‘patient movement during treatment’ in the survey. When comparing items ranked in the top 10 of the survey (O) and the ROSIS database, two items were duplicated, and ‘Simulation’ and ‘Treatment’ were the most frequently ranked RT processes in the top 10 of the survey and ROSIS, respectively. The Cronbach α of each RT process ranged from 0.74 to 0.99 and the p-value was <0.001. When comparing O*D, the average difference was 1.4. Conclusion: This work indicates the deviation between actual risk and expectation. Considering that the respondents were Korean, that ROSIS is mainly composed of incidents that happened in European countries, and that some of the top 10 items of ROSIS cannot be applied to radiotherapy procedures in Korea, the deviation could have come from procedural differences. Moreover, if the expert group had consisted of experts from various fields, the expectations might have been more accurate. Therefore, further research on radiotherapy risk estimation is needed.

  14. SU-F-T-244: Radiotherapy Risk Estimation Based On Expert Group Survey

    Energy Technology Data Exchange (ETDEWEB)

    Koo, J; Yoon, M [Korea University, Seoul (Korea, Republic of); Chung, W; Chung, M; Kim, D [Kyung Hee University Hospital at Gangdong, Gangdong-gu, Seoul (Korea, Republic of)

    2016-06-15

    Purpose: To evaluate the reliability of the RPN (Risk Priority Number) decided by an expert group and to provide preliminary data for adapting FMEA in Korea. Methods: 1163 incidents reported in ROSIS over 11 years were used as real data for comparison and were categorized into 146 items. The questionnaire was composed of the 146 items, and respondents had to evaluate the ‘occurrence (O)’, ‘severity (S)’, and ‘detectability (D)’ of each item on a scale from 1 to 10 according to the proposed AAPM TG-100 rating scales. 19 medical physicists from 19 different organizations in Korea participated in the survey. Because the number of ROSIS items was not spread evenly enough to be classified into 10 grades, a 1–5 scale was chosen instead of 1–10, and the survey results were also fit to 5 grades for comparison. Results: The average O, S, D were 1.77, 3.50, 2.13, respectively, and the item with the highest RPN (32) was ‘patient movement during treatment’ in the survey. When comparing items ranked in the top 10 of the survey (O) and the ROSIS database, two items were duplicated, and ‘Simulation’ and ‘Treatment’ were the most frequently ranked RT processes in the top 10 of the survey and ROSIS, respectively. The Cronbach α of each RT process ranged from 0.74 to 0.99 and the p-value was <0.001. When comparing O*D, the average difference was 1.4. Conclusion: This work indicates the deviation between actual risk and expectation. Considering that the respondents were Korean, that ROSIS is mainly composed of incidents that happened in European countries, and that some of the top 10 items of ROSIS cannot be applied to radiotherapy procedures in Korea, the deviation could have come from procedural differences. Moreover, if the expert group had consisted of experts from various fields, the expectations might have been more accurate. Therefore, further research on radiotherapy risk estimation is needed.

  15. Mean--variance portfolio optimization when means and covariances are unknown

    OpenAIRE

    Tze Leung Lai; Haipeng Xing; Zehao Chen

    2011-01-01

    Markowitz's celebrated mean--variance portfolio optimization theory assumes that the means and covariances of the underlying asset returns are known. In practice, they are unknown and have to be estimated from historical data. Plugging the estimates into the efficient frontier that assumes known parameters has led to portfolios that may perform poorly and have counter-intuitive asset allocation weights; this has been referred to as the "Markowitz optimization enigma." After reviewing differen...
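
    The plug-in step criticized in this abstract can be sketched for two assets: estimate the mean vector and covariance matrix from historical returns, then normalize Σ⁻¹μ into fully-invested portfolio weights (a tangency-style rule with a zero risk-free rate). The return series below are invented, and a closed-form 2×2 inverse stands in for a general solver; this is exactly the naive plug-in whose fragility the abstract discusses, not a recommended estimator.

```python
def sample_mean(xs):
    return sum(xs) / len(xs)

def sample_cov(xs, ys):
    """Unbiased sample covariance between two return series."""
    mx, my = sample_mean(xs), sample_mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def plug_in_weights(r1, r2):
    """Plug-in mean-variance weights for two assets: normalize inv(Sigma) @ mu."""
    mu = (sample_mean(r1), sample_mean(r2))
    a, b = sample_cov(r1, r1), sample_cov(r1, r2)
    d = sample_cov(r2, r2)
    det = a * d - b * b
    raw = ((d * mu[0] - b * mu[1]) / det,   # first component of inv(Sigma) @ mu
           (a * mu[1] - b * mu[0]) / det)   # second component
    total = raw[0] + raw[1]
    return raw[0] / total, raw[1] / total   # fully invested: weights sum to 1

# Hypothetical return histories; with uncorrelated returns the weights reduce
# to mu_i / var_i, normalized.
w = plug_in_weights([0.1, 0.2, 0.1, 0.2], [0.05, 0.05, 0.1, 0.1])
```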

  16. Estimating flood discharge using witness movies in post-flood hydrological surveys

    Science.gov (United States)

    Le Coz, Jérôme; Hauet, Alexandre; Le Boursicaud, Raphaël; Pénard, Lionel; Bonnifait, Laurent; Dramais, Guillaume; Thollet, Fabien; Braud, Isabelle

    2015-04-01

    The estimation of streamflow rates based on post-flood surveys is of paramount importance for the investigation of extreme hydrological events. Major uncertainties usually arise from the absence of information on the flow velocities and from the limited spatio-temporal resolution of such surveys. Nowadays, after each flood occurring in populated areas, home movies taken from bridges, river banks or even drones are shared by witnesses through Internet platforms like YouTube. Provided that some topography data and additional information are collected, image-based velocimetry techniques can be applied to some of these movies in order to estimate flood discharges. As a contribution to recent post-flood surveys conducted in France, we developed and applied a method for estimating velocities and discharges based on the Large Scale Particle Image Velocimetry (LSPIV) technique. Since the seminal work of Fujita et al. (1998), LSPIV applications to river flows have been reported by a number of authors and LSPIV can now be considered a mature technique. However, its application to non-professional movies taken by flood witnesses remains challenging and required some practical developments. The steps for applying LSPIV analysis to a flood home movie are as follows: (i) select a video of interest; (ii) contact the author for agreement and extra information; (iii) conduct a field topography campaign to georeference Ground Control Points (GCPs), water level and cross-sectional profiles; (iv) preprocess the video before LSPIV analysis: correct lens distortion, align the images, etc.; (v) orthorectify the images to correct perspective effects and know the physical size of pixels; (vi) proceed with the LSPIV analysis to compute the surface velocity field; and (vii) compute discharge according to a user-defined velocity coefficient. Two case studies in French mountainous rivers during extreme floods are presented. The movies were collected on YouTube and field topography
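
    Step (vii) above, converting the LSPIV surface velocity field to discharge, can be sketched as a mid-section sum in which each subsection of the cross-section contributes width × depth × α × surface velocity, with α the user-defined velocity coefficient relating surface to depth-averaged velocity (≈0.85 is a commonly quoted default). The widths, depths and velocities below are hypothetical.

```python
def discharge(widths, depths, surface_velocities, alpha=0.85):
    """Discharge (m3/s): sum of subsection area times depth-averaged velocity,
    where the depth-averaged velocity is alpha * surface velocity."""
    return sum(w * d * alpha * v
               for w, d, v in zip(widths, depths, surface_velocities))

# Hypothetical cross-section split into three 2 m wide subsections.
q = discharge(widths=[2.0, 2.0, 2.0],
              depths=[0.8, 1.5, 0.9],
              surface_velocities=[1.2, 2.0, 1.4])
```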

  17. Computer Programs for Obtaining and Analyzing Daily Mean Streamflow Data from the U.S. Geological Survey National Water Information System Web Site

    Science.gov (United States)

    Granato, Gregory E.

    2009-01-01

    Research Council, 2004). The USGS maintains the National Water Information System (NWIS), a distributed network of computers and file servers used to store and retrieve hydrologic data (Mathey, 1998; U.S. Geological Survey, 2008). NWISWeb is an online version of this database that includes water data from more than 24,000 streamflow-gaging stations throughout the United States (U.S. Geological Survey, 2002, 2008). Information from NWISWeb is commonly used to characterize streamflows at gaged sites and to help predict streamflows at ungaged sites. Five computer programs were developed for obtaining and analyzing streamflow from the National Water Information System (NWISWeb). The programs were developed as part of a study by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, to develop a stochastic empirical loading and dilution model. The programs were developed because reliable, efficient, and repeatable methods are needed to access and process streamflow information and data. The first program is designed to facilitate the downloading and reformatting of NWISWeb streamflow data. The second program is designed to facilitate graphical analysis of streamflow data. The third program is designed to facilitate streamflow-record extension and augmentation to help develop long-term statistical estimates for sites with limited data. The fourth program is designed to facilitate statistical analysis of streamflow data. The fifth program is a preprocessor to create batch input files for the U.S. Environmental Protection Agency DFLOW3 program for calculating low-flow statistics. These computer programs were developed to facilitate the analysis of daily mean streamflow data for planning-level water-quality analyses but also are useful for many other applications pertaining to streamflow data and statistics. These programs and the associated documentation are included on the CD-ROM accompanying this report. 
This report and the appendixes on the

  18. Estimation of muscle fatigue by ratio of mean frequency to average rectified value from surface electromyography.

    Science.gov (United States)

    Fernando, Jeffry Bonar; Yoshioka, Mototaka; Ozawa, Jun

    2016-08-01

    A new method to estimate muscle fatigue quantitatively from surface electromyography (EMG) is proposed. The ratio of mean frequency (MNF) to average rectified value (ARV) is used as the index of muscle fatigue, and muscle fatigue is detected when MNF/ARV falls below a pre-determined or pre-calculated baseline. MNF/ARV gives a larger distinction between fatigued and non-fatigued muscle. Experimental results show the effectiveness of our method in estimating muscle fatigue more accurately than conventional methods. An early evaluation based on the initial value of MNF/ARV and the subjective time at which the subjects start feeling fatigue also indicates the possibility of calculating the baseline from the initial value of MNF/ARV.
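
    A minimal sketch of the MNF/ARV index, assuming MNF is computed as the power-weighted mean frequency of a periodogram and ARV as the mean rectified amplitude of the window. The synthetic sinusoidal "EMG" windows are stand-ins for real recordings: fatigue typically shifts the spectrum down (lower MNF) and raises the amplitude (higher ARV), so the ratio falls.

```python
import math

def arv(x):
    """Average rectified value of an EMG window."""
    return sum(abs(v) for v in x) / len(x)

def mean_frequency(x, fs):
    """Power-weighted mean frequency (Hz) via a naive one-sided periodogram.
    Slow O(n^2) DFT, fine for short windows."""
    n = len(x)
    num = den = 0.0
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        num += (k * fs / n) * power
        den += power
    return num / den

def fatigue_index(x, fs):
    """MNF/ARV: decreases as MNF falls and ARV rises with fatigue."""
    return mean_frequency(x, fs) / arv(x)

# Synthetic windows: "fresh" = 50 Hz, unit amplitude; "fatigued" = lower
# frequency (35 Hz) and higher amplitude, mimicking fatigue.
fs = 1000
fresh = [math.sin(2 * math.pi * 50 * t / fs) for t in range(200)]
fatigued = [1.5 * math.sin(2 * math.pi * 35 * t / fs) for t in range(200)]
```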

  19. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    Directory of Open Access Journals (Sweden)

    Darren Kidney

    Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. 
We anticipate that the low-tech field requirements will

  20. THE NEXT GENERATION VIRGO CLUSTER SURVEY. XV. THE PHOTOMETRIC REDSHIFT ESTIMATION FOR BACKGROUND SOURCES

    International Nuclear Information System (INIS)

    Raichoor, A.; Mei, S.; Huertas-Company, M.; Licitra, R.; Erben, T.; Hildebrandt, H.; Ilbert, O.; Boissier, S.; Boselli, A.; Ball, N. M.; Côté, P.; Ferrarese, L.; Gwyn, S. D. J.; Kavelaars, J. J.; Chen, Y.-T.; Cuillandre, J.-C.; Duc, P. A.; Durrell, P. R.; Guhathakurta, P.; Lançon, A.

    2014-01-01

    The Next Generation Virgo Cluster Survey (NGVS) is an optical imaging survey covering 104 deg² centered on the Virgo cluster. Currently, the complete survey area has been observed in the u*giz bands and one third in the r band. We present the photometric redshift estimation for the NGVS background sources. After a dedicated data reduction, we perform accurate photometry, with special attention to precise color measurements through point-spread function homogenization. We then estimate the photometric redshifts with the Le Phare and BPZ codes. We add a new prior that extends to i_AB = 12.5 mag. When using the u*griz bands, our photometric redshifts for 15.5 mag ≤ i ≲ 23 mag or z_phot ≲ 1 galaxies have a bias |Δz| < 0.02, less than 5% outliers, a scatter σ_outl.rej., and an individual error on z_phot that increases with magnitude (from 0.02 to 0.05 and from 0.03 to 0.10, respectively). When using the u*giz bands over the same magnitude and redshift range, the lack of the r band increases the uncertainties in the 0.3 ≲ z_phot ≲ 0.8 range (–0.05 < Δz < –0.02, σ_outl.rej ∼ 0.06, 10%-15% outliers, and z_phot.err. ∼ 0.15). We also present a joint analysis of the photometric redshift accuracy as a function of redshift and magnitude. We assess the quality of our photometric redshifts by comparison to spectroscopic samples and by verifying that the angular auto- and cross-correlation function w(θ) of the entire NGVS photometric redshift sample across redshift bins is in agreement with the expectations

  1. THE NEXT GENERATION VIRGO CLUSTER SURVEY. XV. THE PHOTOMETRIC REDSHIFT ESTIMATION FOR BACKGROUND SOURCES

    Energy Technology Data Exchange (ETDEWEB)

    Raichoor, A.; Mei, S.; Huertas-Company, M.; Licitra, R. [GEPI, Observatoire de Paris, CNRS, Université Paris Diderot, 61 Avenue de l' Observatoire, F-75014 Paris (France); Erben, T.; Hildebrandt, H. [Argelander-Institut für Astronomie, University of Bonn, Auf dem Hügel 71, D-53121 Bonn (Germany); Ilbert, O.; Boissier, S.; Boselli, A. [Aix Marseille Université, CNRS, Laboratoire d' Astrophysique de Marseille, UMR 7326, F-13388 Marseille (France); Ball, N. M.; Côté, P.; Ferrarese, L.; Gwyn, S. D. J.; Kavelaars, J. J. [Herzberg Institute of Astrophysics, National Research Council of Canada, Victoria, BC V9E 2E7 (Canada); Chen, Y.-T. [Insitute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 106, Taiwan (China); Cuillandre, J.-C. [Canada-France-Hawaïi Telescope Corporation, Kamuela, HI 96743 (United States); Duc, P. A. [Laboratoire AIM Paris-Saclay, CEA/IRFU/SAp, CNRS/INSU, Université Paris Diderot, F-91191 Gif-sur-Yvette Cedex (France); Durrell, P. R. [Department of Physics and Astronomy, Youngstown State University, Youngstown, OH 44555 (United States); Guhathakurta, P. [UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Lançon, A., E-mail: anand.raichoor@obspm.fr [Observatoire Astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, 11 rue de l' Université, F-67000 Strasbourg (France); and others

    2014-12-20

    The Next Generation Virgo Cluster Survey (NGVS) is an optical imaging survey covering 104 deg² centered on the Virgo cluster. Currently, the complete survey area has been observed in the u*giz bands and one third in the r band. We present the photometric redshift estimation for the NGVS background sources. After a dedicated data reduction, we perform accurate photometry, with special attention to precise color measurements through point-spread function homogenization. We then estimate the photometric redshifts with the Le Phare and BPZ codes. We add a new prior that extends to i_AB = 12.5 mag. When using the u*griz bands, our photometric redshifts for 15.5 mag ≤ i ≲ 23 mag or z_phot ≲ 1 galaxies have a bias |Δz| < 0.02, less than 5% outliers, a scatter σ_outl.rej., and an individual error on z_phot that increases with magnitude (from 0.02 to 0.05 and from 0.03 to 0.10, respectively). When using the u*giz bands over the same magnitude and redshift range, the lack of the r band increases the uncertainties in the 0.3 ≲ z_phot ≲ 0.8 range (–0.05 < Δz < –0.02, σ_outl.rej ∼ 0.06, 10%-15% outliers, and z_phot.err. ∼ 0.15). We also present a joint analysis of the photometric redshift accuracy as a function of redshift and magnitude. We assess the quality of our photometric redshifts by comparison to spectroscopic samples and by verifying that the angular auto- and cross-correlation function w(θ) of the entire NGVS photometric redshift sample across redshift bins is in agreement with the expectations.

  2. Something from nothing: Estimating consumption rates using propensity scores, with application to emissions reduction policies.

    Directory of Open Access Journals (Sweden)

    Nicholas Bardsley

    Full Text Available Consumption surveys often record zero purchases of a good because of a short observation window. Measures of distribution are then precluded and only mean consumption rates can be inferred. We show that Propensity Score Matching can be applied to recover the distribution of consumption rates. We demonstrate the method using the UK National Travel Survey, in which c.40% of motorist households purchase no fuel. Estimated consumption rates are plausible judging by households' annual mileages, and highly skewed. We apply the same approach to estimate CO2 emissions and outcomes of a carbon cap or tax. Reliance on means apparently distorts analysis of such policies because of skewness of the underlying distributions. The regressiveness of a simple tax or cap is overstated, and redistributive features of a revenue-neutral policy are understated.

  3. Using heat as a tracer to estimate spatially distributed mean residence times in the hyporheic zone of a riffle-pool sequence

    Science.gov (United States)

    Naranjo, Ramon C.

    2013-01-01

    Biochemical reactions that occur in the hyporheic zone are highly dependent on the time that solutes are in contact with sediments of the riverbed. In this investigation, we developed a 2-D longitudinal flow and solute-transport model to estimate the spatial distribution of mean residence time in the hyporheic zone. The flow model was calibrated using observations of temperature and pressure, and the mean residence times were simulated using the age-mass approach for steady-state flow conditions. The approach used in this investigation includes the mixing of different ages and flow paths of water through advection and dispersion. Uncertainty of flow and transport parameters was evaluated using standard Monte Carlo and the generalized likelihood uncertainty estimation method. Results of parameter estimation support the presence of a low-permeability zone in the riffle area that induced horizontal flow at a shallow depth within the riffle area. This establishes shallow and localized flow paths and limits deep vertical exchange. For the optimal model, mean residence times were found to be relatively long (9–40 days). The uncertainty of hydraulic conductivity resulted in a mean interquartile range (IQR) of 13 days across all piezometers and was reduced by 24% with the inclusion of temperature and pressure observations. To a lesser extent, uncertainty in streambed porosity and dispersivity resulted in mean IQRs of 2.2 and 4.7 days, respectively. Alternative conceptual models demonstrate the importance of accounting for the spatial distribution of hydraulic conductivity in simulating mean residence times in a riffle-pool sequence.

  4. Studies on risk estimation to public from medical radiation (III)

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hai Yong; Kim, Jong Hyung; Kim, Hyeog Ju; Kim, Ji Soon; Oh, Hyeon Joo; Kim, Cheol Hyeon; Yang, Hyun Kyu [Korea Food and Drug Administraion, Seoul (Korea, Republic of); Park, Chan Il [Seoul National Univ., Seoul (Korea, Republic of)

    1998-06-01

    A nationwide survey was conducted to give representative levels of effective dose to patients for 17 types of CT examination, and also a representative level of MGD (mean glandular dose) to the standard breast for mammography X-ray equipment. The effective doses to patients from 16 CT scanners were estimated from measurements of CTDI (Computed Tomography Dose Index) in air by multiplying conversion coefficients specified by the National Radiological Protection Board in the United Kingdom. The lowest and highest mean effective doses to patients from CT scanners were 0.05 mSv for the IAM examination and 17.75 mSv for the routine abdomen examination, respectively. The average values of the 17 effective doses were lower than the results of surveys in other countries. The mean glandular doses to a standard breast for 26 mammography units were estimated from measurements of the air kerma at the surface of a 40 mm plain Perspex phantom by applying conversion factors described in Report 59 of the Institute of Physical Sciences in Medicine of the United Kingdom. The exposure factors for this measurement were those used clinically at each hospital. The average MGD to the standard breast was 1.06 mGy in units with a grid and 0.49 mGy in units without a grid. These results are lower than the guidance levels of the IPSM and AAPM. These results will be used for estimating the risk to the Korean public from medical radiation.

  5. Cystic echinococcosis in marketed offal of sheep in Basrah, Iraq: Abattoir-based survey and a probabilistic model estimation of the direct economic losses due to hydatid cyst.

    Science.gov (United States)

    Abdulhameed, Mohanad F; Habib, Ihab; Al-Azizz, Suzan A; Robertson, Ian

    2018-02-01

    Cystic echinococcosis (CE) is a highly endemic parasitic zoonosis in Iraq with substantial impacts on livestock productivity and human health. The objectives of this study were to investigate the abattoir-based occurrence of CE in marketed offal of sheep in Basrah province, Iraq, and to estimate, using a probabilistic modelling approach, the direct economic losses due to hydatid cysts. Based on detailed visual meat inspection, results from an active abattoir survey in this study revealed detection of hydatid cysts in 7.3% (95% CI: 5.4; 9.6) of 631 examined sheep carcasses. Post-mortem lesions of hydatid cyst were concurrently present in the livers and lungs of more than half (54.3% (25/46)) of the positive sheep. Direct economic losses due to hydatid cysts in marketed offal were estimated using data from government reports, the one abattoir survey completed in this study, and expert opinions of local veterinarians and butchers. A Monte-Carlo simulation model was developed in a spreadsheet utilizing Latin Hypercube sampling to account for uncertainty in the input parameters. The model estimated the average annual economic losses associated with hydatid cysts in the liver and lungs of sheep marketed for human consumption in Basrah to be US$72,470 (90% Confidence Interval (CI); ±11,302). The mean proportion of annual losses in meat products value (carcasses and offal) due to hydatid cysts in the liver and lungs of sheep marketed in Basrah province was estimated as 0.42% (90% CI; ±0.21). These estimates suggest that CE is responsible for considerable livestock-associated monetary losses in the south of Iraq. These findings can be used to inform different regional CE control program options in Iraq.
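
    The Monte-Carlo step described above can be sketched as Latin Hypercube samples of the uncertain inputs feeding a simple product model for annual losses. All distributions and values below are hypothetical stand-ins, not the parameters of the study; only the prevalence interval echoes the survey CI quoted above.

```python
import random

def lhs_uniform(n, lo, hi, rng):
    """n Latin Hypercube samples of U(lo, hi): one draw from each of n
    equal-probability strata, shuffled to break the ordering."""
    u = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(u)
    return [lo + (hi - lo) * ui for ui in u]

def simulate_losses(n=1000, seed=42):
    """Annual loss = prevalence * animals slaughtered * loss per affected
    animal, with each input sampled by LHS (all ranges hypothetical)."""
    rng = random.Random(seed)
    prevalence = lhs_uniform(n, 0.054, 0.096, rng)     # survey 95% CI above
    slaughtered = lhs_uniform(n, 90_000, 110_000, rng)  # animals per year
    loss_per_case = lhs_uniform(n, 8.0, 12.0, rng)      # US$ per animal
    return [p * s * c for p, s, c in zip(prevalence, slaughtered, loss_per_case)]

losses = simulate_losses()
mean_loss = sum(losses) / len(losses)   # ~ product of the three midpoints
```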

  6. A spectral chart method for estimating the mean turbulent kinetic energy dissipation rate

    Science.gov (United States)

    Djenidi, L.; Antonia, R. A.

    2012-10-01

    We present an empirical but simple and practical spectral chart method for determining the mean turbulent kinetic energy dissipation rate ⟨ε⟩. The method relies on the collapse of velocity spectra when normalized by Kolmogorov scales; evidence from DNS spectra points to this scaling also being valid at small Reynolds numbers, provided effects due to inhomogeneities in the flow are negligible. The method avoids the difficulty associated with estimating time or spatial derivatives of the velocity fluctuations. It also avoids using the second hypothesis of K41, which implies the existence of a -5/3 inertial subrange only when the Taylor microscale Reynolds number R_λ is sufficiently large. The method is in fact applied to the lower wavenumber end of the dissipative range, thus avoiding most of the problems due to inadequate spatial resolution of the velocity sensors and noise associated with the higher wavenumber end of this range. The use of spectral data (30 ≤ R_λ ≤ 400) in both passive and active grid turbulence, a turbulent mixing layer and the turbulent wake of a circular cylinder indicates that the method is robust and should lead to reliable estimates of ⟨ε⟩ in flows or flow regions where the first similarity hypothesis should hold; this would exclude, for example, the region near a wall.

  7. Use of Bayesian networks classifiers for long-term mean wind turbine energy output estimation at a potential wind energy conversion site

    Energy Technology Data Exchange (ETDEWEB)

    Carta, Jose A. [Department of Mechanical Engineering, University of Las Palmas de Gran Canaria, Campus de Tafira s/n, 35017 Las Palmas de Gran Canaria, Canary Islands (Spain); Velazquez, Sergio [Department of Electronics and Automatics Engineering, University of Las Palmas de Gran Canaria, Campus de Tafira s/n, 35017 Las Palmas de Gran Canaria, Canary Islands (Spain); Matias, J.M. [Department of Statistics, University of Vigo, Lagoas Marcosende, 36200 Vigo (Spain)

    2011-02-15

    Due to the interannual variability of wind speed, a feasibility analysis for the installation of a Wind Energy Conversion System at a particular site requires estimation of the long-term mean wind turbine energy output. A method is proposed in this paper which, based on probabilistic Bayesian networks (BNs), enables estimation of the long-term mean wind speed histogram for a site where few measurements of the wind resource are available. For this purpose, the proposed method allows the use of multiple reference stations with a long history of wind speed and wind direction measurements. That is to say, the model proposed in this paper is able to incorporate and make use of regional information about the wind resource. With the estimated long-term wind speed histogram and the power curve of a wind turbine it is possible to use the method of bins to determine the long-term mean energy output for that wind turbine. The intelligent system employed, the knowledge base of which is a joint probability function of all the model variables, uses efficient calculation techniques for conditional probabilities to perform the reasoning. This enables automatic model learning and inference to be performed efficiently based on the available evidence. The proposed model is applied in this paper to wind speeds and wind directions recorded at four weather stations located in the Canary Islands (Spain). Ten years of mean hourly wind speed and direction data are available for these stations. One of the conclusions reached is that the BN with three reference stations gave smaller errors between the real and estimated long-term mean wind turbine energy output than two measure-correlate-predict algorithms that were evaluated, each of which uses a linear regression between the candidate station and one reference station. (author)
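
    The method of bins mentioned above can be sketched as a sum over wind-speed bins of bin frequency × turbine output at that speed, scaled to hours per year. The histogram frequencies and power-curve points below are hypothetical.

```python
HOURS_PER_YEAR = 8760

def annual_energy(histogram, power_curve):
    """Method of bins: histogram maps bin centre (m/s) -> relative frequency,
    power_curve maps bin centre (m/s) -> turbine output (kW).
    Returns estimated annual energy output in kWh/year."""
    return HOURS_PER_YEAR * sum(freq * power_curve.get(v, 0.0)
                                for v, freq in histogram.items())

# Hypothetical long-term wind-speed histogram (frequencies sum to 1)
# and a hypothetical power curve sampled at the same bin centres.
histogram = {3: 0.20, 5: 0.30, 7: 0.25, 9: 0.15, 11: 0.10}
power_curve = {3: 0.0, 5: 120.0, 7: 400.0, 9: 900.0, 11: 1500.0}  # kW

aep = annual_energy(histogram, power_curve)  # kWh/year
```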

  8. Use of Bayesian networks classifiers for long-term mean wind turbine energy output estimation at a potential wind energy conversion site

    International Nuclear Information System (INIS)

    Carta, Jose A.; Velazquez, Sergio; Matias, J.M.

    2011-01-01

    Due to the interannual variability of wind speed, a feasibility analysis for the installation of a Wind Energy Conversion System at a particular site requires estimation of the long-term mean wind turbine energy output. A method is proposed in this paper which, based on probabilistic Bayesian networks (BNs), enables estimation of the long-term mean wind speed histogram for a site where few measurements of the wind resource are available. For this purpose, the proposed method allows the use of multiple reference stations with a long history of wind speed and wind direction measurements. That is to say, the model proposed in this paper is able to incorporate and make use of regional information about the wind resource. With the estimated long-term wind speed histogram and the power curve of a wind turbine it is possible to use the method of bins to determine the long-term mean energy output for that wind turbine. The intelligent system employed, the knowledge base of which is a joint probability function of all the model variables, uses efficient calculation techniques for conditional probabilities to perform the reasoning. This enables automatic model learning and inference to be performed efficiently based on the available evidence. The proposed model is applied in this paper to wind speeds and wind directions recorded at four weather stations located in the Canary Islands (Spain). Ten years of mean hourly wind speed and direction data are available for these stations. One of the conclusions reached is that the BN with three reference stations gave smaller errors between the real and estimated long-term mean wind turbine energy output than two measure-correlate-predict algorithms that were evaluated, each of which uses a linear regression between the candidate station and one reference station.

  9. Survey of radiopharmaceuticals used for in vivo studies in medical practice in New Zealand

    International Nuclear Information System (INIS)

    McEwan, A.C.; Smyth, V.G.

    1984-01-01

    To obtain up-to-date information on the numbers and types of radiopharmaceutical procedures, a survey was undertaken in the last quarter of 1983. In conjunction with this survey, dosimetry data for the range of radiopharmaceutical procedures have been reviewed and extended where necessary so that effective dose equivalents could be estimated and mean genetically significant and malignancy significant doses for the population derived

  10. Time-varying effect moderation using the structural nested mean model: estimation using inverse-weighted regression with residuals

    Science.gov (United States)

    Almirall, Daniel; Griffin, Beth Ann; McCaffrey, Daniel F.; Ramchand, Rajeev; Yuen, Robert A.; Murphy, Susan A.

    2014-01-01

    This article considers the problem of examining time-varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time-varying causal effects of interest in a conditional mean model for a continuous response given time-varying treatments and moderators. We present an easy-to-use estimator of the SNMM that combines an existing regression-with-residuals (RR) approach with an inverse-probability-of-treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time-varying causal effects if the time-varying moderators are also the sole time-varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time-varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time-varying confounders. We use a small simulation experiment to compare IPTW+RR versus the traditional regression approach and to compare small and large sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time-varying moderators and time-varying confounders. We illustrate the methodology in a case study to assess if time-varying substance use moderates treatment effects on future substance use. PMID:23873437

  11. Variable selection and estimation for longitudinal survey data

    KAUST Repository

    Wang, Li; Wang, Suojin; Wang, Guannan

    2014-01-01

    There is wide interest in studying longitudinal surveys where sample subjects are observed successively over time. Longitudinal surveys have been used in many areas today, for example, in the health and social sciences, to explore relationships

  12. Inverse probability weighting and doubly robust methods in correcting the effects of non-response in the reimbursed medication and self-reported turnout estimates in the ATH survey.

    Science.gov (United States)

    Härkänen, Tommi; Kaikkonen, Risto; Virtala, Esa; Koskinen, Seppo

    2014-11-06

    To assess the nonresponse rates in a questionnaire survey with respect to administrative register data, and to correct the bias statistically. The Finnish Regional Health and Well-being Study (ATH) in 2010 was based on a national sample and several regional samples. Missing-data analysis was based on socio-demographic register data covering the whole sample. The inverse probability weighting (IPW) and doubly robust (DR) estimators were based on a logistic regression model, which was selected using the Bayesian information criterion. The crude, weighted and true self-reported turnout in the 2008 municipal election and the prevalences of entitlements to specially reimbursed medication, as well as the crude and weighted body mass index (BMI) means, were compared. The IPW method appeared to remove a relatively large proportion of the bias relative to the crude prevalence estimates of turnout and of entitlements to specially reimbursed medication. Several demographic factors were shown to be associated with missing data, but few interactions were found. Our results suggest that the IPW method can improve the accuracy of results of a population survey, and that model selection provides insight into the structure of missing data. However, health-related missing-data mechanisms are beyond the scope of statistical methods, which mainly rely on socio-demographic information to correct the results.
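
    The IPW idea can be sketched with response probabilities estimated within register-defined strata (a crude stand-in for the logistic model used in the study): each respondent is weighted by the inverse of their stratum's response rate, so under-responding groups are up-weighted. The counts below are invented for illustration.

```python
def ipw_prevalence(strata):
    """strata: list of (n_sampled, n_responded, n_positive_among_respondents)
    per register-defined stratum. Returns (crude, IPW-weighted) prevalence."""
    pos = sum(p for _, _, p in strata)
    resp = sum(r for _, r, _ in strata)
    crude = pos / resp
    num = den = 0.0
    for n, r, p in strata:
        w = n / r            # inverse of the stratum's response rate r/n
        num += w * p
        den += w * r
    return crude, num / den

# Hypothetical survey: stratum A responds well, stratum B responds poorly
# and has a higher prevalence, so the crude estimate is biased downward.
crude, weighted = ipw_prevalence([(100, 80, 8),
                                  (100, 40, 16)])
```

With these counts the crude estimate is 0.20 while the weighted estimate recovers the population value of 0.25.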

  13. Learning lessons from field surveys in humanitarian contexts: a case study of field surveys conducted in North Kivu, DRC 2006-2008

    Directory of Open Access Journals (Sweden)

    Grellety Emmanuel

    2009-09-01

    Full Text Available Abstract Survey estimates of mortality and malnutrition are commonly used to guide humanitarian decision-making. Currently, different methods of conducting field surveys are the subject of debate among epidemiologists. Beyond the technical arguments, decision makers may find it difficult to conceptualize what the estimates actually mean. For instance, what makes this particular situation an emergency? And how should the operational response be adapted accordingly? This brings into question not only the quality of the survey methodology, but also the difficulties epidemiologists face in interpreting results and selecting the most important information to guide operations. As a case study, we reviewed mortality and nutritional surveys conducted in North Kivu, Democratic Republic of Congo (DRC), published from January 2006 to January 2009. We performed a PubMed/Medline search for published articles and scanned publicly available humanitarian databases and clearinghouses for grey literature. To evaluate the surveys, we developed minimum reporting criteria based on available guidelines and selected peer-reviewed articles. We identified 38 reports through our search strategy; three surveys met our inclusion criteria. The surveys varied in methodological quality. Reporting against the minimum criteria was generally good, but reporting of ethical procedures, raw data and survey limitations was missing in all surveys. All surveys also failed to consider contextual factors important for data interpretation. From this review, we conclude that mechanisms to ensure sound survey design and conduct must be implemented by operational organisations to improve data quality and reporting. Training in data interpretation would also be useful. Novel survey methods should be trialled and prospective data gathering (surveillance) employed wherever feasible.

  14. How can streamflow and climate-landscape data be used to estimate baseflow mean response time?

    Science.gov (United States)

    Zhang, Runrun; Chen, Xi; Zhang, Zhicai; Soulsby, Chris; Gao, Man

    2018-02-01

    Mean response time (MRT) is a metric describing the propagation of catchment hydraulic behavior that reflects both hydro-climatic conditions and catchment characteristics. To provide a comprehensive understanding of catchment response to hydraulic processes over longer time scales, the MRT function for baseflow generation was derived using an instantaneous unit hydrograph (IUH) model that describes the subsurface response to effective rainfall inputs. IUH parameters were estimated, under the GLUE framework, based on the "match test" between the autocorrelation functions (ACFs) derived from the filtered baseflow time series and from the IUH parameters. Regionalization of MRT was conducted using these estimates and hydroclimate-landscape indices in 22 sub-basins of the Jinghe River Basin (JRB) in the Loess Plateau of northwest China. Results indicate strong equifinality in determining the best parameter sets, but the median values of the MRT estimates are relatively stable within the acceptable range of the parameters. MRTs vary markedly over the studied sub-basins, ranging from tens of days to more than a year. Climate, topography and geomorphology were identified as three first-order controls on recharge-baseflow response processes. Human activities involving the cultivation of permanent crops may lengthen the baseflow MRT and hence increase the dynamic storage. Cross validation suggests the model can be used to estimate MRTs in ungauged catchments in similar regions throughout the Loess Plateau. The proposed method provides a systematic approach to MRT estimation and regionalization in terms of hydroclimate and catchment characteristics, which is helpful for sustainable water resources utilization and ecological protection in the Loess Plateau.
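
    As a schematic illustration of the MRT concept (not the authors' GLUE-calibrated model), a single linear reservoir has an exponential IUH, h(t) = exp(-t/k)/k, whose first moment, the mean response time, equals the storage constant k. The value of k below is assumed purely for illustration:

```python
import numpy as np

# Hypothetical single-linear-reservoir example: exponential IUH
# h(t) = exp(-t/k) / k; its first moment (the MRT) equals k.
k = 45.0                                # storage constant in days (assumed)
t = np.linspace(0.0, 2000.0, 200_001)   # time grid, days
h = np.exp(-t / k) / k                  # exponential IUH

step = t[1] - t[0]
area = (h * step).sum()                 # IUH integrates to ~1
mrt = (t * h * step).sum()              # first moment = mean response time
print(round(area, 3), round(mrt, 1))    # ~1.0 and ~45.0 days
```

    The same first-moment calculation applies to any calibrated IUH shape, which is how a fitted IUH yields an MRT in days.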

  15. Nationwide survey of dental radiographic examination and estimation of collective effective dose in Japan, 1999

    International Nuclear Information System (INIS)

    Iwai, Kazuo; Satomi, Chieko; Kawashima, Shoji; Hashimoto, Koji; Nishizawa, Kanae; Maruyama, Takashi

    2005-01-01

    A nationwide survey of dental X-ray examination in Japan was performed in 1999, and the effective dose due to dental X-ray examination was estimated. In Japan, most dental X-ray equipment is operated at a tube voltage of 60 kV and a tube current of 10 mA. Dental film in speed group D is most frequently used for dental X-ray examination. Fifty percent or more of dental clinics processed the films automatically. Seventy-five percent of dental clinics performed dental X-ray examinations in a separate X-ray room. The number of dental X-ray examinations in 1999 in Japan was estimated to be 82,301,000 for intra-oral radiography and 12,336,000 for panoramic radiography. The collective effective dose in 1999 was estimated at 905.5 man·Sv for intra-oral radiography and 128.9 man·Sv for panoramic radiography. (author)

  16. Tracking Psychosocial Health in Adults with Epilepsy—Estimates from the 2010 National Health Interview Survey

    Science.gov (United States)

    Kobau, R; Cui, W; Kadima, N; Zack, MM; Sajatovic, M; Kaiboriboon, K; Jobst, B

    2015-01-01

    Objective This study provides population-based estimates of psychosocial health among U.S. adults with epilepsy from the 2010 National Health Interview Survey. Methods Multinomial logistic regression was used to estimate the prevalence of the following measures of psychosocial health among adults with and without epilepsy: 1) the Kessler-6 scale of Serious Psychological Distress; 2) cognitive limitation, the extent of impairments associated with psychological problems, and work limitation; 3) social participation; and 4) the Patient Reported Outcome Measurement Information System Global Health scale. Results Compared with adults without epilepsy, adults with epilepsy, especially those with active epilepsy, reported significantly worse psychological health, more cognitive impairment, difficulty participating in some social activities, and reduced health-related quality of life (HRQOL). Conclusions These disparities in psychosocial health in U.S. adults with epilepsy serve as baseline national estimates of their HRQOL, consistent with Healthy People 2020 national objectives on HRQOL. PMID:25305435

  17. Survey on the frequency of typical X-Ray examinations and estimation of associated population doses in the Republic of Macedonia

    International Nuclear Information System (INIS)

    Gershan, V.; Stikova, E.

    2013-01-01

    effective doses were estimated using literature data for values of the mean effective dose per typical examination procedure. Finally, normalization of the total collective effective dose from all TOP 20 X-ray procedures to the whole population of the Republic of Macedonia was performed. Results: 67% of the X-ray departments present in the Republic of Macedonia at the time the survey was initiated provided data on the number of TOP 20 X-ray examination procedures performed in 2010. On the basis of the data gathered, a total of 322039 TOP 20 X-ray examination procedures were performed in 2010 for both adult and pediatric patients. Plain radiography examination procedures (dental excluded) were the most commonly performed procedures in the Republic of Macedonia that year, and plain radiography of the chest/thorax had the highest frequency of examinations (64 per 1000 population). The Ba meal examination procedure, with an annual frequency of 2.93 per 1000 population, had the highest contribution of any single procedure to the annual collective effective dose. Still, in total, the contribution of X-ray examinations in the plain radiography modality to the collective effective dose is the highest. The total collective dose from TOP 20 X-ray examination procedures in 2010 is 507 man Sv, while the collective dose normalized to the population is 249.7 mSv/1000 population. Conclusions: The most common type of examination in the Republic of Macedonia for 2010 is the plain X-ray examination of the lungs. The contribution to the collective effective dose from X-ray examinations in the plain radiography modality is the highest, followed by contributions from fluoroscopy procedures, computed tomography and interventional radiology procedures. Comparison with the estimated collective dose from TOP 20 X-ray examination procedures in other countries suggests possible underestimation of the estimated doses compared to actual doses. A more comprehensive survey and analysis need to be carried out in

  18. Female genital mutilation/cutting in Italy: an enhanced estimation for first generation migrant women based on 2016 survey data.

    Science.gov (United States)

    Ortensi, Livia Elisa; Farina, Patrizia; Leye, Els

    2018-01-12

    Migration flows of women from countries that practice Female Genital Mutilation/Cutting (FGM/C) have generated a need for data on women potentially affected by FGM/C. This paper presents enhanced estimates for foreign-born women and asylum seekers in Italy in 2016, with the aim of supporting resource planning and policy making, and advancing the methodological debate on estimation methods. The estimates build on the most recent methodological developments in direct and indirect FGM/C estimation for non-practicing countries. Direct estimation of prevalence was performed for 9 communities using the results of the FGM-Prev survey, held in Italy in 2016. Prevalence for communities not involved in the FGM-Prev survey was estimated using the 'extrapolation-of-FGM/C-countries prevalence data' method, with corrections according to the selection hypothesis. It is estimated that 60 to 80 thousand foreign-born women aged 15 and over with FGM/C were present in Italy in 2016. We also estimate the presence of around 11 to 13 thousand cut women aged 15 and over among asylum seekers to Italy in 2014-2016. Due to the long-established presence of female migrants from some practicing communities, FGM/C is emerging as an issue also among women aged 60 and over from selected communities. FGM/C is an additional source of concern for slightly more than 60% of women seeking asylum. Reliable country-level estimates of FGM/C are important for evidence-based policy making and service planning. This study suggests that indirect estimation cannot fully replace direct estimation, even if corrections for migrant socioeconomic selection can be implemented to reduce the bias.

  19. Nationwide epidemiological survey of early chronic pancreatitis in Japan.

    Science.gov (United States)

    Masamune, Atsushi; Kikuta, Kazuhiro; Nabeshima, Tatsuhide; Nakano, Eriko; Hirota, Morihisa; Kanno, Atsushi; Kume, Kiyoshi; Hamada, Shin; Ito, Tetsuhide; Fujita, Motokazu; Irisawa, Atsushi; Nakashima, Masanori; Hanada, Keiji; Eguchi, Takaaki; Kato, Ryusuke; Inatomi, Osamu; Shirane, Akio; Takeyama, Yoshifumi; Tsuji, Ichiro; Shimosegawa, Tooru

    2017-08-01

    The world's first diagnostic criteria for early chronic pancreatitis (CP) were proposed in 2009 in Japan. This study aimed to clarify the clinico-epidemiological features of early CP in Japan. Patients who were diagnosed according to the diagnostic criteria for early CP and had visited the selected hospitals in 2011 were surveyed. The study consisted of two stages: the number of patients with early CP was estimated by the first questionnaire, and their clinical features were assessed by the second questionnaire. The estimated number of early CP patients was 5410 (95% confidence interval 3675-6945), with an overall prevalence of 4.2 per 100,000 persons. The number of patients newly diagnosed with early CP was estimated to be 1330 (95% confidence interval 1058-1602), with an annual incidence of 1.0 per 100,000 persons. Detailed clinical information was obtained for 151 patients in the second survey. The male-to-female sex ratio was 1.32:1. The mean age was 60.4 years and the mean age at disease onset was 55.4 years. Idiopathic (47.7%) and alcoholic (45.0%) were the two most common etiologies. The proportions of female and idiopathic cases were higher in early CP than in definite CP. Hyperechoic foci without shadowing and stranding were the most common findings on endoscopic ultrasonography. The clinical profiles of early CP patients who showed lobularity with honeycombing on endoscopic ultrasonography, or who had previous episodes of acute pancreatitis, were similar to those of definite CP patients. We clarified the current status of early CP in Japan.

  20. Note on an Identity Between Two Unbiased Variance Estimators for the Grand Mean in a Simple Random Effects Model.

    Science.gov (United States)

    Levin, Bruce; Leu, Cheng-Shiun

    2013-01-01

    We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.
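
    The identity can be checked numerically. In a balanced design (n subjects, r repeated observations each), the sample variance of the subject means divided by n coincides exactly with the estimator built from ANOVA method-of-moments variance components. The sketch below assumes this balanced setting, which may be more restrictive than the paper's:

```python
import numpy as np

# Balanced one-way random effects data: n subjects, r observations each,
# y_ij = grand mean + subject effect + within-subject noise (all assumed).
rng = np.random.default_rng(0)
n, r = 12, 5
y = 2.0 + rng.normal(0.0, 1.5, (n, 1)) + rng.normal(0.0, 0.7, (n, r))

m = y.mean(axis=1)                  # subject means

# Estimator 1: sample variance of the subject means, divided by n.
v1 = m.var(ddof=1) / n

# Estimator 2: plug ANOVA method-of-moments components into
# Var(grand mean) = (sigma_b^2 + sigma_w^2 / r) / n.
msw = ((y - m[:, None]) ** 2).sum() / (n * (r - 1))   # within mean square
msb = r * m.var(ddof=1)                               # between mean square
sigma_b2 = (msb - msw) / r
v2 = (sigma_b2 + msw / r) / n

print(np.isclose(v1, v2))           # True: the two estimators coincide
```

    Algebraically, v2 = ((MSB - MSW)/r + MSW/r)/n = MSB/(rn), and MSB is r times the sample variance of the subject means, so the two expressions are identical.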

  1. Violence and Drug Use in Rural Teens: National Prevalence Estimates from the 2003 Youth Risk Behavior Survey

    Science.gov (United States)

    Johnson, Andrew O.; Mink, Michael D.; Harun, Nusrat; Moore, Charity G.; Martin, Amy B.; Bennett, Kevin J.

    2008-01-01

    Objectives: The purpose of this study was to compare national estimates of drug use and exposure to violence between rural and urban teens. Methods: Twenty-eight dependent variables from the 2003 Youth Risk Behavior Survey were used to compare violent activities, victimization, suicidal behavior, tobacco use, alcohol use, and illegal drug use…

  2. Economic Impact of Childhood Psychiatric Disorder on Public Sector Services in Britain: Estimates from National Survey Data

    Science.gov (United States)

    Snell, Tom; Knapp, Martin; Healey, Andrew; Guglani, Sacha; Evans-Lacko, Sara; Fernandez, Jose-Luis; Meltzer, Howard; Ford, Tamsin

    2013-01-01

    Background: Approximately one in ten children aged 5-15 in Britain has a conduct, hyperactivity or emotional disorder. Methods: The British Child and Adolescent Mental Health Surveys (BCAMHS) identified children aged 5-15 with a psychiatric disorder, and their use of health, education and social care services. Service costs were estimated for each…

  3. Chapter 12: Survey Design and Implementation for Estimating Gross Savings Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Baumgartner, Robert [Tetra Tech, Madison, WI (United States)

    2017-10-05

    This chapter presents an overview of best practices for designing and executing survey research to estimate gross energy savings in energy efficiency evaluations. A detailed description of the specific techniques and strategies for designing questions, implementing a survey, and analyzing and reporting the survey procedures and results is beyond the scope of this chapter. So for each topic covered below, readers are encouraged to consult articles and books cited in References, as well as other sources that cover the specific topics in greater depth. This chapter focuses on the use of survey methods to collect data for estimating gross savings from energy efficiency programs.

  4. SURVEY ON ESTIMATING QALYS IN THE WESTERN REGION OF ROMANIA – THE CASE WITHOUT INTERVENTION

    OpenAIRE

    MARIUS IOAN PANTEA; DELIA GLIGOR

    2012-01-01

    Currently, assessing a population's quality of life is considered one of the most important aspects of evaluating health interventions across most European countries. In Romania, however, its utility is unfortunately overlooked. In this context, the paper aims at providing an accurate estimate of QALYs for healthcare investment projects, determining through a questionnaire survey the utilities associated with quality of life for five critical medical conditions and thus calculating the r...

  5. National indoor radon survey in Filipino homes

    International Nuclear Information System (INIS)

    Dela Cruz, Fe M.; Garcia, Teofilo Y.; Palad, Lorna Jean H.; Cobar, Ma. Lucia C.; Duran, Emerenciana B.

    2012-01-01

    This paper presents the results of the first national survey of indoor radon concentrations in different types of Filipino houses throughout the Philippines. Measurements were carried out using 2,626 CR-39 alpha track detectors that were deployed in selected houses for a period of six months. Results of analyses showed that indoor radon concentration in Filipino houses ranged from 1.4 to 57.6 Bq/m³ with a mean value of 21.4 ± 9.2 Bq/m³. This leads to an estimated annual average effective dose equivalent of 0.4 mSv. There are slight differences in the mean concentrations of radon in different types of houses, which ranged from 19.4 to 25.3 Bq/m³. The highest mean radon concentrations were observed in houses made of concrete, with a mean radon value of 25.3 ± 10.1 Bq/m³. Radon concentrations in the houses surveyed were below the action level of 200 Bq/m³ set by the National Radiological Protection Board (NRPB) and do not pose any hazard to the health of the occupants. (author)
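
    For orientation, an indoor radon concentration converts to an annual effective dose roughly as concentration × equilibrium factor × occupancy time × dose coefficient. The UNSCEAR-style constants below are generic assumptions, not the survey's own coefficients, so the result need not reproduce the paper's 0.4 mSv figure:

```python
# Hedged sketch with assumed conversion constants; the survey's exact
# parameters are not given in the abstract, so this is illustrative only.
concentration = 21.4       # Bq/m^3, national mean from the survey
equilibrium_factor = 0.4   # assumed radon/progeny equilibrium factor
occupancy_hours = 7000.0   # assumed hours per year spent indoors
dcf = 9.0e-6               # assumed dose coefficient, mSv per (Bq h / m^3)

annual_dose_msv = concentration * equilibrium_factor * occupancy_hours * dcf
print(round(annual_dose_msv, 2))   # ~0.54 mSv per year with these assumptions
```

    Different occupancy and equilibrium assumptions move the result by tens of percent, which is one reason published dose estimates for the same concentration can differ.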

  6. Estimating mortality from external causes using data from retrospective surveys: A validation study in Niakhar (Senegal

    Directory of Open Access Journals (Sweden)

    Gilles Pison

    2018-03-01

    Full Text Available Background: In low- and middle-income countries (LMICs), data on causes of death are often inaccurate or incomplete. In this paper, we test whether adding a few questions about injuries and accidents to mortality questionnaires used in representative household surveys would yield accurate estimates of the extent of mortality due to external causes (accidents, homicides, or suicides). Methods: We conduct a validation study in Niakhar (Senegal), during which we compare reported survey data to high-quality prospective records of deaths collected by a health and demographic surveillance system (HDSS). Results: Survey respondents more frequently list the deaths of their adult siblings who die of external causes than the deaths of those who die from other causes. The specificity of survey data is high, but sensitivity is low. Among reported deaths, less than 60% of the deaths classified as due to external causes by the HDSS are also classified as such by survey respondents. Survey respondents better report deaths due to road-traffic accidents than deaths from suicides and homicides. Conclusions: Asking questions about deaths resulting from injuries and accidents during surveys might help measure mortality from external causes in LMICs, but the resulting data display systematic bias in a rural population of Senegal. Future studies should 1) investigate whether similar biases also apply in other settings and 2) test new methods to further improve the accuracy of survey data on mortality from external causes. Contribution: This study helps strengthen the monitoring of sustainable development targets in LMICs by validating a simple approach for the measurement of mortality from external causes.

  7. Multilevel model to estimate county-level untreated dental caries among US children aged 6-9years using the National Health and Nutrition Examination Survey.

    Science.gov (United States)

    Lin, Mei; Zhang, Xingyou; Holt, James B; Robison, Valerie; Li, Chien-Hsun; Griffin, Susan O

    2018-06-01

    Because conducting population-based oral health screening is resource intensive, oral health data at small-area levels (e.g., county level) are not commonly available. We applied the multilevel logistic regression and poststratification method to estimate county-level prevalence of untreated dental caries among children aged 6-9 years in the United States, using data from the National Health and Nutrition Examination Survey (NHANES) 2005-2010 linked with various area-level data at the census tract, county and state levels. We validated model-based national estimates against direct estimates from NHANES. We also compared model-based estimates with direct estimates from select State Oral Health Surveys (SOHS) at the state and county levels. The model with individual-level covariates only and the model with individual-, census tract- and county-level covariates explained 7.2% and 96.3%, respectively, of the overall county-level variation in untreated caries. Model-based county-level prevalence estimates ranged from 4.9% to 65.2% with a median of 22.1%. The model-based national estimate (19.9%) matched the NHANES direct estimate (19.8%). We found significantly positive correlations between model-based estimates for 8-year-olds and direct estimates from the third-grade SOHS at the state level for 34 states (Pearson coefficient: 0.54, P=0.001) and SOHS estimates at the county level for 53 New York counties (Pearson coefficient: 0.38, P=0.006). This methodology could be a useful tool to characterize county-level disparities in untreated dental caries among children aged 6-9 years and complement oral health surveillance to inform public health programs, especially when local-level data are not available, although the lack of external validation due to data unavailability should be acknowledged. Published by Elsevier Inc.
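
    The poststratification step of the method can be sketched in isolation: once the multilevel model yields a predicted prevalence for each demographic cell, the county estimate is the population-weighted average over cells. The cell values below are illustrative, not NHANES data:

```python
import numpy as np

# Hypothetical cells for one county: model-predicted prevalence of
# untreated caries per demographic cell, and the cell population counts.
cell_prevalence = np.array([0.28, 0.15, 0.22, 0.09])
cell_population = np.array([1200, 800, 950, 400])

# Poststratification: population-weighted average of cell predictions.
county_estimate = np.average(cell_prevalence, weights=cell_population)
print(round(county_estimate, 3))   # 0.209
```

    Because the weights come from the county's own population composition, two counties with identical model predictions but different demographics get different estimates.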

  8. Comparing Two Inferential Approaches to Handling Measurement Error in Mixed-Mode Surveys

    Directory of Open Access Journals (Sweden)

    Buelens Bart

    2017-06-01

    Full Text Available Nowadays sample survey data collection strategies combine web, telephone, face-to-face, or other modes of interviewing in a sequential fashion. Measurement bias of survey estimates of means and totals is composed of different mode-dependent measurement errors, as each data collection mode has its own associated measurement error. This article contains an appraisal of two recently proposed methods of inference in this setting. The first is a calibration adjustment to the survey weights so as to balance the survey response to a prespecified distribution of the respondents over the modes. The second is a prediction method that seeks to correct measurements towards a benchmark mode. The two methods are motivated differently but coincide in some circumstances and agree in terms of required assumptions. The methods are applied to the Labour Force Survey in the Netherlands and are found to provide almost identical estimates of the number of unemployed. Each method has its own specific merits. Both can be applied easily in practice as they do not require additional data collection beyond the regular sequential mixed-mode survey, an attractive feature for national statistical institutes and other survey organisations.
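
    A minimal sketch of the first method, assuming the simplest case where calibration is to the mode distribution alone: the design weights within each mode are rescaled so that the weighted mode shares match the prespecified target. The mode labels, weights, and target shares below are invented for illustration:

```python
import numpy as np

# Hypothetical respondents: data collection mode and design weight.
modes = np.array(["web", "web", "phone", "f2f", "phone", "web"])
weights = np.array([1.0, 1.2, 0.8, 1.5, 1.1, 0.9])
target = {"web": 0.5, "phone": 0.3, "f2f": 0.2}   # prespecified mode shares

# Rescale weights within each mode so weighted shares hit the target.
total = weights.sum()
calibrated = weights.copy()
for mode, share in target.items():
    mask = modes == mode
    calibrated[mask] *= share * total / weights[mask].sum()

shares = {m: calibrated[modes == m].sum() / calibrated.sum() for m in target}
print(shares)   # weighted mode shares now equal the target distribution
```

    In practice the calibration would also preserve other margins (age, sex, region), which requires a raking or generalized calibration routine rather than this one-dimensional ratio adjustment.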

  9. Mean inactivation dose (D̄)

    International Nuclear Information System (INIS)

    Vijayakumar, S.; Ng, T.C.; Raudkivi, U.; Meaney, T.J.

    1990-01-01

    By predicting treatment outcome of radiotherapy from in vitro radiobiological parameters, not only can individual patient treatments be tailored, but promising new treatment protocols can also be tried in patients in whom an unfavorable outcome is predicted. In this respect, choosing the right parameter can be very important. Unlike D₀ and N, which provide information on the distal part of the survival curve, the mean inactivation dose (D̄) estimates overall radiosensitivity. However, parameters reflecting the response in the clinically relevant low-dose region are neglected in the literature. In a literature survey of 98 papers in which survival curves or D₀/N were used, D̄ was used in only 2. In 21 papers the D₀/N values were important in drawing conclusions. By calculating D̄ in 3 of these 21 papers, we show that the conclusions drawn may be altered with the use of D̄. The importance of 'low-dose-region' parameters is reviewed. (orig.)
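
    The mean inactivation dose (written D̄) is defined as the area under the cell-survival curve, D̄ = ∫₀^∞ S(D) dD. A minimal numerical sketch, assuming a linear-quadratic survival model with illustrative α and β (not values from any of the surveyed papers):

```python
import numpy as np

# Assumed linear-quadratic survival curve: S(D) = exp(-alpha*D - beta*D^2).
alpha, beta = 0.3, 0.03          # Gy^-1 and Gy^-2 (illustrative values)
d = np.linspace(0.0, 50.0, 50_001)
s = np.exp(-alpha * d - beta * d ** 2)

# Mean inactivation dose: numerical area under the survival curve.
step = d[1] - d[0]
d_bar = (s * step).sum()
print(round(d_bar, 2))           # ~2.39 Gy for these assumed parameters
```

    Because the integral weights the whole curve, D̄ is sensitive to the low-dose shoulder in a way that D₀ and N, which describe only the exponential tail, are not.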

  10. Preventing land loss in coastal Louisiana: estimates of WTP and WTA.

    Science.gov (United States)

    Petrolia, Daniel R; Kim, Tae-Goun

    2011-03-01

    A dichotomous-choice contingent-valuation survey was conducted in the State of Louisiana (USA) to estimate compensating surplus (CS) and equivalent surplus (ES) welfare measures for the prevention of future coastal wetland losses in Louisiana. Valuations were elicited using both willingness to pay (WTP) and willingness to accept compensation (WTA) payment vehicles. The mean CS (WTP) estimate, based on a probit model with a Box-Cox specification on income, was $825 per household annually, and mean ES (WTA) was estimated at $4444 per household annually. Regression results indicate that the major factors influencing support for land-loss prevention were income (positive, WTP model only), perceived hurricane protection benefits (positive), environmental and recreation protection (positive), distrust of government (negative), age (positive, WTA model only), and race (positive for whites). Copyright © 2010 Elsevier Ltd. All rights reserved.
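
    For a simple linear-in-bid probit, Pr(yes) = Φ(a - b·bid), the standard point estimate of mean WTP is a/b. The coefficients below are hypothetical and chosen only so the sketch lands on the paper's $825 order of magnitude; the actual Box-Cox specification on income is more involved:

```python
# Hypothetical probit coefficients (not the paper's estimates).
a = 3.30     # intercept
b = 0.004    # bid coefficient, in 1/dollars

# Mean WTP point estimate for a linear-in-bid probit model.
mean_wtp = a / b
print(round(mean_wtp))   # 825 dollars per household per year
```

    The same ratio logic underlies WTA models, where much larger estimates (like the $4444 here) typically reflect a flatter bid response.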

  11. How Radiation Oncologists Evaluate and Incorporate Life Expectancy Estimates Into the Treatment of Palliative Cancer Patients: A Survey-Based Study

    International Nuclear Information System (INIS)

    Tseng, Yolanda D.; Krishnan, Monica S.; Sullivan, Adam J.; Jones, Joshua A.; Chow, Edward; Balboni, Tracy A.

    2013-01-01

    Purpose: We surveyed how radiation oncologists think about and incorporate a palliative cancer patient’s life expectancy (LE) into their treatment recommendations. Methods and Materials: A 41-item survey was e-mailed to 113 radiation oncology attending physicians and residents at radiation oncology centers within the Boston area. Physicians estimated how frequently they assessed the LE of their palliative cancer patients and rated the importance of 18 factors in formulating LE estimates. For 3 common palliative case scenarios, physicians estimated LE and reported whether they had an LE threshold below which they would modify their treatment recommendation. LE estimates were considered accurate when within the 95% confidence interval of median survival estimates from an established prognostic model. Results: Among 92 respondents (81%), the majority were male (62%), from an academic practice (75%), and an attending physician (70%). Physicians reported assessing LE in 91% of their evaluations and most frequently rated performance status (92%), overall metastatic burden (90%), presence of central nervous system metastases (75%), and primary cancer site (73%) as “very important” in assessing LE. Across the 3 cases, most (88%-97%) had LE thresholds that would alter treatment recommendations. Overall, physicians’ LE estimates were 22% accurate with 67% over the range predicted by the prognostic model. Conclusions: Physicians often incorporate LE estimates into palliative cancer care and identify important prognostic factors. Most have LE thresholds that guide their treatment recommendations. However, physicians overestimated patient survival times in most cases. Future studies focused on improving LE assessment are needed.

  12. How Radiation Oncologists Evaluate and Incorporate Life Expectancy Estimates Into the Treatment of Palliative Cancer Patients: A Survey-Based Study

    Energy Technology Data Exchange (ETDEWEB)

    Tseng, Yolanda D., E-mail: ydtseng@partners.org [Harvard Radiation Oncology Program, Boston, Massachusetts (United States); Krishnan, Monica S. [Harvard Radiation Oncology Program, Boston, Massachusetts (United States); Sullivan, Adam J. [Department of Biostatistics, Harvard University, Cambridge, Massachusetts (United States); Jones, Joshua A. [Harvard Palliative Medicine Fellowship Program, Boston, Massachusetts (United States); Chow, Edward [Department of Radiation Oncology, University of Toronto, Toronto (Canada); Balboni, Tracy A. [Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women' s Hospital, Boston, Massachusetts (United States)

    2013-11-01

    Purpose: We surveyed how radiation oncologists think about and incorporate a palliative cancer patient’s life expectancy (LE) into their treatment recommendations. Methods and Materials: A 41-item survey was e-mailed to 113 radiation oncology attending physicians and residents at radiation oncology centers within the Boston area. Physicians estimated how frequently they assessed the LE of their palliative cancer patients and rated the importance of 18 factors in formulating LE estimates. For 3 common palliative case scenarios, physicians estimated LE and reported whether they had an LE threshold below which they would modify their treatment recommendation. LE estimates were considered accurate when within the 95% confidence interval of median survival estimates from an established prognostic model. Results: Among 92 respondents (81%), the majority were male (62%), from an academic practice (75%), and an attending physician (70%). Physicians reported assessing LE in 91% of their evaluations and most frequently rated performance status (92%), overall metastatic burden (90%), presence of central nervous system metastases (75%), and primary cancer site (73%) as “very important” in assessing LE. Across the 3 cases, most (88%-97%) had LE thresholds that would alter treatment recommendations. Overall, physicians’ LE estimates were 22% accurate with 67% over the range predicted by the prognostic model. Conclusions: Physicians often incorporate LE estimates into palliative cancer care and identify important prognostic factors. Most have LE thresholds that guide their treatment recommendations. However, physicians overestimated patient survival times in most cases. Future studies focused on improving LE assessment are needed.

  13. Beyond the mean estimate: a quantile regression analysis of inequalities in educational outcomes using INVALSI survey data

    Directory of Open Access Journals (Sweden)

    Antonella Costanzo

    2017-09-01

    Full Text Available Abstract The number of studies addressing issues of inequality in educational outcomes using cognitive achievement tests and variables from large-scale assessment data has increased. Here the value of using a quantile regression approach is compared with a classical regression analysis approach to study the relationships between educational outcomes and likely predictor variables. Italian primary school data from INVALSI large-scale assessments were analyzed using both quantile and standard regression approaches. Mathematics and reading scores were regressed on students' characteristics and geographical variables selected for their theoretical and policy relevance. The results demonstrated that, in Italy, the role of gender and immigrant status varied across the entire conditional distribution of students’ performance. Analogous results emerged pertaining to the difference in students’ performance across Italian geographic areas. These findings suggest that quantile regression analysis is a useful tool to explore the determinants and mechanisms of inequality in educational outcomes. A proper interpretation of quantile estimates may enable teachers to identify effective learning activities and help policymakers to develop tailored programs that increase equity in education.
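
    The core point, that a covariate's effect can vary across the conditional distribution, can be illustrated with simulated data (not INVALSI scores): two groups with a similar mean gap but different spreads show a much larger gap in the lower tail than in the upper tail:

```python
import numpy as np

# Simulated scores for two hypothetical groups: similar means,
# different spreads, so the gap varies across the distribution.
rng = np.random.default_rng(1)
g0 = rng.normal(200.0, 30.0, 10_000)   # reference group
g1 = rng.normal(195.0, 40.0, 10_000)   # comparison group, wider spread

mean_gap = g1.mean() - g0.mean()
q10_gap = np.quantile(g1, 0.10) - np.quantile(g0, 0.10)
q90_gap = np.quantile(g1, 0.90) - np.quantile(g0, 0.90)

# The deficit is largest at the bottom of the distribution and can even
# reverse sign at the top, which a mean regression alone would miss.
print(round(mean_gap, 1), round(q10_gap, 1), round(q90_gap, 1))
```

    A full quantile regression (e.g. with a pinball-loss fit at several quantiles) generalizes this comparison to conditional distributions with many covariates.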

  14. Least mean square fourth based microgrid state estimation algorithm using the internet of things technology.

    Science.gov (United States)

    Rana, Md Masud

    2017-01-01

    This paper proposes an innovative internet of things (IoT) based communication framework for monitoring microgrid under the condition of packet dropouts in measurements. First of all, the microgrid incorporating the renewable distributed energy resources is represented by a state-space model. The IoT embedded wireless sensor network is adopted to sense the system states. Afterwards, the information is transmitted to the energy management system using the communication network. Finally, the least mean square fourth algorithm is explored for estimating the system states. The effectiveness of the developed approach is verified through numerical simulations.
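
    A minimal sketch of the least mean square fourth (LMF) update at the heart of the estimator: the weight vector is adapted with the cube of the a-priori error, w ← w + μ·e³·x. The system, noise level, and step size below are assumed for illustration and are not taken from the paper:

```python
import numpy as np

# Hypothetical 3-tap system; the LMF filter adapts w with the cubed error.
rng = np.random.default_rng(2)
w_true = np.array([0.6, -0.3, 0.1])         # unknown parameters (assumed)
w = np.zeros(3)                             # initial estimate
mu = 0.02                                   # step size (assumed)

for _ in range(50_000):
    x = rng.uniform(-1.0, 1.0, 3)           # sensed input sample
    d = w_true @ x + rng.normal(0.0, 0.05)  # noisy measurement
    e = d - w @ x                           # a-priori estimation error
    w += mu * e ** 3 * x                    # LMF update: cube of the error

print(np.round(w, 2))                       # approaches w_true
```

    Compared with LMS (which uses e rather than e³), LMF adapts faster for large errors and is gentler near convergence, but it needs a conservative step size to stay stable.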

  15. Estimation of Parameters in Mean-Reverting Stochastic Systems

    Directory of Open Access Journals (Sweden)

    Tianhai Tian

    2014-01-01

    Full Text Available Stochastic differential equations (SDEs) are a very important mathematical tool for describing complex systems in which noise plays an important role. SDE models have been widely used to study the dynamic properties of various nonlinear systems in biology, engineering, finance, and economics, as well as the physical sciences. Since an SDE can generate an unlimited number of trajectories, it is difficult to estimate model parameters based on experimental observations, which may represent only one trajectory of the stochastic model. Although substantial research efforts have been made to develop effective methods, it is still a challenge to infer unknown parameters in SDE models from observations that may have large variations. Using an interest rate model as a test problem, in this work we use Bayesian inference and the Markov chain Monte Carlo method to estimate unknown parameters in SDE models.
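
    As a simpler frequentist counterpart to the Bayesian MCMC approach described above, a mean-reverting Ornstein-Uhlenbeck (Vasicek-type) interest rate model sampled at a fixed step is exactly an AR(1), so its mean-reversion rate θ and long-run mean μ can be recovered by linear regression. All parameter values below are assumed:

```python
import numpy as np

# Hypothetical Ornstein-Uhlenbeck (Vasicek) short-rate parameters.
rng = np.random.default_rng(3)
theta, mu, sigma, dt = 2.0, 0.05, 0.1, 0.01   # true values (assumed)
n = 200_000

# Exact discretization: X_{t+1} = mu + phi*(X_t - mu) + noise, phi = e^{-theta*dt}.
phi = np.exp(-theta * dt)
sd = sigma * np.sqrt((1.0 - phi ** 2) / (2.0 * theta))
eps = rng.normal(0.0, sd, n - 1)
x = np.empty(n)
x[0] = mu
for t in range(n - 1):
    x[t + 1] = mu + phi * (x[t] - mu) + eps[t]

# Regress X_{t+1} on X_t, then invert the AR(1) mapping.
slope, intercept = np.polyfit(x[:-1], x[1:], 1)
theta_hat = -np.log(slope) / dt
mu_hat = intercept / (1.0 - slope)
print(round(theta_hat, 2), round(mu_hat, 3))   # near 2.0 and 0.05
```

    With one long, densely sampled trajectory this regression works well; the Bayesian MCMC route becomes valuable with short, noisy, or irregular data, where it also quantifies parameter uncertainty.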

  16. Testing survey-based methods for rapid monitoring of child mortality, with implications for summary birth history data.

    Science.gov (United States)

    Brady, Eoghan; Hill, Kenneth

    2017-01-01

    Under-five mortality estimates are increasingly used in low- and middle-income countries to target interventions and measure performance against global development goals. Two new methods to rapidly estimate under-five mortality based on Summary Birth Histories (SBH) were described in a previous paper and tested with the data then available. This analysis tests the methods using data appropriate to each method from five countries that lack vital registration systems. SBH data are collected across many countries through censuses and surveys, and indirect methods often rely upon their quality to estimate mortality rates. The Birth History Imputation method imputes data from a recent Full Birth History (FBH) onto the birth, death and age distribution of the SBH to produce estimates based on the resulting distribution of child mortality. DHS FBHs and MICS SBHs are used for all five countries. In the implementation, 43 of 70 estimates are within 20% of validation estimates (61%). Mean absolute relative error is 17.7%. One of seven countries produces acceptable estimates. The Cohort Change method considers the differences in births and deaths between repeated Summary Birth Histories at 1- or 2-year intervals to estimate the mortality rate in that period. SBHs are taken from Brazil's PNAD surveys 2004-2011 and validated against IGME estimates. Two of ten estimates are within 10% of validation estimates. Mean absolute relative error is greater than 100%. Appropriate testing of these new methods demonstrates that they do not produce sufficiently good estimates from the data available. We conclude this is due to the poor quality of most SBH data included in the study. This has wider implications for the next round of censuses and future household surveys across many low- and middle-income countries.

  17. Simultaneous estimation of the in-mean and in-variance causal connectomes of the human brain.

    Science.gov (United States)

    Duggento, A; Passamonti, L; Guerrisi, M; Toschi, N

    2017-07-01

    In recent years, the study of the human connectome (i.e. of statistical relationships between non-spatially contiguous neurophysiological events in the human brain) has been enormously fuelled by technological advances in high-field functional magnetic resonance imaging (fMRI) as well as by coordinated worldwide data-collection efforts like the Human Connectome Project (HCP). In this context, Granger Causality (GC) approaches have recently been employed to incorporate information about the directionality of the influence exerted by one brain region on another. However, while fluctuations in the Blood Oxygenation Level Dependent (BOLD) signal at rest also contain important information about the physiological processes that underlie neurovascular coupling and associations between disjoint brain regions, so far all connectivity estimation frameworks have focused on central tendencies, hence completely disregarding so-called in-variance causality (i.e. the directed influence of the volatility of one signal on the volatility of another). In this paper, we develop a framework for simultaneous estimation of both in-mean and in-variance causality in complex networks. We validate our approach using synthetic data from complex ensembles of coupled nonlinear oscillators, and successively employ HCP data to provide the very first estimate of the in-variance connectome of the human brain.
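The in-mean half of the idea can be illustrated with a minimal Granger-causality check: x is said to Granger-cause y in mean if adding lagged x to an autoregression of y reduces the residual sum of squares. The sketch below simulates coupled series and compares restricted and unrestricted least-squares fits (hand-rolled via the normal equations); the in-variance extension the paper develops is not shown.

```python
import random

random.seed(7)
n = 4000
x = [0.0] * n
y = [0.0] * n
for t in range(1, n):                     # x drives y in mean with lag 1
    x[t] = 0.5 * x[t - 1] + random.gauss(0, 1)
    y[t] = 0.4 * y[t - 1] + 0.3 * x[t - 1] + random.gauss(0, 1)

def ols_rss(targets, predictors):
    """Residual sum of squares of a least-squares fit (normal equations)."""
    k = len(predictors[0])
    A = [[sum(r[i] * r[j] for r in predictors) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * t for r, t in zip(predictors, targets)) for i in range(k)]
    for c in range(k):                    # Gaussian elimination, partial pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):          # back substitution
        beta[c] = (b[c] - sum(A[c][j] * beta[j] for j in range(c + 1, k))) / A[c][c]
    return sum((t - sum(be * ri for be, ri in zip(beta, r))) ** 2
               for r, t in zip(predictors, targets))

targets = y[1:]
restricted = [[y[t - 1]] for t in range(1, n)]            # y lag only
unrestricted = [[y[t - 1], x[t - 1]] for t in range(1, n)]
rss_r, rss_u = ols_rss(targets, restricted), ols_rss(targets, unrestricted)
print(rss_u < rss_r, round(rss_r / rss_u, 3))
```

An RSS ratio well above 1 signals that lagged x carries predictive information about y beyond y's own history.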

  18. Estimating the Distribution of Dietary Consumption Patterns

    KAUST Repository

    Carroll, Raymond J.

    2014-02-01

    In the United States the preferred method of obtaining dietary intake data is the 24-hour dietary recall, yet the measure of most interest is usual or long-term average daily intake, which is impossible to measure. Thus, usual dietary intake is assessed with considerable measurement error. We were interested in estimating the population distribution of the Healthy Eating Index-2005 (HEI-2005), a multi-component dietary quality index involving ratios of interrelated dietary components to energy, among children aged 2-8 in the United States, using a national survey and incorporating survey weights. We developed a highly nonlinear, multivariate zero-inflated data model with measurement error to address this question. Standard nonlinear mixed model software such as SAS NLMIXED cannot handle this problem. We found that taking a Bayesian approach, and using MCMC, resolved the computational issues and doing so enabled us to provide a realistic distribution estimate for the HEI-2005 total score. While our computation and thinking in solving this problem was Bayesian, we relied on the well-known close relationship between Bayesian posterior means and maximum likelihood, the latter not computationally feasible, and thus were able to develop standard errors using balanced repeated replication, a survey-sampling approach.

  19. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    Science.gov (United States)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys conducted over time with a rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate quarterly unemployment rates at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate from a rotating panel survey. The goodness of fit of the two models was similar. Both produced similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this advantage was reduced over time.

  20. National survey of indoor radon levels in Croatia

    International Nuclear Information System (INIS)

    Radolic, V.; Vukovic, B.; Stanic, D.; Katic, M.; Faj, Z.; Lukacevic, I.; Planinic, J.; Suveljak, B.; Faj, D.; Lukic, M.

    2006-01-01

    A national survey of indoor radon was performed by random sampling of one thousand (782 realized) dwellings in Croatia. Radon concentrations were measured for one year with LR-115 SSNT detectors, and arithmetic and geometric means of 68 and 50 Bq/m³ were obtained, respectively. The arithmetic means of the radon concentrations in the 20 counties ranged from 33 to 198 Bq/m³. The percentages of dwellings with radon concentrations above 200 and 400 Bq/m³ were 5.4% and 1.8%, respectively. The average annual effective dose from indoor radon was estimated as 2.2 mSv. (author)
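The summary statistics reported in such radon surveys (arithmetic and geometric means, and the share of dwellings above reference levels) can be computed as follows; the concentrations below are invented, not the Croatian data.

```python
import math

# Illustrative radon concentrations (Bq/m3) for a handful of dwellings;
# the survey itself measured 782 dwellings over one year.
radon = [25, 40, 55, 68, 90, 150, 210, 420]

n = len(radon)
am = sum(radon) / n                                       # arithmetic mean
gm = math.exp(sum(math.log(v) for v in radon) / n)        # geometric mean
above_200 = sum(v > 200 for v in radon) / n * 100
above_400 = sum(v > 400 for v in radon) / n * 100

print(round(am, 1), round(gm, 1))
print(above_200, above_400)   # percentages above the 200 and 400 Bq/m3 levels
```

For right-skewed data like radon concentrations the geometric mean sits well below the arithmetic mean, which is why surveys report both.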

  1. ROV advanced magnetic survey for revealing archaeological targets and estimating medium magnetization

    Science.gov (United States)

    Eppelbaum, Lev

    2013-04-01

    Magnetic survey is one of the most widely applied geophysical methods for searching for and localizing objects with contrasting magnetic properties (for instance, in Israel detailed magnetic surveys have been successfully applied at more than 60 archaeological sites (Eppelbaum, 2010, 2011; Eppelbaum et al., 2011, 2010)). However, a land magnetic survey at comparatively large archaeological sites (with observation grids of 0.5 x 0.5 or 1 x 1 m) may take 5-10 days. At the same time, the new generation of Remote Operation Vehicles (ROVs) - small and maneuverable vehicles - can fly at heights of a few (even one) meters above the earth's surface (following the relief forms or flying level). With such an ROV, precise magnetic field measurements (at a rate of 20-25 observations per second) may be completed within 10-30 minutes, moreover at different heights above the earth's surface. Such geophysical investigations should have an extremely low operating cost. Finally, measurements of geophysical fields at different observation levels could provide unique new geophysical-archaeological information (Eppelbaum, 2005; Eppelbaum and Mishne, 2011). The interpretation methodology developed for advanced analysis of magnetic anomalies (Khesin et al., 1996; Eppelbaum et al., 2001; Eppelbaum et al., 2011) may be successfully applied to ROV magnetic surveys for delineating archaeological objects and estimating the averaged magnetization of the geological medium. 
This methodology includes: (1) a non-conventional procedure for eliminating the secondary effect of temporary magnetic variations, (2) calculation of the influence of rugged relief by means of a correlation method, (3) estimation of the medium magnetization, (4) application of various informational and wavelet algorithms for revealing weak anomalous effects against a strong noise background, and (5) advanced procedures for quantitative analysis of magnetic anomalies (applicable in conditions of rugged relief, inclined magnetization, and an unknown level of the total

  2. Estimation of population dose and risk to holding assistants from veterinary X-ray examination in Japan

    International Nuclear Information System (INIS)

    Hashizume, Tadashi; Suganuma, Tunenori; Shida, Takuo

    1989-01-01

    For the estimation of the population doses and risks of stochastic effects to assistants who hold animals during veterinary X-ray examination, a random survey of hospitals and clinics was carried out concerning the age distribution of such assistants by facility group. The average organ and tissue dose per examination was evaluated from the experimental data using mean technical factors such as X-ray tube voltage, tube current and field size based on the results of a nationwide survey. The population doses to the assistants were calculated to be about 14 nSv per person per year for the genetically significant dose, 3.5 nSv per person per year for the per caput mean marrow dose, 3.3 nSv for the leukemia significant dose, and 4.5 nSv for the malignant significant dose. The total risk of stochastic effects to the Japanese population from holding assistants was estimated using population data to be less than one person per year, but the cancer risks to a number of the assistants were estimated to be more than 4 x 10⁻⁵. (author)

  3. Estimating solar ultraviolet irradiance (290-385 nm) by means of the spectral parametric models: SPCTRAL2 and SMARTS2

    Directory of Open Access Journals (Sweden)

    I. Foyo-Moreno

    2000-11-01

    Full Text Available Since the discovery of the ozone depletion in the Antarctic and the globally declining trend of stratospheric ozone concentration, public and scientific concern has been raised in the last decades. A very important consequence of this fact is the increased broadband and spectral UV radiation in the environment and the biological effects and health risks that may take place in the near future. The absence of widespread measurements of this radiometric flux has led to the development and use of alternative estimation procedures such as parametric approaches. Parametric models compute the radiant energy using available atmospheric parameters. Some parametric models compute the global solar irradiance at surface level by addition of its direct beam and diffuse components. In the present work, we have developed a comparison between two cloudless-sky parametrization schemes. Both methods provide an estimation of the solar spectral irradiance that can be integrated spectrally within the limits of interest. For this test we have used data recorded in a radiometric station located at Granada (37.180°N, 3.580°W, 660 m a.m.s.l.), an inland location. The database includes hourly values of the relevant variables covering the years 1994-95. The performance of the models has been tested in relation to their predictive capability of global solar irradiance in the UV range (290–385 nm). After our study, it appears that information concerning the aerosol radiative effects is fundamental in order to obtain a good estimation. The original version of SPCTRAL2 provides estimates of the experimental values with negligible mean bias deviation. This suggests not only the appropriateness of the model but also the suitability of the aerosol features fixed in it to Granada conditions. The SMARTS2 model offers increased flexibility concerning the selection of different aerosol models included in the code and provides the best results when the selected models are those
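Model performance statistics of the kind used here (mean bias deviation and root mean square deviation, often expressed relative to the mean of the observations) can be sketched as follows; the irradiance values are invented.

```python
import math

# Mean bias deviation (MBD) and root mean square deviation (RMSD) of
# modelled vs. measured UV irradiance. Numbers are hypothetical.
measured = [12.0, 18.5, 25.0, 30.2, 21.4]   # W/m2, hourly UV values
modelled = [11.5, 19.0, 24.2, 31.0, 21.0]

n = len(measured)
mbd = sum(m - o for m, o in zip(modelled, measured)) / n
rmsd = math.sqrt(sum((m - o) ** 2 for m, o in zip(modelled, measured)) / n)
mean_obs = sum(measured) / n                # often reported relative to this
print(round(100 * mbd / mean_obs, 2), round(100 * rmsd / mean_obs, 2))
```

A near-zero MBD (as reported for SPCTRAL2) means over- and under-predictions cancel on average; RMSD captures the remaining scatter.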

  5. Computing the Deflection of the Vertical for Improving Aerial Surveys: A Comparison between EGM2008 and ITALGEO05 Estimates

    Directory of Open Access Journals (Sweden)

    Riccardo Barzaghi

    2016-07-01

    Full Text Available Recent studies on the influence of the anomalous gravity field in GNSS/INS applications have shown that neglecting the impact of the deflection of the vertical in aerial surveys induces horizontal and vertical errors in the measurement of an object that is part of the observed scene; these errors can vary from a few tens of centimetres to over one meter. The works reported in the literature refer to vertical deflection values based on global geopotential model estimates. In this paper we compared this approach with one based on local gravity data and collocation methods. In particular, denoting by ξ and η the two mutually perpendicular components of the deflection of the vertical vector (in the north and east directions, respectively), their values were computed by collocation in the framework of the Remove-Compute-Restore technique, applied to the gravity database used for estimating the ITALGEO05 geoid. Following this approach, these values have been computed at different altitudes that are relevant in aerial surveys. The (ξ, η) values were then also estimated using the high-degree EGM2008 global geopotential model and compared with those obtained in the previous computation. The analysis of the differences between the two estimates has shown that the (ξ, η) global geopotential model estimate can be reliably used in aerial navigation applications that require the use of sensors connected to a GNSS/INS system only above a given height (e.g., 3000 m in this paper) that must be defined by simulations.

  6. Republic of Georgia estimates for prevalence of drug use: Randomized response techniques suggest under-estimation.

    Science.gov (United States)

    Kirtadze, Irma; Otiashvili, David; Tabatadze, Mzia; Vardanashvili, Irina; Sturua, Lela; Zabransky, Tomas; Anthony, James C

    2018-06-01

    Validity of responses in surveys is an important research concern, especially in emerging market economies where surveys in the general population are a novelty, and the level of social control is traditionally higher. The Randomized Response Technique (RRT) can be used as a check on response validity when the study aim is to estimate population prevalence of drug experiences and other socially sensitive and/or illegal behaviors. To apply RRT and to study potential under-reporting of drug use in a nation-scale, population-based general population survey of alcohol and other drug use. For this first-ever household survey on addictive substances for the Country of Georgia, we used the multi-stage probability sampling of 18-to-64-year-old household residents of 111 urban and 49 rural areas. During the interviewer-administered assessments, RRT involved pairing of sensitive and non-sensitive questions about drug experiences. Based upon the standard household self-report survey estimate, an estimated 17.3% [95% confidence interval, CI: 15.5%, 19.1%] of Georgian household residents have tried cannabis. The corresponding RRT estimate was 29.9% [95% CI: 24.9%, 34.9%]. The RRT estimates for other drugs such as heroin also were larger than the standard self-report estimates. We remain unsure about what is the "true" value for prevalence of using illegal psychotropic drugs in the Republic of Georgia study population. Our RRT results suggest that standard non-RRT approaches might produce 'under-estimates' or at best, highly conservative, lower-end estimates. Copyright © 2018 Elsevier B.V. All rights reserved.
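As a hedged illustration of how an RRT prevalence estimate is backed out of randomized answers, the sketch below uses the generic forced-response design (truthful answer with probability p, otherwise a forced yes/no with equal probability); the Georgian study paired sensitive and non-sensitive questions, which is a different RRT variant, and the numbers here are hypothetical.

```python
import math

# Forced-response RRT: observed "yes" rate lambda = p*pi + (1-p)/2,
# so prevalence pi = (lambda - (1-p)/2) / p. Generic variant for
# illustration only, with invented survey numbers.
def rrt_prevalence(yes_rate, p):
    """Back out prevalence from the observed 'yes' rate."""
    return (yes_rate - (1 - p) / 2) / p

def rrt_se(yes_rate, p, n):
    """Standard error of the prevalence estimate (binomial 'yes' rate)."""
    return math.sqrt(yes_rate * (1 - yes_rate) / n) / p

pi_hat = rrt_prevalence(yes_rate=0.38, p=0.7)   # hypothetical outcome
se = rrt_se(yes_rate=0.38, p=0.7, n=2000)
print(round(pi_hat, 3), round(pi_hat - 1.96 * se, 3), round(pi_hat + 1.96 * se, 3))
```

The division by p in the standard error shows the price of the privacy protection: RRT confidence intervals are wider than those of a direct question, as in the cannabis estimates above.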

  7. Evaluation of alternative age-based methods for estimating relative abundance from survey data in relation to assessment models

    DEFF Research Database (Denmark)

    Berg, Casper Willestofte; Nielsen, Anders; Kristensen, Kasper

    2014-01-01

    Indices of abundance from fishery-independent trawl surveys constitute an important source of information for many fish stock assessments. Indices are often calculated using area-stratified sample means on age-disaggregated data, and finally treated in stock assessment models as independent...... observations. We evaluate a series of alternative methods for calculating indices of abundance from trawl survey data (delta-lognormal, delta-gamma, and Tweedie using Generalized Additive Models) as well as different error structures for these indices when used as input in an age-based stock assessment model...... the different indices produced. The stratified mean method is found to be much more imprecise than the alternatives based on GAMs, which are found to be similar. Having time-varying index variances is found to be of minor importance, whereas the independence assumption is not only violated but has significant impact...
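A stripped-down version of the delta-lognormal index mentioned above multiplies the proportion of non-zero hauls by the back-transformed lognormal mean of the positive hauls; the paper models both components with GAMs and covariates, which this sketch of invented haul data omits.

```python
import math

# Simplified delta-lognormal index of abundance: expected catch is
# P(non-zero haul) times the lognormal mean of the positive hauls.
hauls = [0, 0, 3.2, 0, 1.1, 4.5, 0, 2.0, 0, 7.3]   # invented catch weights

positives = [h for h in hauls if h > 0]
p = len(positives) / len(hauls)                          # delta (binomial) part
logs = [math.log(h) for h in positives]
mu = sum(logs) / len(logs)
var = sum((v - mu) ** 2 for v in logs) / (len(logs) - 1) # unbiased log-variance
index = p * math.exp(mu + var / 2)                       # lognormal back-transform

print(round(index, 3))
```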

  8. SEDS: THE SPITZER EXTENDED DEEP SURVEY. SURVEY DESIGN, PHOTOMETRY, AND DEEP IRAC SOURCE COUNTS

    International Nuclear Information System (INIS)

    Ashby, M. L. N.; Willner, S. P.; Fazio, G. G.; Huang, J.-S.; Hernquist, L.; Hora, J. L.; Arendt, R.; Barmby, P.; Barro, G.; Faber, S.; Guhathakurta, P.; Bell, E. F.; Bouwens, R.; Cattaneo, A.; Croton, D.; Davé, R.; Dunlop, J. S.; Egami, E.; Finlator, K.; Grogin, N. A.

    2013-01-01

    The Spitzer Extended Deep Survey (SEDS) is a very deep infrared survey within five well-known extragalactic science fields: the UKIDSS Ultra-Deep Survey, the Extended Chandra Deep Field South, COSMOS, the Hubble Deep Field North, and the Extended Groth Strip. SEDS covers a total area of 1.46 deg² to a depth of 26 AB mag (3σ) in both of the warm Infrared Array Camera (IRAC) bands at 3.6 and 4.5 μm. Because of its uniform depth of coverage in so many widely-separated fields, SEDS is subject to roughly 25% smaller errors due to cosmic variance than a single-field survey of the same size. SEDS was designed to detect and characterize galaxies from intermediate to high redshifts (z = 2-7) with a built-in means of assessing the impact of cosmic variance on the individual fields. Because the full SEDS depth was accumulated in at least three separate visits to each field, typically with six-month intervals between visits, SEDS also furnishes an opportunity to assess the infrared variability of faint objects. This paper describes the SEDS survey design, processing, and publicly-available data products. Deep IRAC counts for the more than 300,000 galaxies detected by SEDS are consistent with models based on known galaxy populations. Discrete IRAC sources contribute 5.6 ± 1.0 and 4.4 ± 0.8 nW m⁻² sr⁻¹ at 3.6 and 4.5 μm to the diffuse cosmic infrared background (CIB). IRAC sources cannot contribute more than half of the total CIB flux estimated from DIRBE data. Barring an unexpected error in the DIRBE flux estimates, half the CIB flux must therefore come from a diffuse component.

  9. An Estimate of Recoverable Heavy Oil Resources of the Orinoco Oil Belt, Venezuela

    Science.gov (United States)

    Schenk, Christopher J.; Cook, Troy A.; Charpentier, Ronald R.; Pollastro, Richard M.; Klett, Timothy R.; Tennyson, Marilyn E.; Kirschbaum, Mark A.; Brownfield, Michael E.; Pitman, Janet K.

    2009-01-01

    The Orinoco Oil Belt Assessment Unit of the La Luna-Quercual Total Petroleum System encompasses approximately 50,000 km² of the East Venezuela Basin Province that is underlain by more than 1 trillion barrels of heavy oil-in-place. As part of a program directed at estimating the technically recoverable oil and gas resources of priority petroleum basins worldwide, the U.S. Geological Survey estimated the recoverable oil resources of the Orinoco Oil Belt Assessment Unit. This estimate relied mainly on published geologic and engineering data for reservoirs (net oil-saturated sandstone thickness and extent), petrophysical properties (porosity, water saturation, and formation volume factors), recovery factors determined by pilot projects, and estimates of volumes of oil-in-place. The U.S. Geological Survey estimated a mean volume of 513 billion barrels of technically recoverable heavy oil in the Orinoco Oil Belt Assessment Unit of the East Venezuela Basin Province; the range is 380 to 652 billion barrels. The Orinoco Oil Belt Assessment Unit thus contains one of the largest recoverable oil accumulations in the world.
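The volumetric logic described above (oil-in-place combined with recovery factors to give a mean and a range) can be mimicked with a toy Monte Carlo; the triangular distributions below are illustrative stand-ins loosely anchored to the >1 trillion bbl in-place figure, not the USGS assessment inputs.

```python
import random

# Toy volumetric Monte Carlo: recoverable oil = oil-in-place x recovery
# factor, uncertainty propagated by sampling. Distributions are invented.
random.seed(42)

def draw():
    ooip = random.triangular(1.0e12, 1.6e12, 1.3e12)   # barrels in place
    rf = random.triangular(0.25, 0.55, 0.40)           # recovery factor
    return ooip * rf

sims = sorted(draw() for _ in range(100_000))
mean = sum(sims) / len(sims)
p5, p95 = sims[5_000], sims[95_000]                    # empirical 90% range
print(f"mean {mean / 1e9:.0f} Bbbl, 90% range {p5 / 1e9:.0f}-{p95 / 1e9:.0f} Bbbl")
```

With these invented inputs the mean lands near the product of the component means (about 1.3e12 x 0.4), illustrating how a single mean estimate and a range like "380 to 652 billion barrels" arise from the same computation.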

  10. Meaning in life in the Federal Republic of Germany: results of a representative survey with the Schedule for Meaning in Life Evaluation (SMiLE)

    Directory of Open Access Journals (Sweden)

    Bausewein Claudia

    2007-11-01

    Full Text Available Abstract Background The construct "meaning-in-life" (MiL) has recently raised the interest of clinicians working in psycho-oncology and end-of-life care and has become a topic of scientific investigation. Difficulties regarding the measurement of MiL are related to the various theoretical and conceptual approaches and its inter-individual variability. Therefore the "Schedule for Meaning in Life Evaluation" (SMiLE), an individualized instrument for the assessment of MiL, was developed. The aim of this study was to evaluate MiL in a representative sample of the German population. Methods In the SMiLE, the respondents first indicate a minimum of three and a maximum of seven areas which provide meaning to their life before rating the current level of importance and satisfaction of each area. Indices of total weighting (IoW, range 20–100), total satisfaction (IoS, range 0–100), and total weighted satisfaction (IoWS, range 0–100) are calculated. Results In July 2005, 1,004 Germans were randomly selected and interviewed (inclusion rate, 85.3%). 3,521 areas of MiL were listed and assigned to 13 a-posteriori categories. The mean IoS was 81.9 ± 15.1, the mean IoW was 84.6 ± 11.9, and the mean IoWS was 82.9 ± 14.8. In youth (16–19 y/o), "friends" were most important for MiL, in young adulthood (20–29 y/o) "partnership", in middle adulthood (30–39 y/o) "work", during retirement (60–69 y/o) "health" and "altruism", and in advanced age (70 y/o and more) "spirituality/religion" and "nature experience/animals". Conclusion This study is a first nationwide survey on individual MiL in a randomly selected, representative sample. The MiL areas of the age stages seem to correspond with Erikson's stages of psychosocial development.
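One plausible reading of the SMiLE indices (importance rated 1-5 per area, scaled to the stated 20-100 IoW range; satisfaction rated 0-100; IoWS as the importance-weighted mean of satisfaction) can be coded as follows. The instrument's exact scoring rules may differ, and the respondent data are invented.

```python
# Sketch of SMiLE-style indices; scales are assumptions, not the
# instrument's documented scoring.
areas = {          # hypothetical respondent: area -> (importance, satisfaction)
    "family":  (5, 90),
    "work":    (3, 60),
    "health":  (4, 75),
}

imps = [i for i, _ in areas.values()]
sats = [s for _, s in areas.values()]
iow = 20 * sum(imps) / len(imps)                  # total weighting, 20-100
ios = sum(sats) / len(sats)                       # total satisfaction, 0-100
iows = sum(i * s for i, s in areas.values()) / sum(imps)  # weighted satisfaction
print(round(iow, 1), round(ios, 1), round(iows, 1))
```

Weighting pulls IoWS toward the satisfaction of the areas the respondent rates as most important, which is the point of reporting it alongside the plain IoS.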

  11. CO2 emission and agricultural productivity in the Southeast Asian region: a pooled mean group estimation

    International Nuclear Information System (INIS)

    Islam, M.; Kazi, M.

    2014-01-01

    Frequent natural calamities, extreme climatic events and unexpected seasonal changes are obvious examples of global warming. Carbon emissions by industrial units all over the world are believed to be the major contributor to global warming, which can lead to reduced agricultural productivity. This paper examines the impact of CO2 emission on agricultural productivity in Southeast Asian countries. It investigates the dynamic relationship between CO2 emission (along with other control variables) and agricultural output using a panel data set comprising data from Southeast Asian countries. Following the dynamic heterogeneous panel techniques developed by Pesaran and Shin (1999) for estimating short-run and long-run effects using the autoregressive distributed lag (ARDL) model in error correction form, the study estimated the empirical model with the pooled mean group (PMG) estimator. The study found that increased CO2 emission resulted in higher agricultural productivity, because farmers around the globe quickly adapt to climate change. In addition, the use of submersible pumps and other capital machinery significantly increased agricultural yield and reduced dependency on human capital, while the use of chemical fertilizers increased productivity in the short run but had a harmful impact in the long run. (author)

  12. Estimation of mean tree stand volume using high-resolution aerial RGB imagery and digital surface model, obtained from sUAV and Trestima mobile application

    Directory of Open Access Journals (Sweden)

    G. K. Rybakov

    2017-06-01

    Full Text Available This study considers a remote sensing technique for mean volume estimation based on very high-resolution (VHR) aerial RGB imagery obtained using a small-sized unmanned aerial vehicle (sUAV) and a high-resolution photogrammetric digital surface model (DSM), as well as an innovative technology for field measurements (Trestima). The study area covers approx. 220 ha of forestland in Finland. The work concerns the entire process from remote sensing and field data acquisition to statistical analysis and wall-to-wall forest volume mapping. The study showed that the VHR aerial imagery and the high-resolution DSM produced from the sUAV data have good prospects for forest inventory. For the sUAV-based estimation of forest variables such as Height, Basal Area and mean Volume, the Root Mean Square Error was 6.6%, 22.6% and 26.7%, respectively. Application of Trestima for estimating the mean volume of the standing forest showed only minor differences from the existing Forest Management Plan in all the selected forest compartments. At the same time, the results of the study confirmed that the technologies and tools applied in this work could be a reliable and potentially cost-effective means of forest data acquisition with high potential for operational use.

  13. Neutron flux calculation by means of Monte Carlo methods

    International Nuclear Information System (INIS)

    Barz, H.U.; Eichhorn, M.

    1988-01-01

    In this report a survey of modern neutron flux calculation procedures by means of Monte Carlo methods is given. Due to the progress in the development of variance reduction techniques and the improvements of computational techniques this method is of increasing importance. The basic ideas in application of Monte Carlo methods are briefly outlined. In more detail various possibilities of non-analog games and estimation procedures are presented, problems in the field of optimizing the variance reduction techniques are discussed. In the last part some important international Monte Carlo codes and own codes of the authors are listed and special applications are described. (author)
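A minimal example of the estimation procedures surveyed: a track-length tally in a purely absorbing slab, whose Monte Carlo estimate can be checked against the closed-form answer. This is a toy illustration, not one of the production codes the report lists.

```python
import math
import random

# Track-length estimator: in a purely absorbing slab of thickness T with
# unit total cross-section and normally incident particles, the mean track
# length per source particle equals the volume-integrated flux, which is
# analytically 1 - exp(-T).
random.seed(3)
T = 2.0
N = 200_000
tally = 0.0
for _ in range(N):
    path = random.expovariate(1.0)     # distance to absorption, Sigma_t = 1
    tally += min(path, T)              # track length inside the slab
mc_flux = tally / N
print(round(mc_flux, 3), round(1 - math.exp(-T), 3))
```

Variance reduction techniques of the kind the report surveys aim to shrink the statistical error of exactly this sort of tally for a fixed number of histories.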

  14. Least mean square fourth based microgrid state estimation algorithm using the internet of things technology.

    Directory of Open Access Journals (Sweden)

    Md Masud Rana

    Full Text Available This paper proposes an innovative internet of things (IoT) based communication framework for monitoring microgrid under the condition of packet dropouts in measurements. First of all, the microgrid incorporating the renewable distributed energy resources is represented by a state-space model. The IoT embedded wireless sensor network is adopted to sense the system states. Afterwards, the information is transmitted to the energy management system using the communication network. Finally, the least mean square fourth algorithm is explored for estimating the system states. The effectiveness of the developed approach is verified through numerical simulations.
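The least mean square fourth (LMF) idea can be shown in its simplest adaptive-filter form, where the weight update uses the cubed error, w <- w + mu * e^3 * x, instead of the LMS update mu * e * x. This generic sketch on simulated data is not the paper's microgrid state-space model.

```python
import random

# Least mean fourth (LMF) adaptive estimator: minimizes E[e^4], so the
# stochastic gradient step is proportional to e^3 rather than e.
random.seed(1)
true_w = [0.5, -0.3]    # unknown weights to be identified
w = [0.0, 0.0]
mu = 0.01

for _ in range(20_000):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    d = sum(tw * xi for tw, xi in zip(true_w, x)) + random.gauss(0, 0.05)
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = d - y
    w = [wi + mu * (e ** 3) * xi for wi, xi in zip(w, x)]   # LMF update

print([round(wi, 2) for wi in w])   # should approach true_w
```

Relative to LMS, the cubed error down-weights small residuals, which can help under sub-Gaussian measurement noise but makes the step size choice more delicate.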

  15. Methods for estimating private forest ownership statistics: revised methods for the USDA Forest Service's National Woodland Owner Survey

    Science.gov (United States)

    Brenton J. ​Dickinson; Brett J. Butler

    2013-01-01

    The USDA Forest Service's National Woodland Owner Survey (NWOS) is conducted to better understand the attitudes and behaviors of private forest ownerships, which control more than half of US forestland. Inferences about the populations of interest should be based on theoretically sound estimation procedures. A recent review of the procedures disclosed an error in...

  16. A random cluster survey and a convenience sample give comparable estimates of immunity to vaccine preventable diseases in children of school age in Victoria, Australia.

    Science.gov (United States)

    Kelly, Heath; Riddell, Michaela A; Gidding, Heather F; Nolan, Terry; Gilbert, Gwendolyn L

    2002-08-19

    We compared estimates of the age-specific population immunity to measles, mumps, rubella, hepatitis B and varicella zoster viruses in Victorian school children obtained by a national sero-survey, using a convenience sample of residual sera from diagnostic laboratories throughout Australia, with those from a three-stage random cluster survey. When grouped according to school age (primary or secondary school) there was no significant difference in the estimates of immunity to measles, mumps, hepatitis B or varicella. Compared with the convenience sample, the random cluster survey estimated higher immunity to rubella in samples from both primary (98.7% versus 93.6%, P = 0.002) and secondary school students (98.4% versus 93.2%, P = 0.03). Despite some limitations, this study suggests that the collection of a convenience sample of sera from diagnostic laboratories is an appropriate sampling strategy to provide population immunity data that will inform Australia's current and future immunisation policies. Copyright 2002 Elsevier Science Ltd.
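Comparisons like the rubella one above (98.7% versus 93.6%) are typically made with a two-proportion z-test; the sketch below uses hypothetical sample sizes, since the actual group sizes are not given in the abstract, and a plain pooled-variance test rather than the survey-design-adjusted analysis a cluster sample requires.

```python
import math

# Pooled two-proportion z-statistic for comparing immunity estimates.
def two_prop_z(p1, n1, p2, n2):
    x1, x2 = p1 * n1, p2 * n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(0.987, 400, 0.936, 400)   # hypothetical sample sizes
print(round(z, 2))
```

A cluster design inflates the effective variance (the design effect), so the real analysis would need a larger standard error than this simple formula gives.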

  17. Approximating the variance of estimated means for systematic random sampling, illustrated with data of the French Soil Monitoring Network

    NARCIS (Netherlands)

    Brus, D.J.; Saby, N.P.A.

    2016-01-01

    In France, as in many other countries, the soil is monitored at the locations of a regular square grid, thus forming a systematic sample (SY). This sampling design leads to good spatial coverage, enhancing the precision of design-based estimates of spatial means and totals. Design-based
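A common design-based approximation for the variance of a systematic-sample mean uses squared successive differences along the grid order; the sketch below compares it with the simple-random-sampling formula, which tends to overstate the variance when there is spatial trend. The soil values are invented, and this particular estimator is one standard choice among several, not necessarily the one used in the paper.

```python
# Systematic samples have no unbiased design-based variance estimator; a
# frequent approximation treats successive differences as strata:
#   V_hat(mean) = sum (y_{i+1} - y_i)^2 / (2 n (n - 1))
y = [3.1, 3.0, 3.4, 3.3, 3.6, 3.8, 3.7, 4.0, 4.2, 4.1]  # grid order

n = len(y)
mean = sum(y) / n
v_srs = sum((v - mean) ** 2 for v in y) / (n - 1) / n        # SRS formula
v_sd = sum((y[i + 1] - y[i]) ** 2 for i in range(n - 1)) / (2 * n * (n - 1))
print(round(mean, 2), round(v_srs, 5), round(v_sd, 5))
```

Because these data trend upward along the grid, the successive-difference estimate is much smaller than the SRS one, reflecting the precision gain from systematic coverage.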

  18. Assessment of dietary intake of flavouring substances within the procedure for their safety evaluation: advantages and limitations of estimates obtained by means of a per capita method.

    Science.gov (United States)

    Arcella, D; Leclercq, C

    2005-01-01

    The procedure for the safety evaluation of flavourings adopted by the European Commission in order to establish a positive list of these substances is a stepwise approach which was developed by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) and amended by the Scientific Committee on Food. Within this procedure, a per capita amount based on industrial poundage data of flavourings is calculated to estimate dietary intake by means of the maximised survey-derived daily intake (MSDI) method. This paper reviews the MSDI method in order to check whether it can provide the conservative intake estimates needed in the first steps of a stepwise procedure. Scientific papers and opinions dealing with the MSDI method were reviewed. Concentration levels reported by the industry were compared with estimates obtained with the MSDI method. It appeared that, in some cases, these estimates could be orders of magnitude (up to 5) lower than those calculated from the concentration levels provided by the industry together with regular consumption of flavoured foods and beverages. A critical review was performed of two studies which had been used to support the statement that the MSDI is a conservative method for assessing exposure to flavourings among high consumers. Special attention was given to the factors that affect exposure at high percentiles, such as brand loyalty and portion sizes. It is concluded that these studies may not be suitable to validate the MSDI method used to assess intakes of flavours by European consumers, due to shortcomings in the assumptions made and in the data used. Exposure assessment is an essential component of risk assessment. The present paper suggests that the MSDI method is not sufficiently conservative. There is therefore a clear need either to use an alternative method to estimate exposure to flavourings in the procedure or to limit intakes to the levels at which safety was assessed.
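A per capita intake of the MSDI kind divides annual industry poundage over an assumed consumer population and 365 days. In the sketch below, the 10% "eaters" fraction is an assumption for illustration; the official JECFA parameters should be checked against the source.

```python
# MSDI-style per capita intake sketch. The eaters_fraction of 10% is an
# assumed parameter, not a confirmed value from the procedure.
def msdi_ug_per_day(annual_kg, population, eaters_fraction=0.1):
    micrograms = annual_kg * 1e9                  # kg -> micrograms
    return micrograms / (population * eaters_fraction * 365)

# e.g. 1000 kg/year spread over a population of 375 million:
print(round(msdi_ug_per_day(1000, 375e6), 2))
```

The paper's criticism is visible in the structure of the formula: averaging over a large assumed consumer base dilutes the intake of brand-loyal high consumers, which is why the MSDI can fall far below concentration-based estimates.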

  19. Use of tritium for estimation of groundwater mean residence time, a case study of the Ain Al-Samak Karst springs (Central Syria)

    International Nuclear Information System (INIS)

    Kattan, Z.

    2003-01-01

    This work is an attempt to estimate the mean residence time of groundwater in the Ain Al-Tanour and Ain Al-Samak springs, which are the major karst springs in the Upper Orontes Basin (Central Syria). The estimate, which consists of the application of a mathematical modelling approach, was based on the use of tritium as a natural radioisotope tracer and a tool for groundwater age dating. By adopting a completely mixed reservoir model, linked with an exponential transit-time distribution function, the mean residence time (turnover time) of these two springs was evaluated to be about 50 years. This result is in good agreement with a previous estimate obtained for the Figeh main spring, which belongs to the same aquifer (Cenomanian-Turonian complex) in the Damascus Basin. On the basis of this evaluation, a value of about 800 million m³ was obtained for the maximum groundwater reservoir size.
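
    The lumped-parameter scheme described above, an exponential transit-time distribution combined with radioactive decay of tritium, can be sketched as a discrete convolution. The input series and parameter values below are hypothetical illustrations, not the data used in the study.

```python
import math

T_HALF = 12.32                     # tritium half-life in years
LAMBDA = math.log(2) / T_HALF      # decay constant, 1/yr

def exponential_model(c_in, mrt, dt=1.0):
    """Predict spring tritium (TU) from a precipitation input record using
    an exponential transit-time distribution with mean residence time `mrt`
    (years). `c_in` lists annual input concentrations, oldest first."""
    out = []
    for i in range(len(c_in)):
        total = 0.0
        for k in range(i + 1):                           # k = age of water parcel
            tau = k * dt
            weight = (dt / mrt) * math.exp(-tau / mrt)   # exponential TTD weight
            total += c_in[i - k] * weight * math.exp(-LAMBDA * tau)
        out.append(total)
    return out

# Hypothetical input: low pre-bomb values, a bomb-peak pulse, then modern values
c_in = [10.0] * 10 + [500.0] * 5 + [20.0] * 35
pred_50 = exponential_model(c_in, mrt=50.0)[-1]   # long residence time
pred_5 = exponential_model(c_in, mrt=5.0)[-1]     # short residence time
```

    In practice the mean residence time is the value for which the modelled output matches the tritium actually measured at the spring: a short residence time tracks recent input closely, while a long one retains only a decayed memory of the bomb peak.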

  20. What does it mean to manage sky survey data? A model to facilitate stakeholder conversations

    Science.gov (United States)

    Sands, Ashley E.; Darch, Peter T.

    2016-06-01

    Astronomy sky surveys, while of great scientific value independently, can be deployed even more effectively when multiple sources of data are combined. Integrating discrete datasets is a non-trivial exercise despite investments in standard data formats and tools. Creating and maintaining data and associated infrastructures requires investments in technology and expertise. Combining data from multiple sources necessitates a common understanding of data, structures, and goals amongst relevant stakeholders. We present a model of Astronomy Stakeholder Perspectives on Data. The model is based on 80 semi-structured interviews with astronomers, computational astronomers, computer scientists, and others involved in the building or use of the Sloan Digital Sky Survey (SDSS) and Large Synoptic Survey Telescope (LSST). Interviewees were selected to ensure a range of roles, institutional affiliations, career stages, and level of astronomy education. Interviewee explanations of data were analyzed to understand how perspectives on astronomy data varied by stakeholder. Interviewees described sky survey data either intrinsically or extrinsically. “Intrinsic” descriptions of data refer to data as an object in and of itself. Respondents with intrinsic perspectives view data management in one of three ways: (1) “Medium” - securing the zeros and ones from bit rot; (2) “Scale” - assuring that changes in state are documented; or (3) “Content” - ensuring the scientific validity of the images, spectra, and catalogs. “Extrinsic” definitions, in contrast, define data in relation to other forms of information. Respondents with extrinsic perspectives view data management in one of three ways: (1) “Source” - supporting the integrity of the instruments and documentation; (2) “Relationship” - retaining relationships between data and their analytical byproducts; or (3) “Use” - ensuring that data remain scientifically usable. This model shows how data management can

  1. Influence of the level of fit of a density probability function to wind-speed data on the WECS mean power output estimation

    International Nuclear Information System (INIS)

    Carta, Jose A.; Ramirez, Penelope; Velazquez, Sergio

    2008-01-01

    Static methods which are based on statistical techniques to estimate the mean power output of a WECS (wind energy conversion system) have been widely employed in the scientific literature related to wind energy. In the static method which we use in this paper, for a given wind regime probability distribution function and a known WECS power curve, the mean power output of a WECS is obtained by resolving the integral, usually using numerical evaluation techniques, of the product of these two functions. In this paper an analysis is made of the influence of the level of fit between an empirical probability density function of a sample of wind speeds and the probability density function of the adjusted theoretical model on the relative error ε made in the estimation of the mean annual power output of a WECS. The mean power output calculated through the use of a quasi-dynamic or chronological method, that is to say using time-series of wind speed data and the power versus wind speed characteristic of the wind turbine, serves as the reference. The suitability of the distributions is judged from the adjusted R² statistic (Ra²). Hourly mean wind speeds recorded at 16 weather stations located in the Canarian Archipelago, an extensive catalogue of wind-speed probability models and two wind turbines of 330 and 800 kW rated power are used in this paper. Among the general conclusions obtained, the following can be pointed out: (a) that the Ra² statistic might be useful as an initial gross indicator of the relative error made in the mean annual power output estimation of a WECS when a probabilistic method is employed; (b) the relative errors tend to decrease, in accordance with a trend line defined by a second-order polynomial, as Ra² increases.
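
    The static method described above, the mean power output as the integral of the product of the turbine power curve and the wind-speed probability density, can be sketched numerically. A Weibull density stands in for the catalogue of probability models, and the power-curve parameters are hypothetical (loosely inspired by the 330 kW machine), not those of the actual turbines.

```python
import math

def weibull_pdf(v, k, c):
    """Weibull wind-speed probability density (shape k, scale c in m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def power_curve(v, cut_in=3.0, rated_v=13.0, cut_out=25.0, rated_kw=330.0):
    """Idealised power curve in kW: zero outside [cut_in, cut_out), a cubic
    ramp up to rated speed, and rated power beyond it."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v >= rated_v:
        return rated_kw
    return rated_kw * (v**3 - cut_in**3) / (rated_v**3 - cut_in**3)

def mean_power(k, c, v_max=30.0, n=3000):
    """Static estimate of mean power: trapezoidal integration of
    power_curve(v) * weibull_pdf(v, k, c) over wind speed."""
    dv = v_max / n
    total = 0.0
    for i in range(n + 1):
        v = i * dv
        if v > 0.0:                          # pdf is 0 at v = 0 for k > 1
            w = 0.5 if i in (0, n) else 1.0  # trapezoidal end weights
            total += w * power_curve(v) * weibull_pdf(v, k, c)
    return total * dv
```

    Comparing this static estimate against a chronological estimate computed from an hourly wind-speed time series with the same power curve gives the kind of relative error ε the paper studies.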

  2. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labour Force Survey started geo-referencing its sampling units, namely the dwellings in which the surveys are carried out, from the 4th quarter of 2014 onwards. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The survey selects, according to a pre-established sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations, therefore model-based methods, which tend to

  3. Using recall surveys to estimate harvest of cod, eel and sea migrating brown trout in Danish angling and recreational passive gear fishing

    DEFF Research Database (Denmark)

    Sparrevohn, Claus Reedtz; Nielsen, Jan; Storr-Paulsen, Marie

    , as all recreational fishermen have to purchase a personal non-transferable and time-limited national license before fishing. However, this list will not include those fishing illegally without a license. Therefore, two types of recall surveys with their own questionnaires and groups of respondents were...... carried out. The first survey – the license list survey – was carried out once in 2009 and twice in 2010. This survey had a sampling frame corresponding to the list of persons that had purchased a license within the last 12 months. Respondents were asked to provide detailed information on catch and effort...... per ICES area and quarter. In order to also estimate the fraction of fishermen that fished without a valid license, a second survey – the Omnibus survey – was carried out four times. This survey targeted the entire Danish population between 16 and 74 years of age...

  4. Minimum Mean-Square Error Single-Channel Signal Estimation

    DEFF Research Database (Denmark)

    Beierholm, Thomas

    2008-01-01

    The topic of this thesis is MMSE signal estimation for hearing aids when only one microphone is available. The research is relevant for noise reduction systems in hearing aids. To fully benefit from the amplification provided by a hearing aid, noise reduction functionality is important, as hearin...... algorithm. Although the performance of the two algorithms is found to be comparable, the particle filter algorithm does a much better job of tracking the noise.......-impaired persons in some noisy situations need a higher signal-to-noise ratio for speech to be intelligible when compared to normal-hearing persons. In this thesis two different methods to approach the MMSE signal estimation problem are examined. The methods differ in the way that models for the signal and noise...... inference is performed by particle filtering. The speech model is a time-varying auto-regressive model reparameterized by formant frequencies and bandwidths. The noise is assumed non-stationary and white. Compared to the case of using the AR coefficients directly, it is found very beneficial to perform...

  5. Feasibility online survey to estimate physical activity level among the students studying professional courses: a cross-sectional online survey.

    Science.gov (United States)

    Sudha, Bhumika; Samuel, Asir John; Narkeesh, Kanimozhi

    2018-02-01

    The aim of the study was to estimate the physical activity (PA) level among professional college students in North India. One hundred and three professional college students in the age group of 18-25 years were recruited by simple random sampling for this cross-sectional online survey. The survey was advertised on social networking sites (Facebook, WhatsApp) through the link www.surveymonkey.com/r/MG-588BY. The Short Form of the International Physical Activity Questionnaire was used for this survey study. The questionnaire included a total of 8 questions covering the previous 7 days. The questionnaire covers 3 main activity categories: vigorous activity, moderate activity, and walking. Time spent at each activity level was multiplied by the metabolic equivalent of task (MET), previously set to 8.0 for vigorous activity, 4.0 for moderate activity, 3.3 for walking, and 1.5 for sitting. By multiplying the MET value by the number of days and minutes performed weekly, the amount of each activity level was calculated and expressed as MET-min/wk. By then adding the MET minutes for each activity level, the total MET-min/wk was calculated. A total of 100 students participated in this study, and students across the professional courses showed differing levels of PA. The total PA levels among professional college students in physiotherapy, dental, medical, nursing, lab technician, pharmacy, management, law, and engineering courses were 434.4 (0-7,866), 170.3 (0-1,129), 87.7 (0-445), 102.8 (0-180), 469 (0-1,164), 0 (0-0), 645 (0-1,836), 337 (0-1,890), and 396 (0-968) MET-min/wk, respectively. PA levels among professional college students in North India have been established.
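
    The scoring rule described above (MET value × days per week × minutes per day, summed over activity categories) is straightforward to express in code; the student data below are hypothetical:

```python
# MET values as given in the abstract
MET = {"vigorous": 8.0, "moderate": 4.0, "walking": 3.3}

def weekly_met_minutes(activity):
    """Total MET-min/wk. `activity` maps a category name to a
    (days_per_week, minutes_per_day) tuple."""
    return sum(MET[cat] * days * minutes
               for cat, (days, minutes) in activity.items())

# Hypothetical student: vigorous 2 d x 30 min, moderate 3 d x 40 min,
# walking 5 d x 20 min
example = {"vigorous": (2, 30), "moderate": (3, 40), "walking": (5, 20)}
total = weekly_met_minutes(example)   # 480 + 480 + 330 = 1290 MET-min/wk
```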

  6. A nationwide survey of radon concentration in Japan. Indoor, outdoor and workplace

    International Nuclear Information System (INIS)

    Sanada, Tetsuya; Oikawa, Shinji; Kanno, Nobuyuki; Abukawa, Johji; Higuchi, Hideo

    2004-01-01

    The nationwide indoor, outdoor and workplace radon concentrations were surveyed in Japan. These surveys were conducted to estimate the natural radiation dose due to radon and its progeny for the general public. The radon concentration was measured using passive-type radon monitors, installed indoors in 940 houses, outdoors at 705 points, and at 705 workplace sites. The radon concentration was measured for one year at each measurement site. Annual mean radon concentrations were obtained from four quarterly measurements in the 47 prefectures of Japan. The nationwide indoor, outdoor and workplace annual mean radon concentrations were 15.5 Bq m⁻³, 6.1 Bq m⁻³ and 20.8 Bq m⁻³, respectively. The radon concentrations approximately follow a lognormal distribution. Workplaces showed relatively high radon concentrations compared with the other environments, possibly due to construction materials and low ventilation rates. The indoor radon concentration showed seasonal variation and a dependence on architectural features. Seasonal variation and a regional distribution of outdoor radon concentration were also observed. From the results of these radon surveys, the annual effective dose to the general public due to radon and its progeny was estimated to be 0.49 mSv y⁻¹ in Japan. (author)
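
    Since the measured concentrations are approximately lognormal, they are naturally summarised by a geometric mean and geometric standard deviation, and an effective dose can be formed as concentration × equilibrium factor × occupancy × dose conversion factor. The sketch below uses UNSCEAR-style factor values and hypothetical site data as assumptions; they are not the values used in this survey.

```python
import math

def geometric_stats(values):
    """Geometric mean and geometric standard deviation: the natural
    summary statistics for an approximately lognormal sample."""
    logs = [math.log(v) for v in values]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((x - mu) ** 2 for x in logs) / (n - 1)
    return math.exp(mu), math.exp(math.sqrt(var))

def annual_dose_msv(conc_bq_m3, occupancy_h=7000, eq_factor=0.4,
                    dcf_nsv_per_bqhm3=9.0):
    """Rough indoor effective dose (mSv/y): concentration x equilibrium
    factor x occupancy hours x dose conversion factor (nSv per Bq h m-3).
    All factor values here are assumptions, not the survey's."""
    return conc_bq_m3 * eq_factor * occupancy_h * dcf_nsv_per_bqhm3 * 1e-6

# Hypothetical annual-mean indoor concentrations (Bq/m3) at a few sites
gm, gsd = geometric_stats([12.0, 9.5, 22.1, 15.8, 31.0, 7.4, 18.3])
dose = annual_dose_msv(15.5)   # applied to the survey's indoor mean
```

    Applying such assumed factors to the 15.5 Bq m⁻³ indoor mean yields a few tenths of a mSv per year, the same order as the reported 0.49 mSv y⁻¹, which also folds in outdoor and workplace exposure.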

  7. Bayesian estimation applied to multiple species

    International Nuclear Information System (INIS)

    Kunz, Martin; Bassett, Bruce A.; Hlozek, Renee A.

    2007-01-01

    Observed data are often contaminated by undiscovered interlopers, leading to biased parameter estimation. Here we present BEAMS (Bayesian estimation applied to multiple species) which significantly improves on the standard maximum likelihood approach in the case where the probability for each data point being "pure" is known. We discuss the application of BEAMS to future type-Ia supernovae (SNIa) surveys, such as LSST, which are projected to deliver over a million supernovae light curves without spectra. The multiband light curves for each candidate will provide a probability of being Ia (pure) but the full sample will be significantly contaminated with other types of supernovae and transients. Given a sample of N supernovae with mean probability, ⟨P⟩, of being Ia, BEAMS delivers parameter constraints equal to ⟨P⟩N spectroscopically confirmed SNIa. In addition BEAMS can be simultaneously used to tease apart different families of data and to recover properties of the underlying distributions of those families (e.g. the type-Ibc and II distributions). Hence BEAMS provides a unified classification and parameter estimation methodology which may be useful in a diverse range of problems such as photometric redshift estimation or, indeed, any parameter estimation problem where contamination is an issue.

  8. Advancing US GHG Inventory by Incorporating Survey Data using Machine-Learning Techniques

    Science.gov (United States)

    Alsaker, C.; Ogle, S. M.; Breidt, J.

    2017-12-01

    Crop management data are used in the National Greenhouse Gas Inventory that is compiled annually and reported to the United Nations Framework Convention on Climate Change. Carbon stock changes and N2O emissions for US agricultural soils are estimated using the USDA National Resources Inventory (NRI). The NRI provides basic information on land use and cropping histories, but it does not provide much detail on other management practices. In contrast, the Conservation Effects Assessment Project (CEAP) survey collects detailed crop management data that could be used in the GHG Inventory. The CEAP data were collected every 10 years from a subset of the NRI survey locations. Therefore, imputation of the CEAP data is needed to represent the management practices across all NRI survey locations, both spatially and temporally. Predictive mean matching and artificial neural network methods have been applied to develop imputation models under a multiple imputation framework. Temporal imputation involves adjusting the imputation model using state-level USDA Agricultural Resource Management Survey data. Distributional and predictive accuracy is assessed for the imputed data, providing not only the management data needed for the inventory but also rigorous estimates of uncertainty.
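
    Predictive mean matching, one of the two imputation methods named above, can be sketched minimally: fit a model on complete cases, and for each incomplete case impute the observed value of a donor whose predicted value is closest to the recipient's. The single-covariate linear model and the data below are illustrative assumptions, far simpler than the multivariate survey setting.

```python
import random

def predictive_mean_matching(donors_x, donors_y, recipients_x, k=3, seed=0):
    """Minimal predictive-mean-matching sketch with one covariate and a
    simple linear model. Each recipient receives the *observed* value of
    one of the k donors whose *predicted* values are nearest its own
    prediction, so imputations are always realistic observed values."""
    rng = random.Random(seed)
    n = len(donors_x)
    mx = sum(donors_x) / n
    my = sum(donors_y) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(donors_x, donors_y))
            / sum((x - mx) ** 2 for x in donors_x))
    alpha = my - beta * mx
    donor_pred = [alpha + beta * x for x in donors_x]   # predictions for donors
    imputed = []
    for x in recipients_x:
        p = alpha + beta * x                            # recipient's prediction
        nearest = sorted(range(n), key=lambda i: abs(donor_pred[i] - p))[:k]
        imputed.append(donors_y[rng.choice(nearest)])   # draw one donor's value
    return imputed

# Hypothetical example: donors with observed (x, y); recipients missing y
filled = predictive_mean_matching([1, 2, 3, 4, 5], [2, 4, 6, 8, 10], [2.5])
```

    Because imputed values are always real observed values, PMM preserves plausible categories and distributions, which is why distributional accuracy is a natural check on the imputed data.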

  9. Survey of food radioactivity and estimation of internal dose from ingestion in China

    International Nuclear Information System (INIS)

    Zhang Jingyuan; Zhu Hongda; Han Peizhen

    1988-01-01

    In order to provide necessary bases for establishing 'Radionuclide Concentration Limits in Foodstuffs', a survey of radionuclide contents in Chinese food and an estimation of internal dose from ingestion were carried out with the cooperation of 30 radiation protection establishments during the period 1982-1986. Activity concentrations of 22 radionuclides were determined in 14 categories (27 kinds) of Chinese food. In the light of the three principal types of Chinese diet, food samples were collected from normal radiation background areas in 14 provinces or autonomous regions and from three elevated natural background areas. Annual intakes by ingestion and the resultant committed dose equivalents to the general public for 15 radionuclides in these areas were estimated. In normal background areas the total annual intake of the 15 radionuclides by the public (adult males) is about 4.2 x 10⁴ Bq, and the resultant total committed dose equivalent is about 3.43 x 10⁻⁴ Sv. In two elevated natural background areas the public annual intakes and resulting committed dose equivalents for some natural radionuclides are much higher than those in normal areas, while no obvious radioactive contamination was discovered. The relative contributions of each food category and each radionuclide to the totals are discussed.

  10. Prevalence of HIV among MSM in Europe: comparison of self-reported diagnoses from a large scale internet survey and existing national estimates

    Directory of Open Access Journals (Sweden)

    Marcus Ulrich

    2012-11-01

    Background: Country-level comparison of HIV prevalence among men having sex with men (MSM) is challenging for a variety of reasons, including differences in the definition and measurement of the denominator group, recruitment strategies and the HIV detection methods. To assess their comparability, self-reported data on HIV diagnoses in a 2010 pan-European MSM internet survey (EMIS) were compared with pre-existing estimates of HIV prevalence in MSM from a variety of European countries. Methods: The first pan-European survey of MSM recruited more than 180,000 men from 38 countries across Europe and included questions on the year and result of the last HIV test. HIV prevalence as measured in EMIS was compared with national estimates of HIV prevalence based on studies using biological measurements or modelling approaches, to explore the degree of agreement between the different methods. Existing estimates were taken from Dublin Declaration Monitoring Reports or UNAIDS country fact sheets, and were verified by contacting the nominated contact points for HIV surveillance in EU/EEA countries. Results: The EMIS self-reported measurements of HIV prevalence were strongly correlated with existing estimates based on biological measurement and on modelling studies using surveillance data (R² = 0.70 and 0.72, respectively). In most countries HIV-positive MSM appeared disproportionately likely to participate in EMIS, and prevalences as measured in EMIS are approximately twice the pre-existing estimates. Conclusions: Comparison of diagnosed HIV prevalence as measured in EMIS with pre-existing estimates based on biological measurements using varied sampling frames (e.g. Respondent Driven Sampling, Time and Location Sampling) demonstrates a high correlation and suggests similar selection biases in both types of studies. 
For comparison with modelled estimates the self-selection bias of the Internet survey with increased participation of men diagnosed with HIV has to be

  11. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  12. Age- and gender-specific estimates of partnership formation and dissolution rates in the Seattle sex survey.

    Science.gov (United States)

    Nelson, Sara J; Hughes, James P; Foxman, Betsy; Aral, Sevgi O; Holmes, King K; White, Peter J; Golden, Matthew R

    2010-04-01

    Partnership formation and dissolution rates are primary determinants of sexually transmitted infection (STI) transmission dynamics. The authors used data on persons' lifetime sexual experiences from a 2003-2004 random digit dialing survey of Seattle residents aged 18-39 years (N=1,194) to estimate age- and gender-specific partnership formation and dissolution rates. Partnership start and end dates were used to estimate participants' ages at the start of each partnership and partnership durations, and partnerships not enumerated in the survey were imputed. For women, partnership formation peaked at age 19 at 0.9 (95% confidence interval [CI]: 0.76-1.04) partnerships per year and decreased to 0.1 to 0.2 after age 30; for men, it peaked at age 20 at 1.4 (95% CI: 1.08-1.64) and declined to 0.5 after age 30. Nearly one fourth (23.7%) of partnerships ended within 1 week and more than one half (51.2%) ended within 12 weeks. Most (63.5%) individuals 30 to 39 years of age had not formed a new sexual partnership in the past 3 years. A large proportion of the heterosexual population is no longer at substantial STI risk by their early 30s, but similar analyses among high-risk populations may give insight into reasons for the profound disparities in STI rates across populations. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  13. ESTIMATION OF INSULATOR CONTAMINATIONS BY MEANS OF REMOTE SENSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    G. Han

    2016-06-01

    The accurate estimation of deposits adhering to insulators is critical to prevent the pollution flashovers which cause huge costs worldwide. The traditional evaluation method for insulator contamination (IC) is based on sparse manual in-situ measurements, resulting in insufficient spatial representativeness and poor timeliness. Filling that gap, we propose a novel evaluation framework for IC based on remote sensing and data mining. A variety of products derived from satellite data, such as aerosol optical depth (AOD), a digital elevation model (DEM), land use and land cover, and the normalized difference vegetation index, were obtained to estimate the severity of IC, along with the necessary field investigation inventory (pollution sources, ambient atmosphere and meteorological data). Rough set theory was utilized to minimize the input sets under the prerequisite that the resultant set is equivalent to the full sets in terms of the ability to distinguish severity levels of IC. We found that AOD, the strength of the pollution source and the precipitation are the top 3 decisive factors for estimating insulator contamination. On that basis, different classification algorithms such as Mahalanobis minimum distance, support vector machine (SVM) and maximum likelihood were utilized to estimate severity levels of IC. 10-fold cross-validation was carried out to evaluate the performance of the different methods. SVM yielded the best overall accuracy among the three algorithms. An overall accuracy of more than 70% was achieved, suggesting a promising application of remote sensing in power maintenance. To our knowledge, this is the first trial to introduce remote sensing and relevant data analysis techniques into the estimation of electrical insulator contamination.
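
    The 10-fold cross-validation protocol mentioned above can be sketched with a minimum-distance (nearest-centroid) classifier standing in for the Mahalanobis/SVM/maximum-likelihood classifiers of the study; the two-class synthetic "severity" data are hypothetical:

```python
import random

def k_fold_accuracy(X, y, k=10, seed=42):
    """k-fold cross-validation of a minimum-distance (nearest-centroid)
    classifier: each fold is held out once, centroids are fit on the
    remaining folds, and accuracy is pooled over all held-out points."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    rng.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    correct = 0
    for fold in folds:
        held_out = set(fold)
        train = [i for i in idx if i not in held_out]
        # class centroids from the training split only
        cent = {}
        for lab in set(y):
            pts = [X[i] for i in train if y[i] == lab]
            cent[lab] = [sum(coord) / len(pts) for coord in zip(*pts)]
        for i in fold:
            pred = min(cent, key=lambda lab: sum(
                (a - b) ** 2 for a, b in zip(X[i], cent[lab])))
            correct += int(pred == y[i])
    return correct / len(X)

# Two well-separated synthetic "severity" classes in two feature dimensions
gen = random.Random(0)
X = ([(gen.gauss(0, 1), gen.gauss(0, 1)) for _ in range(50)]
     + [(gen.gauss(5, 1), gen.gauss(5, 1)) for _ in range(50)])
y = [0] * 50 + [1] * 50
acc = k_fold_accuracy(X, y)
```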

  14. Assessing damage cost estimation of urban pluvial flood risk as a mean of improving climate change adaptations investments

    DEFF Research Database (Denmark)

    Skovgård Olsen, Anders; Zhou, Qianqian; Linde, Jens Jørgen

    Estimating the expected annual damage (EAD) due to flooding in an urban area is of great interest for urban water managers and other stakeholders. It is a strong indicator for a given area showing how it will be affected by climate change and how much can be gained by implementing adaptation...... measures. This study investigates three different methods for estimating the EAD based on a log-linear relation between the damage costs and the return periods, one of which has been used in previous studies. The results show that, with the increased amount of data points, there appears to be a shift in the log-linear relation, which could be attributed to the Danish design standards for drainage systems. Three different methods for estimating the EAD were tested and the choice of method is less important than accounting for the log-linear shift. This then also means that the statistical approximation of the EAD used...
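
    The EAD computation underlying this comparison can be sketched as integrating a log-linear damage curve over the annual exceedance probability p = 1/T. The damage-curve coefficients, design threshold, and units below are hypothetical assumptions, not values from the study.

```python
import math

def damage(T, a=0.5, b=2.0, T_min=2.0):
    """Hypothetical log-linear damage curve: cost (assumed monetary units)
    as a function of return period T in years; zero below the design level."""
    return a + b * math.log(T) if T >= T_min else 0.0

def expected_annual_damage(T_min=2.0, T_max=1000.0, n=20000):
    """EAD as the integral of damage over annual exceedance probability
    p = 1/T, evaluated with the trapezoidal rule."""
    p_hi = 1.0 / T_min
    p_lo = 1.0 / T_max
    dp = (p_hi - p_lo) / n
    total = 0.0
    for i in range(n + 1):
        p = p_lo + i * dp
        w = 0.5 if i in (0, n) else 1.0     # trapezoidal end weights
        total += w * damage(1.0 / p)
    return total * dp

ead = expected_annual_damage()
```

    A shift in the log-linear relation (for example near the design return period) changes the coefficients piecewise, which is why accounting for the shift can matter more than the choice of integration method.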

  15. Population hemoglobin mean and anemia prevalence in Papua New Guinea: new metrics for defining malaria endemicity?

    Directory of Open Access Journals (Sweden)

    Nicolas Senn

    BACKGROUND: The hypothesis is that hemoglobin-based metrics are useful tools for estimating malaria endemicity and for monitoring malaria control strategies. The aim of this study is to compare population hemoglobin mean and anemia prevalence to established indicators of malaria endemicity, including parasite rates, rates of enlarged spleens in children, and records of (presumptive) malaria diagnosis among populations living with different levels of malaria transmission. METHODOLOGY/PRINCIPAL FINDINGS: Convenience-sample, multisite cross-sectional household surveys were conducted in Papua New Guinea. Correlations (r²) between population Hb mean and anemia prevalence and altitude, parasite rate, and spleen rate were investigated in children ages 2 to 10 years and in the general population; 21,664 individuals from 156 different communities were surveyed. Altitude ranged from 5 to 2120 meters. In young children, correlations between altitude and parasite rate, population Hb mean, anemia prevalence, and spleen rate were high (r²: -0.77, 0.73, -0.81, and -0.68, respectively; p<0.001), and were weaker at altitudes above 1500 m (p<0.001). CONCLUSIONS/SIGNIFICANCE: In PNG, where Plasmodium vivax accounts for an important part of all malaria infections, population hemoglobin mean and anemia prevalence correlate well with altitude, parasite, and spleen rates. Hb measurement is simple and affordable, and may be a useful new tool, alone or in association with other metrics, for estimating malaria endemicity and monitoring the effectiveness of malaria control programs. Further prospective studies in areas with different malaria epidemiology and different factors contributing to the burden of anemia are warranted to investigate the usefulness of Hb metrics in monitoring malaria transmission intensity.

  16. Relationship between mean daily energy intake and frequency of consumption of out-of-home meals in the UK National Diet and Nutrition Survey.

    Science.gov (United States)

    Goffe, Louis; Rushton, Stephen; White, Martin; Adamson, Ashley; Adams, Jean

    2017-09-22

    Out-of-home meals have been characterised as delivering excessively large portions that can lead to high energy intake. Regular consumption is linked to weight gain and diet related diseases. Consumption of out-of-home meals is associated with socio-demographic and anthropometric factors, but the relationship between habitual consumption of such meals and mean daily energy intake has not been studied in both adults and children in the UK. We analysed adult and child data from waves 1-4 of the UK National Diet and Nutrition Survey using generalized linear modelling. We investigated whether individuals who report a higher habitual consumption of meals out in a restaurant or café, or takeaway meals at home had a higher mean daily energy intake, as estimated by a four-day food diary, whilst adjusting for key socio-demographic and anthropometric variables. Adults who ate meals out at least weekly had a higher mean daily energy intake consuming 75-104 kcal more per day than those who ate these meals rarely. The equivalent figures for takeaway meals at home were 63-87 kcal. There was no association between energy intake and frequency of consumption of meals out in children. Children who ate takeaway meals at home at least weekly consumed 55-168 kcal more per day than those who ate these meals rarely. Additionally, in children, there was an interaction with socio-economic position, where greater frequency of consumption of takeaway meals was associated with higher mean daily energy intake in those from less affluent households than those from more affluent households. Higher habitual consumption of out-of-home meals is associated with greater mean daily energy intake in the UK. More frequent takeaway meal consumption in adults and children is associated with greater daily energy intake and this effect is greater in children from less affluent households. Interventions seeking to reduce energy content through reformulation or reduction of portion sizes in restaurants

  17. Conversion factors for estimating release rate of gaseous radioactivity by an aerial survey

    International Nuclear Information System (INIS)

    Saito, Kimiaki; Moriuchi, Shigeru

    1988-02-01

    Conversion factors necessary for estimating the release rate of gaseous radioactivity by an aerial survey are presented. The conversion factors were determined by calculation assuming a Gaussian plume model, as a function of atmospheric stability, down-wind distance and flight height. First, the conversion factors for plumes emitting mono-energetic gamma rays were calculated; then conversion factors were constructed through convolution for the radionuclides important in a nuclear reactor accident, and for mixtures of these radionuclides considering the elapsed time after shutdown. These conversion factors are shown in figures, and polynomial expressions for the conversion factors as a function of height have also been determined with the least-squares method. A user can easily obtain the proper conversion factors from the data shown here. (author)
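
    Fitting a polynomial to a conversion factor as a function of flight height with the least-squares method, as done in the report, can be sketched by solving the normal equations directly. The height/factor values below are hypothetical illustrations, not the report's tabulated factors.

```python
def polyfit_least_squares(xs, ys, degree=2):
    """Least-squares polynomial fit (coefficients with the constant term
    first), obtained by solving the normal equations with Gaussian
    elimination and partial pivoting."""
    m = degree + 1
    # Normal-equation matrix A[i][j] = sum x^(i+j), RHS b[i] = sum y * x^i
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Forward elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    coef = [0.0] * m
    for i in reversed(range(m)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, m))) / A[i][i]
    return coef

# Hypothetical conversion factors at flight heights of 50-300 m
heights = [50, 100, 150, 200, 250, 300]
factors = [1.00, 0.62, 0.41, 0.28, 0.20, 0.15]   # illustrative values only
coef = polyfit_least_squares(heights, factors)
```

    Evaluating the fitted polynomial at a survey's flight height then yields the conversion factor to apply to the measured count rate.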

  18. Estimation of mean time to failure of a near surface radioactive waste repository for PWR power stations

    International Nuclear Information System (INIS)

    Aguiar, Lais A. de; Frutuoso e Melo, P.F.; Alvim, Antonio C.M.

    2007-01-01

    This work aims at estimating the mean time to failure (MTTF) of each barrier of a near surface radioactive waste repository. It is assumed that surface water infiltrates through the barriers, reaching the matrix where radionuclides are contained and releasing them to the environment. The radioactive wastes considered in this work are low and medium level wastes (produced during operation of a PWR nuclear power station) fixed in cement. The repository consists of 6 saturated porous-media barriers (top cover, upper layer, packages, base, repository walls and geosphere). It has been verified that the mean time to failure (MTTF) of each barrier increases for radionuclides having a higher retardation factor (Fr), and also that the MTTF for concrete is largest for nickel, while for the geosphere, plutonium gives the largest MTTF. (author)

  19. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    Directory of Open Access Journals (Sweden)

    Alexander Richard Braczkowski

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and a 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or the temporal activity of female (p = 0.12) or male (p = 0.79) leopards, and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for the control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28-9.28 leopards/100 km²) were considerably higher than estimates from spatially explicit methods (3.40-3.65 leopards/100 km²). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted.

  20. 2015-2016 Palila abundance estimates

    Science.gov (United States)

    Camp, Richard J.; Brinck, Kevin W.; Banko, Paul C.

    2016-01-01

    The palila (Loxioides bailleui) population was surveyed annually during 1998−2016 on Mauna Kea Volcano to determine abundance, population trend, and spatial distribution. In the latest surveys, the 2015 population was estimated at 852−1,406 birds (point estimate: 1,116) and the 2016 population was estimated at 1,494−2,385 (point estimate: 1,934). Similar numbers of palila were detected during the first and subsequent counts within each year during 2012−2016; the proportion of the total annual detections in each count ranged from 46% to 56%; and there was no difference in the detection probability due to count sequence. Furthermore, conducting repeat counts improved the abundance estimates by reducing the width of the confidence intervals between 9% and 32% annually. This suggests that multiple counts do not affect bird or observer behavior and can be continued in the future to improve the precision of abundance estimates. Five palila were detected on supplemental survey stations in the Ka‘ohe restoration area, outside the core survey area but still within Palila Critical Habitat (one in 2015 and four in 2016), suggesting that palila are present in habitat that is recovering from cattle grazing on the southwest slope. The average rate of decline during 1998−2016 was 150 birds per year. Over the 18-year monitoring period, the estimated rate of change equated to a 58% decline in the population.

  1. Genetic algorithm-based optimization of testing and maintenance under uncertain unavailability and cost estimation: A survey of strategies for harmonizing evolution and accuracy

    International Nuclear Information System (INIS)

    Villanueva, J.F.; Sanchez, A.I.; Carlos, S.; Martorell, S.

    2008-01-01

    This paper presents the results of a survey to show the applicability of an approach based on a combination of distribution-free tolerance intervals and genetic algorithms for testing and maintenance optimization of safety-related systems, with unavailability and cost estimation acting as uncertain decision criteria. Several strategies have been checked using a combination of Monte Carlo (simulation) and genetic algorithm (search-evolution) techniques. Tolerance intervals for the unavailability and cost estimates are obtained to be used by the genetic algorithms. Both single- and multiple-objective genetic algorithms are used. In general, it is shown that the approach is a robust, fast and powerful tool that performs very favorably in the face of noise in the output (i.e. uncertainty) and is able to find the optimum over a complicated, high-dimensional nonlinear space in a tiny fraction of the time required for enumeration of the decision space. This approach reduces the computational effort by providing an appropriate balance between accuracy of simulation and evolution; however, negative effects are also shown when a poorly balanced accuracy-evolution couple is used, which can be avoided or mitigated with the use of a single-objective genetic algorithm or a multiple-objective genetic algorithm with additional statistical information.
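
    The tolerance-interval step can be sketched with Wilks' classical distribution-free result (a simplification; the `simulate` stand-in below is hypothetical, not the paper's plant model):

```python
import random

def wilks_upper_bound(simulate, n=59, seed=1):
    """One-sided 95%/95% distribution-free tolerance bound (Wilks): with
    n = 59 independent Monte Carlo runs, the sample maximum bounds the
    95th percentile of the output with 95% confidence. A bound like this
    on unavailability or cost is what the genetic algorithm would consume
    as an uncertain fitness value."""
    rng = random.Random(seed)
    return max(simulate(rng) for _ in range(n))

# toy stand-in for the Monte Carlo plant unavailability model
bound = wilks_upper_bound(lambda rng: rng.uniform(0.001, 0.01))
print(0.001 <= bound <= 0.01)  # -> True
```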

  2. Repeat participation in annual cross-sectional surveys of drug users and its implications for analysis.

    Science.gov (United States)

    Agius, P A; Aitken, C K; Breen, C; Dietze, P M

    2018-06-04

    We sought to establish the extent of repeat participation in a large annual cross-sectional survey of people who inject drugs and assess its implications for analysis. We used "porn star names" (the name of each participant's first pet followed by the name of the first street in which they lived) to identify repeat participation in three Australian Illicit Drug Reporting System surveys. Over 2013-2015, 2468 porn star names (96.2%) appeared only once, 88 (3.4%) twice, and nine (0.4%) in all 3 years. We measured design effects, based on the between-cluster variability for selected estimates, of 1.01-1.07 for seven key variables. These values indicate that the complex sample is (e.g.) 7% less efficient in estimating prevalence of heroin use (ever) than a simple random sample, and 1% less efficient in estimating number of heroin overdoses (ever). Porn star names are a useful means of tracking research participants longitudinally while maintaining their anonymity. Repeat participation in the Australian Illicit Drug Reporting System is low (less than 5% per annum), meaning point-prevalence and effect estimation without correction for the lack of independence in observations is unlikely to seriously affect population inference.
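
    The reported design-effect range can be cross-checked from the participation figures above with the standard cluster-sampling approximation deff = 1 + (m - 1) * rho, treating each participant as a "cluster" of their repeated interviews (taking rho = 1, the worst case, is an assumption for illustration):

```python
def design_effect(cluster_counts, rho):
    """Approximate design effect deff = 1 + (m_bar - 1) * rho, where
    cluster_counts maps cluster size -> number of clusters and m_bar is
    the mean number of interviews per participant."""
    n = sum(size * count for size, count in cluster_counts.items())
    clusters = sum(cluster_counts.values())
    m_bar = n / clusters
    return 1 + (m_bar - 1) * rho

# 2468 names seen once, 88 twice, 9 in all three years (2013-2015)
deff_max = design_effect({1: 2468, 2: 88, 3: 9}, rho=1.0)
print(round(deff_max, 3))  # -> 1.041, consistent with the reported 1.01-1.07
```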

  3. Estimation of light commercial vehicles dynamics by means of HIL-testbench simulation

    Science.gov (United States)

    Groshev, A.; Tumasov, A.; Toropov, E.; Sereda, P.

    2018-02-01

    The high level of active safety of vehicles is impossible without driver-assistance electronic systems, and the electronic stability control (ESC) system is one of them. Nowadays such systems are obligatory for installation on vehicles of several categories. The active safety level of vehicles with ESC can be verified by means of high-speed road tests; the most frequently implemented are the "fish hook" and "sine with dwell" tests. Tests of this kind are provided for by Global Technical Regulation No. 8, published by the United Nations Economic Commission for Europe, as well as by ECE 13-11. At the same time, road tests are not the only way to estimate vehicle dynamics: modern software and hardware technologies allow real tests to be imitated with acceptable reliability and good convergence between real test data and simulation results. ECE 13-11, Annex 21, Appendix 1, "Use of the Dynamic Stability Simulation", specifies requirements for a simulation test bench that can be used not only for preliminary estimation of vehicle dynamics but also for official vehicle homologation. This paper describes the approach proposed by researchers from Nizhny Novgorod State Technical University n.a. R.E. Alekseev (NNSTU, Russia) with the support of engineers of United Engineering Center GAZ Group and specialists of Gorky Automobile Plant. The idea of the approach is to use a special HIL (hardware-in-the-loop) test bench consisting of a real-time PC with real-time software and braking-system components, including the electronic control unit (ECU) of the ESC system. The HIL test bench allows vehicle dynamics to be imitated under the conditions of the "fish hook" and "sine with dwell" tests. The paper describes the scheme and structure of the HIL test bench and some peculiarities that should be taken into account during HIL simulation.

  4. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in the life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
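
    The IST point estimator itself is a simple difference of means; a sketch under simple random sampling (all data values below are hypothetical):

```python
def ist_mean_estimate(long_list_totals, short_list_totals):
    """Item sum technique point estimate: the long-list sample reports the
    total of several innocuous items plus the sensitive quantitative item;
    the short-list sample reports the innocuous items only. The difference
    of sample means estimates the sensitive-item population mean."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(long_list_totals) - mean(short_list_totals)

# hypothetical reported totals from the two subsamples
print(ist_mean_estimate([12, 9, 15, 10], [8, 7, 11, 10]))  # -> 2.5
```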

  5. Mean field games

    KAUST Repository

    Gomes, Diogo A.

    2014-01-06

    In this talk we will report on new results concerning the existence of smooth solutions for time dependent mean-field games. This new result is established through a combination of various tools including several a-priori estimates for time-dependent mean-field games combined with new techniques for the regularity of Hamilton-Jacobi equations.

  7. Data Processing Procedures and Methodology for Estimating Trip Distances for the 1995 American Travel Survey (ATS)

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, H.-L.; Rollow, J.

    2000-05-01

    The 1995 American Travel Survey (ATS) collected information from approximately 80,000 U.S. households about their long distance travel (one-way trips of 100 miles or more) during the year of 1995. It is the most comprehensive survey of where, why, and how U.S. residents travel since 1977. ATS is a joint effort by the U.S. Department of Transportation (DOT) Bureau of Transportation Statistics (BTS) and the U.S. Department of Commerce Bureau of Census (Census); BTS provided the funding and supervision of the project, and Census selected the samples, conducted interviews, and processed the data. This report documents the technical support for the ATS provided by the Center for Transportation Analysis (CTA) in Oak Ridge National Laboratory (ORNL), which included the estimation of trip distances as well as data quality editing and checking of variables required for the distance calculations.
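
    The distance side of this processing can be illustrated with a great-circle sketch (the actual ORNL trip-distance methodology is more detailed than a straight-line calculation; the coordinates below are approximate and purely illustrative):

```python
import math

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in statute miles. Illustrative only:
    the ORNL/ATS processing estimated trip distances with more detailed
    methods than a straight-line calculation."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# a one-way trip qualifies for the ATS if it covers 100 miles or more;
# approximate Knoxville -> Nashville coordinates (hypothetical example)
d = great_circle_miles(35.96, -83.92, 36.17, -86.78)
print(d >= 100)  # -> True
```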

  8. The Gender Wage Gap in Croatia – Estimating the Impact of Differing Rewards by Means of Counterfactual Distributions

    Directory of Open Access Journals (Sweden)

    Danijel Nestić

    2010-04-01

    Full Text Available The aim of this paper is to estimate the size of, changes in, and main factors contributing to gender-based wage differentials in Croatia. It utilizes microdata from the Labor Force Surveys of 1998 and 2008 and applies both OLS and quantile regression techniques to assess the gender wage gap across the wage distribution. The average unadjusted gender wage gap is found to be relatively low and declining. This paper argues that employed women in Croatia possess higher-quality labor market characteristics than men, especially in terms of education, but receive much lower rewards for these characteristics. The Machado-Mata decomposition technique is used to estimate the gender wage gap as the sole effect of differing rewards. The results suggest that due to differing rewards the gap exceeds 20 percent on average - twice the size of the unadjusted gap - and that it increased somewhat between 1998 and 2008. The gap is found to be the highest at the lower-to-middle part of the wage distribution.

  9. A Life in the Universe Survey

    Science.gov (United States)

    LoPresto, Michael C.; Hubble-Zdanowski, Jennifer

    2012-01-01

    The "Life in the Universe Survey" is a twelve-question assessment instrument. Largely based on the factors of the Drake equation, it is designed to survey students' initial estimates of its factors and to gauge how estimates change with instruction. The survey was used in sections of a seminar course focusing specifically on life in the universe…

  10. Using the Pareto Distribution to Improve Estimates of Topcoded Earnings

    OpenAIRE

    Philip Armour; Richard V. Burkhauser; Jeff Larrimore

    2014-01-01

    Inconsistent censoring in the public-use March Current Population Survey (CPS) limits its usefulness in measuring labor earnings trends. Using Pareto estimation methods with less-censored internal CPS data, we create an enhanced cell-mean series to capture top earnings in the public-use CPS. We find that previous approaches for imputing topcoded earnings systematically understate top earnings. Annual earnings inequality trends since 1963 using our series closely approximate those found by Kop...
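
    The cell-mean construction rests on the fact that a Pareto tail with shape alpha > 1 has conditional mean alpha*t/(alpha - 1) above a topcode t; a sketch with a hypothetical tail sample (the authors work with internal CPS data and more careful estimation):

```python
import math

def pareto_cell_mean(tail_values, topcode):
    """Mean earnings above a topcode, assuming a Pareto upper tail.
    tail_values: less-censored observations above the topcode (e.g. from
    internal data); topcode: the public-use censoring point."""
    n = len(tail_values)
    # Hill estimator of the Pareto shape parameter alpha
    alpha = n / sum(math.log(x / topcode) for x in tail_values)
    if alpha <= 1:
        raise ValueError("alpha must exceed 1 for a finite mean")
    # E[X | X > t] = alpha * t / (alpha - 1) for a Pareto tail
    return alpha * topcode / (alpha - 1)

# synthetic tail consistent with alpha = 2 and a topcode of 100 (units arbitrary)
vals = [100 * math.exp(0.5)] * 10
print(round(pareto_cell_mean(vals, 100), 1))  # -> 200.0
```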

  11. Estimation of Finite Population Mean in Multivariate Stratified Sampling under Cost Function Using Goal Programming

    Directory of Open Access Journals (Sweden)

    Atta Ullah

    2014-01-01

    Full Text Available In practical utilization of the stratified random sampling scheme, the investigator meets the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. An allocation of sample size becomes complicated when more than one characteristic is observed from each selected unit in a sample. In many real-life situations, a linear cost function of the sample size nh is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
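
    For contrast with the paper's nonlinear formulation, the classical linear-cost baseline it generalizes can be sketched (stratum sizes N, stratum standard deviations S, unit costs c and the budget below are hypothetical):

```python
import math

def optimum_allocation(N, S, c, budget):
    """Textbook cost-constrained optimum allocation n_h ∝ N_h*S_h/sqrt(c_h)
    under a linear cost function sum(c_h * n_h) = budget. The paper argues
    this linear cost is a poor approximation when travel costs between
    selected units matter, which motivates its nonlinear formulation."""
    weights = [Nh * Sh / math.sqrt(ch) for Nh, Sh, ch in zip(N, S, c)]
    k = budget / sum(wh * ch for wh, ch in zip(weights, c))
    return [k * wh for wh in weights]

# hypothetical three-stratum example
n = optimum_allocation(N=[500, 300, 200], S=[10, 20, 5], c=[4, 1, 9], budget=1000)
print([round(x, 1) for x in n])
```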

  12. THE ASSESSMENT OF GEOTHERMAL POTENTIAL OF TURKEY BY MEANS OF HEAT FLOW ESTIMATION

    Directory of Open Access Journals (Sweden)

    UĞUR AKIN

    2014-12-01

    Full Text Available In this study, the heat flow distribution of Turkey was investigated in the interest of exploring new geothermal fields in addition to known ones. For this purpose, the geothermal gradient was estimated from the Curie point depth map obtained from airborne magnetic data by means of the power spectrum method. By multiplying the geothermal gradient with thermal conductivity values, the heat flow map of Turkey was obtained. The average value in the heat flow map of Turkey was determined as 74 mW/m2. It points out the existence of geothermal energy resources larger than the average of the world's resources. In terms of geothermal potential, the most significant region of Turkey is Aydin and its surroundings, with a value exceeding 200 mW/m2. On the contrary, the value decreases below 30 mW/m2 in the region bordered by Aksaray, Niğde, Karaman and Konya. The necessity of conducting detailed additional studies for the East Black Sea, East and Southeast Anatolia is also revealed.
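
    The gradient-times-conductivity step reduces to one line; a sketch in which the thermal conductivity, Curie temperature and surface temperature are assumed illustrative values, not figures from the paper:

```python
def heat_flow_mW_m2(curie_depth_km, thermal_conductivity=2.5,
                    curie_temp_C=580.0, surface_temp_C=15.0):
    """Heat flow = thermal conductivity (W/m/K) x geothermal gradient (K/m),
    converted to mW/m^2. The gradient is taken as linear from the surface
    to the Curie point depth; conductivity and temperatures are assumed
    illustrative values, not figures from the paper."""
    gradient_K_per_m = (curie_temp_C - surface_temp_C) / (curie_depth_km * 1000.0)
    return thermal_conductivity * gradient_K_per_m * 1000.0

# e.g. a hypothetical 19 km Curie depth
print(round(heat_flow_mW_m2(19.0), 1))  # -> 74.3
```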

  13. Influence of the level of fit of a density probability function to wind-speed data on the WECS mean power output estimation

    Energy Technology Data Exchange (ETDEWEB)

    Carta, Jose A. [Department of Mechanical Engineering, University of Las Palmas de Gran Canaria, Campus de Tafira s/n, 35017 Las Palmas de Gran Canaria, Canary Islands (Spain); Ramirez, Penelope; Velazquez, Sergio [Department of Renewable Energies, Technological Institute of the Canary Islands, Pozo Izquierdo Beach s/n, 35119 Santa Lucia, Gran Canaria, Canary Islands (Spain)

    2008-10-15

    Static methods based on statistical techniques to estimate the mean power output of a WECS (wind energy conversion system) have been widely employed in the scientific literature related to wind energy. In the static method used in this paper, for a given wind regime probability distribution function and a known WECS power curve, the mean power output of a WECS is obtained by resolving the integral, usually using numerical evaluation techniques, of the product of these two functions. In this paper an analysis is made of the influence of the level of fit between the empirical probability density function of a sample of wind speeds and the probability density function of the adjusted theoretical model on the relative error ε made in the estimation of the mean annual power output of a WECS. The mean power output calculated through the use of a quasi-dynamic or chronological method, that is to say using time series of wind speed data and the power versus wind speed characteristic of the wind turbine, serves as the reference. The suitability of the distributions is judged from the adjusted R² statistic (Ra²). Hourly mean wind speeds recorded at 16 weather stations located in the Canarian Archipelago, an extensive catalogue of wind-speed probability models and two wind turbines of 330 and 800 kW rated power are used in this paper. Among the general conclusions obtained, the following can be pointed out: (a) the Ra² statistic might be useful as an initial gross indicator of the relative error made in the mean annual power output estimation of a WECS when a probabilistic method is employed; (b) the relative errors tend to decrease, in accordance with a trend line defined by a second-order polynomial, as Ra² increases. (author)
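
    The static method described above is a single numerical integral of the power curve against the wind-speed density; a sketch with an assumed Weibull wind regime and a toy piecewise-linear power curve (both are illustrative assumptions, not the paper's data):

```python
import math

def weibull_pdf(v, k, c):
    """Weibull wind-speed probability density (shape k, scale c in m/s)."""
    if v <= 0:
        return 0.0
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def mean_power_output(power_curve, k, c, v_max=30.0, dv=0.01):
    """Static-method sketch: numerically integrate P(v) * f(v) over the
    operating range (rectangle rule)."""
    steps = int(v_max / dv)
    return sum(power_curve(i * dv) * weibull_pdf(i * dv, k, c) * dv
               for i in range(steps + 1))

def power_curve(v):
    """Toy power curve for an assumed 330 kW turbine: cut-in 4 m/s,
    rated 14 m/s, cut-out 25 m/s, linear ramp in between."""
    if v < 4 or v > 25:
        return 0.0
    if v >= 14:
        return 330.0
    return 330.0 * (v - 4) / 10

p_mean = mean_power_output(power_curve, k=2.0, c=8.0)  # mean power, kW
print(round(p_mean, 1))
```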

  14. National Diet and Nutrition Survey: fat and fatty acid intakes from the first year of the rolling programme and comparison with previous surveys

    Science.gov (United States)

    Pot, Gerda K.; Prynne, Celia J.; Roberts, Caireen; Olson, Ashley; Nicholson, Sonja K.; Whitton, Clare; Teucher, Birgit; Bates, Beverley; Henderson, Helen; Pigott, Sarah; Swan, Gillian; Stephen, Alison M.

    2012-01-01

    High saturated fat intake is an established risk factor for several chronic diseases. The objective of the present study is to report dietary intakes and main food sources of fat and fatty acids (FA) from the first year of the National Diet and Nutrition Survey (NDNS) rolling programme in the UK. Dietary data were collected using 4 d estimated food diaries (n = 896) and compared with dietary reference values (DRV) and previous NDNS results. Total fat provided 34–36% food energy (FE) across all age groups, which was similar to previous surveys for adults. Men (19–64 years) and older girls (11–18 years) had mean intakes just above the DRV, while all other groups had mean total fat intakes of <35% FE. SFA intakes were lower compared with previous surveys, ranging from 13 to 15% FE, but still above the DRV. Mean MUFA intakes were 12.5% FE for adults and children aged 4–18 years and all were below the DRV. Mean n–3 PUFA intake represented 0.7–1.1% FE. Compared with previous survey data, the direction of change for n–3 PUFA was upwards for all age groups, although the differences in absolute terms were very small. Trans-FA intakes were lower than in previous NDNS and were less than 2 g/d for all age groups, representing 0.8% FE and lower than the DRV in all age groups. In conclusion, dietary intake of fat and FA is moving towards recommended levels for the UK population. However, there remains room for considerable further improvement. PMID:21767448

  15. Estimating infertility prevalence in low-to-middle-income countries: an application of a current duration approach to Demographic and Health Survey data

    OpenAIRE

    Polis, Chelsea B.; Cox, Carie M.; Tunçalp, Özge; McLain, Alexander C.; Thoma, Marie E.

    2017-01-01

    Abstract STUDY QUESTION Can infertility prevalence be estimated using a current duration (CD) approach when applied to nationally representative Demographic and Health Survey (DHS) data collected routinely in low- or middle-income countries? SUMMARY ANSWER Our analysis suggests that a CD approach applied to DHS data from Nigeria provides infertility prevalence estimates comparable to other smaller studies in the same region. WHAT IS KNOWN ALREADY Despite associations with serious negative hea...

  16. Comparison of Paper-and-Pencil versus Web Administration of the Youth Risk Behavior Survey (YRBS): Risk Behavior Prevalence Estimates

    Science.gov (United States)

    Eaton, Danice K.; Brener, Nancy D.; Kann, Laura; Denniston, Maxine M.; McManus, Tim; Kyle, Tonja M.; Roberts, Alice M.; Flint, Katherine H.; Ross, James G.

    2010-01-01

    The authors examined whether paper-and-pencil and Web surveys administered in the school setting yield equivalent risk behavior prevalence estimates. Data were from a methods study conducted by the Centers for Disease Control and Prevention (CDC) in spring 2008. Intact classes of 9th- or 10th-grade students were assigned randomly to complete a…

  17. Estimating costs of pressure area management based on a survey of ulcer care in one Irish hospital.

    Science.gov (United States)

    Gethin, G; Jordan-O'Brien, J; Moore, Z

    2005-04-01

    Pressure ulceration remains a significant cause of morbidity for patients and has a real economic impact on the health sector. Studies to date have estimated the cost of management but have not always given a breakdown of how these figures were calculated. There are no published studies that have estimated the cost of management of pressure ulcers in Ireland. A two-part study was therefore undertaken. Part one determined the prevalence of pressure ulcers in a 626-bed Irish acute hospital. Part two set out to derive a best estimate of the cost of managing pressure ulcers in Ireland. The European Pressure Ulcer Advisory Panel (EPUAP) minimum data set tool was used to complete the prevalence survey. Tissue viability nurses trained in the data-collection tool collected the data. A cost was obtained for all items of care for the management of one patient with three grade IV pressure ulcers over a five-month period. Of the patients, 2.5% had pressure ulcers. It cost Euros 119,000 to successfully treat one patient. We estimate that it costs Euros 250,000,000 per annum to manage pressure ulcers across all care settings in Ireland.

  18. Estimation of mean glandular dose for patients who undergo mammography and studying the factors affecting it

    Science.gov (United States)

    Barzanje, Sana L. N. H.; Harki, Edrees M. Tahir Nury

    2017-09-01

    The objective of this study was to determine the mean glandular dose (MGD) during diagnostic mammography. The study was done in two hospitals in Hawler city in the Kurdistan region of Iraq; the exposure parameters kVp and mAs were recorded for 40 patients undergoing mammography. The MGD was estimated by multiplying the ESD with the normalized glandular dose (Dn). The ESD was measured indirectly by measuring the output radiation (mGy/mAs) using a PalmRAD 907 as a suitable (Geiger) detector. The results show that the mean and standard deviation of MGD for screen-film mammography and digital mammography are (0.95±0.18) mGy and (0.99±0.26) mGy, respectively, and that there is a significant difference between MGD for screen-film and digital mammography views (p ≤ 0.05). The mean value and standard deviation of MGD for screen-film mammography are (0.96±0.21) mGy for the CC projection and (1.03±0.3) mGy for the MLO projection, while for digital mammography they are (0.92±0.17) mGy for the CC projection and (0.98±0.2) mGy for the MLO projection. The effect of kVp and mAs on MGD was also studied, showing that, in general, as kVp and mAs increase the MGD increases accordingly in both mammography systems.
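
    The estimation step reduces to MGD = ESD × Dn, with ESD obtained from tube output and the recorded mAs; a sketch in which the output, mAs and Dn values are hypothetical, chosen only to land near the reported dose range:

```python
def mean_glandular_dose(output_mGy_per_mAs, mAs, dn):
    """MGD = ESD x Dn, where ESD (entrance surface dose, mGy) is obtained
    indirectly as tube output (mGy/mAs) times the recorded mAs, and Dn is
    the normalized glandular dose conversion factor."""
    esd = output_mGy_per_mAs * mAs
    return esd * dn

# hypothetical exposure: 0.08 mGy/mAs output, 60 mAs, Dn = 0.20
print(round(mean_glandular_dose(0.08, 60, 0.20), 2))  # -> 0.96
```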

  19. Estimating cetacean density and abundance in the Central and Western Mediterranean Sea through aerial surveys: Implications for management

    Science.gov (United States)

    Panigada, Simone; Lauriano, Giancarlo; Donovan, Greg; Pierantonio, Nino; Cañadas, Ana; Vázquez, José Antonio; Burt, Louise

    2017-07-01

    Systematic, effective monitoring of animal population parameters underpins successful conservation strategy and wildlife management, but it is often neglected in many regions, including much of the Mediterranean Sea. Nonetheless, a series of systematic multispecies aerial surveys was carried out in the seas around Italy to gather important baseline information on cetacean occurrence, distribution and abundance. The monitored areas included the Pelagos Sanctuary, the Tyrrhenian Sea, portions of the Seas of Corsica and Sardinia, the Ionian Seas as well as the Gulf of Taranto. Overall, approximately 48,000 km were flown in spring, summer or winter between 2009 and 2014, covering an area of 444,621 km2. The most commonly observed species were the striped dolphin and the fin whale, with 975 and 83 recorded sightings, respectively. Other sighted cetacean species were the common bottlenose dolphin, the Risso's dolphin, the sperm whale, the pilot whale and the Cuvier's beaked whale. Uncorrected model- and design-based estimates of density and abundance for striped dolphins and fin whales were produced, resulting in a best estimate (model-based) of around 95,000 striped dolphins (CV=11.6%; 95% CI=92,900-120,300) occurring in the Pelagos Sanctuary, Central Tyrrhenian and Western Seas of Corsica and Sardinia combined area in summer 2010. Estimates were also obtained for each individual study region and year. An initial attempt to estimate perception bias for striped dolphins is also provided. The preferred summer 2010 uncorrected best estimate (design-based) for the same areas for fin whales was around 665 (CV=33.1%; 95% CI=350-1260). Estimates are also provided for the individual study regions and years.
The results represent baseline data to develop efficient, long-term, systematic monitoring programmes, essential to evaluate trends, as required by a number of national and international frameworks, and stress the need to ensure that surveys are undertaken regularly and

  20. AFSC/RACE/GAP/Palsson: Gulf of Alaska and Aleutian Islands Biennial Bottom Trawl Survey estimates of catch per unit effort, biomass, population at length, and associated tables

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The GOA/AI Bottom Trawl Estimate database contains abundance estimates for the Alaska Biennial Bottom Trawl Surveys conducted in the Gulf of Alaska and the Aleutian...

  1. Operative and consultative proportions of neurosurgical disease worldwide: estimation from the surgeon perspective.

    Science.gov (United States)

    Dewan, Michael C; Rattani, Abbas; Baticulon, Ronnie E; Faruque, Serena; Johnson, Walter D; Dempsey, Robert J; Haglund, Michael M; Alkire, Blake C; Park, Kee B; Warf, Benjamin C; Shrime, Mark G

    2018-05-11

    OBJECTIVE The global magnitude of neurosurgical disease is unknown. The authors sought to estimate the surgical and consultative proportion of diseases commonly encountered by neurosurgeons, as well as surgeon case volume and perceived workload. METHODS An electronic survey was sent to 193 neurosurgeons previously identified via a global surgeon mapping initiative. The survey consisted of three sections aimed at quantifying surgical incidence of neurological disease, consultation incidence, and surgeon demographic data. Surgeons were asked to estimate the proportion of 11 neurological disorders that, in an ideal world, would indicate either neurosurgical operation or neurosurgical consultation. Respondent surgeons indicated their confidence level in each estimate. Demographic and surgical practice characteristics, including case volume and perceived workload, were also captured. RESULTS Eighty-five neurosurgeons from 57 countries, representing all WHO regions and World Bank income levels, completed the survey. Neurological conditions estimated to warrant neurosurgical consultation with the highest frequency were brain tumors (96%), spinal tumors (95%), hydrocephalus (94%), and neural tube defects (92%), whereas stroke (54%), central nervous system infection (58%), and epilepsy (40%) carried the lowest frequency. Similarly, surgery was deemed necessary for an average of 88% of cases of hydrocephalus, 82% of spinal tumors and neural tube defects, and 78% of brain tumors. Degenerative spine disease (42%), stroke (31%), and epilepsy (24%) were found to warrant surgical intervention less frequently. Confidence levels were consistently high among respondents (lower quartile > 70/100 for 90% of questions), and estimates did not vary significantly across WHO regions or among income levels. Surgeons reported performing a mean of 245 cases annually (median 190). On a 100-point scale indicating a surgeon's perceived workload (0 = not busy, 100 = overworked), respondents selected a

  2. COMPARING Hα AND H I SURVEYS AS MEANS TO A COMPLETE LOCAL GALAXY CATALOG IN THE ADVANCED LIGO/VIRGO ERA

    Energy Technology Data Exchange (ETDEWEB)

    Metzger, Brian D. [Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08542 (United States); Kaplan, David L. [Physics Department, University of Wisconsin-Milwaukee, Milwaukee, WI 53211 (United States); Berger, Edo, E-mail: bmetzger@astro.princeton.edu, E-mail: kaplan@uwm.edu, E-mail: eberger@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2013-02-20

    Identifying the electromagnetic counterparts of gravitational wave (GW) sources detected by upcoming networks of advanced ground-based interferometers will be challenging, due in part to the large number of unrelated astrophysical transients within the ~10-100 deg² sky localizations. A potential way to greatly reduce the number of such false positives is to limit detailed follow-up to only those candidates near galaxies within the GW sensitivity range of ~200 Mpc for binary neutron star mergers. Such a strategy is currently hindered by the fact that galaxy catalogs are grossly incomplete within this volume. Here, we compare two methods for completing the local galaxy catalog: (1) a narrowband Hα imaging survey and (2) an H I emission line radio survey. Using Hα fluxes, stellar masses (M*), and star formation rates (SFRs) from galaxies in the Sloan Digital Sky Survey (SDSS), combined with H I data from the GALEX Arecibo SDSS Survey and the Herschel Reference Survey, we estimate that an Hα survey with a luminosity sensitivity of L_Hα = 10^40 erg s^-1 at 200 Mpc could achieve a completeness of f_SFR^Hα ≈ 75% with respect to total SFR, but only f_M*^Hα ≈ 33% with respect to M* (due to lack of sensitivity to early-type galaxies). These numbers are significantly lower than those achieved by an idealized spectroscopic survey due to the loss of Hα flux resulting from resolving out nearby galaxies and the inability to correct for the underlying stellar continuum. An H I survey with sensitivity similar to the proposed WALLABY survey on ASKAP could achieve f_SFR^HI ≈ 80% and f_M*^HI ≈ 50%, somewhat higher than that of the Hα survey. Finally, both Hα and H I surveys should achieve ≳50% completeness with respect to the host galaxies of

  3. Can the use of psychoactive drugs in the general adult population be estimated based on data from a roadside survey of drugs and driving?

    Directory of Open Access Journals (Sweden)

    Hallvard Gjerde

    2011-12-01

    Full Text Available A roadside survey of drugs and driving was performed in south-eastern Norway in 2005-6. Samples of saliva from a total of 10,503 drivers above 20 years of age were analysed, and the results were weighted for under- and over-sampling compared to the population distribution in the study area. Weighted results were compared with data on dispensed prescriptions of zopiclone, codeine and diazepam at Norwegian pharmacies in the same area and with self-reported use of cannabis. When using roadside data to estimate drug use, the use of medicinal drugs was under-estimated by 17-59% compared to amounts dispensed. One of the main reasons for the under-estimation may be that a large proportion of the users of psychoactive medicinal drugs are not frequent drivers. For cannabis, self-reported data corresponded approximately to the estimated prevalence range. The results indicate that roadside surveys cannot be used for accurate estimations of drug use in the population, but may provide minimum figures.

  4. Identifying the Correlation between Water Quality Data and LOADEST Model Behavior in Annual Sediment Load Estimations

    Directory of Open Access Journals (Sweden)

    Youn Shik Park

    2016-08-01

    Full Text Available Water quality samples are typically collected less frequently than flow since water quality sampling is costly. Load Estimator (LOADEST), provided by the United States Geological Survey, is used to predict water quality concentration (or load) on days when only flow data are measured, so that the water quality data are sufficient for annual pollutant load estimation. However, there is a need to identify water quality data requirements for accurate pollutant load estimation. Measured daily sediment data were collected from 211 streams. Estimated annual sediment loads from LOADEST and subsampled data were compared to the measured annual sediment loads (true load). The mean flow of the calibration data was correlated with model behavior. A regression equation was developed to compute the required mean flow in the calibration data to best calibrate the LOADEST regression model coefficients. LOADEST runs were performed to investigate the correlation between the mean flow in the calibration data and model behavior as daily water quality data were subsampled. LOADEST calibration then used sediment concentration data for the flows suggested by the regression equation. Using the mean flow calibrated by the regression equation reduced errors in annual sediment load estimation from −39.7% to −10.8% compared to using all available data.
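
    The core of LOADEST is a regression of log load on log flow, calibrated on the sampled days and applied to every day with flow. A minimal single-predictor sketch of that idea (synthetic data and invented coefficients; LOADEST's actual models include additional seasonal and quadratic terms):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily flow (m^3/s) and "true" sediment load (t/day); the
# coefficients are invented, not taken from the paper's 211 streams.
flow = rng.lognormal(mean=2.0, sigma=0.8, size=365)
true_load = np.exp(0.5 + 1.4 * np.log(flow)) * rng.lognormal(0.0, 0.2, size=365)

# Calibrate on a monthly subsample, as when grab samples are sparse.
idx = np.arange(0, 365, 30)
X = np.column_stack([np.ones(idx.size), np.log(flow[idx])])
beta, *_ = np.linalg.lstsq(X, np.log(true_load[idx]), rcond=None)

# Predict the load on every day from flow alone, then sum to an annual load.
annual_est = np.exp(beta[0] + beta[1] * np.log(flow)).sum()
annual_true = true_load.sum()
print(f"relative error: {(annual_est - annual_true) / annual_true:+.1%}")
```

    Note that the naive exponential back-transformation is biased on average; LOADEST applies a retransformation bias correction, omitted here for brevity.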

  5. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India-An application of small area estimation techniques.

    Science.gov (United States)

    Chandra, Hukum; Aditya, Kaustav; Sud, U C

    2018-01-01

    Poverty affects many people, but the ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels, such as the national and state levels. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances estimates are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at the district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011-12 of NSSO and the Population Census 2011. The results show that the district-level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable.
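
    District-level SAE of this kind is commonly done with an area-level (Fay-Herriot) model that shrinks unstable direct estimates toward a census-covariate regression. A minimal sketch with made-up covariates, variance components assumed known, and a simplified estimator (the paper's exact model may differ):

```python
import numpy as np

rng = np.random.default_rng(4)

# Area-level model: y_i = x_i'b + v_i + e_i, with known sampling
# variances D_i for the direct survey estimates y_i.
m = 38                                      # districts (Bihar has 38)
X = np.column_stack([np.ones(m), rng.normal(0.0, 1.0, m)])  # census covariate
b_true, sig_v2 = np.array([0.30, 0.05]), 0.002
D = rng.uniform(0.001, 0.02, m)             # small samples => large D_i
y = X @ b_true + rng.normal(0.0, np.sqrt(sig_v2), m) + rng.normal(0.0, np.sqrt(D))

# Weighted least squares for b, then shrinkage: put more weight on the
# direct estimate where the survey is precise (small D_i).
W = 1.0 / (sig_v2 + D)                      # sig_v2 treated as known here
b = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
gamma = sig_v2 / (sig_v2 + D)
eblup = gamma * y + (1.0 - gamma) * (X @ b)
print("mean shrinkage weight:", gamma.mean().round(3))
```

    In practice the model variance is itself estimated (e.g. by method of moments or REML) and a mean squared error estimate accompanies each district estimate.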

  6. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India—An application of small area estimation techniques

    Science.gov (United States)

    Aditya, Kaustav; Sud, U. C.

    2018-01-01

    Poverty affects many people, but the ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels, such as the national and state levels. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances estimates are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at the district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011–12 of NSSO and the Population Census 2011. The results show that the district-level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable. PMID:29879202

  7. Determining the best population-level alcohol consumption model and its impact on estimates of alcohol-attributable harms

    Directory of Open Access Journals (Sweden)

    Kehoe Tara

    2012-04-01

    Full Text Available Abstract Background The goals of our study are to determine the most appropriate model for alcohol consumption as an exposure for burden of disease, to analyze the effect of the chosen alcohol consumption distribution on the estimation of the alcohol Population-Attributable Fractions (PAFs), and to characterize the chosen alcohol consumption distribution by exploring if there is a global relationship within the distribution. Methods To identify the best model, the Log-Normal, Gamma, and Weibull prevalence distributions were examined using data from 41 surveys from Gender, Alcohol and Culture: An International Study (GENACIS) and from the European Comparative Alcohol Study. To assess the effect of these distributions on the estimated alcohol PAFs, we calculated the alcohol PAF for diabetes, breast cancer, and pancreatitis using the three above-named distributions and using the more traditional approach based on categories. The relationship between the mean and the standard deviation from the Gamma distribution was estimated using data from 851 datasets for 66 countries from GENACIS and from the STEPwise approach to Surveillance from the World Health Organization. Results The Log-Normal distribution provided a poor fit for the survey data, with the Gamma and Weibull distributions providing better fits. Additionally, our analyses showed that there were no marked differences for the alcohol PAF estimates based on the Gamma or Weibull distributions compared to PAFs based on categorical alcohol consumption estimates. The standard deviation of the alcohol distribution was highly dependent on the mean, with a unit increase in mean alcohol consumption associated with an increase in the standard deviation of 1.258 (95% CI: 1.223 to 1.293; R² = 0.9207) for women and 1.171 (95% CI: 1.144 to 1.197; R² = 0.9474) for men. Conclusions Although the Gamma distribution and the Weibull distribution provided similar results, the Gamma distribution is recommended to model alcohol
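
    Comparing candidate consumption distributions by maximum likelihood can be sketched as follows. The data here are a synthetic stand-in for the survey data, and the fitting procedure (`scipy.stats` MLE with the location fixed at zero) is an assumption, not the paper's exact method:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic per-capita consumption (g/day) drawn from a Gamma distribution,
# standing in for the GENACIS survey data used in the paper.
cons = rng.gamma(shape=1.5, scale=12.0, size=2000)

fits = {}
for name, dist in [("gamma", stats.gamma),
                   ("weibull", stats.weibull_min),
                   ("lognormal", stats.lognorm)]:
    params = dist.fit(cons, floc=0)            # fix location at zero
    fits[name] = dist.logpdf(cons, *params).sum()  # maximized log-likelihood

best = max(fits, key=fits.get)
print(best, {k: round(v, 1) for k, v in fits.items()})
```

    With the data generated from a Gamma, the Gamma fit should dominate the Log-Normal one, mirroring the paper's finding of a poor Log-Normal fit.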

  8. Determination of mean recency period for estimation of HIV type 1 Incidence with the BED-capture EIA in persons infected with diverse subtypes.

    Science.gov (United States)

    Parekh, Bharat S; Hanson, Debra L; Hargrove, John; Branson, Bernard; Green, Timothy; Dobbs, Trudy; Constantine, Niel; Overbaugh, Julie; McDougal, J Steven

    2011-03-01

    The IgG capture BED enzyme immunoassay (BED-CEIA) was developed to detect recent HIV-1 infection for the estimation of HIV-1 incidence from cross-sectional specimens. The mean time interval between seroconversion and reaching a specified assay cutoff value [referred to here as the mean recency period (ω)], an important parameter for incidence estimation, is determined for some HIV-1 subtypes, but testing in more cohorts and new statistical methods suggest the need for a revised estimation of ω in different subtypes. A total of 2927 longitudinal specimens from 756 persons with incident HIV infections who had been enrolled in 17 cohort studies were tested by the BED-CEIA. The ω was determined using two statistical approaches: (1) linear mixed effects regression (ω(1)) and (2) a nonparametric survival method (ω(2)). Recency periods varied among individuals and by population. At an OD-n cutoff of 0.8, ω(1) was 176 days (95% CL 164-188 days) whereas ω(2) was 162 days (95% CL 152-172 days) when using a comparable subset of specimens (13 cohorts). When method 2 was applied to all available data (17 cohorts), ω(2) ranged from 127 days (Thai AE) to 236 days (subtypes AG, AD) with an overall ω(2) of 197 days (95% CL 173-220). About 70% of individuals reached a threshold OD-n of 0.8 by 197 days (mean ω) and 95% of people reached 0.8 OD-n by 480 days. The determination of ω with more data and new methodology suggests that ω of the BED-CEIA varies between different subtypes and/or populations. These estimates for ω may affect incidence estimates in various studies.

  9. Estimating pharmacy level prescription drug acquisition costs for third-party reimbursement.

    Science.gov (United States)

    Kreling, D H; Kirk, K W

    1986-07-01

    Accurate payment for the acquisition costs of drug products dispensed is an important consideration in a third-party prescription drug program. Two alternative methods of estimating these costs among pharmacies were derived and compared. First, pharmacists were surveyed to determine the purchase discounts offered to them by wholesalers. A 10.00% modal and 11.35% mean discount resulted for 73 responding pharmacists. Second, cost-plus percents derived from gross profit margins of wholesalers were calculated and applied to wholesaler product costs to estimate pharmacy level acquisition costs. Cost-plus percents derived from National Median and Southwestern Region wholesaler figures were 9.27% and 10.10%, respectively. A comparison showed the two methods of estimating acquisition costs would result in similar acquisition cost estimates. Adopting a cost-plus estimating approach is recommended because it avoids potential pricing manipulations by wholesalers and manufacturers that would negate improvements in drug product reimbursement accuracy.

  10. The Single Cigarette Economy in India--a Back of the Envelope Survey to Estimate its Magnitude.

    Science.gov (United States)

    Lal, Pranay; Kumar, Ravinder; Ray, Shreelekha; Sharma, Narinder; Bhattarcharya, Bhaktimay; Mishra, Deepak; Sinha, Mukesh K; Christian, Anant; Rathinam, Arul; Singh, Gurbinder

    2015-01-01

    Sale of single cigarettes is an important factor for early experimentation, initiation and persistence of tobacco use and a vital factor in the smoking epidemic in India as it is globally. Single cigarettes also promote the sale of illicit cigarettes and neutralise the effect of pack warnings and effective taxation, making tobacco more accessible and affordable to minors. This is the first study to our knowledge which estimates the size of the single stick market in India. In February 2014, a 10-jurisdiction survey was conducted across India to estimate the sale of cigarettes in packs and sticks, by brands and price, over a full business day. We estimate that nearly 75% of all cigarettes are sold as single sticks annually, which translates to nearly half a billion US dollars or 30 percent of India's excise revenues from all cigarettes. This is the price consumers pay that is not captured through tax, and it therefore flows into an informal economy. Tracking the retail price of single cigarettes is an efficient way to determine cigarette smokers' willingness to pay and is a possible method for setting tax rates in the absence of any other rationale.

  11. Mean temperature of the catch (MTC) in the Greek Seas based on landings and survey data

    Directory of Open Access Journals (Sweden)

    Athanassios C. Tsikliras

    2015-04-01

    Full Text Available The mean temperature of the catch (MTC), which is the average inferred temperature preference of the exploited species weighted by their annual catch, is an index that has been used for evaluating the effect of sea warming on marine ecosystems. In the present work, we examined the effect of sea surface temperature on the catch composition of the Greek Seas using the MTC applied to the official catch statistics (landings) for the period 1970-2010 (Aegean and Ionian Seas) and to experimental bottom trawl survey data for 1997-2014 (southern Aegean Sea). The MTC of the landings for the study period increased from 11.8 °C to 16.2 °C in the Aegean Sea and from 10.0 °C to 14.7 °C in the Ionian Sea. Overall, the rate of MTC increase was 1.01 °C per decade for the Aegean and 1.17 °C per decade for the Ionian Sea and was positively related to sea surface temperature anomalies in both areas. For the survey data, the increase of the MTC of the bottom trawl catch in the southern Aegean Sea was lower (0.51 °C per decade) but referred to a shorter time frame and included only demersal species. The change in MTC of official and survey catches indicates that the relative catch proportions of species preferring warmer waters and those preferring colder waters have changed in favour of the former and that this change is linked to sea surface temperature increase, either internally (through the Atlantic Multidecadal Oscillation) or externally (warming trend) driven.
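
    The MTC itself is just a catch-weighted average of the species' temperature preferences. A toy illustration with invented catch and preference values:

```python
# Mean temperature of the catch: the catch-weighted average of species'
# temperature preferences. Species values below are made up for illustration.
catch = {   # species: (annual catch in tonnes, preferred temperature in °C)
    "Sardina pilchardus":     (12000, 14.0),
    "Engraulis encrasicolus":  (9000, 15.5),
    "Merluccius merluccius":   (3000, 11.8),
    "Coryphaena hippurus":      (500, 23.5),
}
total = sum(c for c, _ in catch.values())
mtc = sum(c * t for c, t in catch.values()) / total
print(f"MTC = {mtc:.2f} °C")   # → MTC = 14.48 °C
```

    A shift of catches toward the warm-preference species raises the MTC even if total catch is unchanged, which is exactly the signal the index is designed to capture.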

  12. Statistical estimates of absenteeism attributable to seasonal and pandemic influenza from the Canadian Labour Force Survey.

    Science.gov (United States)

    Schanzer, Dena L; Zheng, Hui; Gilmore, Jason

    2011-04-12

    As many respiratory viruses are responsible for influenza-like symptoms, accurate measures of the disease burden are not available and estimates are generally based on statistical methods. The objective of this study was to estimate absenteeism rates and hours lost due to seasonal influenza and compare these estimates with estimates of absenteeism attributable to the two H1N1 pandemic waves that occurred in 2009. Key absenteeism variables were extracted from Statistics Canada's monthly labour force survey (LFS). Absenteeism and the proportion of hours lost due to own illness or disability were modelled as a function of trend, seasonality and proxy variables for influenza activity from 1998 to 2009. Hours lost due to the H1N1/09 pandemic strain were elevated compared to seasonal influenza, accounting for a loss of 0.2% of potential hours worked annually. In comparison, an estimated 0.08% of hours worked annually were lost due to seasonal influenza illnesses. Absenteeism rates due to influenza were estimated at 12% per year for seasonal influenza over the 1997/98 to 2008/09 seasons, and 13% for the two H1N1/09 pandemic waves. Employees who took time off due to a seasonal influenza infection took an average of 14 hours off. For the pandemic strain, the average absence was 25 hours. This study confirms that absenteeism due to seasonal influenza has typically ranged from 5% to 20%, with higher rates associated with multiple circulating strains. Absenteeism rates for the 2009 pandemic were similar to those occurring for seasonal influenza. Employees took more time off due to the pandemic strain than was typical for seasonal influenza.

  13. Statistical estimates of absenteeism attributable to seasonal and pandemic influenza from the Canadian Labour Force Survey

    Science.gov (United States)

    2011-01-01

    Background As many respiratory viruses are responsible for influenza-like symptoms, accurate measures of the disease burden are not available and estimates are generally based on statistical methods. The objective of this study was to estimate absenteeism rates and hours lost due to seasonal influenza and compare these estimates with estimates of absenteeism attributable to the two H1N1 pandemic waves that occurred in 2009. Methods Key absenteeism variables were extracted from Statistics Canada's monthly labour force survey (LFS). Absenteeism and the proportion of hours lost due to own illness or disability were modelled as a function of trend, seasonality and proxy variables for influenza activity from 1998 to 2009. Results Hours lost due to the H1N1/09 pandemic strain were elevated compared to seasonal influenza, accounting for a loss of 0.2% of potential hours worked annually. In comparison, an estimated 0.08% of hours worked annually were lost due to seasonal influenza illnesses. Absenteeism rates due to influenza were estimated at 12% per year for seasonal influenza over the 1997/98 to 2008/09 seasons, and 13% for the two H1N1/09 pandemic waves. Employees who took time off due to a seasonal influenza infection took an average of 14 hours off. For the pandemic strain, the average absence was 25 hours. Conclusions This study confirms that absenteeism due to seasonal influenza has typically ranged from 5% to 20%, with higher rates associated with multiple circulating strains. Absenteeism rates for the 2009 pandemic were similar to those occurring for seasonal influenza. Employees took more time off due to the pandemic strain than was typical for seasonal influenza. PMID:21486453

  14. Sample Loss and Survey Bias in Estimates of Social Security Beneficiaries: A Tale of Two Surveys.

    OpenAIRE

    John L. Czajka; James Mabli; Scott Cody

    2008-01-01

    Data from the Census Bureau’s Survey of Income and Program Participation (SIPP) and the Current Population Survey (CPS) provide information on current and potential beneficiaries served by Social Security Administration (SSA) programs. SSA also links administrative records to the records of survey respondents who provide Social Security numbers. These matched data expand the content of the SIPP and CPS files to fields available only through SSA and Internal Revenue Service records—such as l...

  15. SPECIES-SPECIFIC FOREST VARIABLE ESTIMATION USING NON-PARAMETRIC MODELING OF MULTI-SPECTRAL PHOTOGRAMMETRIC POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    J. Bohlin

    2012-07-01

    Full Text Available The recent development in software for automatic photogrammetric processing of multispectral aerial imagery, and the growing nation-wide availability of Digital Elevation Model (DEM) data, are about to revolutionize data capture for forest management planning in Scandinavia. Using only already available aerial imagery and ALS-assessed DEM data, raster estimates of the forest variables mean tree height, basal area, total stem volume, and species-specific stem volumes were produced and evaluated. The study was conducted at a coniferous hemi-boreal test site in southern Sweden (lat. 58° N, long. 13° E). Digital aerial images from the Zeiss/Intergraph Digital Mapping Camera system were used to produce 3D point-cloud data with spectral information. Metrics were calculated for 696 field plots (10 m radius) from point-cloud data and used in k-MSN to estimate forest variables. For these stands, the tree height ranged from 1.4 to 33.0 m (18.1 m mean), stem volume from 0 to 829 m3 ha-1 (249 m3 ha-1 mean) and basal area from 0 to 62.2 m2 ha-1 (26.1 m2 ha-1 mean), with a mean stand size of 2.8 ha. Estimates made using digital aerial images corresponding to the standard acquisition of the Swedish National Land Survey (Lantmäteriet) showed RMSEs (in percent of the surveyed stand mean) of 7.5% for tree height, 11.4% for basal area, 13.2% for total stem volume, 90.6% for pine stem volume, 26.4% for spruce stem volume, and 72.6% for deciduous stem volume. The results imply that photogrammetric matching of digital aerial images has significant potential for operational use in forestry.

  16. American Community Survey (ACS) 5-Year Estimates for Coastal Geographies

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The American Community Survey (ACS) is an ongoing statistical survey that samples a small percentage of the population every year. These data have been apportioned...

  17. Methods for estimating heterocyclic amine concentrations in cooked meats in the US diet.

    Science.gov (United States)

    Keating, G A; Bogen, K T

    2001-01-01

    Heterocyclic amines (HAs) are formed in numerous cooked foods commonly consumed in the diet. A method was developed to estimate dietary HA levels using HA concentrations in experimentally cooked meats reported in the literature and meat consumption data obtained from a national dietary survey. Cooking variables (meat internal temperature and weight loss, surface temperature and time) were used to develop relationships for estimating total HA concentrations in six meat types. Concentrations of five individual HAs were estimated for specific meat type/cooking method combinations based on linear regression of total and individual HA values obtained from the literature. Using these relationships, total and individual HA concentrations were estimated for 21 meat type/cooking method combinations at four meat doneness levels. Reported consumption of the 21 meat type/cooking method combinations was obtained from a national dietary survey and the age-specific daily HA intake calculated using the estimated HA concentrations (ng/g) and reported meat intakes. Estimated mean daily total HA intakes for children (to age 15 years) and adults (30+ years) were 11 and 7.0 ng/kg/day, respectively, with 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) estimated to comprise approximately 65% of each intake. Pan-fried meats were the largest source of HA in the diet and chicken the largest source of HAs among the different meat types.
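
    The intake calculation described above multiplies each meat type/cooking method concentration by its reported consumption and divides by body weight. A toy sketch with invented values:

```python
# Daily HA intake = sum over meat/cooking-method combinations of
# (HA concentration in ng/g) x (daily intake in g), divided by body weight.
# The concentrations and intakes below are illustrative, not the paper's values.
items = [            # (HA concentration ng/g, meat consumed g/day)
    (2.1, 40.0),     # e.g. pan-fried beef, well done
    (0.6, 25.0),     # e.g. grilled chicken
    (0.1, 15.0),     # e.g. baked fish
]
body_weight_kg = 70.0
intake = sum(c * g for c, g in items) / body_weight_kg
print(f"{intake:.2f} ng/kg/day")   # → 1.44 ng/kg/day
```

    In the paper this sum runs over 21 meat type/cooking method combinations at four doneness levels, with age-specific consumption from the national dietary survey.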

  18. Mean magnetic susceptibility regularized susceptibility tensor imaging (MMSR-STI) for estimating orientations of white matter fibers in human brain.

    Science.gov (United States)

    Li, Xu; van Zijl, Peter C M

    2014-09-01

    An increasing number of studies show that magnetic susceptibility in white matter fibers is anisotropic and may be described by a tensor. However, the limited head rotation possible for in vivo human studies leads to an ill-conditioned inverse problem in susceptibility tensor imaging (STI). Here we suggest the combined use of limiting the susceptibility anisotropy to white matter and imposing morphology constraints on the mean magnetic susceptibility (MMS) for regularizing the STI inverse problem. The proposed MMS regularized STI (MMSR-STI) method was tested using computer simulations and in vivo human data collected at 3T. The fiber orientation estimated from both the STI and MMSR-STI methods was compared to that from diffusion tensor imaging (DTI). Computer simulations show that the MMSR-STI method provides a more accurate estimation of the susceptibility tensor than the conventional STI approach. Similarly, in vivo data show that use of the MMSR-STI method leads to a smaller difference between the fiber orientation estimated from STI and DTI for most selected white matter fibers. The proposed regularization strategy for STI can improve estimation of the susceptibility tensor in white matter. © 2014 Wiley Periodicals, Inc.

  19. The estimation of patients' views on organizational aspects of a general dental practice by general dental practitioners: a survey study

    Directory of Open Access Journals (Sweden)

    Truin Gert-Jan

    2011-10-01

    Full Text Available Abstract Background Considering the changes in dental healthcare, such as the increasing assertiveness of patients, the introduction of new dental professionals, and regulated competition, it becomes more important that general dental practitioners (GDPs) take patients' views into account. The aim of the study was to compare patients' views on organizational aspects of general dental practices with those of GDPs and with GDPs' estimation of patients' views. Methods In a survey study, patients and GDPs provided their views on organizational aspects of a general dental practice. In a second, separate survey, GDPs were invited to estimate patients' views on 22 organizational aspects of a general dental practice. Results For 4 of the 22 aspects, patients and GDPs had the same views, and GDPs estimated patients' views reasonably well: 'Dutch-speaking GDP', 'guarantee on treatment', 'treatment by the same GDP', and 'reminder of routine oral examination'. For 2 aspects ('quality assessment' and 'accessibility for disabled patients'), patients and GDPs had the same standards, although the GDPs underestimated the patients' standards. Patients had higher standards than GDPs for 7 aspects and lower standards than GDPs for 8 aspects. Conclusion On most aspects GDPs and patients have different views, except for socially desirable aspects. Given the increasing assertiveness of patients, it is startling that GDPs estimated only half of the patients' views correctly. The findings of the study can assist GDPs in adapting their organizational services to better meet the preferences of their patients and to improve communication towards patients.

  20. A spectral mean for random closed curves

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2016-01-01

    textabstractWe propose a spectral mean for closed sets described by sample points on their boundaries subject to mis-alignment and noise. We derive maximum likelihood estimators for the model and noise parameters in the Fourier domain. We estimate the unknown mean boundary curve by

  1. A spectral chart method for estimating the mean turbulent kinetic energy dissipation rate

    Energy Technology Data Exchange (ETDEWEB)

    Djenidi, L.; Antonia, R.A. [The University of Newcastle, School of Engineering, Newcastle, NSW (Australia)

    2012-10-15

    We present an empirical but simple and practical spectral chart method for determining the mean turbulent kinetic energy dissipation rate ⟨ε⟩ in a variety of turbulent flows. The method relies on the validity of the first similarity hypothesis of Kolmogorov (C R (Doklady) Acad Sci URSS, NS 30:301-305, 1941) (or K41), which implies that spectra of velocity fluctuations scale on the kinematic viscosity ν and ⟨ε⟩ at large Reynolds numbers. However, the evidence, based on the DNS spectra, points to this scaling being also valid at small Reynolds numbers, provided effects due to inhomogeneities in the flow are negligible. The method avoids the difficulty associated with estimating time or spatial derivatives of the velocity fluctuations. It also avoids using the second hypothesis of K41, which implies the existence of a -5/3 inertial subrange only when the Taylor microscale Reynolds number R_λ is sufficiently large. The method is in fact applied to the lower wavenumber end of the dissipative range, thus avoiding most of the problems due to inadequate spatial resolution of the velocity sensors and noise associated with the higher wavenumber end of this range. The use of spectral data (30 ≤ R_λ ≤ 400) in both passive and active grid turbulence, a turbulent mixing layer and the turbulent wake of a circular cylinder indicates that the method is robust and should lead to reliable estimates of ⟨ε⟩ in flows or flow regions where the first similarity hypothesis should hold; this would exclude, for example, the region near a wall. (orig.)
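
    The chart idea can be sketched as follows: pick the ⟨ε⟩ that best collapses a measured spectrum onto a reference curve in Kolmogorov variables. The reference form and its constants below are illustrative (a Pao-type dissipation-range spectrum), not the paper's empirical chart:

```python
import numpy as np

# Pao-type model spectrum in Kolmogorov variables:
# phi(x) = C x^(-5/3) exp(-beta x), with x = k * eta; constants illustrative.
C_K, beta = 0.5, 5.2

def phi_ref(x):
    return C_K * x ** (-5.0 / 3.0) * np.exp(-beta * x)

nu = 1.5e-5                  # kinematic viscosity of air (m^2/s)
eps_true = 1.2e-2            # "unknown" mean dissipation rate (m^2/s^3)
k = np.logspace(2, 4, 50)    # wavenumbers, lower dissipative range (1/m)

# Synthetic measured spectrum, generated from the reference form itself.
eta_t = (nu ** 3 / eps_true) ** 0.25
E_meas = (eps_true * nu ** 5) ** 0.25 * phi_ref(k * eta_t)

def mismatch(eps):
    # Scale the measured spectrum with a trial eps and compare to the chart.
    eta = (nu ** 3 / eps) ** 0.25
    scaled = E_meas / (eps * nu ** 5) ** 0.25
    return np.sum((np.log(scaled) - np.log(phi_ref(k * eta))) ** 2)

grid = np.logspace(-3, -1, 400)
eps_hat = grid[np.argmin([mismatch(e) for e in grid])]
print(f"recovered eps = {eps_hat:.3e} (true {eps_true:.3e})")
```

    The exponential roll-off breaks the scale ambiguity, so the collapse selects a unique ⟨ε⟩ without differentiating the velocity signal.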

  2. Sampling effects on the identification of roadkill hotspots: Implications for survey design.

    Science.gov (United States)

    Santos, Sara M; Marques, J Tiago; Lourenço, André; Medinas, Denis; Barbosa, A Márcia; Beja, Pedro; Mira, António

    2015-10-01

    Although locating wildlife roadkill hotspots is essential to mitigate road impacts, the influence of study design on hotspot identification remains uncertain. We evaluated how sampling frequency affects the accuracy of hotspot identification, using a dataset of vertebrate roadkills (n = 4427) recorded over a year of daily surveys along 37 km of roads. "True" hotspots were identified using this baseline dataset, as the 500-m segments where the number of road-killed vertebrates exceeded the upper 95% confidence limit of the mean, assuming a Poisson distribution of road-kills per segment. "Estimated" hotspots were identified likewise, using datasets representing progressively lower sampling frequencies, which were produced by extracting data from the baseline dataset at appropriate time intervals (1-30 days). Overall, 24.3% of segments were "true" hotspots, concentrating 40.4% of roadkills. For different groups, "true" hotspots accounted from 6.8% (bats) to 29.7% (small birds) of road segments, concentrating from 60% (lizards, lagomorphs, carnivores) of roadkills. Spatial congruence between "true" and "estimated" hotspots declined rapidly with increasing time interval between surveys, due primarily to increasing false negatives (i.e., missing "true" hotspots). There were also false positives (i.e., wrong "estimated" hotspots), particularly at low sampling frequencies. Spatial accuracy decay with increasing time interval between surveys was higher for smaller-bodied (amphibians, reptiles, small birds, small mammals) than for larger-bodied species (birds of prey, hedgehogs, lagomorphs, carnivores). Results suggest that widely used surveys at weekly or longer intervals may produce poor estimates of roadkill hotspots, particularly for small-bodied species. Surveying daily or at two-day intervals may be required to achieve high accuracy in hotspot identification for multiple species. Copyright © 2015 Elsevier Ltd. All rights reserved.
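
    The hotspot criterion described above flags segments whose counts exceed the upper 95% limit of a Poisson distribution with the observed mean. A minimal sketch with synthetic counts:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic roadkill counts for 74 segments of 500 m (37 km of road),
# with a few artificially inflated segments standing in for true hotspots.
counts = rng.poisson(3.0, size=74)
counts[::10] += 10            # plant hotspots every 10th segment

mu = counts.mean()
threshold = stats.poisson.ppf(0.95, mu)   # upper 95% limit under Poisson(mu)
hotspots = np.flatnonzero(counts > threshold)
print(f"mean = {mu:.2f}, threshold = {threshold:.0f}, hotspots = {hotspots}")
```

    Re-running this on temporally subsampled counts (every nth survey day) is how the paper measures the false negatives and false positives that accumulate as sampling frequency drops.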

  3. National-scale crop type mapping and area estimation using multi-resolution remote sensing and field survey

    Science.gov (United States)

    Song, X. P.; Potapov, P.; Adusei, B.; King, L.; Khan, A.; Krylov, A.; Di Bella, C. M.; Pickens, A. H.; Stehman, S. V.; Hansen, M.

    2016-12-01

    Reliable and timely information on agricultural production is essential for ensuring world food security. Freely available medium-resolution satellite data (e.g. Landsat, Sentinel) offer the possibility of improved global agriculture monitoring. Here we develop and test a method for estimating in-season crop acreage using a probability sample of field visits and producing wall-to-wall crop type maps at national scales. The method is first illustrated for soybean cultivated area in the US for 2015. A stratified, two-stage cluster sampling design was used to collect field data to estimate national soybean area. The field-based estimate employed historical soybean extent maps from the U.S. Department of Agriculture (USDA) Cropland Data Layer to delineate and stratify U.S. soybean growing regions. The estimated 2015 U.S. soybean cultivated area based on the field sample was 341,000 km2 with a standard error of 23,000 km2. This result is 1.0% lower than USDA's 2015 June survey estimate and 1.9% higher than USDA's 2016 January estimate. Our area estimate was derived in early September, about 2 months ahead of harvest. To map soybean cover, the Landsat image archive for the year 2015 growing season was processed using an active learning approach. Overall accuracy of the soybean map was 84%. The field-based sample estimated area was then used to calibrate the map such that the soybean acreage of the map derived through pixel counting matched the sample-based area estimate. The strength of the sample-based area estimation lies in the stratified design that takes advantage of the spatially explicit cropland layers to construct the strata. The success of the mapping was built upon an automated system which transforms Landsat images into standardized time-series metrics. The developed method produces reliable and timely information on soybean area in a cost-effective way and could be implemented in an operational mode. The approach has also been applied for other crops in
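
    The final calibration step scales the map's pixel-count area so that it matches the sample-based estimate. A toy sketch using the abstract's national figures together with a hypothetical pixel count:

```python
# Calibrating a wall-to-wall map's pixel-count area to a sample-based
# estimate: scale the mapped crop area so its total matches the field-survey
# estimate. The pixel count below is hypothetical.
pixel_area_km2 = 0.0009          # one 30-m Landsat pixel
soy_pixels = 361_000_000         # pixels classified as soybean (hypothetical)
map_area = soy_pixels * pixel_area_km2          # area by pixel counting
sample_area, sample_se = 341_000.0, 23_000.0    # field-based estimate (km^2)

calibration = sample_area / map_area            # ratio adjustment
print(f"map area {map_area:,.0f} km2, calibration factor {calibration:.3f}")
```

    The sample-based estimate carries the statistical validity (design-based standard error); the map contributes the spatial distribution.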

  4. A spectral mean for random closed curves

    NARCIS (Netherlands)

    van Lieshout, Maria Nicolette Margaretha

    2016-01-01

    We propose a spectral mean for closed sets described by sample points on their boundaries subject to mis-alignment and noise. We derive maximum likelihood estimators for the model and noise parameters in the Fourier domain. We estimate the unknown mean boundary curve by back-transformation and

  5. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    Directory of Open Access Journals (Sweden)

    Nazelie Kassabian

    2014-06-01

    Full Text Available Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs, a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Stations (RSs) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
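
    A minimal numerical sketch of the LMMSE estimator under a Gauss-Markov (exponential) spatial correlation model like the one described above. The function name, parameter values, and the scalar station geometry are assumptions made here for illustration, not the paper's implementation.

    ```python
    import numpy as np

    def lmmse_estimate(y, positions, sigma_dc, d_corr, sigma_noise):
        """LMMSE estimate of spatially correlated differential corrections.

        Assumes zero-mean corrections with exponential spatial correlation
        C[i, j] = sigma_dc**2 * exp(-|p_i - p_j| / d_corr), observed with
        additive white noise of variance sigma_noise**2.
        """
        d = np.abs(positions[:, None] - positions[None, :])
        C = sigma_dc ** 2 * np.exp(-d / d_corr)
        # Wiener gain: prior covariance against prior-plus-noise covariance
        W = C @ np.linalg.inv(C + sigma_noise ** 2 * np.eye(len(y)))
        return W @ y
    ```

    When the correlation distance is large relative to the station separation, the prior covariance dominates and the estimator averages information across stations, shrinking the noise; when it is small, the gain matrix approaches identity and no improvement is possible, consistent with the threshold behaviour the abstract reports.
    
    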

  6. Mean and extreme radio properties of quasars and the origin of radio emission

    Energy Technology Data Exchange (ETDEWEB)

    Kratzer, Rachael M.; Richards, Gordon T. [Department of Physics, Drexel University, Philadelphia, PA (United States)

    2015-02-01

    We investigate the evolution of both the radio-loud fraction (RLF) and (using stacking analysis) the mean radio loudness of quasars. We consider how these properties evolve as a function of redshift and luminosity, black hole (BH) mass and accretion rate, and parameters related to the dominance of a wind in the broad emission-line region. We match the FIRST source catalog to samples of luminous quasars (both spectroscopic and photometric), primarily from the Sloan Digital Sky Survey. After accounting for catastrophic errors in BH mass estimates at high redshift, we find that both the RLF and the mean radio luminosity increase for increasing BH mass and decreasing accretion rate. Similarly, both the RLF and mean radio loudness increase for quasars that are argued to have weaker radiation line driven wind components of the broad emission-line region. In agreement with past work, we find that the RLF increases with increasing optical/UV luminosity and decreasing redshift, while the mean radio loudness evolves in the exact opposite manner. This difference in behavior between the mean radio loudness and the RLF in L−z may indicate selection effects that bias our understanding of the evolution of the RLF; deeper surveys in the optical and radio are needed to resolve this discrepancy. Finally, we argue that radio-loud (RL) and radio-quiet (RQ) quasars may be parallel sequences, but where only RQ quasars at one extreme of the distribution are likely to become RL, possibly through slight differences in spin and/or merger history.

  7. NEFSC Survey Indices of Abundance

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Northeast Fisheries Survey Bottom trawl survey indices of abundance such as stratified mean number per tow or mean weight per tow by species stock. Includes indices...

  8. Improvement Schemes for Indoor Mobile Location Estimation: A Survey

    Directory of Open Access Journals (Sweden)

    Jianga Shang

    2015-01-01

    Full Text Available Location estimation is significant in mobile and ubiquitous computing systems. The complexity and smaller scale of the indoor environment have a great impact on location estimation. The key to location estimation lies in the representation and fusion of uncertain information from multiple sources. The improvement of location estimation is a complicated and comprehensive issue. A lot of research has been done to address this issue. However, existing research typically focuses on certain aspects of the problem and specific methods. This paper reviews mainstream schemes on improving indoor location estimation from multiple levels and perspectives by combining existing works and our own working experiences. Initially, we analyze the error sources of common indoor localization techniques and provide a multilayered conceptual framework of improvement schemes for location estimation. This is followed by a discussion of probabilistic methods for location estimation, including Bayes filters, Kalman filters, extended Kalman filters, sigma-point Kalman filters, particle filters, and hidden Markov models. Then, we investigate the hybrid localization methods, including multimodal fingerprinting, triangulation fusing multiple measurements, combination of wireless positioning with pedestrian dead reckoning (PDR), and cooperative localization. Next, we focus on the location determination approaches that fuse spatial contexts, namely, map matching, landmark fusion, and spatial model-aided methods. Finally, we present the directions for future research.
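
    Of the probabilistic methods listed above, the Kalman filter is the simplest to illustrate. Below is a minimal one-dimensional sketch of the predict/update cycle over a random-walk state; the model and parameter values are chosen here for brevity and are not any specific system from the survey.

    ```python
    # Minimal 1-D Kalman filter: random-walk state, noisy scalar measurements.
    # q = process-noise variance, r = measurement-noise variance,
    # x0/p0 = initial state estimate and its variance (all illustrative).

    def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=1.0):
        x, p = x0, p0
        estimates = []
        for z in measurements:
            p += q                 # predict: uncertainty grows
            k = p / (p + r)        # Kalman gain
            x += k * (z - x)       # update: blend prediction and measurement
            p *= 1.0 - k           # updated uncertainty shrinks
            estimates.append(x)
        return estimates
    ```

    Fed a stream of noisy position fixes, the filter converges toward the underlying position while smoothing measurement noise, which is the core benefit the probabilistic fusion schemes above exploit.
    
    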

  9. Statistical estimates of absenteeism attributable to seasonal and pandemic influenza from the Canadian Labour Force Survey

    Directory of Open Access Journals (Sweden)

    Zheng Hui

    2011-04-01

    Full Text Available Abstract Background As many respiratory viruses are responsible for influenza like symptoms, accurate measures of the disease burden are not available and estimates are generally based on statistical methods. The objective of this study was to estimate absenteeism rates and hours lost due to seasonal influenza and compare these estimates with estimates of absenteeism attributable to the two H1N1 pandemic waves that occurred in 2009. Methods Key absenteeism variables were extracted from Statistics Canada's monthly labour force survey (LFS. Absenteeism and the proportion of hours lost due to own illness or disability were modelled as a function of trend, seasonality and proxy variables for influenza activity from 1998 to 2009. Results Hours lost due to the H1N1/09 pandemic strain were elevated compared to seasonal influenza, accounting for a loss of 0.2% of potential hours worked annually. In comparison, an estimated 0.08% of hours worked annually were lost due to seasonal influenza illnesses. Absenteeism rates due to influenza were estimated at 12% per year for seasonal influenza over the 1997/98 to 2008/09 seasons, and 13% for the two H1N1/09 pandemic waves. Employees who took time off due to a seasonal influenza infection took an average of 14 hours off. For the pandemic strain, the average absence was 25 hours. Conclusions This study confirms that absenteeism due to seasonal influenza has typically ranged from 5% to 20%, with higher rates associated with multiple circulating strains. Absenteeism rates for the 2009 pandemic were similar to those occurring for seasonal influenza. Employees took more time off due to the pandemic strain than was typical for seasonal influenza.

  10. Influence of Mean Rooftop-Level Estimation Method on Sensible Heat Flux Retrieved from a Large-Aperture Scintillometer Over a City Centre

    Science.gov (United States)

    Zieliński, Mariusz; Fortuniak, Krzysztof; Pawlak, Włodzimierz; Siedlecki, Mariusz

    2017-08-01

    The sensible heat flux (H) is determined using large-aperture scintillometer (LAS) measurements over a city centre for eight different computation scenarios. The scenarios are based on different approaches of the mean rooftop-level (zH) estimation for the LAS path. Here, zH is determined separately for wind directions perpendicular (two zones) and parallel (one zone) to the optical beam to reflect the variation in topography and building height on both sides of the LAS path. Two methods of zH estimation are analyzed: (1) average building profiles; (2) weighted-average building height within a 250 m radius from points located every 50 m along the optical beam, or the centre of a certain zone (in the case of a wind direction perpendicular to the path). The sensible heat flux is computed separately using the friction velocity determined with the eddy-covariance method and the iterative procedure. The sensitivity of the sensible heat flux and the extent of the scintillometer source area to different computation scenarios are analyzed. Differences reaching up to 7% between heat fluxes computed with different scenarios were found. The mean rooftop-level estimation method has a smaller influence on the sensible heat flux (-4 to 5%) than the area used for the zH computation (-5 to 7%). For the source-area extent, the discrepancies between respective scenarios reached a similar magnitude. The results demonstrate the value of the approach in which zH is estimated separately for wind directions parallel and perpendicular to the LAS optical beam.
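
    The second zH method above can be sketched as follows. The abstract does not state the weighting basis, so plan-area weights are assumed here purely for illustration, and all numbers are invented.

    ```python
    # Hypothetical sketch of weighted-average building height (zH) along a
    # scintillometer path. Weighting by building plan area is an assumption.

    def weighted_mean_height(heights, weights):
        """Weighted mean building height for one 250 m radius sample point."""
        return sum(h * w for h, w in zip(heights, weights)) / sum(weights)

    def path_mean_height(point_estimates):
        """Average zH over points spaced every 50 m along the optical beam."""
        return sum(point_estimates) / len(point_estimates)
    ```
    
    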

  11. Comparison and assessment of aerial and ground estimates of waterbird colonies

    Science.gov (United States)

    Green, M.C.; Luent, M.C.; Michot, T.C.; Jeske, C.W.; Leberg, P.L.

    2008-01-01

    Aerial surveys are often used to quantify sizes of waterbird colonies; however, these surveys would benefit from a better understanding of associated biases. We compared estimates of breeding pairs of waterbirds, in colonies across southern Louisiana, USA, made from the ground, fixed-wing aircraft, and a helicopter. We used a marked-subsample method for ground-counting colonies to obtain estimates of error and visibility bias. We made comparisons over 2 sampling periods: 1) surveys conducted on the same colonies using all 3 methods during 3-11 May 2005 and 2) an expanded fixed-wing and ground-survey comparison conducted over 4 periods (May and Jun, 2004-2005). Estimates from fixed-wing aircraft were approximately 65% higher than those from ground counts for the overall estimated number of breeding pairs and for both dark- and white-plumaged species. The coefficient of determination between estimates based on ground and fixed-wing aircraft was ≤0.40 for most species, and based on the assumption that estimates from the ground were closer to the true count, fixed-wing aerial surveys appeared to overestimate numbers of nesting birds of some species; this bias often increased with the size of the colony. Unlike estimates from fixed-wing aircraft, numbers of nesting pairs from ground and helicopter surveys were very similar for all species we observed. Ground counts by one observer underestimated the number of breeding pairs by 20% on average. The marked-subsample method provided an estimate of the number of missed nests as well as an estimate of precision. These estimates represent a major advantage of marked-subsample ground counts over aerial methods; however, ground counts are difficult in large or remote colonies. Helicopter surveys and ground counts provide less biased, more precise estimates of breeding pairs than do surveys made from fixed-wing aircraft. We recommend managers employ ground counts using double observers for surveying waterbird colonies.
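
    The marked-subsample idea can be sketched as a simple detection-rate correction: a known number of nests is flagged in advance, the observer's refind rate estimates detection probability, and the raw count is inflated accordingly. The ratio form below is an illustrative assumption, not the authors' exact estimator.

    ```python
    # Hypothetical visibility-bias correction from a marked subsample.

    def corrected_count(raw_count, marked, marked_found):
        """Inflate a raw ground count by the estimated detection probability.

        marked: nests flagged before the count; marked_found: how many of
        those the observer refound during the count.
        """
        if marked_found == 0:
            raise ValueError("no marked nests refound; detection rate unknown")
        detection = marked_found / marked
        return raw_count / detection
    ```

    For example, refinding 16 of 20 marked nests (80% detection) would inflate a raw count of 80 nests to an estimated 100, matching the ~20% single-observer undercount reported above.
    
    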

  12. On the evaluation of debris flows dynamics by means of mathematical models

    Directory of Open Access Journals (Sweden)

    M. Arattano

    2003-01-01

    Full Text Available The prediction of debris flow dynamic characteristics in a debris-flow-prone torrent is generally made through the investigation of past events. This investigation can be carried out through a survey of the marks left by past debris flows along the channel and through a detailed analysis of the type and shape of the deposits found on the debris fan. The rheological behaviour of future debris flows can then be inferred from the results of these surveys, and their dynamic characteristics can be estimated by applying well-known formulas proposed in the literature, which make use of the assumptions on the rheological behaviour previously made. This type of estimation has been performed for a debris flow that occurred in an instrumented basin in the North-Eastern Italian Alps in 1996, and the results have been compared to those obtained by means of a mathematical simulation. For the calibration of the mathematical model, the limnographs recorded by three different ultrasonic gauges installed along a torrent reach on the fan were used. The comparison showed the importance of time data recordings for a correct prediction of debris flow dynamics. Without the availability of data recordings, the application of formulas based only on assumptions derived from field analysis could be misleading.

  13. Estimation of Groundwater Recharge in a Japanese Headwater Area by Intensive Collaboration of Field Survey and Modelling Work

    Science.gov (United States)

    Yano, S.; Kondo, H.; Tawara, Y.; Yamada, T.; Mori, K.; Yoshida, A.; Tada, K.; Tsujimura, M.; Tokunaga, T.

    2017-12-01

    It is important to understand groundwater systems, including their recharge, flow, storage, discharge, and withdrawal, so that we can use groundwater resources efficiently and sustainably. To examine groundwater recharge, several methods have been discussed based on water balance estimation, in situ experiments, and hydrological tracers. However, few studies have developed a concrete framework for quantifying groundwater recharge rates in an undefined area. In this study, we established a robust method to quantitatively determine water cycles and estimate the groundwater recharge rate by combining the advantages of field surveys and model simulations. We combined in situ hydrogeological observations and three-dimensional modeling in a mountainous basin area in Japan. We adopted a general-purpose terrestrial fluid-flow simulator (GETFLOWS) to develop a geological model and simulate the local water cycle. Local data relating to topology, geology, vegetation, land use, climate, and water use were collected from the existing literature and observations to assess the spatiotemporal variations of the water balance from 2011 to 2013. The characteristic structures of geology and soils, as found through field surveys, were parameterized for incorporation into the model. The simulated results were validated using observed groundwater levels and resulted in a Nash-Sutcliffe Model Efficiency Coefficient of 0.92. The results suggested that local groundwater flows across the watershed boundary and that the groundwater recharge rate, defined as the flux of water reaching the local unconfined groundwater table, has values similar to the level estimated in the lower soil layers on a long-term basis. This innovative method enables us to quantify the groundwater recharge rate and its spatiotemporal variability with high accuracy, which contributes to establishing a foundation for sustainable groundwater management.
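
    The Nash-Sutcliffe coefficient used above to validate the simulated groundwater levels has a standard definition, sketched here with invented observation/simulation values:

    ```python
    # Nash-Sutcliffe model efficiency: 1 means a perfect fit; 0 means the
    # model is no better than predicting the mean of the observations.

    def nash_sutcliffe(observed, simulated):
        """1 - (sum of squared residuals) / (variance of observations)."""
        mean_obs = sum(observed) / len(observed)
        sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
        ss_tot = sum((o - mean_obs) ** 2 for o in observed)
        return 1.0 - sse / ss_tot
    ```

    A value of 0.92, as reported, therefore means the model explains the bulk of the observed groundwater-level variance.
    
    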

  14. Efficiencies of Internet-based digital and paper-based scientific surveys and the estimated costs and time for different-sized cohorts.

    Directory of Open Access Journals (Sweden)

    Constantin E Uhlig

    Full Text Available To evaluate the relative efficiencies of five Internet-based digital and three paper-based scientific surveys and to estimate the costs for different-sized cohorts. Invitations to participate in a survey were distributed via e-mail to employees of two university hospitals (E1 and E2) and to members of a medical association (E3), as a link placed in a special text on the municipal homepage regularly read by the administrative employees of two cities (H1 and H2), and paper-based to workers at an automobile enterprise (P1) and to college (P2) and senior (P3) students. The main parameters analyzed included the numbers of invited and actual participants, and the time and cost to complete the survey. Statistical analysis was descriptive, except for the Kruskal-Wallis H-test, which was used to compare the three recruitment methods. Cost efficiencies were compared and extrapolated to different-sized cohorts. The ratios of completely answered questionnaires to distributed questionnaires were between 81.5% (E1) and 97.4% (P2). Between 6.4% (P1) and 57.0% (P2) of the invited participants completely answered the questionnaires. The costs per completely answered questionnaire were $0.57-$1.41 (E1-3), $1.70 and $0.80 for H1 and H2, respectively, and $3.36-$4.21 (P1-3). Based on our results, electronic surveys with 10, 20, 30, or 42 questions would be estimated to be most cost (and time) efficient if more than 101.6-225.9 (128.2-391.7), 139.8-229.2 (93.8-193.6), 165.8-230.6 (68.7-115.7), or 188.2-231.5 (44.4-72.7) participants were required, respectively. The study efficiency depended on the technical modalities of the survey methods and the engagement of the participants. Depending on our study design, our results suggest that in similar projects that will certainly have more than two to three hundred required participants, the most efficient way of conducting a questionnaire-based survey is likely via the Internet with a digital questionnaire, specifically via a centralized e-mail.
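
    The breakeven logic behind "most cost efficient if more than N participants were required" can be sketched with a simple fixed-plus-per-participant cost model. This model and the numbers in the example are assumptions for illustration; the study's cost accounting is more detailed.

    ```python
    # Hypothetical breakeven cohort size: the digital survey has higher fixed
    # setup cost but lower per-participant cost than the paper survey.

    def breakeven_n(fixed_digital, unit_digital, fixed_paper, unit_paper):
        """Cohort size above which the digital survey is cheaper overall.

        Solves fixed_digital + n*unit_digital = fixed_paper + n*unit_paper.
        """
        if unit_paper <= unit_digital:
            raise ValueError("paper must cost more per participant to break even")
        return (fixed_digital - fixed_paper) / (unit_paper - unit_digital)
    ```

    With, say, $500 setup and $1 per response digitally versus $100 setup and $4 per response on paper, the digital survey wins beyond about 133 participants, the same shape of threshold the abstract reports.
    
    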

  15. Contraceptive failure rates: new estimates from the 1995 National Survey of Family Growth.

    Science.gov (United States)

    Fu, H; Darroch, J E; Haas, T; Ranjit, N

    1999-01-01

    Unintended pregnancy remains a major public health concern in the United States. Information on pregnancy rates among contraceptive users is needed to guide medical professionals' recommendations and individuals' choices of contraceptive methods. Data were taken from the 1995 National Survey of Family Growth (NSFG) and the 1994-1995 Abortion Patient Survey (APS). Hazards models were used to estimate method-specific contraceptive failure rates during the first six months and during the first year of contraceptive use for all U.S. women. In addition, rates were corrected to take into account the underreporting of induced abortion in the NSFG. Corrected 12-month failure rates were also estimated for subgroups of women by age, union status, poverty level, race or ethnicity, and religion. When contraceptive methods are ranked by effectiveness over the first 12 months of use (corrected for abortion underreporting), the implant and injectables have the lowest failure rates (2-3%), followed by the pill (8%), the diaphragm and the cervical cap (12%), the male condom (14%), periodic abstinence (21%), withdrawal (24%) and spermicides (26%). In general, failure rates are highest among cohabiting and other unmarried women, among those with an annual family income below 200% of the federal poverty level, among black and Hispanic women, among adolescents and among women in their 20s. For example, adolescent women who are not married but are cohabiting experience a failure rate of about 31% in the first year of contraceptive use, while the 12-month failure rate among married women aged 30 and older is only 7%. Black women have a contraceptive failure rate of about 19%, and this rate does not vary by family income; in contrast, overall 12-month rates are lower among Hispanic women (15%) and white women (10%), but vary by income, with poorer women having substantially greater failure rates than more affluent women. 
Levels of contraceptive failure vary widely by method, as well as by

  16. The study of mean glandular dose in mammography in Yazd and the factors affecting it

    International Nuclear Information System (INIS)

    Bouzarjomehri, F.; Mostaar, A.; Ghasemi, A.; Ehramposh, M. H.; Khosravi, H.

    2006-01-01

    The objective of this study was to determine the mean glandular dose resulting from mammography examinations in Yazd, southeastern Iran, and to identify the factors affecting it. Patients and Methods: This survey was conducted during May to December 2005 to estimate the mean glandular dose for women undergoing mammography and to report the distribution of dose, compressed breast thickness, glandular tissue content, and mammography technique used. The clinical data were collected from 946 mammograms taken from 246 women who were referred to four mammography centers. The mammography instruments in these centers were four modern units with a molybdenum anode and either a molybdenum or rhodium filter. The exposure conditions of each mammogram were recorded. The breast glandular content of each mammogram was estimated by a radiologist. The mean glandular dose was calculated based on measuring the normalized entrance skin dose in air, half-value layer, kVp, mAs, breast thickness, and glandular content. Half-value layer, kVp, and entrance skin dose were measured by a solid-state detector. The analytical method of Sobol et al. was used for calculation of the mean glandular dose. Results: The mean±SD mean glandular dose per film was 1.2±0.6 mGy for craniocaudal and 1.63±0.9 mGy for mediolateral oblique views. The mean±SD mean glandular dose per woman was 5.5±3.1 mGy. A positive correlation was found between the beam half-value layer and mean glandular dose (r=0.38) and between breast thickness and mean glandular dose (r=0.5). Conclusion: The mean±SD mean glandular dose per film of 1.42±0.8 mGy in the present study was lower than in most similar reports; however, the mean glandular dose per woman was higher than that in other studies

  17. Methodological issues in the estimation of parental time – Analysis of measures in a Canadian time-use survey

    OpenAIRE

    Cara B. Fedick; Shelley Pacholok; Anne H. Gauthier

    2005-01-01

    Extensive small scale studies have documented that when people assume the role of assisting a person with impairments or an older person, care activities account for a significant portion of their daily routines. Nevertheless, little research has investigated the problem of measuring the time that carers spend in care-related activities. This paper contrasts two different measures of care time – an estimated average weekly hours question in the 1998 Australian Survey of Disability, Ageing and...

  18. Multiobjective Traffic Signal Control Model for Intersection Based on Dynamic Turning Movements Estimation

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2014-01-01

    Full Text Available The real-time traffic signal control for an intersection requires dynamic turning movements as the basic input data. It is impossible to detect dynamic turning movements directly through current traffic surveillance systems, but dynamic origin-destination (O-D) estimation can obtain them. However, combined models of dynamic O-D estimation and real-time traffic signal control are rare in the literature. A framework for a multiobjective traffic signal control model for an intersection based on dynamic O-D estimation (MSC-DODE) is presented. A state-space model using Kalman filtering is first formulated to estimate the dynamic turning movements; then a revised sequential Kalman filtering algorithm is designed to solve the model, and the root mean square error and mean percentage error are used to evaluate the accuracy of estimated dynamic turning proportions. Furthermore, a multiobjective traffic signal control model is put forward to achieve real-time signal control parameters and evaluation indices. Finally, based on practical survey data, the evaluation indices from MSC-DODE are compared with those from the Webster method. The actual and estimated turning movements are further input into MSC-DODE, respectively, and the results are also compared. Case studies show that the results of MSC-DODE are better than those of the Webster method and are very close to the unavailable actual values.
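
    The two accuracy measures named above, root mean square error and mean percentage error, have standard forms, sketched here generically (not the paper's code):

    ```python
    # Standard accuracy measures for estimated vs. actual turning movements.

    def rmse(actual, estimated):
        """Root mean square error."""
        n = len(actual)
        return (sum((a - e) ** 2 for a, e in zip(actual, estimated)) / n) ** 0.5

    def mean_percentage_error(actual, estimated):
        """Signed mean percentage error (positive = underestimation here)."""
        n = len(actual)
        return 100.0 * sum((a - e) / a for a, e in zip(actual, estimated)) / n
    ```
    
    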

  19. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  20. Mean of Microaccelerations Estimate in the Small Spacecraft Internal Environment with the Use of Fuzzy Sets

    Science.gov (United States)

    Sedelnikov, A. V.

    2018-05-01

    Assessment of the parameters of rotary motion of the small spacecraft around its center of mass, and of microaccelerations, is carried out using measurements of current from silicon photocells. At the same time, there is a problem of interpreting ambiguous telemetric data, since the current from two opposite sides of the small spacecraft is significant. A means of removing this uncertainty, based on a fuzzy set, is considered. As the membership function, it is proposed to use a normality condition on the direction cosines. An example of uncertainty removal for a prototype of the Aist small spacecraft is given. The proposed approach can significantly increase the accuracy of microacceleration estimates when using measurements of current from silicon photocells.
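
    The ambiguity-removal idea, keeping the telemetry interpretation whose direction cosines best satisfy the normality condition l² + m² + n² = 1, might be sketched as below. The specific membership-function form is an assumption made here for illustration, not the paper's formula.

    ```python
    # Resolve an interpretation ambiguity by a fuzzy "normality" membership:
    # candidates closer to satisfying l^2 + m^2 + n^2 = 1 score higher.

    def resolve_ambiguity(candidates):
        """Return the direction-cosine triple with the highest membership."""
        def membership(c):
            l, m, n = c
            return 1.0 / (1.0 + abs(l * l + m * m + n * n - 1.0))
        return max(candidates, key=membership)
    ```
    
    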

  1. Estimation of Leakage Potential of Selected Sites in Interstate and Tri-State Canals Using Geostatistical Analysis of Selected Capacitively Coupled Resistivity Profiles, Western Nebraska, 2004

    Science.gov (United States)

    Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.

    2009-01-01

    With increasing demands for reliable water supplies and availability estimates, groundwater-flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for the groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated from the simple vertical mean of inverse-model resistivity values over depth levels whose layer thickness increases geometrically with depth, which biased the mean-resistivity values towards the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method to compute a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model. The minimum-unadjusted method considers the effects of homogeneous confining units. The minimum-adjusted method also is developed to incorporate the effect of local lithologic heterogeneity on water
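
    The contrast between the vertical-mean approach and the minimum-resistivity (minimum-unadjusted) approach can be illustrated with a small sketch; the resistivity columns below are invented, not survey data. A low-resistivity confining layer deep in a column barely moves the mean but is captured directly by the minimum.

    ```python
    # Two per-column indicators of relative leakage potential:
    # the vertical mean of resistivities, and the column minimum
    # (which flags a confining unit anywhere in the column).

    def column_leakage_indicators(resistivity_columns):
        """Return (vertical mean, column minimum) for each column."""
        return [(sum(col) / len(col), min(col)) for col in resistivity_columns]
    ```
    
    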

  2. Recent Trends in Veteran Unemployment as Measured in the Current Population Survey and the American Community Survey

    National Research Council Canada - National Science Library

    Savych, Bogdan; Klerman, Jacob A; Loughran, David S

    2008-01-01

    This technical report explores recent trends in the unemployment of recent veterans as estimated from two nationally representative surveys, the Current Population Survey (CPS) and the American Community Survey (ACS)...

  3. Improved infrared precipitation estimation approaches based on k-means clustering: Application to north Algeria using MSG-SEVIRI satellite data

    Science.gov (United States)

    Mokdad, Fatiha; Haddad, Boualem

    2017-06-01

    In this paper, two new infrared precipitation estimation approaches based on the concept of k-means clustering are first proposed, named the NAW-Kmeans and the GPI-Kmeans methods. Then, they are adapted to the southern Mediterranean basin, where the subtropical climate prevails. The infrared data (10.8 μm channel) acquired by the MSG-SEVIRI sensor in winter and spring 2012 are used. Tests are carried out in eight areas distributed over northern Algeria: Sebra, El Bordj, Chlef, Blida, Bordj Menael, Sidi Aich, Beni Ourthilane, and Beni Aziz. The validation is performed by comparing the estimated rainfalls to rain gauge observations collected by the National Office of Meteorology in Dar El Beida (Algeria). Despite the complexity of the subtropical climate, the obtained results indicate that the NAW-Kmeans and GPI-Kmeans approaches gave satisfactory results for the considered rain rates. Also, the proposed schemes lead to improved precipitation estimation performance when compared to the original NAW (Negri, Adler, and Wetzel) and GPI (GOES Precipitation Index) algorithms.
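
    A toy one-dimensional k-means of the kind underlying such schemes, applied here to brightness-temperature-like values (the simple initialization and the numbers are illustrative assumptions, not the paper's algorithm):

    ```python
    # Tiny 1-D k-means: group scalar values (e.g. IR brightness temperatures)
    # into k clusters by alternating assignment and centroid updates.

    def kmeans_1d(values, k, iters=50):
        # crude initialization: evenly spaced picks from the sorted values
        centers = sorted(values)[::max(1, len(values) // k)][:k]
        for _ in range(iters):
            clusters = [[] for _ in centers]
            for v in values:
                i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
                clusters[i].append(v)
            centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        return sorted(centers)
    ```

    Clusters of cold cloud-top temperatures can then each be assigned their own rain-rate relationship, which is the refinement over the single-threshold NAW and GPI algorithms that the abstract describes.
    
    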

  4. CAN DUST EMISSION BE USED TO ESTIMATE THE MASS OF THE INTERSTELLAR MEDIUM IN GALAXIES-A PILOT PROJECT WITH THE HERSCHEL REFERENCE SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Eales, Stephen; Smith, Matthew W. L.; Auld, Robbie; Davies, Jon; Gear, Walter; Gomez, Haley [School of Physics and Astronomy, Cardiff University, Queens Buildings, The Parade, Cardiff CF24 3AA (United Kingdom); Baes, Maarten; De Looze, Ilse; Gentile, Gianfranco; Fritz, Jacopo [Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, B-9000 Gent (Belgium); Bendo, George J. [UK ALMA Regional Centre Node, Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, University of Manchester, Oxford Road, Manchester M13 9PL (United Kingdom); Bianchi, Simone [INAF-Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125 Firenze (Italy); Boselli, Alessandro; Ciesla, Laure [Laboratoire d' Astrophysique de Marseilles, UMR6110 CNRS, 38 rue F. Joliot-Curie, F-1338 Marseilles (France); Clements, David [Astrophysics Group, Imperial College, Blackett Lab, Prince Consort Road, London SW7 2AZ (United Kingdom); Cooray, Asantha [Department of Physics and Astronomy, University of California, Irvine, CA 92697 (United States); Cortese, Luca [European Southern Observatory, Karl-Schwarzschild-Strasse 2 D-85748, Garching bei Munchen (Germany); Galametz, Maud [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Hughes, Tom [Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871 (China); Madden, Suzanne [Laboratoire AIM, CEA/DSM-CNRS-Universite Paris Diderot, Irfu/Service d' Astrophysique, F-91191 Gif sur Yvette (France); and others

    2012-12-20

    The standard method for estimating the mass of the interstellar medium (ISM) in a galaxy is to use the 21 cm line to trace the atomic gas and the CO 1-0 line to trace the molecular gas. In this paper, we investigate the alternative technique of using the continuum dust emission to estimate the mass of gas in all phases of the ISM. Using Herschel observations of 10 galaxies from the Herschel Reference Survey and the Herschel Virgo Cluster Survey, we show that the emission detected by Herschel is mostly from dust that has a temperature and emissivity index similar to that of dust in the local ISM in our galaxy, with the temperature generally increasing toward the center of each galaxy. We calibrate the dust method using the CO and 21 cm observations to provide an independent estimate of the mass of hydrogen in each galaxy, solving the problem of the uncertain "X-factor" for the CO observations by minimizing the dispersion in the ratio of the masses estimated using the two methods. With the calibration for the dust method and the estimate of the X-factor produced in this way, the dispersion in the ratio of the two gas masses is 25%. The calibration we obtain for the dust method is similar to those obtained from Herschel observations of M31 and from Planck observations of the Milky Way. We discuss the practical problems in using this method.
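
    The calibration idea — picking the X-factor that minimizes the dispersion in the ratio of dust-based to (HI + CO)-based gas masses — can be sketched on synthetic data. This is a simplified stand-in for the paper's method; all quantities, the 5% scatter, and the grid search are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 30
    true_x = 2.0                                  # X-factor used to build the fake sample
    m_hi = rng.uniform(1.0, 10.0, n)              # 21 cm (atomic) gas mass, arbitrary units
    l_co = rng.uniform(0.5, 5.0, n)               # CO 1-0 luminosity, arbitrary units
    # Dust tracer proportional to total gas, with 5% lognormal scatter
    dust = 0.01 * (m_hi + true_x * l_co) * rng.lognormal(0.0, 0.05, n)

    # Choose the X-factor that minimizes the dispersion of the log mass ratio,
    # then read off the dust-method calibration as the mean ratio.
    xs = np.linspace(0.1, 5.0, 200)
    disp = [np.std(np.log(dust / (m_hi + x * l_co))) for x in xs]
    x_best = float(xs[int(np.argmin(disp))])
    calib = float(np.mean(dust / (m_hi + x_best * l_co)))
    ```

    The recovered `x_best` sits close to the value used to generate the sample, and `calib` plays the role of the dust-to-gas calibration constant.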

  5. An Additive-Multiplicative Restricted Mean Residual Life Model

    DEFF Research Database (Denmark)

    Mansourvar, Zahra; Martinussen, Torben; Scheike, Thomas H.

    2016-01-01

    We propose an additive-multiplicative restricted mean residual life model to study the association between the restricted mean residual life function and potential regression covariates in the presence of right censoring. This model extends the proportional mean residual life model using an additive model as its covariate-dependent baseline. For the suggested model, some covariate effects are allowed to be time-varying. To estimate the model parameters, martingale estimating equations are developed, and the large-sample properties of the resulting estimators are established. In addition, to assess the adequacy of the model, we investigate a goodness-of-fit test…

  6. Estimating Classification Errors Under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    Directory of Open Access Journals (Sweden)

    Boeschoten Laura

    2017-12-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible combinations with scores on other variables. Furthermore, the latent class model, by multiply imputing a new variable, enhances the quality of statistics based on the composite data set. The performance of this method is investigated by a simulation study, which shows that whether or not the method can be applied depends on the entropy R² of the latent class model and the type of analysis a researcher is planning to do. Finally, the method is applied to public data from Statistics Netherlands.

  7. Diabetes incidence and projections from prevalence surveys in Fiji.

    Science.gov (United States)

    Morrell, Stephen; Lin, Sophia; Tukana, Isimeli; Linhart, Christine; Taylor, Richard; Vatucawaqa, Penina; Magliano, Dianna J; Zimmet, Paul

    2016-11-25

    Type 2 diabetes mellitus (T2DM) incidence is traditionally derived from cohort studies that are not always feasible, representative, or available. The present study estimates T2DM incidence in Fijian adults from T2DM prevalence estimates assembled from surveys of 25-64 year old adults conducted over 30 years (n = 14,288). T2DM prevalence by five-year age group from five population-based risk factor surveys conducted over 1980-2011 were variously adjusted for urban-rural residency, ethnicity, and sex to previous censuses (1976, 1986, 1996, 2009) to improve representativeness. Prevalence estimates were then used to calculate T2DM incidence based on birth cohorts from the age-period (Lexis) matrix following the Styblo technique, first used to estimate annual risk of tuberculosis infection (incidence) from sequential Mantoux population surveys. Poisson regression of year, age, sex, and ethnicity strata (n = 160) was used to develop projections of T2DM prevalence and incidence to 2020 based on various scenarios of population weight measured by body mass index (BMI) change. T2DM prevalence and annual incidence increased in Fiji over 1980-2011. Prevalence was higher in Indians and men than i-Taukei and women. Incidence was higher in Indians and women. From regression analyses, absolute reductions of 2.6 to 5.1% in T2DM prevalence (13-26% lower), and 0.5-0.9 per 1000 person-years in incidence (8-14% lower), could be expected in 2020 in adults if mean population weight could be reduced by 1-4 kg, compared to the current period trend in weight gain. This is the first application of the Styblo technique to calculate T2DM incidence from population-based prevalence surveys over time. Reductions in population BMI are predicted to reduce T2DM incidence and prevalence in Fiji among adults aged 25-64 years.
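
    The birth-cohort logic behind the Styblo-style calculation can be sketched as follows. This is a simplified illustration, not the paper's full age-period analysis: it assumes T2DM is irreversible and the hazard is constant within the interval, and the prevalence figures are invented.

    ```python
    import math

    def annual_incidence(p1, p2, years):
        """Annual incidence rate for a birth cohort whose prevalence rose from
        p1 to p2 over `years`, assuming an irreversible condition with a
        constant hazard: (1 - p2) = (1 - p1) * exp(-i * years)."""
        if not (0 <= p1 <= p2 < 1):
            raise ValueError("need 0 <= p1 <= p2 < 1")
        return -math.log((1 - p2) / (1 - p1)) / years

    # Illustrative numbers only: a cohort aged 30-34 in one survey and 40-44
    # in a survey conducted ten years later.
    rate = annual_incidence(0.05, 0.12, 10)       # per person-year
    per_1000 = 1000 * rate                        # per 1000 person-years
    ```

    Applying this cell by cell across the age-period (Lexis) matrix of sequential surveys yields the incidence series the abstract describes.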

  8. The importance of estimating selection bias on prevalence estimates shortly after a disaster.

    NARCIS (Netherlands)

    Grievink, Linda; Velden, Peter G van der; Yzermans, C Joris; Roorda, Jan; Stellato, Rebecca K

    2006-01-01

    PURPOSE: The aim was to study selective participation and its effect on prevalence estimates in a health survey of affected residents 3 weeks after a man-made disaster in The Netherlands (May 13, 2000). METHODS: All affected adult residents were invited to participate. Survey (questionnaire) data

  10. Issues in environmental survey design

    International Nuclear Information System (INIS)

    Iachan, R.

    1989-01-01

    Several environmental survey design issues are discussed and illustrated with surveys designed by Research Triangle Institute statisticians. Issues related to sampling and nonsampling errors are illustrated for indoor air quality surveys, radon surveys, pesticide surveys, and occupational and personal exposure surveys. Sample design issues include the use of auxiliary information (e.g., for stratification) and sampling in time. We also discuss the reduction and estimation of nonsampling errors, including nonresponse and measurement bias.

  11. Arecibo pulsar survey using ALFA. III. Precursor survey and population synthesis

    Energy Technology Data Exchange (ETDEWEB)

    Swiggum, J. K.; Lorimer, D. R.; McLaughlin, M. A.; Bates, S. D.; Senty, T. R. [Department of Physics and Astronomy, West Virginia University, Morgantown, WV 26506 (United States); Champion, D. J.; Lazarus, P. [Max-Planck-Institut für Radioastronomie, D-53121 Bonn (Germany); Ransom, S. M. [NRAO, Charlottesville, VA 22903 (United States); Brazier, A.; Chatterjee, S.; Cordes, J. M. [Astronomy Department, Cornell University, Ithaca, NY 14853 (United States); Hessels, J. W. T. [ASTRON, Netherlands Institute for Radio Astronomy, Postbus 2, 7990 AA, Dwingeloo (Netherlands); Nice, D. J. [Department of Physics, Lafayette College, Easton, PA 18042 (United States); Ellis, J.; Allen, B. [Physics Department, University of Wisconsin-Milwaukee, Milwaukee WI 53211 (United States); Bhat, N. D. R. [Center for Astrophysics and Supercomputing, Swinburne University, Hawthorn, Victoria 3122 (Australia); Bogdanov, S.; Camilo, F. [Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027 (United States); Crawford, F. [Department of Physics and Astronomy, Franklin and Marshall College, Lancaster, PA 17604-3003 (United States); Deneva, J. S. [Arecibo Observatory, HC3 Box 53995, Arecibo, PR 00612 (United States); and others

    2014-06-01

    The Pulsar Arecibo L-band Feed Array (PALFA) Survey uses the ALFA 7-beam receiver to search both inner and outer Galactic sectors visible from Arecibo (32° ≲ ℓ ≲ 77° and 168° ≲ ℓ ≲ 214°) close to the Galactic plane (|b| ≲ 5°) for pulsars. The PALFA survey is sensitive to sources fainter and more distant than have previously been seen because of Arecibo's unrivaled sensitivity. In this paper we detail a precursor survey of this region with PALFA, which observed a subset of the full region (slightly more restrictive in ℓ and |b| ≲ 1°) and detected 45 pulsars. Detections included 1 known millisecond pulsar and 11 previously unknown, long-period pulsars. In the surveyed part of the sky that overlaps with the Parkes Multibeam Pulsar Survey (36° ≲ ℓ ≲ 50°), PALFA is probing deeper than the Parkes survey, with four discoveries in this region. For both Galactic millisecond and normal pulsar populations, we compare the survey's detections with simulations to model these populations and, in particular, to estimate the number of observable pulsars in the Galaxy. We place 95% confidence intervals of 82,000 to 143,000 on the number of detectable normal pulsars and 9000 to 100,000 on the number of detectable millisecond pulsars in the Galactic disk. These are consistent with previous estimates. Given the most likely population size in each case (107,000 and 15,000 for normal and millisecond pulsars, respectively), we extend survey detection simulations to predict that, when complete, the full PALFA survey should have detected 1000 (+330/−230) normal pulsars and 30 (+200/−20) millisecond pulsars. Identical estimation techniques predict that 490 (+160/−115) normal pulsars and 12 (+70/−5) millisecond pulsars would be detected by the beginning of 2014; at the time, the PALFA survey had detected 283 normal pulsars and 31 millisecond pulsars, respectively. We attribute the deficiency in normal pulsar

  12. Refusal bias in the estimation of HIV prevalence

    NARCIS (Netherlands)

    Janssens, Wendy; van der Gaag, Jacques; Rinke de Wit, Tobias F.; Tanović, Zlata

    2014-01-01

    In 2007, UNAIDS corrected estimates of global HIV prevalence downward from 40 million to 33 million based on a methodological shift from sentinel surveillance to population-based surveys. Since then, population-based surveys are considered the gold standard for estimating HIV prevalence. However,

  13. A comparison of dietary estimates from the National Aboriginal and Torres Strait Islander Health Survey to food and beverage purchase data.

    Science.gov (United States)

    McMahon, Emma; Wycherley, Thomas; O'Dea, Kerin; Brimblecombe, Julie

    2017-12-01

    We compared self-reported dietary intake from the very remote sample of the National Aboriginal and Torres Strait Islander Nutrition and Physical Activity Survey (VR-NATSINPAS; n = 1,363) to one year of food and beverage purchases from 20 very remote Indigenous Australian communities (servicing ∼8,500 individuals). Differences in food (% energy from food groups) and nutrients were analysed using a t-test with unequal variance. Per-capita energy estimates were not significantly different between the surveys (899 MJ/person/day [95% confidence interval -152, 1950]; p = 0.094). Self-reported intakes of sugar, cereal products/dishes, beverages, fats/oils, milk products/dishes and confectionery were significantly lower than that purchased, while intakes of meat, vegetables, cereal-based dishes, fish, fruit and eggs were significantly higher. These purchase data could be used to monitor food and nutrient availability in this population longitudinally; however, further evidence is needed on approaches to estimate wastage and foods sourced outside the store. There is potential for these data to complement each other to inform nutrition policies and programs in this population. © 2017 Menzies School of Health Research.

  14. Multiple imputation to account for missing data in a survey: estimating the prevalence of osteoporosis.

    Science.gov (United States)

    Kmetic, Andrew; Joseph, Lawrence; Berger, Claudie; Tenenhouse, Alan

    2002-07-01

    Nonresponse bias is a concern in any epidemiologic survey in which a subset of selected individuals declines to participate. We reviewed multiple imputation, a widely applicable and easy-to-implement Bayesian methodology to adjust for nonresponse bias. To illustrate the method, we used data from the Canadian Multicentre Osteoporosis Study, a large cohort study of 9423 randomly selected Canadians, designed in part to estimate the prevalence of osteoporosis. Although subjects were randomly selected, only 42% of individuals who were contacted agreed to participate fully in the study. The study design included a brief questionnaire for those invitees who declined further participation in order to collect information on the major risk factors for osteoporosis. These risk factors (which included age, sex, previous fractures, family history of osteoporosis, and current smoking status) were then used to estimate the missing osteoporosis status of nonparticipants using multiple imputation. Both ignorable and nonignorable imputation models are considered. Our results suggest that selection bias in the study is of concern, but only slightly, among the very elderly (age 80+ years), both women and men. Epidemiologists should consider using multiple imputation more often than is current practice.
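
    A minimal sketch of the idea (not the study's actual imputation model; the risk factor, response rates, and prevalences below are invented): impute each nonrespondent's binary disease status from respondents in the same stratum, repeat M times, and pool with Rubin's rules.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 2000
    age80 = rng.integers(0, 2, n)                 # hypothetical risk factor: 1 = aged 80+
    true_p = np.where(age80 == 1, 0.30, 0.10)     # invented stratum prevalences
    y = rng.binomial(1, true_p).astype(float)
    respondent = rng.random(n) < np.where(age80 == 1, 0.3, 0.5)  # selective nonresponse
    y_obs = np.where(respondent, y, np.nan)

    # Multiple imputation under an ignorable (missing-at-random given age) model
    M = 20
    estimates, variances = [], []
    for _ in range(M):
        y_imp = y_obs.copy()
        for g in (0, 1):
            obs = respondent & (age80 == g)
            miss = ~respondent & (age80 == g)
            # Posterior draw of the stratum prevalence (Beta with uniform prior)
            p_draw = rng.beta(1 + y_obs[obs].sum(), 1 + (1 - y_obs[obs]).sum())
            y_imp[miss] = rng.binomial(1, p_draw, miss.sum())
        est = float(y_imp.mean())
        estimates.append(est)
        variances.append(est * (1 - est) / n)     # within-imputation variance of a proportion

    # Rubin's rules: total variance = within + (1 + 1/M) * between
    q_bar = float(np.mean(estimates))
    t_var = float(np.mean(variances) + (1 + 1 / M) * np.var(estimates, ddof=1))
    ```

    Because the imputation model conditions on the risk factor driving nonresponse, the pooled prevalence `q_bar` recovers the full-population value despite the selective dropout.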

  15. Estimating daily minimum, maximum, and mean near surface air temperature using hybrid satellite models across Israel.

    Science.gov (United States)

    Rosenfeld, Adar; Dorman, Michael; Schwartz, Joel; Novack, Victor; Just, Allan C; Kloog, Itai

    2017-11-01

    Meteorological stations measure air temperature (Ta) accurately with high temporal resolution, but usually suffer from limited spatial resolution due to their sparse distribution across rural, undeveloped or less populated areas. Remote sensing satellite-based measurements provide daily surface temperature (Ts) data in high spatial and temporal resolution and can improve the estimation of daily Ta. In this study we developed spatiotemporally resolved models which allow us to predict three daily parameters: Ta Max (daytime), 24 h mean, and Ta Min (nighttime) on a fine 1 km grid across the state of Israel. We used and compared both the Aqua and Terra MODIS satellites. We used linear mixed effect models, IDW (inverse distance weighted) interpolations and thin plate splines (using a smooth nonparametric function of longitude and latitude) to first calibrate between Ts and Ta in those locations where we have available data for both and used that calibration to fill in neighboring cells without surface monitors or missing Ts. Out-of-sample ten-fold cross validation (CV) was used to quantify the accuracy of our predictions. Our model performance was excellent for both days with and without available Ts observations for both Aqua and Terra (CV Aqua R² results for min 0.966, mean 0.986, and max 0.967; CV Terra R² results for min 0.965, mean 0.987, and max 0.968). Our research shows that daily min, mean and max Ta can be reliably predicted using daily MODIS Ts data even across Israel, with high accuracy even for days without Ta or Ts data. These predictions can be used as three separate Ta exposures in epidemiology studies for better diurnal exposure assessment. Copyright © 2017 Elsevier Inc. All rights reserved.
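
    One ingredient named above, IDW interpolation, is easy to sketch. This is a generic implementation, not the paper's model; the station coordinates and temperatures are invented.

    ```python
    import numpy as np

    def idw(known_xy, known_vals, query_xy, power=2):
        """Inverse-distance-weighted interpolation of station temperatures."""
        # Pairwise distances between every query point and every station
        d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=2)
        d = np.maximum(d, 1e-9)                   # avoid division by zero at stations
        w = 1.0 / d ** power
        return (w * known_vals[None, :]).sum(axis=1) / w.sum(axis=1)

    # Hypothetical stations (km coordinates) and their daily-max Ta (deg C)
    stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    ta_max = np.array([30.0, 26.0, 28.0])
    grid = np.array([[5.0, 5.0], [0.0, 0.0]])     # grid cell centre + a station location
    pred = idw(stations, ta_max, grid)
    ```

    The point equidistant from all three stations gets their plain average, while a query at a station location reproduces that station's value.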

  16. The effect of non-response on estimates of health care utilisation

    DEFF Research Database (Denmark)

    Gundgaard, Jens; Ekholm, Orla; Hansen, Ebba Holme

    2008-01-01

    BACKGROUND: Non-response in health surveys may lead to bias in estimates of health care utilisation. The magnitude, direction and composition of the bias are usually not well known. When data from health surveys are merged with data from registers at the individual level, analyses can reveal non-response bias. Our aim was to estimate the composition, direction and magnitude of non-response bias in the estimation of health care costs in two types of health interview surveys. METHODS: The surveys were (1) a national personal interview survey of 22 484 Danes and (2) a telephone interview survey of 5000 Danes living in Funen County. Data were linked with register information on health care utilisation in hospitals and primary care. Health care utilisation was estimated for respondents and non-respondents, and the difference was explained by a decomposition method of bias components. RESULTS: The surveys…

  17. Comparison of Reef Fish Survey Data Gathered by Open and Closed Circuit SCUBA Divers Reveals Differences in Areas With Higher Fishing Pressure.

    Directory of Open Access Journals (Sweden)

    Andrew E Gray

    Visual survey by divers using open-circuit (OC) SCUBA is the most widely used approach to survey coral reef fishes. It is therefore important to quantify sources of bias in OC surveys, such as the possibility that avoidance of OC divers by fishes leads to undercounting in areas where targeted species have come to associate divers with a risk of being speared. One potential way to reduce diver avoidance is to utilize closed-circuit rebreathers (CCRs), which do not produce the noise and bubbles that are a major source of disturbance associated with OC diving. For this study, we conducted 66 paired OC and CCR fish surveys in the Main Hawaiian Islands at locations with relatively high, moderate, and light fishing pressure. We found no significant differences in biomass estimates between OC and CCR surveys when data were pooled across all sites; however, there were differences at the most heavily fished location, Oahu. There, biomass estimates from OC divers were significantly lower for several targeted fish groups, including surgeonfishes, targeted wrasses, and snappers, as well as for all targeted fishes combined, with mean OC biomass between 32 and 68% of mean CCR biomass. There were no clear differences between OC and CCR biomass estimates for these groups at sites with moderate or low fishing pressure, or at any location for other targeted fish groups, including groupers, parrotfishes, and goatfishes. Bias associated with avoidance of OC divers at heavily fished locations could be substantially reduced, or at least calibrated for, by utilization of CCR. In addition to being affected by fishing pressure, the extent to which avoidance of OC divers is problematic for visual surveys varies greatly among taxa, and is likely to be highly influenced by the survey methodology and dimensions used.

  18. Salton Trough regional deformation estimated from combined trilateration and survey-mode GPS data

    Science.gov (United States)

    Anderson, G.; Agnew, D.C.; Johnson, H.O.

    2003-01-01

    The Salton Trough in southeastern California, United States, has one of the highest seismicity and deformation rates in southern California, including 20 earthquakes M 6 or larger since 1892. From 1972 through 1987, the U.S. Geological Survey (USGS) measured a 41-station trilateration network in this region. We remeasured 37 of the USGS baselines using survey-mode Global Positioning System methods from 1995 through 1999. We estimate the Salton Trough deformation field over a nearly 30-year period through combined analysis of baseline length time series from these two datasets. Our primary result is that strain accumulation has been steady over our observation span, at a resolution of about 0.05 µstrain/yr at 95% confidence, with no evidence for significant long-term strain transients despite the occurrence of seven large regional earthquakes during our observation period. Similar to earlier studies, we find that the regional strain field is consistent with 0.5 ± 0.03 µstrain/yr total engineering shear strain along an axis oriented 311.6° ± 23° east of north, approximately parallel to the strike of the major regional faults, the San Andreas and San Jacinto (all uncertainties in the text and tables are standard deviations unless otherwise noted). We also find that (1) the shear strain rate near the San Jacinto fault is at least as high as it is near the San Andreas fault, (2) the areal dilatation near the southeastern Salton Sea is significant, and (3) one station near the southeastern Salton Sea moved anomalously during the period 1987.95-1995.11.

  19. Binational Arsenic Exposure Survey: Methodology and Estimated Arsenic Intake from Drinking Water and Urinary Arsenic Concentrations

    Directory of Open Access Journals (Sweden)

    Robin B. Harris

    2012-03-01

    The Binational Arsenic Exposure Survey (BAsES) was designed to evaluate probable arsenic exposures in selected areas of southern Arizona and northern Mexico, two regions with known elevated levels of arsenic in groundwater reserves. This paper describes the methodology of BAsES and the relationship between estimated arsenic intake from beverages and arsenic output in urine. Households from eight communities were selected for their varying groundwater arsenic concentrations in Arizona, USA and Sonora, Mexico. Adults responded to questionnaires and provided dietary information. A first morning urine void and water from all household drinking sources were collected. Associations between urinary arsenic concentration (total, organic, inorganic) and the estimated level of arsenic consumed from water and other beverages were evaluated through crude associations and by random effects models. Median estimated total arsenic intake from beverages among participants from Arizona communities ranged from 1.7 to 14.1 µg/day compared to 0.6 to 3.4 µg/day among those from Mexico communities. In contrast, median urinary inorganic arsenic concentrations were greatest among participants from Hermosillo, Mexico (6.2 µg/L), whereas a high of 2.0 µg/L was found among participants from Ajo, Arizona. Estimated arsenic intake from drinking water was associated with urinary total arsenic concentration (p < 0.001), urinary inorganic arsenic concentration (p < 0.001), and urinary sum of species (p < 0.001). Urinary arsenic concentrations increased between 7% and 12% for each one percent increase in arsenic consumed from drinking water. Variability in arsenic intake from beverages and urinary arsenic output yielded counterintuitive results: estimated intake of arsenic from all beverages was greatest among Arizonans, yet participants in Mexico had higher urinary total and inorganic arsenic concentrations. Other contributors to urinary arsenic concentrations should be evaluated.

  20. An emperor penguin population estimate: the first global, synoptic survey of a species from space.

    Science.gov (United States)

    Fretwell, Peter T; Larue, Michelle A; Morin, Paul; Kooyman, Gerald L; Wienecke, Barbara; Ratcliffe, Norman; Fox, Adrian J; Fleming, Andrew H; Porter, Claire; Trathan, Phil N

    2012-01-01

    Our aim was to estimate the population of emperor penguins (Aptenodytes forsteri) using a single synoptic survey. We examined the whole continental coastline of Antarctica using a combination of medium resolution and Very High Resolution (VHR) satellite imagery to identify emperor penguin colony locations. Where colonies were identified, VHR imagery was obtained in the 2009 breeding season. The remotely-sensed images were then analysed using a supervised classification method to separate penguins from snow, shadow and guano. Actual counts of penguins from eleven ground truthing sites were used to convert these classified areas into numbers of penguins using a robust regression algorithm. We found four new colonies and confirmed the location of three previously suspected sites, giving a total of 46 emperor penguin breeding colonies. We estimated the breeding population of emperor penguins at each colony during 2009 and provide a population estimate of ~238,000 breeding pairs (compared with the last previously published count of 135,000-175,000 pairs). Based on published values of the relationship between breeders and non-breeders, this translates to a total population of ~595,000 adult birds. There is a growing consensus in the literature that global and regional emperor penguin populations will be affected by changing climate, a driver thought to be critical to their future survival. However, a complete understanding is severely limited by the lack of detailed knowledge about much of their ecology, and importantly by a poor understanding of their total breeding population. To address the second of these issues, our work now provides a comprehensive estimate of the total breeding population that can be used in future population models and will provide a baseline for long-term research.

  1. Using open-ended data to enrich survey results on the meanings of self-rated health: a study among women in underprivileged communities in Beirut, Lebanon.

    Science.gov (United States)

    Salem, Mylene Tewtel; Abdulrahim, Sawsan; Zurayk, Huda

    2009-12-01

    This study extends the debate on self-rated health by using different sources of data in the same study to explore the meanings of self-rated health among women who live in socio-economically disadvantaged communities in Beirut, Lebanon. Using data from the Urban Health Study, a cross-sectional household survey of 1,869 women between 15 and 59 years of age, multiple logistic regression models were developed to assess factors associated with self-rated health. Also, open-ended data was used to analyze women's explanations of their self-rated health ratings. Self-rated health was found to be a complex concept, associated not only with physical health but also with a combination of social, psychological, and behavioral factors. This open-ended analysis revealed new meanings of self-rated health that are often not included in self-rated health epidemiologic research, such as women's experiences with pain and fatigue, as well as exposure to financial stressors and the legacy of wars. We argue that triangulating survey and open-ended data provides a better understanding of the context-specific social and cultural meanings of self-rated health.

  2. Estimation of lead, cadmium and nickel content by means of Atomic Absorption Spectroscopy in dry fruit bodies of some macromycetes growing in Poland. II.

    Directory of Open Access Journals (Sweden)

    Jan Grzybek

    2014-08-01

    The content of lead, cadmium, and nickel in dry fruit bodies of 34 species of macromycetes collected in Poland from 72 natural habitats was estimated by means of Atomic Absorption Spectroscopy (AAS).

  3. Means of Hilbert space operators

    CERN Document Server

    Hiai, Fumio

    2003-01-01

    The monograph is devoted to a systematic study of means of Hilbert space operators by a unified method based on the theory of double integral transformations and Peller's characterization of Schur multipliers. General properties on means of operators such as comparison results, norm estimates and convergence criteria are established. After some general theory, special investigations are focused on three one-parameter families of A-L-G (arithmetic-logarithmic-geometric) interpolation means, Heinz-type means and binomial means. In particular, norm continuity in the parameter is examined for such means. Some necessary technical results are collected as appendices.

  4. Using the Superpopulation Model for Imputations and Variance Computation in Survey Sampling

    Directory of Open Access Journals (Sweden)

    Petr Novák

    2012-03-01

    This study is aimed at variance computation techniques for estimates of population characteristics based on survey sampling and imputation. We use the superpopulation regression model, which means that the target variable values for each statistical unit are treated as random realizations of a linear regression model with weighted variance. We focus on regression models with one auxiliary variable and no intercept, which have many applications and a straightforward interpretation in business statistics. Furthermore, we deal with cases where the estimates are not independent and thus the covariance must be computed. We also consider chained regression models with auxiliary variables as random variables instead of constants.
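
    A minimal sketch of such a superpopulation model (one auxiliary variable, no intercept, variance proportional to the auxiliary; the data are simulated, not from the study): under y_i = βx_i + e_i with Var(e_i) = σ²x_i, the best linear unbiased estimator of β is the classical ratio estimator, which can then drive the imputations.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 500
    x = rng.uniform(1.0, 10.0, n)                 # auxiliary variable, always observed
    beta = 3.0
    y = beta * x + rng.normal(0.0, 1.0, n) * np.sqrt(x)   # Var(e_i) proportional to x_i
    observed = rng.random(n) < 0.7                # ~30% item nonresponse on y

    # With weights 1/x_i, weighted least squares through the origin reduces to
    # the ratio estimator sum(y)/sum(x) over the observed units.
    beta_hat = float(y[observed].sum() / x[observed].sum())
    y_imp = np.where(observed, y, beta_hat * x)   # model-based imputation
    total_hat = float(y_imp.sum())                # estimated population total
    ```

    Variance computation for `total_hat` would then follow the model-based formulas the paper develops; the sketch only shows the estimation-and-imputation step.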

  5. Increased Statistical Efficiency in a Lognormal Mean Model

    Directory of Open Access Journals (Sweden)

    Grant H. Skrepnek

    2014-01-01

    Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample's mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample's coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
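
    To see why working on the log scale pays off, one can compare the plain sample mean with the standard ML-type estimator exp(m̂ + s²/2) in simulation. This is the usual baseline estimator, not the CV-based estimator the paper proposes, and all parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    mu, sigma = 1.0, 1.2
    true_mean = float(np.exp(mu + sigma**2 / 2))  # E[X] for X ~ LogNormal(mu, sigma)

    reps, n = 10000, 25
    naive, mle = [], []
    for _ in range(reps):
        x = rng.lognormal(mu, sigma, n)
        logx = np.log(x)
        naive.append(x.mean())                                    # plain sample mean
        mle.append(np.exp(logx.mean() + logx.var(ddof=1) / 2))    # log-scale estimator

    mse_naive = float(np.mean((np.array(naive) - true_mean) ** 2))
    mse_mle = float(np.mean((np.array(mle) - true_mean) ** 2))
    ```

    For this skewed a population, the log-scale estimator's mean squared error comes out below that of the sample mean; the paper's CV-based estimator pushes the same idea further.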

  6. Use of the robust design to estimate seasonal abundance and demographic parameters of a coastal bottlenose dolphin (Tursiops aduncus) population.

    Directory of Open Access Journals (Sweden)

    Holly C Smith

    As delphinid populations become increasingly exposed to human activities, we rely on our capacity to produce accurate abundance estimates upon which to base management decisions. This study applied mark-recapture methods following the Robust Design to estimate abundance, demographic parameters, and temporary emigration rates of an Indo-Pacific bottlenose dolphin (Tursiops aduncus) population off Bunbury, Western Australia. Boat-based photo-identification surveys were conducted year-round over three consecutive years along pre-determined transect lines to create a consistent sampling effort throughout the study period and area. The best fitting capture-recapture model showed a population with seasonal Markovian temporary emigration and time-varying survival and capture probabilities. Abundance estimates were seasonally dependent, with consistently lower numbers obtained during winter and higher during summer and autumn across the three-year study period. Specifically, abundance estimates for all adults and juveniles (combined) varied from a low of 63 (95% CI 59 to 73) in winter of 2007 to a high of 139 (95% CI 134 to 148) in autumn of 2009. Temporary emigration rates (γ') for animals absent in the previous period ranged from 0.34 to 0.97 (mean = 0.54; ±SE 0.11), with a peak during spring. Temporary emigration rates for animals present during the previous period (γ'') were lower, ranging from 0.00 to 0.29, with a mean of 0.16 (± SE 0.04). This model yielded a mean apparent survival estimate for juveniles and adults (combined) of 0.95 (± SE 0.02) and a capture probability from 0.07 to 0.51 with a mean of 0.30 (± SE 0.04). This study demonstrates the importance of incorporating temporary emigration to accurately estimate abundance of coastal delphinids. Temporary emigration rates were high in this study, despite the large area surveyed, indicating the challenges of sampling highly mobile animals which range over large spatial areas.
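
    The mark-recapture principle behind the Robust Design can be illustrated with the simplest two-occasion case, Chapman's bias-corrected Lincoln-Petersen estimator. This is a deliberately reduced sketch: the actual study additionally models temporary emigration and time-varying capture probabilities, and the numbers below are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N = 200                                    # true (unknown) population size
    pop = np.arange(N)
    p = 0.5                                    # per-occasion capture probability

    # Two photo-identification "occasions": each animal captured independently
    occ1 = pop[rng.random(N) < p]
    occ2 = pop[rng.random(N) < p]
    n1, n2 = len(occ1), len(occ2)
    m = len(np.intersect1d(occ1, occ2))        # individuals identified on both occasions

    # Chapman's bias-corrected Lincoln-Petersen estimator
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    ```

    The Robust Design stacks many such secondary occasions inside primary seasons, which is what lets it separate true abundance from temporary emigration (γ' and γ'').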

  7. Estimation of solar irradiance maps in Extremadura (Spain) by means of meteorological parameters; Mapas de radiacion solar de Extremadura estimada a partir de otros parametros meteorologicos

    Energy Technology Data Exchange (ETDEWEB)

    Ramiro, A.; Nunez, M.; Reyes, J. J.; Gonzalez, J. F.; Sabio, E.; Gonzalez-Garcia, C. M.; Ganan, J.; Roman, S.

    2004-07-01

    In a previous work, we found correlation expressions that permit estimation of the mean monthly values of daily diffuse and direct solar irradiation on a horizontal surface as a function of some weather parameters. In this work, the incident radiation on a horizontal surface has been estimated in thirty zones of Extremadura by means of weather data from existing stations located in these zones and their orography. The weather data used were the monthly average values of the maximum temperatures and the sunshine fraction. These monthly average values were obtained from measurements carried out at the weather stations during the period 1985-2002. The results are presented as interactive maps in ArcView, associated with a conventional database. (Author)
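
A correlation of the kind described, monthly clearness index regressed on sunshine fraction (an Angström-Prescott-type expression), can be sketched as follows; the coefficients and data are synthetic, not the values fitted for Extremadura:

```python
import numpy as np

def fit_angstrom_prescott(clearness, sunshine_fraction):
    """Least-squares fit of the linear correlation H/H0 = a + b*(S/S0)."""
    b, a = np.polyfit(sunshine_fraction, clearness, 1)  # highest power first
    return a, b

# synthetic noiseless monthly data generated with a = 0.25, b = 0.50
sun_frac = np.linspace(0.2, 0.9, 8)
clearness = 0.25 + 0.50 * sun_frac
a, b = fit_angstrom_prescott(clearness, sun_frac)
```

Once fitted per zone, the expression converts routinely measured sunshine fraction into estimated irradiation for mapping.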

  8. Epidemiology from Tweets: Estimating Misuse of Prescription Opioids in the USA from Social Media.

    Science.gov (United States)

    Chary, Michael; Genes, Nicholas; Giraud-Carrier, Christophe; Hanson, Carl; Nelson, Lewis S; Manini, Alex F

    2017-12-01

    The misuse of prescription opioids (MUPO) is a leading public health concern. Social media are playing an expanded role in public health research, but there are few methods for estimating established epidemiological metrics from social media. The purpose of this study was to demonstrate that the geographic variation of social media posts mentioning prescription opioid misuse strongly correlates with government estimates of MUPO in the last month. We wrote software to acquire publicly available tweets from Twitter from 2012 to 2014 that contained at least one keyword related to prescription opioid use (n = 3,611,528). A medical toxicologist and emergency physician curated the list of keywords. We used the semantic distance (SemD) to automatically quantify the similarity of meaning between tweets and identify tweets that mentioned MUPO. We defined the SemD between two words as the shortest distance between the two corresponding word-centroids. Each word-centroid represented all recognized meanings of a word. We validated this automatic identification with manual curation. We used Twitter metadata to estimate the location of each tweet. We compared our estimated geographic distribution with the 2013-2015 National Surveys on Drug Use and Health (NSDUH). Tweets that mentioned MUPO formed a distinct cluster far away from semantically unrelated tweets. The state-by-state correlation between Twitter and NSDUH was highly significant across all NSDUH survey years. The correlation was strongest between Twitter and NSDUH data from those aged 18-25 (r = 0.94). Mentions of MUPO on Twitter correlate strongly with state-by-state NSDUH estimates of MUPO. We have also demonstrated that natural language processing can be used to analyze social media to provide insights for syndromic toxicosurveillance.
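
The word-centroid distance described above can be illustrated with toy embedding vectors. Real centroids would come from a semantic model over each word's recognized meanings; the 2-D vectors here are invented:

```python
import numpy as np

def word_centroid(sense_vectors):
    """Centroid over all recognized meanings (sense vectors) of a word."""
    return np.mean(np.asarray(sense_vectors, dtype=float), axis=0)

def semantic_distance(senses_a, senses_b):
    """Distance between the two word-centroids (Euclidean, as a simple stand-in)."""
    return float(np.linalg.norm(word_centroid(senses_a) - word_centroid(senses_b)))

# toy "embeddings": two words, two senses each
d = semantic_distance([[0, 0], [2, 0]], [[4, 4], [4, 2]])
```

Tweets whose keywords sit close in this semantic space would then cluster together, as the MUPO tweets did in the study.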

  9. Travel Time Estimation on Urban Street Segment

    Directory of Open Access Journals (Sweden)

    Jelena Kajalić

    2018-02-01

    Full Text Available Level of service (LOS) is used as the main indicator of transport quality on urban roads and is estimated based on travel speed. The main objective of this study is to determine which of the existing models for travel speed calculation is most suitable for local conditions. The study uses actual data gathered in a travel time survey on urban streets, recorded by applying second-by-second GPS data. The survey is limited to traffic flow in saturated conditions. The RMSE (Root Mean Square Error) method is used to compare the research results with relevant models: Akcelik, HCM (Highway Capacity Manual), the Singapore model, and the modified BPR (Bureau of Public Roads) function (Dowling-Skabardonis). The lowest deviation in local conditions for urban streets with standardized intersection distance (400-500 m) is demonstrated by the Akcelik model. However, for streets with lower signal density (<1 signal/km) the correlation between speed and degree of saturation is best represented by the HCM and Singapore models. According to the test results, the Akcelik model was adopted for travel speed estimation, which can be the basis for determining the level of service on urban streets with standardized intersection distance and coordinated signal timing under local conditions.
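
The BPR function and the RMSE criterion used in such comparisons can be sketched as follows. The standard BPR coefficients α = 0.15, β = 4 are shown; the modified Dowling-Skabardonis form used in the study has different parameters that are not reproduced here:

```python
import numpy as np

def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """Classic BPR link travel-time function: t = t0 * (1 + alpha*(v/c)**beta)."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

def rmse(observed, modeled):
    """Root mean square error between observed and model-predicted values."""
    o, m = np.asarray(observed, dtype=float), np.asarray(modeled, dtype=float)
    return float(np.sqrt(np.mean((o - m) ** 2)))
```

Each candidate model produces predicted speeds for the surveyed segments, and the model with the lowest RMSE against the GPS-measured speeds is preferred.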

  10. Optimizing occupational exposure measurement strategies when estimating the log-scale arithmetic mean value--an example from the reinforced plastics industry.

    Science.gov (United States)

    Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A

    2006-06-01

    When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different models of random effects analysis of variance (ANOVA) are proposed for the optimization of measurement strategies by the minimization of the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three regardless of the size of the variance components. The second model introduces the 'shared temporal variation' which accounts for those random temporal fluctuations of the exposure that the workers have in common. It is shown for that model that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component, so that if the between-worker component is larger than the shared temporal component more workers should be included in the sample and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If there exists a shared temporal variation at a workplace, that variability needs to be accounted for in the sampling design and the more complex model is recommended.
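
Under the one-way random-effects model, the variance of the estimated group mean with k workers and n replicates each is σ²_B/k + σ²_W/(kn), which makes the "many workers, few replicates" result easy to verify numerically:

```python
def var_of_mean(sigma2_between, sigma2_within, k_workers, n_reps):
    """Variance of the estimated group mean under a one-way random-effects
    ANOVA model: sigma2_B / k + sigma2_W / (k * n)."""
    return sigma2_between / k_workers + sigma2_within / (k_workers * n_reps)

# fixed budget of 20 measurements: 10 workers x 2 replicates
# beats 5 workers x 4 replicates whenever sigma2_between > 0
spread_out = var_of_mean(1.0, 1.0, k_workers=10, n_reps=2)
concentrated = var_of_mean(1.0, 1.0, k_workers=5, n_reps=4)
```

The shared-temporal-variation model adds a further component that does not shrink with the number of workers, which is what shifts the optimal allocation described above.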

  11. Cost-effective sampling of 137Cs-derived net soil redistribution: part 1 – estimating the spatial mean across scales of variation

    International Nuclear Information System (INIS)

    Li, Y.; Chappell, A.; Nyamdavaa, B.; Yu, H.; Davaasuren, D.; Zoljargal, K.

    2015-01-01

    redistribution across scales of variation. • Cost-effective sampling was compared using a case study from the Chinese Loess Plateau. • We recommend estimating the spatial mean using innovative sampling design

  12. Estimating the Incidence of Acute Infectious Intestinal Disease in the Community in the UK: A Retrospective Telephone Survey.

    Directory of Open Access Journals (Sweden)

    Laura Viviani

    Full Text Available To estimate the burden of intestinal infectious disease (IID) in the UK and determine whether disease burden estimations using a retrospective study design differ from those using a prospective study design. A retrospective telephone survey was undertaken in each of the four countries comprising the United Kingdom. Participants were randomly asked about illness either in the past 7 or the past 28 days. 14,813 individuals were included, for all of whom we had a legible recording of their agreement to participate. Self-reported IID was defined as loose stools or clinically significant vomiting lasting less than two weeks, in the absence of a known non-infectious cause. The rate of self-reported IID varied substantially depending on whether participants were asked about illness in the previous 7 or 28 days. After standardising for age and sex, and adjusting for the number of interviews completed each month and the relative size of each UK country, the estimated rate of IID in the 7-day recall group was 1,530 cases per 1,000 person-years (95% CI: 1,135-2,113), while in the 28-day recall group it was 533 cases per 1,000 person-years (95% CI: 377-778). There was no significant variation in rates between the four countries. Rates in this study were also higher than in a related prospective study undertaken at the same time. The estimated burden of disease from IID varied dramatically depending on study design. Retrospective studies of IID give higher estimates of disease burden than prospective studies. Among retrospective studies, longer recall periods give lower estimated rates than shorter recall periods. Caution needs to be exercised when comparing studies of self-reported IID, as small changes in study design or case definition can markedly affect estimated rates.
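
The crude rate computation behind such estimates (before the age-sex standardisation and survey-weighting adjustments described above) is simple person-time arithmetic; the counts here are invented:

```python
def rate_per_1000_person_years(cases, n_participants, recall_days):
    """Crude incidence rate from a retrospective recall window:
    cases divided by the person-time covered by the recall period."""
    person_years = n_participants * recall_days / 365.25
    return 1000.0 * cases / person_years

# hypothetical: 100 self-reported IID cases among 2,500 people
# asked about the past 7 days
rate_7day = rate_per_1000_person_years(100, 2500, 7)
```

Because a short recall window contributes little person-time per respondent, the same number of reported cases translates into a much higher rate than under a 28-day window, consistent with the 7- vs 28-day gap reported above.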

  13. The Value Of Enhanced Neo Surveys

    Science.gov (United States)

    Harris, Alan W.

    2012-10-01

    NEO surveys have now achieved, more or less, the “Spaceguard Goal” of cataloging 90% of NEAs larger than 1 km in diameter, and thereby have reduced the short-term hazard from cosmic impacts by about an order of magnitude, from an actuarial estimate of 1,000 deaths per year (actually about a billion every million years, with very little in between), to about 100 deaths per year, with a shift toward smaller but more frequent events accounting for the remaining risk. It is fair to ask, then, what is the value of a next-generation accelerated survey to “retire” much of the remaining risk. The curve of completion of survey versus size of NEA is remarkably similar for any survey, ground or space based, visible light or thermal IR, so it is possible to integrate risk over all sizes, with a time variable curve of completion to evaluate the actuarial value of speeding up survey completion. I will present my latest estimate of NEA population and completion of surveys. From those I will estimate the “value” of accelerated surveys such as Pan-STARRS, LSST, or space-based surveys, versus continuing with current surveys. My tentative conclusion is that we may have already reached the point in terms of cost-benefit where accelerated surveys are not cost-effective in terms of reducing impact risk. If not yet, we soon will. On the other hand, the surveys, which find and catalog main-belt and other classes of small bodies as well as NEOs, have provided a gold mine of good science. The scientific value of continued or accelerated surveys needs to be emphasized as the impact risk is increasingly “retired.”

  14. Using kernel density estimates to investigate lymphatic filariasis in northeast Brazil

    Science.gov (United States)

    Medeiros, Zulma; Bonfim, Cristine; Brandão, Eduardo; Netto, Maria José Evangelista; Vasconcellos, Lucia; Ribeiro, Liany; Portugal, José Luiz

    2012-01-01

    After more than 10 years of the Global Program to Eliminate Lymphatic Filariasis (GPELF) in Brazil, advances have been seen, but the endemic disease persists as a public health problem. The aim of this study was to describe the spatial distribution of lymphatic filariasis in the municipality of Jaboatão dos Guararapes, Pernambuco, Brazil. An epidemiological survey was conducted in the municipality, and positive filariasis cases identified in this survey were georeferenced in point form, using the GPS. A kernel intensity estimator was applied to identify clusters with greater intensity of cases. We examined 23 673 individuals and 323 individuals with microfilaremia were identified, representing a mean prevalence rate of 1.4%. Around 88% of the districts surveyed presented cases of filarial infection, with prevalences of 0–5.6%. The male population was more affected by the infection, with 63.8% of the cases (P<0.005). Positive cases were found in all age groups examined. The kernel intensity estimator identified the areas of greatest intensity and least intensity of filarial infection cases. The case distribution was heterogeneous across the municipality. The kernel estimator identified spatial clusters of cases, thus indicating locations with greater intensity of transmission. The main advantage of this type of analysis lies in its ability to rapidly and easily show areas with the highest concentration of cases, thereby contributing towards planning, monitoring, and surveillance of filariasis elimination actions. Incorporation of geoprocessing and spatial analysis techniques constitutes an important tool for use within the GPELF. PMID:22943547
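
A kernel intensity/density surface over georeferenced cases can be sketched with a Gaussian KDE; the coordinates below are simulated, not the Jaboatão dos Guararapes data:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
# simulated case locations: one dense cluster plus scattered background cases
cluster = rng.normal(loc=0.0, scale=0.2, size=(80, 2))
background = rng.uniform(-3.0, 3.0, size=(20, 2))
points = np.vstack([cluster, background]).T   # gaussian_kde expects shape (ndim, n)

kde = gaussian_kde(points)
center_density = kde(np.array([[0.0], [0.0]]))[0]   # density at the cluster centre
edge_density = kde(np.array([[2.5], [2.5]]))[0]     # density far from the cluster
```

Evaluating the fitted surface on a grid highlights the high-intensity areas, which is how the estimator flags likely transmission hotspots for targeted surveillance.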

  15. Using data from a behavioural survey of men who have sex with men (MSM) to estimate the number likely to present for HIV pre-exposure prophylaxis (PrEP) in Ireland, 2017.

    Science.gov (United States)

    Nic Lochlainn, Laura; O'Donnell, Kate; Hurley, Caroline; Lyons, Fiona; Igoe, Derval

    2017-11-01

    In Ireland, men who have sex with men (MSM) have increased HIV risk. Pre-exposure prophylaxis (PrEP), combined with safe sex practices, can reduce HIV acquisition. We estimated MSM numbers likely to present for PrEP by applying French PrEP criteria to Irish MSM behavioural survey data. We adjusted for survey bias, calculated proportions accessing testing services and those likely to take PrEP. We estimated 1-3% of MSM in Ireland were likely to present for PrEP.

  16. Nonprobability Web Surveys to Measure Sexual Behaviors and Attitudes in the General Population: A Comparison With a Probability Sample Interview Survey

    Science.gov (United States)

    Burkill, Sarah; Couper, Mick P; Conrad, Frederick; Clifton, Soazig; Tanton, Clare; Phelps, Andrew; Datta, Jessica; Mercer, Catherine H; Sonnenberg, Pam; Prah, Philip; Mitchell, Kirstin R; Wellings, Kaye; Johnson, Anne M; Copas, Andrew J

    2014-01-01

    Background Nonprobability Web surveys using volunteer panels can provide a relatively cheap and quick alternative to traditional health and epidemiological surveys. However, concerns have been raised about their representativeness. Objective The aim was to compare results from different Web panels with a population-based probability sample survey (n=8969 aged 18-44 years) that used computer-assisted self-interview (CASI) for sensitive behaviors, the third British National Survey of Sexual Attitudes and Lifestyles (Natsal-3). Methods Natsal-3 questions were included on 4 nonprobability Web panel surveys (n=2000 to 2099), 2 using basic quotas based on age and sex, and 2 using modified quotas based on additional variables related to key estimates. Results for sociodemographic characteristics were compared with external benchmarks and for sexual behaviors and opinions with Natsal-3. Odds ratios (ORs) were used to express differences between the benchmark data and each survey for each variable of interest. A summary measure of survey performance was the average absolute OR across variables. Another summary measure was the number of key estimates for which the survey differed significantly (at the 5% level) from the benchmarks. Results For sociodemographic variables, the Web surveys were less representative of the general population than Natsal-3. For example, for men, the average absolute OR for Natsal-3 was 1.14, whereas for the Web surveys the average absolute ORs ranged from 1.86 to 2.30. For all Web surveys, approximately two-thirds of the key estimates of sexual behaviors were different from Natsal-3 and the average absolute ORs ranged from 1.32 to 1.98. Differences were appreciable even for questions asked by CASI in Natsal-3. No single Web survey performed consistently better than any other did. Modified quotas slightly improved results for men, but not for women. Conclusions Consistent with studies from other countries on less sensitive topics, volunteer Web
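
The "average absolute odds ratio" summary can be computed by folding each OR so that deviations in either direction count equally; this is a plausible reading of the measure as described, and the exact formula used in the paper is not given here:

```python
def average_absolute_or(odds_ratios):
    """Fold each odds ratio so that under- and over-representation count
    equally (abs-OR = max(OR, 1/OR)), then average across variables."""
    folded = [r if r >= 1.0 else 1.0 / r for r in odds_ratios]
    return sum(folded) / len(folded)
```

An unbiased survey would score close to 1.0 on every variable; Natsal-3's 1.14 versus the Web panels' 1.86-2.30 is the gap the comparison turns on.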

  17. Estimating population salt intake in India using spot urine samples.

    Science.gov (United States)

    Petersen, Kristina S; Johnson, Claire; Mohan, Sailesh; Rogers, Kris; Shivashankar, Roopa; Thout, Sudhir Raj; Gupta, Priti; He, Feng J; MacGregor, Graham A; Webster, Jacqui; Santos, Joseph Alvin; Krishnan, Anand; Maulik, Pallab K; Reddy, K Srinath; Gupta, Ruby; Prabhakaran, Dorairaj; Neal, Bruce

    2017-11-01

    To compare estimates of mean population salt intake in North and South India derived from spot urine samples versus 24-h urine collections. In a cross-sectional survey, participants were sampled from slum, urban and rural communities in North and in South India. Participants provided 24-h urine collections, and random morning spot urine samples. Salt intake was estimated from the spot urine samples using a series of established estimating equations. Salt intake data from the 24-h urine collections and spot urine equations were weighted to provide estimates of salt intake for Delhi and Haryana, and Andhra Pradesh. A total of 957 individuals provided a complete 24-h urine collection and a spot urine sample. Weighted mean salt intake based on the 24-h urine collection, was 8.59 (95% confidence interval 7.73-9.45) and 9.46 g/day (8.95-9.96) in Delhi and Haryana, and Andhra Pradesh, respectively. Corresponding estimates based on the Tanaka equation [9.04 (8.63-9.45) and 9.79 g/day (9.62-9.96) for Delhi and Haryana, and Andhra Pradesh, respectively], the Mage equation [8.80 (7.67-9.94) and 10.19 g/day (95% CI 9.59-10.79)], the INTERSALT equation [7.99 (7.61-8.37) and 8.64 g/day (8.04-9.23)] and the INTERSALT equation with potassium [8.13 (7.74-8.52) and 8.81 g/day (8.16-9.46)] were all within 1 g/day of the estimate based upon 24-h collections. For the Toft equation, estimates were 1-2 g/day higher [9.94 (9.24-10.64) and 10.69 g/day (9.44-11.93)] and for the Kawasaki equation they were 3-4 g/day higher [12.14 (11.30-12.97) and 13.64 g/day (13.15-14.12)]. In urban and rural areas in North and South India, most spot urine-based equations provided reasonable estimates of mean population salt intake. Equations that did not provide good estimates may have failed because specimen collection was not aligned with the original method.

  18. A radon survey performed in caves in Slovenia

    International Nuclear Information System (INIS)

    Jovanovic, P.

    2002-01-01

    A survey of radon and radon decay product concentrations in several caves in a limestone region of Slovenia was initiated in 1986. In the period from 1989 to 1998, monthly surveys were undertaken in several caves which are open to tourists or used for speleotherapy purposes. The reason for carrying out these surveys was the dose estimates obtained for the guides and medical staff working in the caves. Daily average radon gas concentrations ranged from several hundred Bq/m³ up to 27 kBq/m³. Higher values were measured in the summer period. The equilibrium factors derived ranged from 0.05 to 0.89, with the higher values being measured in the winter period in vertical caves. In horizontal caves (with two entrances located opposite one another) these values ranged between 0.55 and 0.89. Annual doses estimated on the basis of various lung models ranged from 10 mSv to 85 mSv per year and per 2000 working hours. A significant difference was observed between the doses estimated by means of dosimetric models and those estimated on the basis of the epidemiological model presented in ICRP 65. The value for the unattached fraction indicated in ICRP 65 is about 3%, but our measurements performed in the caves yielded higher values of up to 15%, with this highest value being determined in the Postojna cave. In the coming years, we will perform measurements to obtain the values for concentrations of unattached particles of radon daughters and values for particle-size distribution in the 3 different caves with the highest occupancy times for visitors. There are no regulations in force in Slovenia affecting exposures to elevated radon and radon daughter concentrations among underground workers. The health inspectorate can impose radiation monitoring measures for the purposes of performing dose calculations for underground workers. The results from such monitoring measures will contribute to the establishment of an ordinance regulating the performance of measurements at low

  19. Estimating River Surface Elevation From ArcticDEM

    Science.gov (United States)

    Dai, Chunli; Durand, Michael; Howat, Ian M.; Altenau, Elizabeth H.; Pavelsky, Tamlin M.

    2018-04-01

    ArcticDEM is a collection of 2-m resolution, repeat digital surface models created from stereoscopic satellite imagery. To demonstrate the potential of ArcticDEM for measuring river stages and discharges, we estimate river surface heights along a reach of the Tanana River near Fairbanks, Alaska, by precise detection of river shorelines and mapping of shorelines to land surface elevation. The river height profiles over a 15-km reach agree with in situ measurements to a standard deviation of less than 30 cm. The time series of ArcticDEM-derived river heights agree with the U.S. Geological Survey gage measurements with a standard deviation of 32 cm. Using the rating curve for that gage, we obtain discharges with a validation accuracy (root-mean-square error) of 234 m³/s (23% of the mean discharge). Our results demonstrate that ArcticDEM can accurately measure spatial and temporal variations of river surfaces, providing a new and powerful data set for hydrologic analysis.
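
Converting DEM-derived stages to discharge relies on a stage-discharge rating curve, commonly the power law Q = a(h − h0)^b fitted in log space; the parameters below are synthetic, not the actual rating for the USGS gage used in the study:

```python
import numpy as np

def fit_rating_curve(stage, discharge, h0):
    """Fit Q = a*(h - h0)**b by linear regression in log space
    (h0, the stage of zero flow, is assumed known here)."""
    x = np.log(np.asarray(stage, dtype=float) - h0)
    y = np.log(np.asarray(discharge, dtype=float))
    b, log_a = np.polyfit(x, y, 1)
    return float(np.exp(log_a)), float(b)

# synthetic exact rating data generated with a = 50, b = 1.6, h0 = 1.0
stage = np.array([1.5, 2.0, 3.0, 4.0])
discharge = 50.0 * (stage - 1.0) ** 1.6
a, b = fit_rating_curve(stage, discharge, h0=1.0)
```

With the curve in hand, each ArcticDEM-derived stage maps directly to an estimated discharge, whose error reflects both the stage error and the rating-curve fit.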

  20. Evaluation of scale invariance in physiological signals by means of balanced estimation of diffusion entropy

    Science.gov (United States)

    Zhang, Wenqing; Qiu, Lu; Xiao, Qin; Yang, Huijie; Zhang, Qingjun; Wang, Jianyong

    2012-11-01

    By means of the concept of the balanced estimation of diffusion entropy, we evaluate the reliable scale invariance embedded in different sleep stages and stride records. Segments corresponding to waking, light sleep, rapid eye movement (REM) sleep, and deep sleep stages are extracted from long-term electroencephalogram signals. For each stage the scaling exponent value is distributed over a considerably wide range, which tells us that the scaling behavior is subject and sleep-cycle dependent. The average of the scaling exponent values for waking segments is almost the same as that for REM segments (~0.8). The waking and REM stages have a significantly higher average scaling exponent than the light sleep stages (~0.7). For the stride series, the original diffusion entropy (DE) and the balanced estimation of diffusion entropy (BEDE) give almost the same results for detrended series. The evolutions of local scaling invariance show that the physiological states change abruptly, although in the experiments great efforts have been made to keep conditions unchanged. The global behavior of a single physiological signal may therefore lose rich information on physiological states. Methodologically, the BEDE can evaluate with considerable precision the scale invariance in very short time series (~10² points), while the original DE method sometimes may underestimate scale-invariance exponents or even fail in detecting scale-invariant behavior. The BEDE method is sensitive to trends in time series. The existence of trends may lead to an unreasonably high value of the scaling exponent and consequent mistaken conclusions.
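
The basic diffusion entropy idea, the Shannon entropy of windowed sums growing as S(t) ≈ A + δ ln t, can be sketched as below. This is the plain DE recipe with a fixed histogram bin width, not the balanced estimator (BEDE) evaluated in the paper; for uncorrelated Gaussian noise the recovered exponent should be near 0.5:

```python
import numpy as np

def diffusion_entropy_exponent(series, window_sizes):
    """Estimate the scaling exponent delta from S(t) ~ A + delta*ln(t),
    where S(t) is the entropy of the distribution of windowed sums."""
    s = np.asarray(series, dtype=float)
    width = np.std(s)                       # one bin width for every window size
    csum = np.concatenate(([0.0], np.cumsum(s)))
    entropies = []
    for t in window_sizes:
        disp = csum[t:] - csum[:-t]         # overlapping "diffusion" displacements
        edges = np.arange(disp.min(), disp.max() + width, width)
        counts, _ = np.histogram(disp, bins=edges)
        p = counts / counts.sum()
        p = p[p > 0]
        entropies.append(-np.sum(p * np.log(p)))
    delta, _ = np.polyfit(np.log(window_sizes), entropies, 1)
    return delta

rng = np.random.default_rng(0)
delta = diffusion_entropy_exponent(rng.standard_normal(20000), [10, 20, 40, 80])
```

The BEDE refinement corrects the bias this naive entropy estimate suffers for short series, which is what makes exponents from ~10²-point segments usable.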

  1. Estimating breeding season abundance of golden-cheeked warblers in Texas, USA

    KAUST Repository

    Mathewson, Heather A.

    2012-02-15

    Population abundance estimates using predictive models are important for describing habitat use and responses to population-level impacts, evaluating conservation status of a species, and for establishing monitoring programs. The golden-cheeked warbler (Setophaga chrysoparia) is a neotropical migratory bird that was listed as federally endangered in 1990 because of threats related to loss and fragmentation of its woodland habitat. Since listing, abundance estimates for the species have mainly relied on localized population studies on public lands and qualitative-based methods. Our goal was to estimate breeding population size of male warblers using a predictive model based on metrics for patches of woodland habitat throughout the species' breeding range. We first conducted occupancy surveys to determine range-wide distribution. We then conducted standard point-count surveys on a subset of the initial sampling locations to estimate density of males. Mean observed patch-specific density was 0.23 males/ha (95% CI = 0.197-0.252, n = 301). We modeled the relationship between patch-specific density of males and woodland patch characteristics (size and landscape composition) and predicted patch occupancy. The probability of patch occupancy, derived from a model that used patch size and landscape composition as predictor variables while addressing effects of spatial relatedness, best predicted patch-specific density. We predicted patch-specific densities as a function of occupancy probability and estimated abundance of male warblers across 63,616 woodland patches accounting for 1.678 million ha of potential warbler habitat. Using a Monte Carlo simulation, our approach yielded a range-wide male warbler population estimate of 263,339 (95% CI: 223,927-302,620). Our results provide the first abundance estimate using habitat and count data from a sampling design focused on range-wide inference. Managers can use the resulting model as a tool to support conservation planning.
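
The final step, propagating patch-level density uncertainty to a range-wide total by Monte Carlo simulation, can be sketched as follows; the patch areas and the simple normal density model are invented stand-ins for the study's occupancy-based patch predictions:

```python
import numpy as np

rng = np.random.default_rng(1)
patch_areas = rng.uniform(5.0, 200.0, size=1000)   # ha; hypothetical woodland patches
density_mean, density_sd = 0.23, 0.03              # males/ha; point estimate from the survey

totals = np.empty(2000)
for i in range(2000):
    # draw a plausible density for every patch, truncated at zero
    dens = np.clip(rng.normal(density_mean, density_sd, size=patch_areas.size), 0.0, None)
    totals[i] = np.sum(dens * patch_areas)         # abundance = density x area, summed

estimate = totals.mean()
ci_lo, ci_hi = np.percentile(totals, [2.5, 97.5])  # percentile confidence interval
```

Summing density-times-area draws over all patches and taking percentiles of the simulated totals yields a population estimate with an uncertainty interval, analogous to the 263,339 (95% CI: 223,927-302,620) reported above.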

  2. A comparison of morphological and molecular-based surveys to estimate the species richness of Chaetoceros and Thalassiosira (Bacillariophyta) in the Bay of Fundy.

    Directory of Open Access Journals (Sweden)

    Sarah E Hamsher

    Full Text Available The goal of this study was to compare the ability of morphology and molecular-based surveys to estimate species richness for two species-rich diatom genera, Chaetoceros Ehrenb. and Thalassiosira Cleve, in the Bay of Fundy. Phytoplankton tows were collected from two sites at intervals over two years and subsampled for morphology-based surveys (2010, 2011), a culture-based DNA reference library (DRL; 2010), and a molecular-based survey (2011). The DRL and molecular-based survey utilized the 3' end of the RUBISCO large subunit (rbcL-3P) to identify genetic species groups (based on 0.1% divergence in rbcL-3P), which were subsequently identified morphologically to allow comparisons to the morphology-based survey. Comparisons were compiled for the year (2011) by site (n = 2) and by season (n = 3). Of the 34 taxa included in the comparisons, 50% of taxa were common to both methods, 35% were unique to the molecular-based survey, and 12% were unique to the morphology-based survey, while the remaining 3% of taxa were unidentified genetic species groups. The morphology-based survey excelled at identifying rare taxa in individual tow subsamples, which were occasionally missed with the molecular approach used here, while the molecular methods (the DRL and molecular-based survey) uncovered nine cryptic species pairs and four previously overlooked species. The latter were typically difficult to identify and were generically assigned to Thalassiosira spp. during the morphology-based survey. Therefore, for now we suggest a combined approach encompassing routine morphology-based surveys accompanied by periodic molecular-based surveys to monitor for cryptic and difficult-to-identify taxa. As sequencing technologies improve, molecular-based surveys should become routine, leading to a more accurate representation of species composition and richness in monitoring programs.

  3. A flexible model for correlated medical costs, with application to medical expenditure panel survey data.

    Science.gov (United States)

    Chen, Jinsong; Liu, Lei; Shih, Ya-Chen T; Zhang, Daowen; Severini, Thomas A

    2016-03-15

    We propose a flexible model for correlated medical cost data with several appealing features. First, the mean function is partially linear. Second, the distributional form for the response is not specified. Third, the covariance structure of correlated medical costs has a semiparametric form. We use extended generalized estimating equations to simultaneously estimate all parameters of interest. B-splines are used to estimate unknown functions, and a modification to Akaike information criterion is proposed for selecting knots in spline bases. We apply the model to correlated medical costs in the Medical Expenditure Panel Survey dataset. Simulation studies are conducted to assess the performance of our method. Copyright © 2015 John Wiley & Sons, Ltd.
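
The partially linear mean structure, a linear term plus an unknown smooth function estimated with spline bases, can be sketched with ordinary least squares on a spline-type basis. A truncated-power basis stands in here for the B-splines, and the GEE machinery for correlated costs is omitted; the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)             # covariate with a linear effect (true beta = 2)
z = rng.uniform(0.0, 3.0, size=n)  # covariate with a smooth nonlinear effect
y = 2.0 * x + np.sin(z)            # noiseless toy "cost" response

# truncated-power cubic basis as a simple stand-in for a B-spline basis
knots = np.linspace(0.5, 2.5, 4)
basis = [np.ones(n), z, z**2, z**3] + [np.clip(z - k, 0.0, None) ** 3 for k in knots]
design = np.column_stack([x] + basis)

coef, *_ = np.linalg.lstsq(design, y, rcond=None)
beta_hat = coef[0]                 # estimated linear effect of x
```

In the paper's setting the same idea is embedded in extended generalized estimating equations, so the working covariance of repeated costs enters the fit and the knot count is chosen by a modified AIC rather than fixed in advance.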

  4. An agent-based approach with collaboration among agents. Estimation of wholesale electricity price on PJM and artificial data generated by a mean reverting model

    International Nuclear Information System (INIS)

    Sueyoshi, Toshiyuki

    2010-01-01

    This study examines the performance of MAIS (Multi-Agent Intelligent Simulator) equipped with various learning capabilities. In addition to the learning capabilities, the proposed MAIS incorporates collaboration among agents. The proposed MAIS is applied to estimate the dynamic change of the wholesale electricity price in PJM (Pennsylvania-New Jersey-Maryland) and an artificial data set generated by a mean reverting model. Using such different types of data sets, the methodological validity of MAIS is confirmed by comparing it with other well-known alternatives in computer science. This study finds that the MAIS needs to incorporate both the mean reverting model and collaboration behavior among agents in order to enhance its estimation capability. The MAIS discussed in this study will provide research on energy economics with a new numerical capability for investigating the dynamic change of not only the wholesale electricity price but also the speculation and learning processes of traders. (author)

  5. Estimates on the mean current in a sphere of plasma

    International Nuclear Information System (INIS)

    Nunez, Manuel

    2003-01-01

    Several turbulent dynamo models predict the concentration of the magnetic field in chaotic plasmas into sheets with the field vector pointing alternately in opposite directions, which should produce strong current sheets. It is proved that if the plasma is contained in a rigid sphere with a perfectly conducting boundary, the geometry of these sheets must be balanced so that the mean current remains essentially bounded by the Coulomb-gauged mean vector potential of the field. This magnitude remains regular even for the sharp field variations expected in a chaotic flow. For resistive plasmas, the same arguments imply that the contribution to the total current of the regions near the boundary compensates the current of the central part of the sphere.

  6. Estimating 137Cs ingestion doses to Saamis in Kautokeino (Norway) using whole body counting vs. dietary survey results and food samples

    International Nuclear Information System (INIS)

    Skuterud, L.; Bergan, T.; Mehli, H.

    2002-01-01

    From 1965 to 1990, whole body measurements were carried out on an annual basis. Since then, 3-year cycles have been followed. In most years, the reindeer keepers have provided samples of reindeer meat for radiocaesium analysis. In 1989-1990 and 1999, dietary surveys were performed in conjunction with the whole-body monitoring. Earlier diet information is available from a separate study in 1963. Rough estimates of the radiocaesium intake by the studied population in Kautokeino have indicated that the dietary surveys have overestimated the radiocaesium intake. The aim of the present study was to evaluate the available information from Kautokeino and to derive some conclusions regarding reindeer meat consumption by today's reindeer keepers and the 137Cs ingestion doses to which they are exposed. (LN)

  7. Acoustic surveys for juvenile anchovy in the Bay of Biscay: Abundance estimate as an indicator of the next year's recruitment and spatial distribution patterns

    KAUST Repository

    Boyra, Guillermo

    2013-08-16

    A series of acoustic surveys (JUVENA) began in 2003 targeting juvenile anchovy (Engraulis encrasicolus) in the Bay of Biscay. A specific methodology was designed for mapping and estimating juvenile abundance annually, four months after the spawning season. After eight years of the survey, a consistent picture of the spatial pattern of the juvenile anchovy has emerged. Juveniles show a vertical and horizontal distribution pattern that depends on size. The younger individuals are found isolated from other species in waters closer to the surface, mainly off the shelf within the mid-southern region of the bay. The largest juveniles are usually found deeper and closer to the shore in the company of adult anchovy and other pelagic species. In these eight years, the survey has covered a wide range of juvenile abundances, and the estimates show a significant positive relationship between the juvenile biomasses and the one-year-old recruits of the following year. This demonstrates that the JUVENA index provides an early indication of the strength of next year's recruitment to the fishery and can therefore be used to improve the management advice for the fishery of this short-lived species. © 2013 International Council for the Exploration of the Sea.

  9. Estimated Trans-Lamina Cribrosa Pressure Differences in Low-Teen and High-Teen Intraocular Pressure Normal Tension Glaucoma: The Korean National Health and Nutrition Examination Survey.

    Directory of Open Access Journals (Sweden)

    Si Hyung Lee

    Full Text Available To investigate the association between estimated trans-lamina cribrosa pressure difference (TLCPD) and prevalence of normal tension glaucoma (NTG) with low-teen and high-teen intraocular pressure (IOP) using a population-based study design. A total of 12,743 adults (≥ 40 years of age) who participated in the Korean National Health and Nutrition Examination Survey (KNHANES) from 2009 to 2012 were included. Using a previously developed formula, cerebrospinal fluid pressure (CSFP) in mmHg was estimated as 0.55 × body mass index (kg/m2) + 0.16 × diastolic blood pressure (mmHg) - 0.18 × age (years) - 1.91. TLCPD was calculated as IOP - CSFP. The NTG subjects were divided into two groups according to IOP level: low-teen NTG (IOP ≤ 15 mmHg) and high-teen NTG (15 mmHg < IOP ≤ 21 mmHg) groups. The association between TLCPD and the prevalence of NTG was assessed in the low- and high-teen IOP groups. In the normal population (n = 12,069), the weighted mean estimated CSFP was 11.69 ± 0.04 mmHg and the weighted mean TLCPD 2.31 ± 0.06 mmHg. Significantly higher TLCPD (p < 0.001; 6.48 ± 0.27 mmHg) was found in the high-teen NTG group compared with the normal group. On the other hand, there was no significant difference in TLCPD between normal and low-teen NTG subjects (p = 0.395; 2.31 ± 0.06 vs. 2.11 ± 0.24 mmHg). Multivariate logistic regression analysis revealed that TLCPD was significantly associated with the prevalence of NTG in the high-teen IOP group (p = 0.006; OR: 1.09; 95% CI: 1.02, 1.15), but not the low-teen IOP group (p = 0.636). Instead, the presence of hypertension was significantly associated with the prevalence of NTG in the low-teen IOP group (p < 0.001; OR: 1.65; 95% CI: 1.26, 2.16). TLCPD was significantly associated with the prevalence of NTG in high-teen IOP subjects, but not low-teen IOP subjects, in whom hypertension may be more closely associated. This study suggests that the underlying mechanisms may differ between low-teen and high-teen NTG patients.
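
    The estimation rule quoted in this record can be written as a small helper. The coefficients are exactly those stated in the abstract; the function and variable names, and the sample subject values, are illustrative.

```python
def estimated_csfp(bmi_kg_m2, dbp_mmhg, age_years):
    """Estimated cerebrospinal fluid pressure (mmHg), per the formula quoted above."""
    return 0.55 * bmi_kg_m2 + 0.16 * dbp_mmhg - 0.18 * age_years - 1.91

def tlcpd(iop_mmhg, bmi_kg_m2, dbp_mmhg, age_years):
    """Trans-lamina cribrosa pressure difference: IOP minus estimated CSFP."""
    return iop_mmhg - estimated_csfp(bmi_kg_m2, dbp_mmhg, age_years)

# Hypothetical subject: BMI 24 kg/m2, DBP 80 mmHg, age 50 years, IOP 14 mmHg.
print(round(estimated_csfp(24, 80, 50), 2))  # 15.09
print(round(tlcpd(14, 24, 80, 50), 2))       # -1.09
```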

  10. Evaluation of SNODAS snow depth and snow water equivalent estimates for the Colorado Rocky Mountains, USA

    Science.gov (United States)

    Clow, David W.; Nanus, Leora; Verdin, Kristine L.; Schmidt, Jeffrey

    2012-01-01

    The National Weather Service's Snow Data Assimilation (SNODAS) program provides daily, gridded estimates of snow depth, snow water equivalent (SWE), and related snow parameters at a 1-km2 resolution for the conterminous USA. In this study, SNODAS snow depth and SWE estimates were compared with independent, ground-based snow survey data in the Colorado Rocky Mountains to assess SNODAS accuracy at the 1-km2 scale. Accuracy also was evaluated at the basin scale by comparing SNODAS model output to snowmelt runoff in 31 headwater basins with US Geological Survey stream gauges. Results from the snow surveys indicated that SNODAS performed well in forested areas, explaining 72% of the variance in snow depths and 77% of the variance in SWE. However, SNODAS showed poor agreement with measurements in alpine areas, explaining 16% of the variance in snow depth and 30% of the variance in SWE. At the basin scale, snowmelt runoff was moderately correlated (R2 = 0.52) with SNODAS model estimates. A simple method for adjusting SNODAS SWE estimates in alpine areas was developed that uses relations between prevailing wind direction, terrain, and vegetation to account for wind redistribution of snow in alpine terrain. The adjustments substantially improved agreement between measurements and SNODAS estimates, with the R2 of measured SWE values against SNODAS SWE estimates increasing from 0.42 to 0.63 and the root mean square error decreasing from 12 to 6 cm. Results from this study indicate that SNODAS can provide reliable data for input to moderate-scale to large-scale hydrologic models, which are essential for creating accurate runoff forecasts. Refinement of SNODAS SWE estimates for alpine areas to account for wind redistribution of snow could further improve model performance. Published 2011. This article is a US Government work and is in the public domain in the USA.

  11. Treatment-seeking behaviour in low- and middle-income countries estimated using a Bayesian model

    Directory of Open Access Journals (Sweden)

    Victor A. Alegana

    2017-04-01

    Full Text Available Abstract Background Seeking treatment in formal healthcare for uncomplicated infections is vital to combating disease in low- and middle-income countries (LMICs). Healthcare treatment-seeking behaviour varies within and between communities and is modified by socio-economic, demographic, and physical factors. As a result, it remains a challenge to quantify healthcare treatment-seeking behaviour using a metric that is comparable across communities. Here, we present an application for transforming individual categorical responses (actions related to fever) to a continuous probabilistic estimate of fever treatment for one country in Sub-Saharan Africa (SSA). Methods Using nationally representative household survey data from the 2013 Demographic and Health Survey (DHS) in Namibia, individual-level responses (n = 1138) were linked to theoretical estimates of travel time to the nearest public or private health facility. Bayesian Item Response Theory (IRT) models were fitted via Markov Chain Monte Carlo (MCMC) simulation to estimate parameters related to fever treatment and estimate the probability of treatment for children under five years. Different models were implemented to evaluate computational needs and the effect of including predictor variables such as rurality. The mean treatment rates were then estimated at regional level. Results Modelling results suggested probability of fever treatment was highest in regions with relatively high incidence of malaria historically. The minimum predicted threshold probability of seeking treatment was 0.3 (model 1: 0.340; 95% CI 0.155–0.597), suggesting that even in populations at large distances from facilities, there was still a 30% chance of an individual seeking treatment for fever. The agreement between correctly predicted probability of treatment at individual level based on a subset of data (n = 247) was high (AUC = 0.978), with a sensitivity of 96.7% and a specificity of 75.3%. Conclusion We have shown

  12. Mahalanobis' Contributions to Sample Surveys

    Indian Academy of Sciences (India)

    Sample Survey started its operations in October 1950 under the ... and adopted random cuts for estimating the acreage under jute ... demographic factors relating to indebtedness, unemployment, ... traffic surveys, demand for currency coins and average life of .... Mahalanobis derived the optimum allocation in stratified.

  13. Survey of industry methods for producing highly reliable software

    International Nuclear Information System (INIS)

    Lawrence, J.D.; Persons, W.L.

    1994-11-01

    The Nuclear Reactor Regulation Office of the US Nuclear Regulatory Commission is charged with assessing the safety of new instrument and control designs for nuclear power plants which may use computer-based reactor protection systems. Lawrence Livermore National Laboratory has evaluated the latest techniques in software reliability for measurement, estimation, error detection, and prediction that can be used during the software life cycle as a means of risk assessment for reactor protection systems. One aspect of this task has been a survey of the software industry to collect information to help identify the design factors used to improve the reliability and safety of software. The intent was to discover what practices really work in industry and what design factors are used by industry to achieve highly reliable software. The results of the survey are documented in this report. Three companies participated in the survey: Computer Sciences Corporation, International Business Machines (Federal Systems Company), and TRW. Discussions were also held with NASA Software Engineering Lab/University of Maryland/CSC, and the AIAA Software Reliability Project

  14. Surveys of environmental DNA (eDNA): a new approach to estimate occurrence in Vulnerable manatee populations

    Science.gov (United States)

    Hunter, Margaret; Meigs-Friend, Gaia; Ferrante, Jason; Takoukam Kamla, Aristide; Dorazio, Robert; Keith Diagne, Lucy; Luna, Fabia; Lanyon, Janet M.; Reid, James P.

    2018-01-01

    Environmental DNA (eDNA) detection is a technique used to non-invasively detect cryptic, low density, or logistically difficult-to-study species, such as imperiled manatees. For eDNA measurement, genetic material shed into the environment is concentrated from water samples and analyzed for the presence of target species. Cytochrome b quantitative PCR and droplet digital PCR eDNA assays were developed for the 3 Vulnerable manatee species: African, Amazonian, and both subspecies of the West Indian (Florida and Antillean) manatee. Environmental DNA assays can help to delineate manatee habitat ranges, high use areas, and seasonal population changes. To validate the assay, water was analyzed from Florida’s east coast containing a high-density manatee population and produced 31,564 DNA molecules l-1 on average and high occurrence (ψ) and detection (p) estimates (ψ = 0.84 [0.40-0.99]; p = 0.99 [0.95-1.00]; limit of detection 3 copies µl-1). Similar occupancy estimates were produced in the Florida Panhandle (ψ = 0.79 [0.54-0.97]) and Cuba (ψ = 0.89 [0.54-1.00]), while occupancy estimates in Cameroon were lower (ψ = 0.49 [0.09-0.95]). The eDNA-derived detection estimates were higher than those generated using aerial survey data on the west coast of Florida and may be effective for population monitoring. Subsequent eDNA studies could be particularly useful in locations where manatees are (1) difficult to identify visually (e.g. the Amazon River and Africa), (2) are present in patchy distributions or are on the verge of extinction (e.g. Jamaica, Haiti), and (3) where repatriation efforts are proposed (e.g. Brazil, Guadeloupe). Extension of these eDNA techniques could be applied to other imperiled marine mammal populations such as African and Asian dugongs.
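
    As a rough illustration of how the reported occupancy (ψ) and per-sample detection (p) estimates combine, the probability of at least one eDNA detection in k replicate water samples can be sketched as follows. The replicate count and the independence assumption are mine, not the study's.

```python
def prob_any_detection(psi, p, k):
    """P(site is occupied AND the target is detected in >= 1 of k independent replicates)."""
    return psi * (1.0 - (1.0 - p) ** k)

# Florida east coast estimates quoted above: psi = 0.84, p = 0.99.
print(round(prob_any_detection(0.84, 0.99, 1), 3))  # 0.832
# Cameroon occupancy (psi = 0.49) with a hypothetical lower detection p = 0.5 and 3 replicates:
print(round(prob_any_detection(0.49, 0.5, 3), 3))   # 0.429
```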

  15. Equations for estimating bankfull channel geometry and discharge for streams in Massachusetts

    Science.gov (United States)

    Bent, Gardner C.; Waite, Andrew M.

    2013-01-01

    Regression equations were developed for estimating bankfull geometry—width, mean depth, cross-sectional area—and discharge for streams in Massachusetts. The equations provide water-resource and conservation managers with methods for estimating bankfull characteristics at specific stream sites in Massachusetts. This information can be used for the administration of the Commonwealth of Massachusetts Rivers Protection Act of 1996, which establishes a protected riverfront area extending from the mean annual high-water line corresponding to the elevation of bankfull discharge along each side of a perennial stream. Additionally, information on bankfull channel geometry and discharge is important to Federal, State, and local government agencies and private organizations involved in stream assessment and restoration projects. Regression equations are based on data from stream surveys at 33 sites (32 streamgages and 1 crest-stage gage operated by the U.S. Geological Survey) in and near Massachusetts. Drainage areas of the 33 sites ranged from 0.60 to 329 square miles (mi2). At 27 of the 33 sites, field data were collected and analyses were done to determine bankfull channel geometry and discharge as part of the present study. For 6 of the 33 sites, data on bankfull channel geometry and discharge were compiled from other studies done by the U.S. Geological Survey, Natural Resources Conservation Service of the U.S. Department of Agriculture, and the Vermont Department of Environmental Conservation. Similar techniques were used for field data collection and analysis for bankfull channel geometry and discharge at all 33 sites. Recurrence intervals of the bankfull discharge, which represent the frequency with which a stream fills its channel, averaged 1.53 years (median value 1.34 years) at the 33 sites. Simple regression equations were developed for bankfull width, mean depth, cross-sectional area, and discharge using drainage area, which is the most significant explanatory

  16. On the Convergence and Law of Large Numbers for the Non-Euclidean Lp-Means

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2017-05-01

    Full Text Available This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean Lp-means, known to be true for the Euclidean L2-means. Let the Lp-mean estimator be the specific functional that estimates the Lp-mean of N independent and identically distributed random variables; then, (i) the expectation value of the Lp-mean estimator equals the mean of the distributions of the random variables; and (ii) the limit N → ∞ of the Lp-mean estimator also equals the mean of the distributions.
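
    One common characterization of the Lp-mean (an assumption here; the paper defines its estimator through a specific functional) is the value m minimizing Σ|x_i − m|^p. A minimal numerical sketch, using ternary search on this convex objective for p ≥ 1:

```python
def lp_mean(xs, p, iters=200):
    """Argmin over m of sum(|x - m|**p); the objective is convex for p >= 1,
    so ternary search over [min(xs), max(xs)] converges."""
    lo, hi = min(xs), max(xs)
    cost = lambda m: sum(abs(x - m) ** p for x in xs)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

data = [1.0, 2.0, 3.0, 4.0]
print(round(lp_mean(data, 2), 6))  # 2.5 (the L2-mean is the arithmetic mean)
```

    For p = 1 the same routine recovers the median, which illustrates how the family interpolates between familiar location estimators.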

  17. Estimating a WTP-based value of a QALY: the 'chained' approach.

    Science.gov (United States)

    Robinson, Angela; Gyrd-Hansen, Dorte; Bacon, Philomena; Baker, Rachel; Pennington, Mark; Donaldson, Cam

    2013-09-01

    A major issue in health economic evaluation is that of the value to place on a quality adjusted life year (QALY), commonly used as a measure of health care effectiveness across Europe. This critical policy issue is reflected in the growing interest across Europe in development of more sound methods to elicit such a value. EuroVaQ was a collaboration of researchers from 9 European countries, the main aim being to develop more robust methods to determine the monetary value of a QALY based on surveys of the general public. The 'chained' approach of deriving a societal willingness-to-pay (WTP) based monetary value of a QALY used the following basic procedure. First, utility values were elicited for health states using the standard gamble (SG) and time trade off (TTO) methods. Second, a monetary value to avoid some risk/duration of that health state was elicited and the implied WTP per QALY estimated. We developed within EuroVaQ an adaptation to the 'chained approach' that attempts to overcome problems documented previously (in particular the tendency to arrive at exceedingly high WTP per QALY values). The survey was administered via Internet panels in each participating country and almost 22,000 responses were achieved. Estimates of the value of a QALY varied across questions and were, if anything, on the low side with the (trimmed) 'all country' mean WTP per QALY ranging from $18,247 to $34,097. Untrimmed means were considerably higher and medians considerably lower in each case. We conclude that the adaptation to the chained approach described here is a potentially useful technique for estimating WTP per QALY. A number of methodological challenges do still exist, however, and there is scope for further refinement. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Gear and survey efficiency of patent tongs for oyster populations on restoration reefs.

    Science.gov (United States)

    Schulte, David M; Lipcius, Romuald N; Burke, Russell P

    2018-01-01

    Surveys of restored oyster reefs need to produce accurate population estimates to assess the efficacy of restoration. Due to the complex structure of subtidal oyster reefs, one effective and efficient means to sample is by patent tongs, rather than SCUBA, dredges, or bottom cores. Restored reefs vary in relief and oyster density, either of which could affect survey efficiency. This study is the first to evaluate gear (the first full grab) and survey (which includes selecting a specific half portion of the first grab for further processing) efficiencies of hand-operated patent tongs as a function of reef height and oyster density on subtidal restoration reefs. In the Great Wicomico River, a tributary of lower Chesapeake Bay, restored reefs of high- and low-relief (25-45 cm, and 8-12 cm, respectively) were constructed throughout the river as the first large-scale oyster sanctuary reef restoration effort (sanctuary acreage > 20 ha at one site) in Chesapeake Bay. We designed a metal frame to guide a non-hydraulic mechanical patent tong repeatedly into the same plot on a restored reef until all oysters within the grab area were captured. Full capture was verified by an underwater remotely-operated vehicle. Samples (n = 19) were taken on nine different reefs, including five low- (n = 8) and four high-relief reefs (n = 11), over a two-year period. The gear efficiency of the patent tong was estimated to be 76% (± 5% standard error), whereas survey efficiency increased to 81% (± 10%) due to processing. Neither efficiency differed significantly between young-of-the-year oysters (spat) and adults, high- and low-relief reefs, or years. As this type of patent tong is a common and cost-effective tool to evaluate oyster restoration projects as well as population density on fished habitat, knowing the gear and survey efficiencies allows for accurate and precise population estimates.

  19. Mean Green operators of deformable fiber networks embedded in a compliant matrix and property estimates

    Science.gov (United States)

    Franciosi, Patrick; Spagnuolo, Mario; Salman, Oguz Umut

    2018-04-01

    Composites comprising included phases in a continuous matrix constitute a huge class of meta-materials, whose effective properties, whether they be mechanical, physical or coupled, can be selectively optimized by using appropriate phase arrangements and architectures. An important subclass is represented by "network-reinforced matrices," say those materials in which one or more of the embedded phases are co-continuous with the matrix in one or more directions. In this article, we present a method to study effective properties of simple such structures from which more complex ones can be accessible. Effective properties are shown, in the framework of linear elasticity, to be estimable by using the global mean Green operator for the entire embedded fiber network, which is by definition through sample spanning. This network operator is obtained from one of infinite planar alignments of infinite fibers, which the network can be seen as an interpenetrated set of, with the fiber interactions being fully accounted for in the alignments. The mean operator of such alignments is given in exact closed form for isotropic elastic-like or dielectric-like matrices. We first exemplify how these operators relevantly provide, from classic homogenization frameworks, effective properties in the case of 1D fiber bundles embedded in an isotropic elastic-like medium. It is also shown that using infinite patterns with fully interacting elements over their whole influence range at any element concentration suppresses the dilute approximation limit of these frameworks. We finally present a construction method for a global operator of fiber networks described as interpenetrated such bundles.

  20. Harmonizing methods for wildlife abundance estimation and pathogen detection in Europe-a questionnaire survey on three selected host-pathogen combinations

    DEFF Research Database (Denmark)

    Schulz, Jana; Ryser-Degiorgis, Marie-Pierre; Kuiken, Thijs

    2017-01-01

    Background: The need for wildlife health surveillance as part of disease control in wildlife, domestic animals and humans on the global level is widely recognized. However, the objectives, methods and intensity of existing wildlife health surveillance programs vary greatly among European countries …, resulting in a patchwork of data that are difficult to merge and compare. This survey aimed at evaluating the need and potential for data harmonization in wildlife health in Europe. The specific objective was to collect information on methods currently used to estimate host abundance and pathogen prevalence … estimation, there is an urgent need to develop tools for the routine collection of host abundance data in a harmonized way. Wildlife health experts are encouraged to apply the harmonized APHAEA protocols in epidemiological studies in wildlife and to increase cooperation …

  1. Adult proxy responses to a survey of children's dermal soil contact activities.

    Science.gov (United States)

    Wong, E Y; Shirai, J H; Garlock, T J; Kissel, J C

    2000-01-01

    Contaminated site cleanup decisions may require estimation of dermal exposures to soil. Telephone surveys represent one means of obtaining relevant activity pattern data. The initial Soil Contact Survey (SCS-I), which primarily gathered information on the activities of adults, was conducted in 1996. Data describing adult behaviors have been previously reported. Results from a second Soil Contact Survey (SCS-II), performed in 1998-1999 and focused on children's activity patterns, are reported here. Telephone surveys were used to query a randomly selected sample of U.S. households. A randomly chosen child, under the age of 18 years, was targeted in each responding household having children. Play activities as well as bathing patterns were investigated to quantify total exposure time, defined as activity time plus delay until washing. Of 680 total survey respondents, 500 (73.5%) reported that their child played outdoors on bare dirt or mixed grass and dirt surfaces. Among these "players," the median reported play frequency was 7 days/week in warm weather and 3 days/week in cold weather. Median play duration was 3 h/day in warm weather and 1 h/day in cold weather. Hand washes were reported to occur a median of 4 times per day in both warm and cold weather months. Bath or shower median frequency was seven times per week in both warm and cold weather. Finally, based on clothing choice data gathered in SCS-I, a median of about 37% of total skin surface is estimated to be exposed during young children's warm weather outdoor play.

  2. The Local Volume HI Survey (LVHIS)

    Science.gov (United States)

    Koribalski, Bärbel S.; Wang, Jing; Kamphuis, P.; Westmeier, T.; Staveley-Smith, L.; Oh, S.; López-Sánchez, Á. R.; Wong, O. I.; Ott, J.; de Blok, W. J. G.; Shao, L.

    2018-02-01

    The `Local Volume HI Survey' (LVHIS) comprises deep H I spectral line and 20-cm radio continuum observations of 82 nearby, gas-rich galaxies, supplemented by multi-wavelength images. Our sample consists of all galaxies with Local Group velocities vLG atlas, including the overall gas distribution, mean velocity field, velocity dispersion and position-velocity diagrams, together with a homogeneous set of measured and derived galaxy properties. Our primary goal is to investigate the H I morphologies, kinematics and environment at high resolution and sensitivity. LVHIS galaxies represent a wide range of morphologies and sizes; our measured H I masses range from ˜107 to 1010 M⊙, based on independent distance estimates. The LVHIS galaxy atlas (incl. FITS files) is available on-line.

  3. Local digital algorithms for estimating the mean integrated curvature of r-regular sets

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    , no asymptotically unbiased estimator of this type exists in dimension greater than or equal to three, while for stationary isotropic lattices, asymptotically unbiased estimators are plenty. Both results follow from a general formula that we state and prove, describing the asymptotic behavior of hit...

  4. Analyzing Repeated Measures Marginal Models on Sample Surveys with Resampling Methods

    Directory of Open Access Journals (Sweden)

    James D. Knoke

    2005-12-01

    Full Text Available Packaged statistical software for analyzing categorical, repeated measures marginal models on sample survey data with binary covariates does not appear to be available. Consequently, this report describes a customized SAS program which accomplishes such an analysis on survey data with jackknifed replicate weights for which the primary sampling unit information has been suppressed for respondent confidentiality. First, the program employs the Macro Language and the Output Delivery System (ODS) to estimate the means and covariances of indicator variables for the response variables, taking the design into account. Then, it uses PROC CATMOD and ODS, ignoring the survey design, to obtain the design matrix and hypothesis test specifications. Finally, it enters these results into another run of CATMOD, which performs automated direct input of the survey design specifications and accomplishes the appropriate analysis. This customized SAS program can be employed, with minor editing, to analyze general categorical, repeated measures marginal models on sample surveys with replicate weights. Finally, the results of our analysis accounting for the survey design are compared to the results of two alternate analyses of the same data. This comparison confirms that such alternate analyses, which do not properly account for the design, do not produce useful results.

  5. Refining estimates of availability bias to improve assessments of the conservation status of an endangered dolphin.

    Science.gov (United States)

    Sucunza, Federico; Danilewicz, Daniel; Cremer, Marta; Andriolo, Artur; Zerbini, Alexandre N

    2018-01-01

    Estimation of visibility bias is critical to accurately compute abundance of wild populations. The franciscana, Pontoporia blainvillei, is considered the most threatened small cetacean in the southwestern Atlantic Ocean. Aerial surveys are considered the most effective method to estimate abundance of this species, but many existing estimates have been considered unreliable because they lack proper estimation of correction factors for visibility bias. In this study, helicopter surveys were conducted to determine surfacing-diving intervals of franciscanas and to estimate availability for aerial platforms. Fifteen hours were flown and 101 groups of 1 to 7 franciscanas were monitored, resulting in a sample of 248 surface-dive cycles. The mean surfacing interval and diving interval times were 16.10 seconds (SE = 9.74) and 39.77 seconds (SE = 29.06), respectively. Availability was estimated at 0.39 (SE = 0.01), a value 16-46% greater than estimates computed from diving parameters obtained from boats or from land. Generalized mixed-effects models were used to investigate the influence of biological and environmental predictors on the proportion of time franciscana groups are visually available to be seen from an aerial platform. These models revealed that group size was the main factor influencing the proportion at surface. The use of negatively biased estimates of availability results in overestimation of abundance, leads to overly optimistic assessments of extinction probabilities and to potentially ineffective management actions. This study demonstrates that estimates of availability must be computed from suitable platforms to ensure proper conservation decisions are implemented to protect threatened species such as the franciscana.
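
    The role of availability in abundance estimation can be illustrated with the simple correction N̂ = n/â (a standard form, not necessarily the study's exact estimator), using the availability of 0.39 reported above; the raw count and the alternative biased availability value are hypothetical.

```python
def corrected_abundance(n_observed, availability):
    """Scale a raw count up by the probability the animal is visible at the surface."""
    return n_observed / availability

aerial_a = 0.39  # helicopter-based availability estimate from the study
biased_a = 0.27  # hypothetical negatively biased (boat/land-derived) value
n = 100          # hypothetical raw aerial count
print(round(corrected_abundance(n, aerial_a)))  # 256
print(round(corrected_abundance(n, biased_a)))  # 370 -> overestimation from the biased value
```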

  6. A flexible and coherent test/estimation procedure based on restricted mean survival times for censored time-to-event data in randomized clinical trials.

    Science.gov (United States)

    Horiguchi, Miki; Cronin, Angel M; Takeuchi, Masahiro; Uno, Hajime

    2018-04-22

    In randomized clinical trials where time-to-event is the primary outcome, almost routinely, the logrank test is prespecified as the primary test and the hazard ratio is used to quantify treatment effect. If the ratio of 2 hazard functions is not constant, the logrank test is not optimal and the interpretation of hazard ratio is not obvious. When such a nonproportional hazards case is expected at the design stage, the conventional practice is to prespecify another member of the weighted logrank tests, e.g., the Peto-Prentice-Wilcoxon test. Alternatively, one may specify a robust test as the primary test, which can capture various patterns of difference between 2 event time distributions. However, most of those tests do not have companion procedures to quantify the treatment difference, and investigators have fallen back on reporting treatment effect estimates not associated with the primary test. Such incoherence in the "test/estimation" procedure may potentially mislead clinicians/patients who have to balance risk-benefit for treatment decision. To address this, we propose a flexible and coherent test/estimation procedure based on restricted mean survival time, where the truncation time τ is selected data dependently. The proposed procedure is composed of a prespecified test and an estimation of corresponding robust and interpretable quantitative treatment effect. The utility of the new procedure is demonstrated by numerical studies based on 2 randomized cancer clinical trials; the test is dramatically more powerful than the logrank test, the Wilcoxon test, and the restricted mean survival time-based test with a fixed τ, for the patterns of difference seen in these cancer clinical trials. Copyright © 2018 John Wiley & Sons, Ltd.
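
    Restricted mean survival time is the area under the survival curve up to the truncation time τ. A minimal sketch (my own, not the authors' procedure, which selects τ data-dependently) computing RMST from a Kaplan-Meier estimate:

```python
def kaplan_meier(times, events):
    """Return [(event_time, survival_after)] for the Kaplan-Meier estimate.
    `events[i]` is 1 for an observed event, 0 for right-censoring."""
    data = sorted(zip(times, events))
    at_risk, s, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = c = 0  # events / censorings at time t
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                d += 1
            else:
                c += 1
            i += 1
        if d:
            s *= 1.0 - d / at_risk
            curve.append((t, s))
        at_risk -= d + c
    return curve

def rmst(times, events, tau):
    """Restricted mean survival time: area under the KM step function on [0, tau]."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in kaplan_meier(times, events):
        if t >= tau:
            break
        area += prev_s * (t - prev_t)
        prev_t, prev_s = t, s
    return area + prev_s * (tau - prev_t)

# All-events toy data: S = 2/3 on (1,2], 1/3 on (2,3]; RMST(3) = 1 + 2/3 + 1/3 = 2.
print(round(rmst([1, 2, 3], [1, 1, 1], 3), 9))  # 2.0
```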

  7. Semiparametric Inference in a GARCH-in-Mean Model

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Dahl, Christian Møller; Iglesias, Emma M.

    A new semiparametric estimator for an empirical asset pricing model with general nonparametric risk-return tradeoff and a GARCH process for the underlying volatility is introduced. The estimator does not rely on any initial parametric estimator of the conditional mean function, and this feature...... facilitates the derivation of asymptotic theory under possible nonlinearity of unspecified form of the risk-return tradeoff. Besides the nonlinear GARCH-in-mean effect, our specification accommodates exogenous regressors that are typically used as conditioning variables entering linearly in the mean equation...... with the fully parametric approach and the iterative semiparametric approach using a parametric initial estimate proposed by Conrad and Mammen (2008). An empirical application to the daily S&P 500 stock market returns suggests that the linear relation between conditional expected return and conditional...

  8. Comparing two survey methods of measuring health-related indicators: Lot Quality Assurance Sampling and Demographic Health Surveys.

    Science.gov (United States)

    Anoke, Sarah C; Mwai, Paul; Jeffery, Caroline; Valadez, Joseph J; Pagano, Marcello

    2015-12-01

    Two common methods used to measure indicators for health programme monitoring and evaluation are the demographic and health surveys (DHS) and lot quality assurance sampling (LQAS); each has different strengths. We report on both methods when utilised in comparable situations. We compared 24 indicators in south-west Uganda, where data for prevalence estimations were collected independently for the two methods in 2011 (LQAS: n = 8876; DHS: n = 1200). Data were stratified (e.g. by gender and age), resulting in 37 comparisons. We used a two-sample two-sided Z-test of proportions to compare both methods. The average difference between LQAS and DHS for the 37 estimates was 0.062 (SD = 0.093; median = 0.039). The average difference among the 21 failures to reject equality of proportions was 0.010 (SD = 0.041; median = 0.009); among the 16 rejections, it was 0.130 (SD = 0.010; median = 0.118). Seven of the 16 rejections exhibited absolute differences between 0.10 and 0.20 (mean = 0.261, SD = 0.083). There is 75.7% agreement across the two surveys. Both methods yield regional results, but only LQAS provides information at less granular levels (e.g. the district level) where managerial action is taken. The cost advantage and localisation make it feasible to conduct LQAS more frequently and provide the possibility of real-time health outcomes monitoring. © 2015 The Authors. Tropical Medicine & International Health Published by John Wiley & Sons Ltd.
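
    The two-sample two-sided Z-test of proportions used for the 37 comparisons can be sketched as follows (the counts below are hypothetical, and the surveys' sampling weights are not reproduced):

```python
from math import sqrt, erf

def z_test_proportions(x1, n1, x2, n2):
    """Two-sample two-sided Z-test of H0: p1 == p2 using the pooled variance.
    x1/n1 and x2/n2 are successes/sample sizes in the two surveys."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                 # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value via the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value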

  9. Billfish Angler Survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Billfish Angler Survey provides estimates of billfish angling activities in the Pacific and Indian Oceans. This collection of recreational billfish catch and...

  10. Variance estimates for transport in stochastic media by means of the master equation

    International Nuclear Information System (INIS)

    Pautz, S. D.; Franke, B. C.; Prinja, A. K.

    2013-01-01

    The master equation has been used to examine properties of transport in stochastic media. It has been shown previously that not only may the Levermore-Pomraning (LP) model be derived from the master equation for a description of ensemble-averaged transport quantities, but also that equations describing higher-order statistical moments may be obtained. We examine in greater detail the equations governing the second moments of the distribution of the angular fluxes, from which variances may be computed. We introduce a simple closure for these equations, as well as several models for estimating the variances of derived transport quantities. We revisit previous benchmarks for transport in stochastic media in order to examine the error of these new variance models. We find, not surprisingly, that the errors in these variance estimates are at least as large as the corresponding estimates of the average, and sometimes much larger. We also identify patterns in these variance estimates that may help guide the construction of more accurate models. (authors)

  11. Timing of spring surveys for midcontinent sandhill cranes

    Science.gov (United States)

    Pearse, Aaron T.; Krapu, Gary L.; Brandt, David A.; Sargeant, Glen A.

    2015-01-01

    The U.S. Fish and Wildlife Service has used spring aerial surveys to estimate numbers of migrating sandhill cranes (Grus canadensis) staging in the Platte River Valley of Nebraska, USA. The resulting estimates index the abundance of the midcontinent sandhill crane population and inform harvest management decisions. However, annual changes in the index have exceeded biologically plausible changes in population size (>50% of surveys between 1982 and 2013 indicate >±20% change), raising questions about nuisance variation due to factors such as migration chronology. We used locations of cranes marked with very-high-frequency transmitters to estimate migration chronology (i.e., proportions of cranes present within the Platte River Valley). We also used roadside surveys to determine the percentage of cranes staging in the Platte River Valley but outside of the survey area when surveys occur. During March 2001–2007, an average of 86% (71–94%; SD = 7%) of marked cranes were present along the Platte River during scheduled survey dates, and 0–11% of cranes that were present along the Platte River were not within the survey boundaries. Timing of the annual survey generally corresponded with the presence of the greatest proportion of marked cranes and with the least inter-annual variation; consequently, the accuracy of estimates could not have been improved by surveying on different dates. Conducting the survey earlier would miss birds not yet arrived at the staging site, whereas a later date would occur at a time when a larger portion of birds may have already departed the staging site and when a greater proportion of birds occurred outside of the surveyed area. Index values used to monitor midcontinent sandhill crane abundance vary annually due, in part, to annual variation in migration chronology and to the spatial distribution of cranes in the Platte River Valley; therefore, managers should interpret survey results cautiously, with awareness of a continuing need to identify and

  12. American Samoa Shore-based Creel Survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The DMWR staff has also conducted shore-based creel surveys which also have 2 major sub-surveys; one to estimate participation (fishing effort), and one to provide...

  13. Estimation of population doses from diagnostic medical examinations in Japan, 1974. IV. Dose estimation of fetus exposed in utero to diagnostic x rays

    Energy Technology Data Exchange (ETDEWEB)

    Hashizume, T; Maruyama, T; Kumamoto, Y [National Inst. of Radiological Sciences, Chiba (Japan)

    1976-07-01

    For fetuses exposed in utero to diagnostic x rays during medical examinations of the mother, the absorbed dose has been estimated on the basis of a 1974 nationwide radiological survey. The results of the survey showed that the number of radiographs per year involving pregnant women was 0.32 million for chest examinations excluding mass surveys, 0.29 million for obstetrical examinations including pelvimetry, and 0.21 million for abdominal and pelvic examinations, for a total of 0.82 million. The dose absorbed in the fetus was measured with an ionization chamber placed at the hypothetical center of the fetus in an "average woman" Rando phantom in which a maternal body was simulated by adding MixDp materials. The collective dose to the fetus in pregnant women receiving a given type of examination was calculated from the number of radiographs per year involving pregnant women and the fetal doses. The per capita mean marrow dose (CMD), the leukemia significant dose (LSD) and the genetically significant dose (GSD) for the fetus were determined from the collective dose, taking into account the birth expectancy, the child expectancy, the life expectancy and the significant factor for the fetus. The collective dose to the fetus was estimated to be 9.3 x 10^4 man rad per year. The resultant values of CMD, LSD and GSD were 0.81 mrad per year, 0.79 mrad per person per year and 1.44 mrad per person per year, respectively.

  14. Transferring 2001 National Household Travel Survey

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Patricia S [ORNL; Reuscher, Tim [ORNL; Schmoyer, Richard L [ORNL; Chin, Shih-Miao [ORNL

    2007-05-01

    Policy makers rely on transportation statistics, including data on personal travel behavior, to formulate strategic transportation policies, and to improve the safety and efficiency of the U.S. transportation system. Data on personal travel trends are needed to examine the reliability, efficiency, capacity, and flexibility of the Nation's transportation system to meet current demands and to accommodate future demand. These data are also needed to assess the feasibility and efficiency of alternative congestion-mitigating technologies (e.g., high-speed rail, magnetically levitated trains, and intelligent vehicle and highway systems); to evaluate the merits of alternative transportation investment programs; and to assess the energy-use and air-quality impacts of various policies. To address these data needs, the U.S. Department of Transportation (USDOT) initiated an effort in 1969 to collect detailed data on personal travel. The 1969 survey was the first Nationwide Personal Transportation Survey (NPTS). The survey was conducted again in 1977, 1983, 1990, 1995, and 2001. Data on daily travel were collected in 1969, 1977, 1983, 1990 and 1995. In 2001, the survey was renamed the National Household Travel Survey (NHTS) and it collected both daily and long-distance trips. The 2001 survey was sponsored by three USDOT agencies: Federal Highway Administration (FHWA), Bureau of Transportation Statistics (BTS), and National Highway Traffic Safety Administration (NHTSA). The primary objective of the survey was to collect trip-based data on the nature and characteristics of personal travel so that the relationships between the characteristics of personal travel and the demographics of the traveler can be established. Commercial and institutional travel were not part of the survey. Due to the survey's design, data in the NHTS survey series were not recommended for estimating travel statistics for categories smaller than the combination of Census division (e.g., New

  15. A PARAMETERIZED GALAXY CATALOG SIMULATOR FOR TESTING CLUSTER FINDING, MASS ESTIMATION, AND PHOTOMETRIC REDSHIFT ESTIMATION IN OPTICAL AND NEAR-INFRARED SURVEYS

    International Nuclear Information System (INIS)

    Song, Jeeseon; Mohr, Joseph J.; Barkhouse, Wayne A.; Rude, Cody; Warren, Michael S.; Dolag, Klaus

    2012-01-01

    We present a galaxy catalog simulator that converts N-body simulations with halo and subhalo catalogs into mock, multiband photometric catalogs. The simulator assigns galaxy properties to each subhalo in a way that reproduces the observed cluster galaxy halo occupation distribution, the radial and mass-dependent variation in fractions of blue galaxies, the luminosity functions in the cluster and the field, and the color-magnitude relation in clusters. Moreover, the evolution of these parameters is tuned to match existing observational constraints. Parameterizing an ensemble of cluster galaxy properties enables us to create mock catalogs with variations in those properties, which in turn allows us to quantify the sensitivity of cluster finding to current observational uncertainties in these properties. Field galaxies are sampled from existing multiband photometric surveys of similar depth. We present an application of the catalog simulator to characterize the selection function and contamination of a galaxy cluster finder that utilizes the cluster red sequence together with galaxy clustering on the sky. We estimate systematic uncertainties in the selection to be at the ≤15% level with current observational constraints on cluster galaxy populations and their evolution. We find the contamination in this cluster finder to be ∼35% to redshift z ∼ 0.6. In addition, we use the mock galaxy catalogs to test the optical mass indicator B_gc and a red-sequence redshift estimator. We measure the intrinsic scatter of the B_gc-mass relation to be approximately log normal with σ_log10M ∼ 0.25 and we demonstrate photometric redshift accuracies for massive clusters at the ∼3% level out to z ∼ 0.7.

  16. Electrical stimulation therapy for dysphagia: a follow-up survey of USA dysphagia practitioners.

    Science.gov (United States)

    Barikroo, Ali; Carnaby, Giselle; Crary, Michael

    2017-12-01

    The aim of this study was to compare current application, practice patterns, clinical outcomes, and professional attitudes of dysphagia practitioners regarding electrical stimulation (e-stim) therapy with similar data obtained in 2005. A web-based survey was posted on the American Speech-Language-Hearing Association Special Interest Group 13 webpage for 1 month. A total of 271 survey responses were analyzed and descriptively compared with the archived responses from the 2005 survey. Results suggested that e-stim application increased by 47% among dysphagia practitioners over the last 10 years. The frequency of weekly e-stim therapy sessions decreased while the reported total number of treatment sessions increased between the two surveys. Advancement in oral diet was the most commonly reported improvement in both surveys. Overall, reported satisfaction levels of clinicians and patients regarding e-stim therapy decreased. Still, the majority of e-stim practitioners continue to recommend this treatment modality to other dysphagia practitioners. Results from the novel items in the current survey suggested that motor level e-stim (e.g. higher amplitude) is most commonly used during dysphagia therapy with no preferred electrode placement. Furthermore, the majority of clinicians reported high levels of self-confidence regarding their ability to perform e-stim. The results of this survey highlight ongoing changes in application, practice patterns, clinical outcomes, and professional attitudes associated with e-stim therapy among dysphagia practitioners.

  17. Methods to estimate historical daily streamflow for ungaged stream locations in Minnesota

    Science.gov (United States)

    Lorenz, David L.; Ziegeweid, Jeffrey R.

    2016-03-14

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water; however, streamgages cannot be installed at every location where streamflow information is needed. Therefore, methods for estimating streamflow at ungaged stream locations need to be developed. This report presents a statewide study to develop methods to estimate the structure of historical daily streamflow at ungaged stream locations in Minnesota. Historical daily mean streamflow at ungaged locations in Minnesota can be estimated by transferring streamflow data at streamgages to the ungaged location using the QPPQ method. The QPPQ method uses flow-duration curves at an index streamgage, relying on the assumption that exceedance probabilities are equivalent between the index streamgage and the ungaged location, and estimates the flow at the ungaged location using the estimated flow-duration curve. Flow-duration curves at ungaged locations can be estimated using recently developed regression equations that have been incorporated into StreamStats (http://streamstats.usgs.gov/), which is a U.S. Geological Survey Web-based interactive mapping tool that can be used to obtain streamflow statistics, drainage-basin characteristics, and other information for user-selected locations on streams.
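
    A minimal sketch of the QPPQ idea described above: each daily flow at the index streamgage is converted to an exceedance probability (here via the Weibull plotting position), and the flow with the same exceedance probability is read off the ungaged site's flow-duration curve. The flows and curve below are hypothetical, and the regression-based StreamStats curves are not reproduced:

```python
import bisect

def qppq(index_daily, ungaged_fdc):
    """index_daily : daily mean flows at the index streamgage
    ungaged_fdc : (exceedance_prob, flow) pairs for the ungaged site,
                  sorted by increasing exceedance probability
    Returns estimated daily flows at the ungaged location."""
    n = len(index_daily)
    # rank flows descending; Weibull plotting position p = rank / (n + 1)
    order = sorted(range(n), key=lambda i: -index_daily[i])
    probs = [0.0] * n
    for rank, i in enumerate(order, start=1):
        probs[i] = rank / (n + 1)
    # linear interpolation on the ungaged flow-duration curve
    ps = [p for p, _ in ungaged_fdc]
    qs = [q for _, q in ungaged_fdc]
    estimates = []
    for p in probs:
        j = min(max(bisect.bisect_left(ps, p), 1), len(ps) - 1)
        w = (p - ps[j - 1]) / (ps[j] - ps[j - 1])
        estimates.append(qs[j - 1] + w * (qs[j] - qs[j - 1]))
    return estimates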

  18. The prevalence and incidence of active syphilis in women in Morocco, 1995-2016: Model-based estimation and implications for STI surveillance.

    Science.gov (United States)

    Bennani, Aziza; El-Kettani, Amina; Hançali, Amina; El-Rhilani, Houssine; Alami, Kamal; Youbi, Mohamed; Rowley, Jane; Abu-Raddad, Laith; Smolak, Alex; Taylor, Melanie; Mahiané, Guy; Stover, John; Korenromp, Eline L

    2017-01-01

    Evolving health priorities and resource constraints mean that countries require data on trends in sexually transmitted infection (STI) burden to inform program planning and resource allocation. We applied the Spectrum STI estimation tool to estimate the prevalence and incidence of active syphilis in adult women in Morocco over 1995 to 2016. The results from the analysis are being used to inform Morocco's national HIV/STI strategy, target setting and program evaluation. Syphilis prevalence levels and trends were fitted through logistic regression to data from surveys in antenatal clinics, among women attending family planning clinics and in other general adult populations, as available post-1995. Prevalence data were adjusted for diagnostic test performance and for the contribution of higher-risk populations not sampled in surveys. Incidence was inferred from prevalence by adjusting for the average duration of infection with active syphilis. In 2016, active syphilis prevalence was estimated to be 0.56% in women 15 to 49 years of age (95% confidence interval, CI: 0.3%-1.0%), and around 21,675 (10,612-37,198) new syphilis infections occurred. The analysis shows a steady decline in prevalence from 1995, when the prevalence was estimated to be 1.8% (1.0-3.5%). The decline was consistent with decreasing prevalences observed in TB patients, fishermen and prisoners followed over 2000-2012 through sentinel surveillance, and with a decline since 2003 in national HIV incidence estimated earlier through independent modelling. Periodic population-based surveys allowed Morocco to estimate syphilis prevalence and incidence trends. This first-ever undertaking engaged and focused national stakeholders, and confirmed the still considerable syphilis burden. The latest survey was done in 2012, and so the trends are relatively uncertain after 2012. From 2017 Morocco plans to implement a system to record data from routine antenatal programmatic screening, which should help update and re

  19. Challenges in Estimating Vaccine Coverage in Refugee and Displaced Populations: Results From Household Surveys in Jordan and Lebanon

    Science.gov (United States)

    Roberton, Timothy; Weiss, William; Doocy, Shannon

    2017-01-01

    Ensuring the sustained immunization of displaced persons is a key objective in humanitarian emergencies. Typically, humanitarian actors measure coverage of single vaccines following an immunization campaign; few measure routine coverage of all vaccines. We undertook household surveys of Syrian refugees in Jordan and Lebanon, outside of camps, using a mix of random and respondent-driven sampling, to measure coverage of all vaccinations included in the host country’s vaccine schedule. We analyzed the results with a critical eye to data limitations and implications for similar studies. Among households with a child aged 12–23 months, 55.1% of respondents in Jordan and 46.6% in Lebanon were able to produce the child’s EPI card. Only 24.5% of Syrian refugee children in Jordan and 12.5% in Lebanon were fully immunized through routine vaccination services (having received from non-campaign sources: measles, polio 1–3, and DPT 1–3 in Jordan and Lebanon, and BCG in Jordan). Respondents in Jordan (33.5%) and Lebanon (40.1%) reported difficulties obtaining child vaccinations. Our estimated immunization rates were lower than expected and raise serious concerns about gaps in vaccine coverage among Syrian refugees. Although our estimates likely under-represent true coverage, given the additional benefit of campaigns (not captured in our surveys), there is a clear need to increase awareness, accessibility, and uptake of immunization services. Current methods to measure vaccine coverage in refugee and displaced populations have limitations. To better understand health needs in such groups, we need research on: validity of recall methods, links between campaigns and routine immunization programs, and improved sampling of hard-to-reach populations. PMID:28805672

  20. Social networking versus facebook advertising to recruit survey respondents: a quasi-experimental study.

    Science.gov (United States)

    Gilligan, Conor; Kypri, Kypros; Bourke, Jesse

    2014-09-17

    Increasingly, social contact and knowledge of other people's attitudes and behavior are mediated by online social media such as Facebook. The main research to which this recruitment study pertains investigates the influence of parents on adolescent alcohol consumption. Given the pervasiveness of online social media use, Facebook may be an effective means of recruitment and intervention delivery. The objective of the study was to determine the efficacy of study recruitment via social networks versus paid advertising on Facebook. We conducted a quasi-experimental sequential trial with response rate as the outcome, and estimates of cost-effectiveness. The target population was parents of 13-17 year old children attending high schools in the Hunter region of New South Wales, Australia. Recruitment occurred first via method 1, social recruitment using Facebook, email-based social networks, and media coverage, followed by method 2, Facebook advertising. Using the online and other social network approaches of method 1 alone, 74 parents were recruited to complete a survey over eight months, at a cost of AUD 58.70 per completed survey. With the Facebook advertising of method 2, 204 parents completed the survey over four weeks, at a cost of AUD 5.94 per completed survey. Participants were representative of the parents recruited from the region's schools using standard mail and email. Facebook advertising is a cost-effective means of recruiting parents, a group difficult to reach by other methods.

  1. Advantages and limitations of web-based surveys: evidence from a child mental health survey.

    Science.gov (United States)

    Heiervang, Einar; Goodman, Robert

    2011-01-01

    Web-based surveys may have advantages related to the speed and cost of data collection as well as data quality. However, they may be biased by low and selective participation. We predicted that such biases would distort point-estimates such as average symptom level or prevalence but not patterns of associations with putative risk-factors. A structured psychiatric interview was administered to parents in two successive surveys of child mental health. In 2003, parents were interviewed face-to-face, whereas in 2006 they completed the interview online. In both surveys, interviews were preceded by paper questionnaires covering child and family characteristics. The rate of parents logging onto the web site was comparable to the response rate for face-to-face interviews, but the rate of full response (completing all sections of the interview) was much lower for web-based interviews. Full response was less frequent for non-traditional families, immigrant parents, and less educated parents. Participation bias affected point estimates of psychopathology but had little effect on associations with putative risk factors. The time and cost of full web-based interviews was only a quarter of that for face-to-face interviews. Web-based surveys may be performed faster and at lower cost than more traditional approaches with personal interviews. Selective participation seems a particular threat to point estimates of psychopathology, while patterns of associations are more robust.

  2. Explaining discrepancies in reproductive health indicators from population-based surveys and exit surveys: a case from Rwanda.

    Science.gov (United States)

    Meekers, D; Ogada, E A

    2001-06-01

    Reproductive health programmes often need exit surveys and population-based surveys for monitoring and evaluation. This study investigates why such studies produce discrepant estimates of condom use, sexual behaviour and condom brand knowledge, and discusses the implications for future use of exit surveys for programme monitoring. Logistic regression is used to explain differences between a household survey of 1295 persons and an exit survey among a random sample of 2550 consumers at retail outlets in Rwanda. Discrepancies in ever use of condoms and risky sexual behaviours are due to differences in the socioeconomic status of the two samples. After controls, exit surveys at most outlet types yield the same results as the household survey. Only exit surveys at bars, nightclubs and hotels yield significantly different estimates. However, the above-average knowledge of Prudence Plus condoms in the exit interviews is not attributable to socioeconomic or demographic variables, most likely because respondents have seen the product at the outlets. Information about condom use and sexual behaviour obtained from exit surveys appears as accurate as that obtained through household surveys. Nevertheless, exit surveys must be used cautiously. Because exit surveys may include wealthier and better-educated respondents, they are not representative of the general population. The composition of exit survey samples should be validated through existing household surveys. Comparisons across survey types are generally inadvisable unless they control for sample differences. When generalizing to the population at large is not needed (e.g. for studies aimed at identifying the characteristics and behaviour of users of particular products or services), exit surveys can provide an appropriate alternative to household surveys.

  3. High-Resolution Spatial Distribution and Estimation of Access to Improved Sanitation in Kenya.

    Science.gov (United States)

    Jia, Peng; Anderson, John D; Leitner, Michael; Rheingans, Richard

    2016-01-01

    Access to sanitation facilities is imperative in reducing the risk of multiple adverse health outcomes. A distinct disparity in sanitation exists among different wealth levels in many low-income countries, which may hinder the progress across each of the Millennium Development Goals. The surveyed households in 397 clusters from 2008-2009 Kenya Demographic and Health Surveys were divided into five wealth quintiles based on their national asset scores. A series of spatial analysis methods including excess risk, local spatial autocorrelation, and spatial interpolation were applied to observe disparities in coverage of improved sanitation among different wealth categories. The total number of the population with improved sanitation was estimated by interpolating, time-adjusting, and multiplying the surveyed coverage rates by high-resolution population grids. A comparison was then made with the annual estimates from the United Nations Population Division and World Health Organization/United Nations Children's Fund Joint Monitoring Program for Water Supply and Sanitation. The Empirical Bayesian Kriging interpolation produced minimal root mean squared error for all clusters and five quintiles while predicting the raw and spatial coverage rates of improved sanitation. The coverage in southern regions was generally higher than in the north and east, and the coverage in the south decreased from Nairobi in all directions, while Nyanza and North Eastern Province had relatively poor coverage. The general clustering trend of high and low sanitation improvement among surveyed clusters was confirmed after spatial smoothing. There exists an apparent disparity in sanitation among different wealth categories across Kenya and spatially smoothed coverage rates resulted in a closer estimation of the available statistics than raw coverage rates. Future intervention activities need to be tailored for both different wealth categories and nationally where there are areas of greater needs when

  4. High-Resolution Spatial Distribution and Estimation of Access to Improved Sanitation in Kenya.

    Directory of Open Access Journals (Sweden)

    Peng Jia

    Full Text Available Access to sanitation facilities is imperative in reducing the risk of multiple adverse health outcomes. A distinct disparity in sanitation exists among different wealth levels in many low-income countries, which may hinder the progress across each of the Millennium Development Goals. The surveyed households in 397 clusters from 2008-2009 Kenya Demographic and Health Surveys were divided into five wealth quintiles based on their national asset scores. A series of spatial analysis methods including excess risk, local spatial autocorrelation, and spatial interpolation were applied to observe disparities in coverage of improved sanitation among different wealth categories. The total number of the population with improved sanitation was estimated by interpolating, time-adjusting, and multiplying the surveyed coverage rates by high-resolution population grids. A comparison was then made with the annual estimates from the United Nations Population Division and World Health Organization/United Nations Children's Fund Joint Monitoring Program for Water Supply and Sanitation. The Empirical Bayesian Kriging interpolation produced minimal root mean squared error for all clusters and five quintiles while predicting the raw and spatial coverage rates of improved sanitation. The coverage in southern regions was generally higher than in the north and east, and the coverage in the south decreased from Nairobi in all directions, while Nyanza and North Eastern Province had relatively poor coverage. The general clustering trend of high and low sanitation improvement among surveyed clusters was confirmed after spatial smoothing. There exists an apparent disparity in sanitation among different wealth categories across Kenya and spatially smoothed coverage rates resulted in a closer estimation of the available statistics than raw coverage rates. Future intervention activities need to be tailored for both different wealth categories and nationally where there are areas of

  5. Measuring the Willingness to Pay for Tap Water Quality Improvements: Results of a Contingent Valuation Survey in Pusan

    Directory of Open Access Journals (Sweden)

    Chang-Seob Kim

    2013-10-01

    Full Text Available With increasing concern regarding health, people have developed an interest in the safety of drinking water. In this study, we attempt to measure the economic benefits of tap water quality improvement through a case study on Pusan, the second largest city in Korea. To this end, we use a scenario in which the government plans to implement a new project to improve water quality and apply the contingent valuation (CV) method. A one-and-one-half bounded dichotomous choice (OOHBDC) question format is employed to reduce the potential for response bias in multiple-bound formats such as the double-bound model, while maintaining much of the efficiency. Moreover, we employ the spike model to deal with zero willingness to pay (WTP) responses from the OOHBDC CV survey. The CV survey of 400 randomly selected households was rigorously designed to comply with the guidelines for best-practice CV studies using person-to-person interviews. From the spike OOHBDC CV model, the mean WTP for the improvement was estimated to be KRW 2,124 (USD 2.2) per household per month, on average. The value amounts to 36.6% of the monthly water bill and 20.2% of the production costs of water. The conventional OOHBDC model produces a statistically insignificant and even negative mean WTP estimate, but the OOHBDC spike model gives us a statistically significant mean WTP estimate and fits our data well. The WTP value to Pusan residents can be computed to be KRW 31.2 billion (USD 32.1 million) per year.

  6. The Meth Project and Teen Meth Use: New Estimates from the National and State Youth Risk Behavior Surveys.

    Science.gov (United States)

    Anderson, D Mark; Elsea, David

    2015-12-01

    In this note, we use data from the national and state Youth Risk Behavior Surveys for the period 1999 through 2011 to estimate the relationship between the Meth Project, an anti-methamphetamine advertising campaign, and meth use among high school students. During this period, a total of eight states adopted anti-meth advertising campaigns. After accounting for pre-existing downward trends in meth use, we find little evidence that the campaign curbed meth use in the full sample. We do find, however, some evidence that the Meth Project may have decreased meth use among White high school students. Copyright © 2014 John Wiley & Sons, Ltd.

  7. Assessing camera trap survey feasibility for estimating Blastocerus dichotomus (Cetartiodactyla, Cervidae) demographic parameters

    Directory of Open Access Journals (Sweden)

    Pedro Henrique F. Peres

    2017-11-01

    Full Text Available Demographic information is the basis for evaluating and planning conservation strategies for an endangered species. However, in numerous situations there are methodological or financial limitations to obtaining such information. The marsh deer, an endangered Neotropical cervid, is a challenging species from which to obtain biological information. To help achieve such aims, this study evaluated the applicability of camera traps for obtaining demographic information on the marsh deer compared to the traditional aerial census method. Fourteen camera traps were installed for three months on the Capão da Cruz floodplain, in the state of São Paulo, and ten helicopter flyovers were made along a 13-kilometer trajectory to detect resident marsh deer. In addition to counting deer, we sought to determine the sex and age group of each animal and to individually identify the antlered males recorded. Population estimates were obtained using the capture-mark-recapture method for the camera trap data and the distance sampling method for the aerial observation data. The costs and field effort expended for both methodologies were calculated and compared. Twenty independent photographic records and 42 sightings were obtained, generating estimates of 0.98 and 1.06 ind/km², respectively. In contrast to the aerial census, camera traps allowed us to individually identify branch-antlered males, determine the sex ratio, and detect fawns in the population. The cost of camera traps was 78% lower, but they required 20 times more field effort. Our analysis indicates that camera traps present a superior cost-benefit ratio compared to aerial surveys, since they are more informative, cheaper, and logistically simpler. Their application extends the possibilities of studying a greater number of populations in long-term monitoring programs.
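    The capture-mark-recapture estimate from individually identified animals can be illustrated with Chapman's bias-corrected version of the Lincoln-Petersen estimator. The session counts and effective area below are invented, chosen only so the resulting density is of the same order as the 0.98 ind/km² reported above:

    ```python
    def chapman_estimate(n1, n2, m2):
        """Chapman's bias-corrected Lincoln-Petersen estimator of population
        size: n1 marked in session one, n2 captured in session two, m2 of
        which were recaptures."""
        return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

    # Hypothetical session counts (not the paper's data): 8 individually
    # identified antlered males in the first camera-trap session, 7 deer in
    # the second session, 4 of them already identified.
    n_hat = chapman_estimate(8, 7, 4)

    # Density over an assumed effective survey area (invented), in km^2.
    area_km2 = 13.0
    density = n_hat / area_km2
    ```

    The bias correction matters at sample sizes this small, where the plain Lincoln-Petersen ratio n1*n2/m2 is known to overestimate population size.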

  8. Telemedicine in a pediatric headache clinic: A prospective survey.

    Science.gov (United States)

    Qubty, William; Patniyot, Irene; Gelfand, Amy

    2018-05-08

    The aim of this prospective study was to survey our patients about their experience with our clinic's telemedicine program to better understand telemedicine's utility for families, and to improve patient satisfaction and ultimately patient care. This was a prospective survey study of patients and their families who had a routine telemedicine follow-up visit with the University of California San Francisco Pediatric Headache Program. The survey was administered to patients and a parent(s) following their telemedicine visit. Fifty-one of 69 surveys (74%) were completed. All (51/51) patients and families thought that (1) telemedicine was more convenient compared to a clinic visit, (2) telemedicine caused less disruption of their daily routine, and (3) they would choose to do telemedicine again. The mean round-trip travel time from home to clinic was 6.8 hours (SD ± 8.6 hours). All participants thought telemedicine was more cost-effective than a clinic visit. Parents estimated that participating in a telemedicine visit instead of a clinic appointment saved them on average $486. This prospective, pediatric headache telemedicine study shows that telemedicine is convenient, perceived to be cost-effective, and patient-centered. Providing the option of telemedicine for routine pediatric headache follow-up visits results in high patient and family satisfaction. © 2018 American Academy of Neurology.

  9. Estimation of the gender pay gap in London and the UK - an econometric approach

    OpenAIRE

    Margarethe Theseira; Leticia Veruete-McKay

    2005-01-01

    We estimate the gender pay gap in London and the UK based on Labour Force Survey data for 2002/03. Our approach decomposes the difference in mean wages between men and women into two parts: (a) differences in individual and job characteristics between men and women (such as age, number of children, qualification, ethnicity, region of residence, working in the public or private sector, working part-time or full-time, industry, occupation and size of company); and (b) unequal treatment and/or unexplained factors. S...
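    This two-part split is the classic Oaxaca-Blinder decomposition. A self-contained sketch on synthetic data follows (not the Labour Force Survey; the single "experience" covariate and all coefficients are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ols(X, y):
        """OLS coefficients via least squares (X includes an intercept column)."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    # Synthetic groups: one characteristic (e.g. experience), log wages.
    n = 2000
    x_m = rng.normal(12.0, 3.0, n)                     # men: higher mean experience
    x_f = rng.normal(10.0, 3.0, n)
    w_m = 2.0 + 0.08 * x_m + rng.normal(0, 0.3, n)     # men: higher returns too
    w_f = 1.9 + 0.07 * x_f + rng.normal(0, 0.3, n)

    Xm = np.column_stack([np.ones(n), x_m])
    Xf = np.column_stack([np.ones(n), x_f])
    bm, bf = ols(Xm, w_m), ols(Xf, w_f)

    gap = w_m.mean() - w_f.mean()
    # (a) explained: differences in characteristics, valued at men's coefficients
    explained = (Xm.mean(axis=0) - Xf.mean(axis=0)) @ bm
    # (b) unexplained: differences in coefficients ("unequal treatment")
    unexplained = Xf.mean(axis=0) @ (bm - bf)
    assert abs(gap - (explained + unexplained)) < 1e-8  # exact decomposition
    ```

    Because OLS with an intercept fits the group means exactly, the two components always sum to the raw mean gap; the choice of which group's coefficients value the characteristics is a known convention in this decomposition.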

  10. Analytical estimation of emission zone mean position and width in organic light-emitting diodes from emission pattern image-source interference fringes

    International Nuclear Information System (INIS)

    Epstein, Ariel; Tessler, Nir; Einziger, Pinchas D.; Roberts, Matthew

    2014-01-01

    We present an analytical method for evaluating the first and second moments of the effective exciton spatial distribution in organic light-emitting diodes (OLEDs) from measured emission patterns. Specifically, the suggested algorithm estimates the emission zone mean position and width, respectively, from two distinct features of the pattern produced by interference between the emission sources and their images (induced by the reflective cathode): the angles at which interference extrema are observed, and the prominence of the interference fringes. The relations between these parameters are derived rigorously for a general OLED structure, indicating that the extrema angles are related to the mean position of the radiating excitons via Bragg's condition, and that the spatial broadening is related to the attenuation of the image-source interference prominence due to an averaging effect. The method is applied successfully both on simulated emission patterns and on experimental data, exhibiting very good agreement with the results obtained by numerical techniques. We investigate the method's performance in detail, showing that it is capable of producing accurate estimations for a wide range of source-cathode separation distances, provided that the measured spectral interval is large enough; guidelines for achieving reliable evaluations are deduced from these results as well. As opposed to the numerical fitting tools employed to perform similar tasks to date, our approximate method explicitly utilizes physical intuition and requires far less computational effort (no fitting is involved). Hence, applications that do not require highly resolved estimations, e.g., preliminary design and production-line verification, can benefit substantially from the analytical algorithm, when applicable. This introduces a novel set of efficient tools for OLED engineering, highly important in view of the crucial role the exciton distribution plays in determining device performance.
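    The first-moment recovery can be illustrated by inverting the Bragg-type condition the abstract refers to. This sketch ignores the reflection phase shift at the cathode, and the wavelength, refractive index, and extremum angle are all invented values, not the paper's data:

    ```python
    import math

    def emitter_cathode_distance(wavelength_nm, n_index, theta_deg, order):
        """Invert the Bragg-type extremum condition
        2 * n_index * d * cos(theta) = order * wavelength
        for the mean emitter-cathode separation d (internal angle theta;
        the reflection phase shift at the metal cathode is ignored here)."""
        return order * wavelength_nm / (
            2.0 * n_index * math.cos(math.radians(theta_deg)))

    # Invented values: first-order extremum of 520 nm emission at an internal
    # angle of 30 degrees in an organic layer of refractive index 1.8.
    d_nm = emitter_cathode_distance(520.0, 1.8, 30.0, 1)
    ```

    In practice the paper tracks how the extremum angle shifts across the measured spectral interval, which is what makes the inversion robust; the single-angle version above only conveys the geometry.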

  12. Assessing Error Correlations in Remote Sensing-Based Estimates of Forest Attributes for Improved Composite Estimation

    Directory of Open Access Journals (Sweden)

    Sarah Ehlers

    2018-04-01

    Full Text Available Today, inexpensive remote sensing (RS) data from different sensors and platforms can be obtained at short intervals and used for assessing several kinds of forest characteristics at the level of plots, stands, and landscapes. Methods such as composite estimation and data assimilation can be used to combine the different sources of information to obtain up-to-date and precise estimates of the characteristics of interest. In composite estimation, a standard procedure is to assign weights to the individual estimates inversely proportional to their variances. However, if the estimates are correlated, the correlations must be accounted for in assigning weights; otherwise the composite estimator may be inefficient and its variance underestimated. In this study we assessed the correlation of plot-level estimates of forest characteristics from different RS datasets, both between assessments using the same type of sensor and across different sensors. The RS data evaluated were SPOT-5 multispectral data, 3D airborne laser scanning data, and TanDEM-X interferometric radar data. Studies were made for plot-level mean diameter, mean height, and growing stock volume. All data were acquired from a test site dominated by coniferous forest in southern Sweden. We found that correlations between plot-level estimates based on the same type of RS data were positive and strong, whereas correlations between estimates from different sources of RS data were not as strong, and were weaker for mean height than for mean diameter and volume. The implications of such correlations for composite estimation are demonstrated, and we discuss how correlations may affect the results of data assimilation procedures.
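    The inverse-variance weighting described above generalizes to correlated estimates by using the full covariance matrix. A sketch with invented plot-level volume estimates and error parameters (not the study's data):

    ```python
    import numpy as np

    def composite_estimate(estimates, cov):
        """Minimum-variance linear combination of (possibly correlated)
        estimates: weights w = cov^{-1} 1 / (1' cov^{-1} 1)."""
        ones = np.ones(len(estimates))
        w = np.linalg.solve(cov, ones)
        w = w / (ones @ w)
        return float(w @ np.asarray(estimates)), w

    # Hypothetical plot-level volume estimates (m^3/ha) from two RS sources,
    # e.g. laser scanning and radar; all numbers invented for illustration.
    est = [210.0, 190.0]
    s1, s2, rho = 15.0, 25.0, 0.5
    cov = np.array([[s1**2, rho * s1 * s2],
                    [rho * s1 * s2, s2**2]])
    combined, w = composite_estimate(est, cov)

    # Ignoring the correlation gives plain inverse-variance weights and a
    # spuriously optimistic precision for the combination.
    combined0, w0 = composite_estimate(est, np.diag([s1**2, s2**2]))
    ```

    With a positive correlation, weight shifts toward the more precise source relative to the naive inverse-variance weights, which is exactly the effect the study warns is lost when correlations are ignored.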

  13. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

    NARCIS (Netherlands)

    Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2016-01-01

    This paper considers the portfolio problem for high-dimensional data when both the dimension and the sample size are large. We analyze the traditional Markowitz mean-variance (MV) portfolio using large-dimensional matrix theory, and find that the spectral distribution of the sample covariance is the main

  14. Local survey of the distribution of industrial melanic forms in the moth Biston betularia and estimates of the selective values of these in an industrial environment

    Energy Technology Data Exchange (ETDEWEB)

    Clarke, C A; Sheppard, P M

    1966-01-01

    A survey has shown that there is a rapid decline in the frequency of the industrial melanic carbonaria of the moth Biston betularia from a value of about 97% in Liverpool to less than 10% 50 miles to the west in North Wales. The decline in the frequency of the intermediate phenotype insularia in this area, controlled by an allelomorph at the same locus, is from about 14% on the Wirral (no reliable frequency is available for Liverpool) to about 4% 30 miles to the west. Experiments using dead moths placed in life-like positions on tree trunks at Caldy and in Liverpool confirmed that carbonaria is better camouflaged on the blackened tree trunks of industrial areas. Estimates of the selective disadvantage of the typical form in Liverpool, using data from the survey and these experiments, together with a variety of assumptions, indicate values of the order of 60%, which is somewhat higher than previous estimates. At Caldy the typical form appears to have been at a disadvantage of about 50% prior to the introduction of the smokeless zones and is now at about a 20% disadvantage, using assumptions similar to those in the Liverpool estimates. Although these estimates are subject to considerable error, there is little doubt that they reflect the correct order of magnitude of the relative selective values. 8 references, 1 figure, 4 tables.

  15. Mortality and kidnapping estimates for the Yazidi population in the area of Mount Sinjar, Iraq, in August 2014: A retrospective household survey.

    Science.gov (United States)

    Cetorelli, Valeria; Sasson, Isaac; Shabila, Nazar; Burnham, Gilbert

    2017-05-01

    In August 2014, the so-called Islamic State of Iraq and Syria (ISIS) attacked the Yazidi religious minority living in the area of Mount Sinjar in Nineveh governorate, Iraq. We conducted a retrospective household survey to estimate the number and demographic profile of Yazidis killed and kidnapped. The survey covered the displaced Yazidi population from Sinjar residing in camps in the Kurdistan Region of Iraq. Fieldwork took place between 4 November and 25 December, 2015. A systematic random sample of 1,300 in-camp households were interviewed about the current household composition and any killings and kidnappings of household members by ISIS. Of the 1,300 interviewed households, 988 were Yazidi from Sinjar. Yazidi households contained 6,572 living residents at the time of the survey; 43 killings and 83 kidnappings of household members were reported. We calculated the probability of being killed and kidnapped by dividing the number of reported killings and kidnappings by the number of sampled Yazidis at risk, adjusting for sampling design. To obtain the overall toll of killings and kidnappings, those probabilities were multiplied by the total Yazidi population living in Sinjar at the time of the ISIS attack, estimated at roughly 400,000 by the United Nations and Kurdish officials. The demographic profile of those killed and kidnapped was examined, distinguishing between children and adults and females and males. We estimated that 2.5% of the Yazidi population was either killed or kidnapped over the course of a few days in August 2014, amounting to 9,900 (95% CI 7,000-13,900) people in total. An estimated 3,100 (95% CI 2,100-4,400) Yazidis were killed, with nearly half of them executed-either shot, beheaded, or burned alive-while the rest died on Mount Sinjar from starvation, dehydration, or injuries during the ISIS siege. The estimated number kidnapped is 6,800 (95% CI 4,200-10,800). Escapees recounted the abuses they had suffered, including forced religious
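    The probability calculation described above can be sketched in its simplest, design-unadjusted form. Because the published figures (3,100 killed; 6,800 kidnapped) adjust for the sampling design, they differ from these raw proportions scaled to the population:

    ```python
    # Design-unadjusted sketch of the calculation described above; the paper's
    # published figures additionally adjust for the cluster sampling design.
    sampled = 6572            # Yazidi residents in sampled households
    killed, kidnapped = 43, 83
    population = 400_000      # UN / Kurdish officials' estimate for Sinjar

    p_killed = killed / sampled
    p_kidnapped = kidnapped / sampled

    est_killed = p_killed * population         # roughly 2,600 unadjusted
    est_kidnapped = p_kidnapped * population   # roughly 5,100 unadjusted
    est_total = est_killed + est_kidnapped
    ```

    Even this naive version lands inside the paper's reported 95% CI for the combined toll (7,000-13,900); the design adjustment moves the point estimates upward.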

  17. Proportionate-type normalized least mean squares algorithms

    CERN Document Server

    Wagner, Kevin

    2013-01-01

    The topic of this book is proportionate-type normalized least mean squares (PtNLMS) adaptive filtering algorithms, which attempt to estimate an unknown impulse response by adaptively giving gains proportionate to an estimate of the impulse response and the current measured error. These algorithms offer low computational complexity and fast convergence times for sparse impulse responses in network and acoustic echo cancellation applications. New PtNLMS algorithms are developed by choosing gains that optimize user-defined criteria, such as mean square error, at all times. PtNLMS algorithms ar
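    A simplified proportionate NLMS update can be sketched as follows: each tap's step size is scaled by a gain proportionate to the magnitude of that tap's current estimate, floored so that small taps keep adapting. The echo path, filter length, and tuning constants below are invented for illustration and do not come from the book:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pnlms(x, d, taps, mu=0.5, delta=1e-2, rho=0.01):
        """Simplified proportionate NLMS adaptive filter: gains proportionate
        to |w| (floored at rho) speed convergence on sparse impulse responses."""
        w = np.zeros(taps)
        for n in range(taps - 1, len(x)):
            u = x[n - taps + 1:n + 1][::-1]    # regressor [x[n], ..., x[n-taps+1]]
            e = d[n] - w @ u                   # a-priori error
            g = np.maximum(np.abs(w), rho)     # proportionate gains
            g = g / g.sum()
            w = w + mu * e * g * u / (u @ (g * u) + delta)
        return w

    # System identification of a sparse echo path (all values invented).
    h = np.zeros(32)
    h[3], h[17] = 0.9, -0.4
    x = rng.normal(size=4000)
    d = np.convolve(x, h)[:len(x)]             # noise-free desired signal
    w_hat = pnlms(x, d, taps=32)
    ```

    With uniform gains (g constant) this reduces to plain NLMS; the book's PtNLMS family generalizes exactly this gain-assignment step by optimizing user-defined criteria such as mean square error.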

  18. Small area estimation of proportions with different levels of auxiliary data.

    Science.gov (United States)

    Chandra, Hukum; Kumar, Sushil; Aditya, Kaustav

    2018-03-01

    Binary data are of interest in many small area estimation applications. The use of standard small area estimation methods based on linear mixed models becomes problematic for such data. An empirical plug-in predictor (EPP) under a unit-level generalized linear mixed model with a logit link function is often used for the estimation of a small area proportion. However, this EPP requires unit-level population information for the auxiliary data, which may not always be accessible. As a consequence, in many practical situations this EPP approach cannot be applied. Based on the level of auxiliary information available, different small area predictors for the estimation of proportions are proposed. Analytic and bootstrap approaches to estimating the mean squared error of the proposed small area predictors are also developed. Monte Carlo simulations based on both simulated and real data show that the proposed small area predictors work well for generating small area estimates of proportions and represent a practical alternative to the above approach. The developed predictor is applied to generate estimates of the proportions of indebted farm households at the district level using debt investment survey data from India. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall

    Science.gov (United States)

    Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate

    2016-11-01

    The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.

  20. Measuring core inflation in India: An asymmetric trimmed mean approach

    Directory of Open Access Journals (Sweden)

    Naresh Kumar Sharma

    2015-12-01

    Full Text Available The paper seeks to obtain an optimal asymmetric trimmed-mean core inflation measure within the class of trimmed mean measures, given that the distribution of price changes is leptokurtic and skewed to the right in any given period. Several estimators based on the asymmetric trimmed mean approach are constructed, and the estimates they generate are evaluated against established empirical criteria. The paper also expresses the trimmed mean in terms of percentile scores. The study uses 69 monthly price indices that are constituent components of the Wholesale Price Index for the period April 1994 to April 2009, with 1993–1994 as the base year. Results indicate that an optimal trimmed estimator is obtained by trimming 29.5% from the left-hand tail and 20.5% from the right-hand tail of the distribution of price changes.
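    The resulting estimator is straightforward to compute: sort the component price changes, drop 29.5% of the observations from the left tail and 20.5% from the right, and average the remainder. An unweighted sketch with synthetic data follows (an operational WPI measure would weight components by their index weights; the distribution below is invented to mimic the right-skewed shape described in the abstract):

    ```python
    import numpy as np

    def asymmetric_trimmed_mean(changes, left_pct, right_pct):
        """Mean of the sorted price changes after dropping left_pct percent of
        observations from the left tail and right_pct percent from the right."""
        x = np.sort(np.asarray(changes, dtype=float))
        n = len(x)
        lo = int(np.floor(n * left_pct / 100.0))
        hi = n - int(np.floor(n * right_pct / 100.0))
        return x[lo:hi].mean()

    # A hypothetical month of 69 component price changes (percent) with a
    # right-skewed, fat-tailed shape.
    rng = np.random.default_rng(2)
    changes = rng.normal(0.4, 0.5, 69) + rng.exponential(0.3, 69)
    core = asymmetric_trimmed_mean(changes, 29.5, 20.5)
    ```

    Trimming more from the left than the right compensates for the rightward skew: a symmetric trim of a right-skewed distribution would systematically understate core inflation.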