WorldWideScience

Sample records for survey estimated means

  1. Optimum sample size to estimate mean parasite abundance in fish parasite surveys

    Directory of Open Access Journals (Sweden)

    Shvydka S.

    2018-03-01

    To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches to sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and a parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at estimating the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundances of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across Azov-Black Seas localities were subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for finding the optimum sample size and for understanding the expected precision level of the mean. Given the superior performance of the BLB relative to the formula, and its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI widths, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5, corresponding to CI widths of 1.6 and 1 times the mean; and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of the mean abundance. For host sample sizes between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed toward low values; a sample size of 10 host individuals yielded unreliable estimates.
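
    The BLB procedure described above can be sketched in a few lines. This is a hypothetical re-implementation using standard BLB choices (subset size b = n**0.6, percentile intervals), not the authors' code; all function and parameter names are invented for illustration.

```python
import numpy as np

def blb_mean_ci(counts, n_subsets=5, n_boot=100, alpha=0.05, seed=0):
    """Bag of Little Bootstraps CI for the mean of an overdispersed count
    sample. Hypothetical sketch with standard BLB choices (subset size
    b = n**0.6, percentile intervals); not the paper's implementation."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    n = counts.size
    b = max(2, int(n ** 0.6))                  # little-bootstrap subset size
    lo, hi = [], []
    for _ in range(n_subsets):
        subset = rng.choice(counts, size=b, replace=False)
        means = []
        for _ in range(n_boot):
            # resample n points from the b-point subset via multinomial weights
            w = rng.multinomial(n, np.full(b, 1.0 / b))
            means.append(np.dot(w, subset) / n)
        lo.append(np.quantile(means, alpha / 2))
        hi.append(np.quantile(means, 1 - alpha / 2))
    # average the per-subset interval endpoints
    return float(np.mean(lo)), float(np.mean(hi))
```

    Applied to simulated negative binomial counts (highly aggregated, variance much larger than the mean), the interval brackets the sample mean and its width indicates the attainable precision for a given host sample size.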

  2. Statistical properties of mean stand biomass estimators in a LIDAR-based double sampling forest survey design.

    Science.gov (United States)

    H.E. Anderson; J. Breidenbach

    2007-01-01

    Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...

  3. Beyond the mean estimate: a quantile regression analysis of inequalities in educational outcomes using INVALSI survey data

    Directory of Open Access Journals (Sweden)

    Antonella Costanzo

    2017-09-01

    The number of studies addressing issues of inequality in educational outcomes using cognitive achievement tests and variables from large-scale assessment data has increased. Here the value of using a quantile regression approach is compared with a classical regression analysis approach to study the relationships between educational outcomes and likely predictor variables. Italian primary school data from INVALSI large-scale assessments were analyzed using both quantile and standard regression approaches. Mathematics and reading scores were regressed on students' characteristics and geographical variables selected for their theoretical and policy relevance. The results demonstrated that, in Italy, the role of gender and immigrant status varied across the entire conditional distribution of students' performance. Analogous results emerged pertaining to the difference in students' performance across Italian geographic areas. These findings suggest that quantile regression analysis is a useful tool to explore the determinants and mechanisms of inequality in educational outcomes. A proper interpretation of quantile estimates may enable teachers to identify effective learning activities and help policymakers to develop tailored programs that increase equity in education.
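
    The abstract's central point, that a covariate's effect can differ across the conditional distribution, is easy to illustrate. For a single binary predictor, the quantile-regression coefficient at level tau reduces exactly to the gap between the two groups' tau-th quantiles; the sketch below uses that special case (names are illustrative, and the INVALSI analysis itself fits full quantile regressions with many covariates).

```python
import numpy as np

def quantile_gap(scores_a, scores_b, tau):
    """Difference between the tau-th quantiles of two groups of scores.

    With a single binary covariate (e.g. group membership), the quantile
    regression slope at level tau equals exactly this gap, so comparing
    it across tau shows how a group effect varies over the conditional
    distribution. Illustrative helper, not part of the INVALSI analysis."""
    return float(np.quantile(scores_b, tau) - np.quantile(scores_a, tau))
```

    On simulated scores where one group has a similar median but a larger spread, the gap is negative in the lower tail and positive in the upper tail, a pattern that a regression of the conditional mean would miss entirely.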

  4. Estimation of population mean under systematic sampling

    Science.gov (United States)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of generalized estimator using different combinations of coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of proposed estimators. Numerical illustration is included using three populations to support the results.
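
    As background for the generalized ratio estimator, here is a minimal sketch of the classical ratio estimator under 1-in-k systematic sampling. It illustrates the building block only, not the paper's non-response extension; the function names are invented.

```python
import numpy as np

def ratio_estimator(y_sample, x_sample, x_pop_mean):
    """Classical ratio estimator of the population mean of y, using an
    auxiliary variable x whose population mean is known. It is efficient
    when y and x are strongly positively correlated. (Building block
    only; not the paper's non-response estimator.)"""
    return float(np.mean(y_sample) * x_pop_mean / np.mean(x_sample))

def systematic_sample(n_pop, k, start):
    """Indices of a 1-in-k systematic sample beginning at `start`."""
    return np.arange(start, n_pop, k)
```

    With a strongly correlated auxiliary variable, the ratio estimate tracks the true population mean much more tightly than the sample mean of y alone would under an unlucky systematic start.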

  5. Agnostic Estimation of Mean and Covariance

    OpenAIRE

    Lai, Kevin A.; Rao, Anup B.; Vempala, Santosh

    2016-01-01

    We consider the problem of estimating the mean and covariance of a distribution from iid samples in $\\mathbb{R}^n$, in the presence of an $\\eta$ fraction of malicious noise; this is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type. The agnostic problem includes many interesting special cases, e.g., learning the parameters of a single Gaussian (or finding the best-fit Gaussian) when $\\eta$ fraction of data is adversarially corrupted, agn...
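
    A simple baseline for the contamination model described above is the coordinate-wise trimmed mean, sketched here. It tolerates an eta fraction of adversarial points, but unlike the paper's agnostic estimators its error guarantee degrades with dimension; the implementation is illustrative and not from the paper.

```python
import numpy as np

def trimmed_mean(x, eta):
    """Coordinate-wise trimmed mean: in each coordinate, drop the lowest
    and highest eta-fraction of values before averaging. A classical
    robust baseline for data with an eta fraction of malicious points;
    its error scales worse with dimension than agnostic estimators."""
    x = np.asarray(x, float)
    if x.ndim == 1:
        x = x[:, None]
    lo = np.quantile(x, eta, axis=0)
    hi = np.quantile(x, 1.0 - eta, axis=0)
    cols = []
    for j in range(x.shape[1]):
        col = x[:, j]
        cols.append(col[(col >= lo[j]) & (col <= hi[j])].mean())
    return np.array(cols)
```

    Even with 5% of points replaced by gross outliers, the trimmed mean stays near the true mean while the raw sample mean is dragged far away.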

  6. System for estimation of mean active bone marrow dose

    International Nuclear Information System (INIS)

    Ellis, R.E.; Healy, M.J.R.; Shleien, B.; Tucker, T.

    1975-09-01

    The exposure measurements, model, and computer program for estimating mean active bone marrow doses formerly employed in the 1962 British survey of x-ray doses, and proposed for application to x-ray exposure information obtained in the U.S. Public Health Service's X-Ray Exposure Studies (1966 and 1973), are described and evaluated. The method described is feasible for determining the mean active bone marrow doses to adults for examinations having a source-to-skin distance (SSD) of 80 cm or less. For a greater SSD, as for example in chest x rays, a small correction to the calculated dose can be made.

  7. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
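
    The two catch-rate estimators compared in the study (plus the short-trip exclusion variant) are one-liners; a hypothetical sketch with invented names:

```python
import numpy as np

def ratio_of_means(catch, hours):
    """ROM catch-rate estimator: total catch divided by total effort."""
    return float(np.sum(catch) / np.sum(hours))

def mean_of_ratios(catch, hours, min_hours=0.0):
    """MOR estimator: average of per-trip catch rates, optionally
    excluding short trips (the study's third variant drops trips of
    0.5 h or less)."""
    catch = np.asarray(catch, float)
    hours = np.asarray(hours, float)
    keep = hours > min_hours
    return float(np.mean(catch[keep] / hours[keep]))
```

    The ROM estimator weights each trip by its duration, which is why it is less sensitive to the short, low-catch trips that inflate or deflate the MOR estimator.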

  8. Bayesian Simultaneous Estimation for Means in k Sample Problems

    OpenAIRE

    Imai, Ryo; Kubokawa, Tatsuya; Ghosh, Malay

    2017-01-01

    This paper is concerned with the simultaneous estimation of k population means when one suspects that the k means are nearly equal. As an alternative to the preliminary test estimator based on the test statistics for testing hypothesis of equal means, we derive Bayesian and minimax estimators which shrink individual sample means toward a pooled mean estimator given under the hypothesis. Interestingly, it is shown that both the preliminary test estimator and the Bayesian minimax shrinkage esti...

  9. A modified procedure for estimating the population mean in two ...

    African Journals Online (AJOL)

    A modified procedure for estimating the population mean in two-occasion successive samplings. Housila Prasad Singh, Suryal Kant Pal. Abstract. This paper addresses the problem of estimating the current population mean in two occasion successive sampling. Utilizing the readily available information on two auxiliary ...

  10. Stereological estimation of nuclear mean volume in invasive meningiomas

    DEFF Research Database (Denmark)

    Madsen, C; Schrøder, H D

    1996-01-01

    A stereological estimation of nuclear mean volume in bone- and brain-invasive meningiomas was made. For comparison, the nuclear mean volume of benign meningiomas was estimated. The aim was to investigate whether this method could discriminate between these groups. We found that the nuclear mean volume in the bone- and brain-invasive meningiomas was larger than in the benign tumors. The difference was significant, and moreover there was no overlap between the two groups. In the bone-invasive meningiomas the nuclear mean volume appeared to be larger inside than outside the bone. No significant difference in nuclear mean volume was found between brain- and bone-invasive meningiomas. The results demonstrate that invasive meningiomas differ from benign meningiomas by an objective stereological estimation of nuclear mean volume (p

  11. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    Science.gov (United States)

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
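
    Floater's mean value coordinates, whose gradients the paper bounds, can be computed directly from the defining formula; a minimal illustrative sketch for a convex polygon, assuming counterclockwise vertices and an interior evaluation point:

```python
import numpy as np

def mean_value_coordinates(verts, p):
    """Mean value coordinates of an interior point p of a convex polygon
    with counterclockwise vertices `verts`, via Floater's formula
    w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - p|, then normalized to
    sum to one. (Illustrative implementation, not the paper's code.)"""
    verts = np.asarray(verts, float)
    p = np.asarray(p, float)
    d = verts - p                       # spokes from p to each vertex
    r = np.linalg.norm(d, axis=1)
    n = len(verts)
    ang = np.empty(n)
    for i in range(n):
        j = (i + 1) % n
        # signed angle at p subtended by the edge from vertex i to i+1
        cross = d[i, 0] * d[j, 1] - d[i, 1] * d[j, 0]
        ang[i] = np.arctan2(cross, np.dot(d[i], d[j]))
    w = np.array([(np.tan(ang[i - 1] / 2) + np.tan(ang[i] / 2)) / r[i]
                  for i in range(n)])
    return w / w.sum()
```

    As generalized barycentric coordinates, they form a partition of unity and reproduce linear functions exactly, which is what makes them usable in standard finite element analysis.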

  12. Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.

    Science.gov (United States)

    Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya

    2018-05-05

    This paper proposes a novel filtering design, from a viewpoint of identification rather than the conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, a nonlinear perturbation is modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Second, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to simply and analytically fit the first two moments (FTM) of the perturbation (viewed as UI), instead of directly computing the INPI as in NESs. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimate and its covariance. Third, provided enough information is mined, SMCCF should outperform existing NESs and standard identification algorithms (which view the UI as a constant independent of the state and use only the identified UI mean to correct the state estimate, disregarding its covariance), since it further incorporates the covariance information of the UI in addition to its mean. Finally, simulations demonstrate the superior performance of SMCCF on an orbit estimation example.

  13. Mean value estimates of the error terms of Lehmer problem

    Indian Academy of Sciences (India)

    Mean value estimates of the error terms of the Lehmer problem. DONGMEI REN and YAMING ... For further properties of N(a, p) in [6], he studied the mean square value of the error term E(a, p) = N(a, p) − (1/2)(p − 1). ... [1] Apostol Tom M, Introduction to Analytic Number Theory (New York: Springer-Verlag) (1976). [2] Guy R K ...

  14. Robust estimators based on generalization of trimmed mean

    Czech Academy of Sciences Publication Activity Database

    Adam, Lukáš; Bejda, P.

    (2018) ISSN 0361-0918 Institutional support: RVO:67985556 Keywords : Breakdown point * Estimators * Geometric median * Location * Trimmed mean Subject RIV: BA - General Mathematics Impact factor: 0.457, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/adam-0481224.pdf

  15. Eigenvalue estimates for submanifolds with bounded f-mean curvature

    Indian Academy of Sciences (India)

    GUANGYUE HUANG

    College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007 ... submanifolds in a hyperbolic space with the norm of their mean curvature vector bounded above by a constant. ... [2] Batista M, Cavalcante M P and Pyo J, Some isoperimetric inequalities and eigenvalue estimates in ...

  16. Estimation of mean-reverting oil prices: a laboratory approach

    International Nuclear Information System (INIS)

    Bjerksund, P.; Stensland, G.

    1993-12-01

    Many economic decision support tools developed for the oil industry are based on the future oil price dynamics being represented by some specified stochastic process. To meet the demand for necessary data, much effort is allocated to parameter estimation based on historical oil price time series. The approach in this paper is to implement a complex future oil market model, and to condense the information from the model to parameter estimates for the future oil price. In particular, we use the Lensberg and Rasmussen stochastic dynamic oil market model to generate a large set of possible future oil price paths. Given the hypothesis that the future oil price is generated by a mean-reverting Ornstein-Uhlenbeck process, we obtain parameter estimates by a maximum likelihood procedure. We find a substantial degree of mean-reversion in the future oil price, which in some of our decision examples leads to an almost negligible value of flexibility. 12 refs., 2 figs., 3 tabs
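
    The estimation step described above, fitting a mean-reverting Ornstein-Uhlenbeck process to a sampled price path, can be sketched via the exact AR(1) regression, which coincides with conditional maximum likelihood under Gaussian noise. This is an illustrative re-derivation with invented names, not the authors' procedure.

```python
import numpy as np

def fit_ou(x, dt):
    """Estimate Ornstein-Uhlenbeck parameters (kappa, theta, sigma) from
    an evenly sampled path. The exact discretization of the OU process
    is an AR(1) model x[t+1] = a + b*x[t] + eps, so ordinary least
    squares on successive observations recovers the parameters:
    kappa = -ln(b)/dt, theta = a/(1-b)."""
    x = np.asarray(x, float)
    x0, x1 = x[:-1], x[1:]
    b, a = np.polyfit(x0, x1, 1)          # x1 ~ a + b*x0
    kappa = -np.log(b) / dt               # mean-reversion speed
    theta = a / (1.0 - b)                 # long-run mean
    resid = x1 - (a + b * x0)
    # residual variance equals sigma^2 * (1 - b^2) / (2*kappa)
    sigma = np.sqrt(resid.var(ddof=2) * 2.0 * kappa / (1.0 - b ** 2))
    return kappa, theta, sigma
```

    A substantial degree of mean reversion shows up as a kappa estimate well above zero, which is exactly the property the paper finds drives the near-negligible value of flexibility in some decision examples.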

  17. The application of mean field theory to image motion estimation.

    Science.gov (United States)

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.

  18. Nano-hardness estimation by means of Ar+ ion etching

    International Nuclear Information System (INIS)

    Bartali, R.; Micheli, V.; Gottardi, G.; Vaccari, A.; Safeen, M.K.; Laidani, N.

    2015-01-01

    When coatings are at the nanoscale, their mechanical properties cannot be easily estimated by conventional methods, due to tip shape, instrument resolution, roughness, and substrate effects. In this paper, we propose a semi-empirical method to evaluate the mechanical properties of thin films based on the sputtering rate induced by Ar+ ion bombardment. The Ar+ ion bombardment was produced by the ion gun of an Auger electron spectroscopy (AES) instrument. This procedure was applied to a series of coatings with different structures (carbon films) and a series of coatings with different densities (ZnO thin films). The coatings were deposited on silicon substrates by RF sputtering. The results show that, as predicted by Insepov et al., there is a correlation between hardness and sputtering rate. Using reference materials and a simple power-law equation, the estimation of nano-hardness using an Ar+ beam is possible. - Highlights: • ZnO and carbon films were grown on silicon using PVD. • The growth temperature was room temperature. • The hardness of the coatings was estimated by means of nanoindentation. • The resistance of the materials to mechanical damage induced by an Ar+ ion gun (AES) was evaluated. • A power-law relation between hardness and erosion rate was found

  19. Means of surveying contaminated areas resulting from overseas nuclear accidents

    International Nuclear Information System (INIS)

    Looney, J.H.H.; Thorne, M.C.; Dickson, D.M.J.

    1989-09-01

    The Chernobyl accident is briefly reviewed as a useful basis for examining some of the considerations related to survey design. The plans and procedures of key European and North American countries are reviewed, as well as the plans and capabilities of UK facilities and government agencies. The survey design incorporates the concepts of land use category, topography, climate, etc., and discusses the spatial and temporal scale requirements. Use of a Geographic Information System is recommended to co-ordinate the data. Models address the requirement to detect an annual effective dose equivalent of 0.5 mSv to an individual in the first year following the accident. The equipment requirements are based on transit-type vans, each, preferably, with one or two gamma spectrometers, MCAs and ancillary equipment, with three teams of two men. This unit could survey about 150 km² within a larger area in 3 days. The cost per survey team is estimated to be £60,000-£80,000 in the first year, with annual costs of £20,000-£23,000. (author)

  20. Pareto-optimal estimates that constrain mean California precipitation change

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Climate Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
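
    The Pareto-optimality criterion at the heart of the method is simple to state in code. The sketch below finds the non-dominated rows of a small error matrix by exhaustive comparison; the paper instead uses an evolutionary algorithm to search the vastly larger space of CMIP5 subensembles across its three performance measures.

```python
import numpy as np

def pareto_front(errors):
    """Indices of Pareto-optimal rows of an (n, m) error matrix, where
    smaller is better in every column. A row is dominated if some other
    row is no worse in all columns and strictly better in at least one.
    (Exhaustive check, suitable only for small candidate sets.)"""
    e = np.asarray(errors, float)
    keep = []
    for i in range(len(e)):
        dominated = np.any(np.all(e <= e[i], axis=1) &
                           np.any(e < e[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep
```

    Each row here would correspond to one subensemble, with columns for the three historical-climatology measures (tropical Pacific SSTs, midlatitude zonal winds, and state precipitation).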

  1. Mean Field Games Models-A Brief Survey

    KAUST Repository

    Gomes, Diogo A.; Saúde, João

    2013-01-01

    The mean-field framework was developed to study systems with an infinite number of rational agents in competition, which arise naturally in many applications. The systematic study of these problems was started, in the mathematical community by Lasry and Lions, and independently around the same time in the engineering community by P. Caines, Minyi Huang, and Roland Malhamé. Since these seminal contributions, the research in mean-field games has grown exponentially, and in this paper we present a brief survey of mean-field models as well as recent results and techniques. In the first part of this paper, we study reduced mean-field games, that is, mean-field games, which are written as a system of a Hamilton-Jacobi equation and a transport or Fokker-Planck equation. We start by the derivation of the models and by describing some of the existence results available in the literature. Then we discuss the uniqueness of a solution and propose a definition of relaxed solution for mean-field games that allows to establish uniqueness under minimal regularity hypothesis. A special class of mean-field games that we discuss in some detail is equivalent to the Euler-Lagrange equation of suitable functionals. We present in detail various additional examples, including extensions to population dynamics models. This section ends with a brief overview of the random variables point of view as well as some applications to extended mean-field games models. These extended models arise in problems where the costs incurred by the agents depend not only on the distribution of the other agents, but also on their actions. The second part of the paper concerns mean-field games in master form. These mean-field games can be modeled as a partial differential equation in an infinite dimensional space. We discuss both deterministic models as well as problems where the agents are correlated. We end the paper with a mean-field model for price impact. 
© 2013 Springer Science+Business Media New York.
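
    For reference, the reduced mean-field game system mentioned above, a backward Hamilton-Jacobi equation coupled to a forward Fokker-Planck (transport) equation, is commonly written in the following schematic form, with ν the diffusion coefficient, H the Hamiltonian, and F the coupling (a standard statement from this literature, not a formula quoted from the paper):

```latex
\begin{aligned}
  -\partial_t u - \nu \Delta u + H(x, Du) &= F(m), \\
  \partial_t m - \nu \Delta m - \operatorname{div}\!\bigl(m\, D_p H(x, Du)\bigr) &= 0, \\
  m(x, 0) = m_0(x), \qquad u(x, T) &= u_T(x).
\end{aligned}
```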


  3. Estimation of unknown nuclear masses by means of the generalized mass relations. Pt. 3

    International Nuclear Information System (INIS)

    Popa, S.M.

    1980-01-01

    A survey of the estimations of unknown nuclear masses by means of the generalized mass relations is presented. The new hypotheses supplementing the original general Garvey-Kelson scheme are discussed, and the generalized mass relations and formulae are reviewed according to the present status of this new formalism. A critical discussion is given of the reliability of these new Garvey-Kelson-type extrapolation procedures. (author)

  4. Estimation of global solar radiation by means of sunshine duration

    Energy Technology Data Exchange (ETDEWEB)

    Mazorra Aguiar, Luis; Diaz Reyes, Felipe [Electrical Engineering Dept., Las Palmas de Gran Canaria Univ. (U.L.P.G.C.), Campus Univ. Tafira (Spain)]; Navarro Rivero, Pilar [Canary Islands Technological Inst. (I.T.C.), Gran Canaria (Spain)]

    2008-07-01

    This paper analyses the relationship between global solar irradiation and sunshine duration using different estimation models for the island of Gran Canaria (Spain). These parameters were taken from six measurement stations around the island, selected for their reliability and the long period of time they cover. All data used in this paper were provided by the Canary Islands Technological Institute (I.T.C.). As a first approach, the Angstrom linear model was studied. To improve knowledge of the solar resource, a Typical Meteorological Year (TMY) was created from all daily data. The TMY shows differences between southern and northern locations, where the Trade Winds generate clouds during the summer months. The TMY condenses a data bank much longer than a year in duration, generating a one-year series for each location, for both irradiation and sunshine duration. To create the TMY, weighted means were used to smooth extreme values. At first, the Angstrom linear model was used to estimate global solar irradiation from sunshine duration values, using the TMY. However, the linear model did not produce satisfactory results when used to obtain global solar radiation from all daily sunshine duration data. For this reason, different models based on both parameters were used. The parameter estimation for these models was carried out both from TMY daily and monthly series and from all daily data for every location. Because of the stability of the weather over the year on the island, most of the daily data are concentrated in a narrow range, causing a deviation in the linear fits. To avoid this deviation, a limiting condition on the data was proposed, taking into account values outside the main cloud of data. Additionally, different models (quadratic, cubic, logarithmic and exponential) were proposed for regression over all daily data. The best results were obtained with the exponential model proposed in this paper. The
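
    The Angstrom linear model referred to above relates the daily clearness index H/H0 to the relative sunshine duration S/S0. A generic fitting sketch follows, with illustrative names; it is not the paper's code, which ultimately prefers an exponential variant for the Gran Canaria data.

```python
import numpy as np

def fit_angstrom(clearness, sunshine_fraction):
    """Fit the Angstrom linear model H/H0 = a + b * (S/S0), relating the
    daily clearness index to the relative sunshine duration.
    Returns the regression coefficients (a, b)."""
    b, a = np.polyfit(sunshine_fraction, clearness, 1)
    return float(a), float(b)

def predict_clearness(a, b, sunshine_fraction):
    """Estimate H/H0 from relative sunshine duration."""
    return a + b * np.asarray(sunshine_fraction, float)
```

    Multiplying the predicted clearness index by the extraterrestrial irradiation H0 for the day then yields the global solar irradiation estimate.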

  5. Estimating trends in the global mean temperature record

    Science.gov (United States)

    Poppick, Andrew; Moyer, Elisabeth J.; Stein, Michael L.

    2017-06-01

    Given uncertainties in physical theory and numerical climate simulations, the historical temperature record is often used as a source of empirical information about climate change. Many historical trend analyses appear to de-emphasize physical and statistical assumptions: examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for internal variability in nonparametric rather than parametric ways. However, given a limited data record and the presence of internal variability, estimating radiatively forced temperature trends in the historical record necessarily requires some assumptions. Ostensibly empirical methods can also involve an inherent conflict in assumptions: they require data records that are short enough for naive trend models to be applicable, but long enough for long-timescale internal variability to be accounted for. In the context of global mean temperatures, empirical methods that appear to de-emphasize assumptions can therefore produce misleading inferences, because the trend over the twentieth century is complex and the scale of temporal correlation is long relative to the length of the data record. We illustrate here how a simple but physically motivated trend model can provide better-fitting and more broadly applicable trend estimates and can allow for a wider array of questions to be addressed. In particular, the model allows one to distinguish, within a single statistical framework, between uncertainties in the shorter-term vs. longer-term response to radiative forcing, with implications not only on historical trends but also on uncertainties in future projections. We also investigate the consequence on inferred uncertainties of the choice of a statistical description of internal variability. While nonparametric methods may seem to avoid making explicit assumptions, we demonstrate how even misspecified parametric statistical methods, if attuned to the
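
    The paper's core suggestion, regressing temperature on radiative forcing rather than on time, can be caricatured in a few lines. This is an illustrative OLS sketch with invented names; a faithful analysis would also model temporally correlated internal variability, as the paper emphasizes.

```python
import numpy as np

def forced_trend_fit(temp, forcing):
    """Regress temperature anomalies on radiative forcing rather than on
    time, the physically motivated trend model favored over naive
    time-trend fits. Returns (intercept, sensitivity) from ordinary
    least squares; internal variability is ignored here."""
    slope, intercept = np.polyfit(forcing, temp, 1)
    return float(intercept), float(slope)
```

    The fitted sensitivity (warming per unit forcing) is the quantity with physical meaning, whereas a slope against calendar time conflates the forcing history with the response.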

  6. Stated Preference Survey Estimating the Willingness to Pay ...

    Science.gov (United States)

    A national stated preference survey designed to elicit household willingness to pay for reductions in impinged and entrained fish at cooling water intake structures, with the aim of improving the estimation of environmental benefits.

  7. Mean density and two-point correlation function for the CfA redshift survey slices

    International Nuclear Information System (INIS)

    De Lapparent, V.; Geller, M.J.; Huchra, J.P.

    1988-01-01

    The effect of large-scale inhomogeneities on the determination of the mean number density and the two-point spatial correlation function was investigated for two complete slices of the extension of the Center for Astrophysics (CfA) redshift survey (de Lapparent et al., 1986). It was found that the mean galaxy number density for the two strips is uncertain by 25 percent, more than previously estimated. The large uncertainty in the mean density introduces substantial uncertainty into the determination of the two-point correlation function, particularly on large scales; thus, for the 12-deg slice of the CfA redshift survey, the amplitude of the correlation function at intermediate scales is uncertain by a factor of 2. The large uncertainties in the correlation functions might reflect the lack of a fair sample. 45 references

  8. Variable selection and estimation for longitudinal survey data

    KAUST Repository

    Wang, Li

    2014-09-01

    There is wide interest in studying longitudinal surveys where sample subjects are observed successively over time. Longitudinal surveys are used in many areas today, for example in the health and social sciences, to explore relationships or to identify significant variables in regression settings. This paper develops a general strategy for the model selection problem in longitudinal sample surveys. A survey-weighted penalized estimating equation approach is proposed to select significant variables and estimate the coefficients simultaneously. The proposed estimators are design consistent and perform as well as the oracle procedure would if the correct submodel were known. The estimating function bootstrap is applied to obtain the standard errors of the estimated parameters with good accuracy. A fast and efficient variable selection algorithm is developed to identify significant variables for complex longitudinal survey data. Simulation examples illustrate the usefulness of the proposed methodology under various model settings and sampling designs. © 2014 Elsevier Inc.

  9. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty occurs when we select a model based on one data set and subsequently apply it for statistical inference, because the "correct" model is not selected with certainty. When the selection and inference are based on the same dataset, additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme that takes the selection procedure into account can be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
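
    The James-Stein result invoked here is easy to demonstrate by simulation (a textbook positive-part shrinkage toward the origin, not the paper's model-averaging estimator): for a p-dimensional normal mean with p >= 3, shrinking the raw observation reduces total squared error.

```python
import random

def james_stein(x, sigma2=1.0):
    """Positive-part James-Stein shrinkage of a p-dimensional normal
    mean observation x (p >= 3), shrinking toward the origin."""
    p = len(x)
    s2 = sum(xi * xi for xi in x)
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / s2)
    return [shrink * xi for xi in x]

random.seed(1)
mu = [0.5] * 10  # true 10-dimensional mean
trials = 2000
mse_raw = mse_js = 0.0
for _ in range(trials):
    x = [m + random.gauss(0, 1) for m in mu]   # one noisy observation
    js = james_stein(x)
    mse_raw += sum((xi - m) ** 2 for xi, m in zip(x, mu))
    mse_js += sum((ji - m) ** 2 for ji, m in zip(js, mu))
mse_raw /= trials
mse_js /= trials
# The shrinkage estimator has lower total squared error than the raw
# observation, illustrating James-Stein dominance.
```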

  10. A NEW MODIFIED RATIO ESTIMATOR FOR ESTIMATION OF POPULATION MEAN WHEN MEDIAN OF THE AUXILIARY VARIABLE IS KNOWN

    Directory of Open Access Journals (Sweden)

    Jambulingam Subramani

    2013-10-01

    Full Text Available The present paper deals with a modified ratio estimator for estimating the population mean of the study variable when the population median of the auxiliary variable is known. The bias and mean squared error of the proposed estimator are derived and compared with those of existing modified ratio estimators for certain known populations. We also derive the conditions under which the proposed estimator performs better than the existing modified ratio estimators. The numerical study shows that the proposed modified ratio estimator performs better than the existing modified ratio estimators for certain known populations.
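
    For context, here is a sketch of the classical ratio estimator together with one median-shifted modification in the spirit of the paper (the shifted form is a common variant offered for illustration; the paper's exact estimator and its bias/MSE derivations may differ):

```python
import random
import statistics

def ratio_estimator(y, x, X_mean):
    """Classical ratio estimator of the population mean of y, using the
    known population mean of an auxiliary variable x."""
    return statistics.mean(y) * X_mean / statistics.mean(x)

def modified_ratio_estimator(y, x, X_mean, X_median):
    """One median-based modification (illustrative form only): shift the
    numerator and denominator by the known population median of x."""
    return statistics.mean(y) * (X_mean + X_median) / (statistics.mean(x) + X_median)

# Toy population where y is roughly proportional to x, so ratio-type
# estimators are efficient.
random.seed(7)
X = [random.uniform(10, 50) for _ in range(1000)]
Y = [2.0 * xi + random.gauss(0, 5) for xi in X]
X_mean, X_med = statistics.mean(X), statistics.median(X)
idx = random.sample(range(1000), 40)        # simple random sample, n = 40
ys, xs = [Y[i] for i in idx], [X[i] for i in idx]
est = ratio_estimator(ys, xs, X_mean)
```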

  11. ESTIMATION OF INSULATOR CONTAMINATIONS BY MEANS OF REMOTE SENSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    G. Han

    2016-06-01

    Full Text Available The accurate estimation of deposits adhering to insulators is critical to prevent pollution flashovers, which cause huge costs worldwide. The traditional evaluation method for insulator contamination (IC) is based on sparse manual in-situ measurements, resulting in insufficient spatial representativeness and poor timeliness. Filling that gap, we propose a novel evaluation framework for IC based on remote sensing and data mining. A variety of products derived from satellite data, such as aerosol optical depth (AOD), digital elevation model (DEM), land use and land cover, and normalized difference vegetation index, were obtained to estimate the severity of IC, along with the necessary field investigation inventory (pollution sources, ambient atmosphere and meteorological data). Rough set theory was utilized to minimize the input set under the prerequisite that the resultant set is equivalent to the full set in terms of its ability to distinguish severity levels of IC. We found that AOD, the strength of pollution sources and precipitation are the top 3 decisive factors for estimating insulator contamination. On that basis, different classification algorithms, such as Mahalanobis minimum distance, support vector machine (SVM) and maximum likelihood, were utilized to estimate severity levels of IC. 10-fold cross-validation was carried out to evaluate the performance of each method. SVM yielded the best overall accuracy among the three algorithms, at more than 70%, suggesting a promising application of remote sensing in power maintenance. To our knowledge, this is the first attempt to introduce remote sensing and the relevant data analysis techniques into the estimation of electrical insulator contamination.

  12. Estimation of Parameters in Mean-Reverting Stochastic Systems

    Directory of Open Access Journals (Sweden)

    Tianhai Tian

    2014-01-01

    Full Text Available A stochastic differential equation (SDE) is a very important mathematical tool for describing complex systems in which noise plays an important role. SDE models have been widely used to study the dynamic properties of various nonlinear systems in biology, engineering, finance, and economics, as well as the physical sciences. Since an SDE can generate an unlimited number of trajectories, it is difficult to estimate model parameters from experimental observations, which may represent only one trajectory of the stochastic model. Although substantial research effort has been made to develop effective methods, it is still a challenge to infer unknown parameters in SDE models from observations that may have large variations. Using an interest rate model as a test problem, in this work we use Bayesian inference and the Markov chain Monte Carlo method to estimate unknown parameters in SDE models.
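
    A minimal sketch of the setting: a Vasicek-type mean-reverting interest rate model simulated by Euler-Maruyama, with a simple lag-1 regression estimator standing in for the paper's Bayesian MCMC approach (the regression estimator is a generic alternative, not the paper's method):

```python
import random

def simulate_ou(a, b, sigma, x0, dt, n, rng):
    """Euler-Maruyama simulation of the mean-reverting SDE
    dX = a*(b - X)*dt + sigma*dW (Vasicek/Ornstein-Uhlenbeck type)."""
    x = [x0]
    for _ in range(n):
        x.append(x[-1] + a * (b - x[-1]) * dt
                 + sigma * (dt ** 0.5) * rng.gauss(0, 1))
    return x

def estimate_ou(x, dt):
    """Least-squares estimate of (a, b) from the lag-1 regression
    X[t+1] = c + phi*X[t] + noise, where phi = 1 - a*dt and c = a*b*dt."""
    n = len(x) - 1
    xs, ys = x[:-1], x[1:]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((u - mx) ** 2 for u in xs)
    sxy = sum((u - mx) * (v - my) for u, v in zip(xs, ys))
    phi = sxy / sxx
    c = my - phi * mx
    a = (1 - phi) / dt
    b = c / (a * dt)
    return a, b

rng = random.Random(3)
path = simulate_ou(a=2.0, b=1.0, sigma=0.3, x0=0.0, dt=0.01, n=20000, rng=rng)
a_hat, b_hat = estimate_ou(path, 0.01)  # should recover a near 2, b near 1
```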

  13. Robust w-Estimators for Cryo-EM Class Means

    Science.gov (United States)

    Huang, Chenxi; Tagare, Hemant D.

    2016-01-01

    A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the "class mean", improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised by outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a "w-estimator" of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as its consistency and influence function, are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
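
    The threshold-free idea can be sketched for scalar data (a generic iteratively reweighted w-estimator; the paper's estimator operates on aligned images and their CTFs): each observation receives a weight that decays smoothly with its distance from the current estimate, so outliers are downweighted without any hard cutoff.

```python
import statistics

def robust_mean(values, c=3.0, iters=20):
    """Iteratively reweighted robust mean. Each value gets a Cauchy-type
    weight 1 / (1 + (residual / (c * scale))^2), so outliers are
    downweighted smoothly rather than rejected by a threshold."""
    mu = statistics.median(values)  # robust starting point
    scale = statistics.median([abs(v - mu) for v in values]) * 1.4826 or 1.0
    for _ in range(iters):
        w = [1.0 / (1.0 + ((v - mu) / (c * scale)) ** 2) for v in values]
        mu = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return mu

data = [10.1, 9.9, 10.0, 10.2, 9.8, 10.05, 9.95, 50.0]  # one gross outlier
m_plain = statistics.mean(data)   # dragged toward 50
m_robust = robust_mean(data)      # stays near 10
```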

  14. On the mean squared error of the ridge estimator of the covariance and precision matrix

    NARCIS (Netherlands)

    van Wieringen, Wessel N.

    2017-01-01

    For a suitably chosen ridge penalty parameter, the ridge regression estimator uniformly dominates the maximum likelihood regression estimator in terms of the mean squared error. Analogous results for the ridge maximum likelihood estimators of covariance and precision matrix are presented.

  15. Estimates on the mean current in a sphere of plasma

    International Nuclear Information System (INIS)

    Nunez, Manuel

    2003-01-01

    Several turbulent dynamo models predict the concentration of the magnetic field in chaotic plasmas into sheets with the field vector pointing alternately in opposite directions, which should produce strong current sheets. It is proved that if the plasma is contained in a rigid sphere with a perfectly conducting boundary, the geometry of these sheets must be balanced so that the mean current remains essentially bounded by the Coulomb-gauged mean vector potential of the field. This magnitude remains regular even for the sharp field variations expected in a chaotic flow. For resistive plasmas the same arguments imply that the contribution to the total current of the regions near the boundary compensates the current of the central part of the sphere

  16. Minimum Mean-Square Error Single-Channel Signal Estimation

    DEFF Research Database (Denmark)

    Beierholm, Thomas

    2008-01-01

    The topic of this thesis is MMSE signal estimation for hearing aids when only one microphone is available. The research is relevant for noise reduction systems in hearing aids. To fully benefit from the amplification provided by a hearing aid, noise reduction functionality is important, as hearing-impaired persons in some noisy situations need a higher signal-to-noise ratio for speech to be intelligible than normal-hearing persons. In this thesis two different methods of approaching the MMSE signal estimation problem are examined. The methods differ in the way that models for the signal and noise ... inference is performed by particle filtering. The speech model is a time-varying auto-regressive model reparameterized by formant frequencies and bandwidths. The noise is assumed non-stationary and white. Compared to the case of using the AR coefficients directly, it is found very beneficial to perform ... algorithm. Although the performance of the two algorithms is found comparable, the particle filter algorithm does a much better job tracking the noise.

  17. Estimation of undernutrition and mean calorie intake in Africa: methodology, findings and implications.

    Science.gov (United States)

    van Wesenbeeck, Cornelia F A; Keyzer, Michiel A; Nubé, Maarten

    2009-06-27

    As poverty and hunger are basic yardsticks of underdevelopment and destitution, the need for reliable statistics in this domain is self-evident. While the measurement of poverty through surveys is relatively well documented in the literature, for hunger, information is much scarcer, particularly for adults, and very different methodologies are applied for children and adults. Our paper seeks to improve on this practice in two ways. One is that we estimate the prevalence of undernutrition in sub-Saharan Africa (SSA) for both children and adults based on anthropometric data available at province or district level, and secondly, we estimate the mean calorie intake and implied calorie gap for SSA, also using anthropometric data at the same geographical aggregation level. Our main results are, first, that we find a much lower prevalence of hunger than presented in the Millennium Development reports (17.3% against 27.8% for the continent as a whole). Secondly, we find that there is much less spread in mean calorie intake across the continent than reported by the Food and Agriculture Organization (FAO) in the State of Food and Agriculture, 2007, the only estimate that covers the whole of Africa. While FAO estimates for calorie availability vary from a low of 1760 Kcal/capita/day for Central Africa to a high of 2825 Kcal/capita/day for Southern Africa, our estimates lie in a range from 2245 Kcal/capita/day (Eastern Africa) to 2618 Kcal/capita/day (Southern Africa). Thirdly, we validate the main data sources used (the Demographic and Health Surveys) by comparing them over time and with other available data sources for various countries. We conclude that the picture of Africa that emerges from anthropometric data is much less negative than that usually presented. 
Especially for Eastern and Central Africa, the nutritional status is less critical than commonly assumed and mean calorie intake is also higher, which implies that agricultural production and hence income must also be higher than commonly assumed.

  18. Estimation of undernutrition and mean calorie intake in Africa: methodology, findings and implications

    Directory of Open Access Journals (Sweden)

    Nubé Maarten

    2009-06-01

    Full Text Available Abstract Background As poverty and hunger are basic yardsticks of underdevelopment and destitution, the need for reliable statistics in this domain is self-evident. While the measurement of poverty through surveys is relatively well documented in the literature, for hunger, information is much scarcer, particularly for adults, and very different methodologies are applied for children and adults. Our paper seeks to improve on this practice in two ways. One is that we estimate the prevalence of undernutrition in sub-Saharan Africa (SSA) for both children and adults based on anthropometric data available at province or district level, and secondly, we estimate the mean calorie intake and implied calorie gap for SSA, also using anthropometric data at the same geographical aggregation level. Results Our main results are, first, that we find a much lower prevalence of hunger than presented in the Millennium Development reports (17.3% against 27.8% for the continent as a whole). Secondly, we find that there is much less spread in mean calorie intake across the continent than reported by the Food and Agriculture Organization (FAO) in the State of Food and Agriculture, 2007, the only estimate that covers the whole of Africa. While FAO estimates for calorie availability vary from a low of 1760 Kcal/capita/day for Central Africa to a high of 2825 Kcal/capita/day for Southern Africa, our estimates lie in a range from 2245 Kcal/capita/day (Eastern Africa) to 2618 Kcal/capita/day (Southern Africa). Thirdly, we validate the main data sources used (the Demographic and Health Surveys) by comparing them over time and with other available data sources for various countries. Conclusion We conclude that the picture of Africa that emerges from anthropometric data is much less negative than that usually presented. 
Especially for Eastern and Central Africa, the nutritional status is less critical than commonly assumed and mean calorie intake is also higher, which implies that agricultural production and hence income must also be higher than commonly assumed.

  19. How should we best estimate the mean recency duration for the BED method?

    Directory of Open Access Journals (Sweden)

    John Hargrove

    Full Text Available BED estimates of HIV incidence from cross-sectional surveys are obtained by restricting, to a fixed time T, the period over which incidence is estimated. The appropriate mean recency duration (Ω(T)) then refers to the time for which BED optical density (OD) is less than a pre-set cut-off C, given that the patient has been HIV positive for at most time T. Five methods, tested using data for postpartum women in Zimbabwe, provided similar estimates of Ω(T) for C = 0.8: (i) the ratio (r/s) of the number of BED-recent infections to all seroconversions over T = 365 days: 192 days [95% CI 168-216]; (ii) linear mixed modeling (LMM): 191 days [95% CI 174-208]; (iii) non-linear mixed modeling (NLMM): 196 days [95% CrI 188-204]; (iv) survival analysis (SA): 192 days [95% CI 168-216]; (v) graphical analysis: 193 days. NLMM estimates of Ω(T), based on a biologically more appropriate functional relationship than LMM, resulted in the best fits to OD data, the smallest variance in estimates of Ω(T), and the best correspondence between BED and follow-up estimates of HIV incidence for the same subjects over the same time period. SA and NLMM produced very similar estimates of Ω(T), but the coefficient of variation of the former was roughly 3 times as high. The r/s method requires uniformly distributed seroconversion events but is useful if data are available only from a single follow-up. The graphical method produces the most variable results, involves unsound methodology and should not be used to provide estimates of Ω(T). False-recent rates increased as a quadratic function of C: for incidence estimation C should thus be chosen as small as possible, consistent with an adequate resultant number of recent cases and accurate estimation of Ω(T). Inaccuracies in the estimation of Ω(T) should no longer be an impediment to incidence estimation.

  20. An alternative procedure for estimating the population mean in simple random sampling

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2012-03-01

    Full Text Available This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. First, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than various known estimators, including that of Gupta and Shabbir (2008).

  1. Spectral Gap Estimates in Mean Field Spin Glasses

    Science.gov (United States)

    Ben Arous, Gérard; Jagannath, Aukosh

    2018-05-01

    We show that mixing for local, reversible dynamics of mean field spin glasses is exponentially slow in the low temperature regime. We introduce a notion of free energy barriers for the overlap, and prove that their existence implies that the spectral gap is exponentially small, and thus that mixing is exponentially slow. We then exhibit sufficient conditions on the equilibrium Gibbs measure which guarantee the existence of these barriers, using the notion of replicon eigenvalue and 2D Guerra-Talagrand bounds. We show how these sufficient conditions cover large classes of Ising spin models for reversible nearest-neighbor dynamics and spherical models for Langevin dynamics. Finally, in the case of Ising spins, Panchenko's recent rigorous calculation (Panchenko in Ann Probab 46(2):865-896, 2018) of the free energy for a system of "two real replica" enables us to prove a quenched LDP for the overlap distribution, which gives us a wider criterion for slow mixing directly related to the Franz-Parisi-Virasoro approach (Franz et al. in J Phys I 2(10):1869-1880, 1992; Kurchan et al. in J Phys I 3(8):1819-1838, 1993). This condition holds in a wider range of temperatures.

  2. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, with corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
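
    The distinction is easy to exhibit with a logistic link and assumed coefficients (a toy sketch, not the paper's proposed estimator or its variance calculation): the response evaluated at the mean covariate generally differs from the average of the predicted responses, because the link is nonlinear.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def mean_at_mean_x(a, b, xs):
    """Response evaluated at the mean covariate: what many software
    packages report as the 'model-based group mean'."""
    xbar = sum(xs) / len(xs)
    return logistic(a + b * xbar)

def population_mean_response(a, b, xs):
    """Average of the predicted responses over the observed covariates:
    an estimate of the mean response for the studied population."""
    return sum(logistic(a + b * x) for x in xs) / len(xs)

# With a skewed covariate distribution the two quantities differ
# substantially under the nonlinear logistic link.
xs = [-2.0, -1.0, 0.0, 1.0, 10.0]
at_mean = mean_at_mean_x(0.0, 1.0, xs)
pop_mean = population_mean_response(0.0, 1.0, xs)
```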

  3. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    Science.gov (United States)

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small samples. This method allows researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
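
    Under a normality assumption this probability has a closed form, P(|sample mean - true mean| <= f·σ) = 2Φ(f·√n) - 1, which can be sketched as follows (the paper's own small-sample treatment may differ in detail):

```python
import math

def prob_within(f, n):
    """P(|sample mean - true mean| <= f * sigma) for a normal population
    with standard deviation sigma and sample size n: 2*Phi(f*sqrt(n)) - 1."""
    z = f * math.sqrt(n)
    return math.erf(z / math.sqrt(2.0))  # identical to 2*Phi(z) - 1

# Even n = 10 gives a fairly high probability that the sample mean lands
# within half a standard deviation of the true mean.
p = prob_within(0.5, 10)
```

Larger samples push the probability toward 1, which is the quantitative sense in which even very small samples can be "meaningful" for a modest desired fraction f.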

  4. Study on method of dose estimation for the Dual-moderated neutron survey meter

    International Nuclear Information System (INIS)

    Zhou, Bo; Li, Taosheng; Xu, Yuhai; Gong, Cunkui; Yan, Qiang; Li, Lei

    2013-01-01

    In order to study neutron dose measurement in high energy radiation fields, a dual-moderated survey meter covering neutron spectra with mean energies from 1 keV to 300 MeV has been developed. Measurement results of survey meters depend on the neutron spectrum characteristics of different neutron radiation fields, so the characteristics of the responses to various neutron spectra should be studied in order to obtain more reasonable dose estimates. In this paper the responses of the survey meter were calculated for different neutron spectra taken from IAEA Technical Reports Series No. 318 and other references, and a dose estimation method was determined. For this method, the estimated reading per unit H*(10) ranges from about 0.7 to 1.6 for neutron mean energies from 50 keV to 300 MeV. -- Highlights: • We studied a novel high energy neutron survey meter. • Response characteristics of the survey meter were calculated using a series of neutron spectra. • One significant advantage of the survey meter is that it can provide the mean energy of the radiation field. • Dose estimate deviation can be corrected. • The range of the corrected reading per H*(10) is about 0.7-1.6 for neutron fluence mean energies from 0.05 MeV to 300 MeV

  5. Improving the Network Scale-Up Estimator: Incorporating Means of Sums, Recursive Back Estimation, and Sampling Weights.

    Directory of Open Access Journals (Sweden)

    Patrick Habecker

    Full Text Available Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys, by asking a representative sample to estimate the number of people they know who are members of such a "hidden" subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult-to-predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation "trimming" to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights.
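
    A minimal sketch of the basic (Killworth-style) scale-up estimator that this work builds on; the optional weights illustrate, in the simplest possible way, the incorporation of sampling weights discussed above (the paper's new estimator and trimming procedure are more involved):

```python
def nsum_estimate(known_hidden, network_sizes, population_size, weights=None):
    """Basic network scale-up estimate of a hidden population's size:
    N_pop * (sum of alters known in the hidden group) /
            (sum of respondents' total network sizes).
    Optional sampling weights multiply each respondent's contribution
    to both sums."""
    if weights is None:
        weights = [1.0] * len(known_hidden)
    num = sum(w * m for w, m in zip(weights, known_hidden))
    den = sum(w * c for w, c in zip(weights, network_sizes))
    return population_size * num / den

# Toy example: 5 respondents in a city of 1,000,000.
m = [2, 0, 1, 3, 0]           # alters each respondent knows in the hidden group
c = [120, 80, 100, 150, 50]   # each respondent's estimated network size
estimate = nsum_estimate(m, c, 1_000_000)  # 1e6 * 6 / 500 = 12000
```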

  6. Distance estimation experiment for aerial minke whale surveys

    Directory of Open Access Journals (Sweden)

    Lars Witting

    2009-09-01

    Full Text Available A comparative study between aerial cue-counting and digital photography surveys for minke whales, conducted in Faxaflói Bay in September 2003, is used to check the perpendicular distances estimated by the cue-counting observers. The study involved 2 aircraft, with the photo plane at 1,700 feet flying above the cue-counting plane at 750 feet. The observer-based distance estimates were calculated from head angles estimated by angle-boards and declination angles estimated by declinometers. These distances were checked against image-based estimates of the perpendicular distance to the same whale. The 2 independent distance estimates were obtained for 21 sightings of minke whale, and there was good agreement between the 2 types of estimates. The relative absolute deviations between the 2 estimates were on average 23% (se: 6%), with the errors in the observer-based distance estimates resembling a log-normal distribution. The linear regression of the observer-based estimates (Obs) on the image-based estimates (Img) was Obs = 1.1 Img (R2 = 0.85), with the intercept fixed at zero. There was no evidence of a distance estimation bias that could generate a positive bias in the absolute abundance estimated by cue-counting.

  7. DEEBAR - A BASIC interactive computer programme for estimating mean resonance spacings

    International Nuclear Information System (INIS)

    Booth, M.; Pope, A.L.; Smith, R.W.; Story, J.S.

    1988-02-01

    DEEBAR is a BASIC interactive programme, which uses the theories of Dyson and of Dyson and Mehta, to compute estimates of the mean resonance spacings and associated uncertainty statistics from an input file of neutron resonance energies. In applying these theories the broad-scale energy dependence of D-bar, as predicted by the ordinary theory of level densities, is taken into account. The mean spacing D-bar ± δD-bar, referred to zero energy of the incident neutrons, is computed from the energies of the first k resonances, for k = 2,3...K in turn, as if no resonances are missing. The user is asked to survey this set of D-bar and δD-bar values and to form a judgement: up to what value of k is the set of resonances complete, and what value, in consequence, does the user adopt as the preferred value of D-bar? When the preferred values for k and D-bar have been input, the programme calculates revised values for the level density parameters, consistent with this value of D-bar and with other input information. Two short tables are printed, illustrating the energy variation and spin dependence of D-bar. Dyson's formula based on his Coulomb gas analogy is used for estimating the most likely energies of the topmost bound levels. Finally, the quasi-crystalline character of a single level series is exploited by means of a table in which the resonance energies are set alongside an energy ladder whose rungs are regularly spaced with spacing D-bar(E); this comparative table expedites the search for gaps where resonances may have been missed experimentally. When used in conjunction with the program LJPROB, which calculates neutron strengths and compares them against the expected Porter-Thomas distribution, DEEBAR yields estimates of the statistical parameters for use in the unresolved resonance region. (author)

  8. A Survey on Operator Monotonicity, Operator Convexity, and Operator Means

    Directory of Open Access Journals (Sweden)

    Pattrawut Chansangiam

    2015-01-01

    Full Text Available This paper is an expository survey devoted to an important class of real-valued functions introduced by Löwner, namely, operator monotone functions. This concept is closely related to operator convex/concave functions. Various characterizations of such functions are given from the viewpoint of differential analysis, in terms of matrices of divided differences. From the viewpoint of operator inequalities, various characterizations and the relationship between operator monotonicity and operator convexity are given by Hansen and Pedersen. From the viewpoint of measure theory, operator monotone functions on the nonnegative reals admit meaningful integral representations with respect to Borel measures on the unit interval. Furthermore, Kubo-Ando theory asserts the correspondence between operator monotone functions and operator means.

  9. Comparative Study of Complex Survey Estimation Software in ONS

    Directory of Open Access Journals (Sweden)

    Andy Fallows

    2015-09-01

    Full Text Available Many official statistics across the UK Government Statistical Service (GSS are produced using data collected from sample surveys. These survey data are used to estimate population statistics through weighting and calibration techniques. For surveys with complex or unusual sample designs, the weighting can be fairly complicated. Even in more simple cases, appropriate software is required to implement survey weighting and estimation. As with other stages of the survey process, it is preferable to use a standard, generic calibration tool wherever possible. Standard tools allow for efficient use of resources and assist with the harmonisation of methods. In the case of calibration, the Office for National Statistics (ONS has experience of using the Statistics Canada Generalized Estimation System (GES across a range of business and social surveys. GES is a SAS-based system and so is only available in conjunction with an appropriate SAS licence. Given recent initiatives and encouragement to investigate open source solutions across government, it is appropriate to determine whether there are any open source calibration tools available that can provide the same service as GES. This study compares the use of GES with the calibration tool ‘R evolved Generalized software for sampling estimates and errors in surveys’ (ReGenesees available in R, an open source statistical programming language which is beginning to be used in many statistical offices. ReGenesees is a free R package which has been developed by the Italian statistics office (Istat and includes functionality to calibrate survey estimates using similar techniques to GES. This report describes analysis of the performance of ReGenesees in comparison to GES to calibrate a representative selection of ONS surveys. Section 1.1 provides a brief introduction to the current use of SAS and R in ONS. Section 2 describes GES and ReGenesees in more detail. Sections 3.1 and 3.2 consider methods for
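
    The calibration that both GES and ReGenesees perform reduces, in the simplest single-auxiliary case, to a ratio adjustment of the design weights (a sketch only; both tools support far more general calibration constraints and distance functions):

```python
def calibrate_ratio(design_weights, x, x_pop_total):
    """Single-auxiliary ratio calibration: rescale the design weights by
    a common g-factor so that the weighted auxiliary total matches the
    known population benchmark exactly."""
    g = x_pop_total / sum(d * xi for d, xi in zip(design_weights, x))
    return [g * d for d in design_weights]

d = [10.0, 10.0, 20.0, 20.0]      # design weights
x = [1.0, 2.0, 1.5, 3.0]          # auxiliary variable for each respondent
w = calibrate_ratio(d, x, 150.0)  # known population total of x
# The calibrated weights now reproduce the benchmark exactly.
check = sum(wi * xi for wi, xi in zip(w, x))
```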

  10. The Design Model of Multilevel Estimation Means for Students’ Competence Assessment at Technical Higher School

    Directory of Open Access Journals (Sweden)

    O. F. Shikhova

    2012-01-01

    Full Text Available The paper considers research findings aimed at developing a new quality assessment technique for student evaluation at technical higher schools. A model of multilevel estimation means is provided for diagnosing the level of general cultural and professional competences of students taking a bachelor degree in technological fields. The model implies the integrative character of specialist training: the combination of both psycho-pedagogic (invariable) and engineering (variable) components, as well as a qualimetric approach substantiating the system of student competence estimation and providing the most adequate assessment means. The principles of designing the multilevel estimation means are defined, along with methodological approaches to their implementation. For a reasoned selection of estimation means, a system of quality criteria based on group expert assessment is proposed by the authors. The research findings can be used for designing competence-oriented estimation means

  11. Estimating recreational harvest using interview-based recall survey: Implication of recalling in weight or numbers

    DEFF Research Database (Denmark)

    Sparrevohn, Claus Reedtz

    2013-01-01

    on interview-based surveys where fishers are asked to recall harvest within a given timeframe. However, the importance of whether fishers are requested to provide figures in weight or in numbers is unresolved. Therefore, a recall survey aiming at estimating recreational harvest was designed such that respondents...... could report harvest using either weight or numbers. It was found that: (1) a preference for reporting in numbers dominated; (2) reported mean individual weight of fish caught differed between unit preferences; and (3) when estimates of total harvest in weight are calculated, these differences could...

  12. Estimating Mean and Variance Through Quantiles : An Experimental Comparison of Different Methods

    NARCIS (Netherlands)

    Moors, J.J.A.; Strijbosch, L.W.G.; van Groenendaal, W.J.H.

    2002-01-01

    If estimates of mean and variance are needed and only experts' opinions are available, the literature agrees that it is wise behaviour to ask only for their (subjective) estimates of quantiles: from these, estimates of the desired parameters are calculated. Quite a number of methods have been
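One concrete (and assumption-laden) example of such a method: expert quartile judgements can be converted to a mean and variance under a normality assumption. The quartile values below are hypothetical, and this is only one of the families of methods such comparisons cover.

```python
def mean_var_from_quartiles(q1, median, q3):
    # Normal-theory recipe: mean ~ median, sd ~ IQR / (2 * 0.6745),
    # since the quartiles of a normal sit 0.6745 sd from the mean.
    sd = (q3 - q1) / (2 * 0.6745)
    return median, sd ** 2

m, v = mean_var_from_quartiles(9.3, 10.0, 10.7)
print(m, round(v, 3))  # mean 10.0, variance ~1.077
```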

  13. Mean total arsenic concentrations in chicken 1989-2000 and estimated exposures for consumers of chicken.

    OpenAIRE

    Lasky, Tamar; Sun, Wenyu; Kadry, Abdel; Hoffman, Michael K

    2004-01-01

    The purpose of this study was to estimate mean concentrations of total arsenic in chicken liver tissue and then estimate total and inorganic arsenic ingested by humans through chicken consumption. We used national monitoring data from the Food Safety and Inspection Service National Residue Program to estimate mean arsenic concentrations for 1994-2000. Incorporating assumptions about the concentrations of arsenic in liver and muscle tissues as well as the proportions of inorganic and organic a...

  14. A global mean ocean circulation estimation using goce gravity models - the DTU12MDT mean dynamic topography model

    DEFF Research Database (Denmark)

    Knudsen, Per; Andersen, Ole Baltazar

    2012-01-01

    The Gravity and Ocean Circulation Experiment - GOCE satellite mission measures the Earth's gravity field with unprecedented accuracy, leading to substantial improvements in the modelling of the ocean circulation and transport. In this study of the performance of GOCE, a newer gravity model has been...... combined with the DTU10MSS mean sea surface model to construct a global mean dynamic topography model named DTU10MDT. The results of preliminary analyses using preliminary GOCE gravity models clearly demonstrated the potential of the GOCE mission. Both the resolution and the estimation of the surface currents...... have been improved significantly compared to results obtained using pre-GOCE gravity field models. The results of this study show that geostrophic surface currents associated with the mean circulation have been further improved and that currents having speeds down to 5 cm/s have been recovered....
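For orientation, the geostrophy behind such surface-current estimates can be sketched numerically. The latitude, topography slope, and distance below are invented; u = (g/f)·Δη/Δx is the standard cross-slope geostrophic relation, not anything specific to DTU12MDT.

```python
import math

g = 9.81                            # gravity, m/s^2
lat = math.radians(45.0)            # an arbitrary mid-latitude
f = 2 * 7.2921e-5 * math.sin(lat)   # Coriolis parameter, 1/s

deta = 0.05                         # 5 cm MDT difference ...
dx = 100e3                          # ... across 100 km
speed = (g / f) * (deta / dx)       # geostrophic balance
print(round(speed, 3), "m/s")       # close to the 5 cm/s threshold quoted above
```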

  15. New aerial survey and hierarchical model to estimate manatee abundance

    Science.gov (United States)

    Langtimm, Catherine A.; Dorazio, Robert M.; Stith, Bradley M.; Doyle, Terry J.

    2011-01-01

    Monitoring the response of endangered and protected species to hydrological restoration is a major component of the adaptive management framework of the Comprehensive Everglades Restoration Plan. The endangered Florida manatee (Trichechus manatus latirostris) lives at the marine-freshwater interface in southwest Florida and is likely to be affected by hydrologic restoration. To provide managers with prerestoration information on distribution and abundance for postrestoration comparison, we developed and implemented a new aerial survey design and hierarchical statistical model to estimate and map abundance of manatees as a function of patch-specific habitat characteristics, indicative of manatee requirements for offshore forage (seagrass), inland fresh drinking water, and warm-water winter refuge. We estimated the number of groups of manatees from dual-observer counts and estimated the number of individuals within groups by removal sampling. Our model is unique in that we jointly analyzed group and individual counts using assumptions that allow probabilities of group detection to depend on group size. Ours is the first analysis of manatee aerial surveys to model spatial and temporal abundance of manatees in association with habitat type while accounting for imperfect detection. We conducted the study in the Ten Thousand Islands area of southwestern Florida, USA, which was expected to be affected by the Picayune Strand Restoration Project to restore hydrology altered for a failed real-estate development. We conducted 11 surveys in 2006, spanning the cold, dry season and warm, wet season. To examine short-term and seasonal changes in distribution we flew paired surveys 1–2 days apart within a given month during the year. Manatees were sparsely distributed across the landscape in small groups. Probability of detection of a group increased with group size; the magnitude of the relationship between group size and detection probability varied among surveys. Probability

  16. Estimation of unaltered daily mean streamflow at ungaged streams of New York, excluding Long Island, water years 1961-2010

    Science.gov (United States)

    Gazoorian, Christopher L.

    2015-01-01

    The lakes, rivers, and streams of New York State provide an essential water resource for the State. The information provided by time series hydrologic data is essential to understanding ways to promote healthy instream ecology and to strengthen the scientific basis for sound water management decision making in New York. The U.S. Geological Survey, in cooperation with The Nature Conservancy and the New York State Energy Research and Development Authority, has developed the New York Streamflow Estimation Tool to estimate a daily mean hydrograph for the period from October 1, 1960, to September 30, 2010, at ungaged locations across the State. The New York Streamflow Estimation Tool produces a complete estimated daily mean time series from which daily flow statistics can be estimated. In addition, the New York Streamflow Estimation Tool provides a means for quantitative flow assessments at ungaged locations that can be used to address the objectives of the Clean Water Act—to restore and maintain the chemical, physical, and biological integrity of the Nation’s waters.

  17. Population-based absolute risk estimation with survey data

    Science.gov (United States)

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
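A minimal numeric sketch of competing-risk absolute risk (not the paper's survey-weighted, covariate-adjusted estimator): with two constant cause-specific hazards, the cumulative incidence of cause 1 has a closed form. The hazard values below are invented.

```python
import math

def cumulative_incidence(h1, h2, t):
    """F1(t) = h1/(h1+h2) * (1 - exp(-(h1+h2)*t)) for constant hazards."""
    h = h1 + h2
    return h1 / h * (1.0 - math.exp(-h * t))

# Hypothetical constant hazards per year: cause 1 = 0.002, cause 2 = 0.003.
risk_cvd_10y = cumulative_incidence(0.002, 0.003, 10.0)
print(round(risk_cvd_10y, 4))   # 10-year absolute risk of cause 1
```

The piecewise exponential model in the record generalizes this by letting the hazards change across time intervals.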

  18. Mean size estimation yields left-side bias: Role of attention on perceptual averaging.

    Science.gov (United States)

    Li, Kuei-An; Yeh, Su-Ling

    2017-11-01

    The human visual system can estimate the mean size of a set of items effectively; however, little is known about whether information from each visual field contributes equally to mean size estimation. In this study, we examined whether a left-side bias (LSB) - the tendency of perceptual judgment to depend more heavily on the left visual field's inputs - affects mean size estimation. Participants were instructed to estimate the mean size of 16 spots. In half of the trials, the mean size of the spots on the left side was larger than that on the right side (the left-larger condition) and vice versa (the right-larger condition). Our results illustrated an LSB: a larger estimated mean size was found in the left-larger condition than in the right-larger condition (Experiment 1), and the LSB vanished when participants' attention was effectively cued to the right side (Experiment 2b). Furthermore, the magnitude of the LSB increased with stimulus-onset asynchrony (SOA) when spots on the left side were presented earlier than those on the right side. In contrast, the LSB vanished and then reversed with SOA when spots on the right side were presented earlier (Experiment 3). This study offers the first piece of evidence suggesting that the LSB has a significant influence on mean size estimation of a group of items, induced by a leftward attentional bias that enhances the prior-entry effect on the left side.

  19. A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions

    Science.gov (United States)

    Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver

    2016-01-01

    Satellite constellations present unique capabilities and opportunities to Earth orbiting and near-Earth scientific and communications missions, but also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the limitations of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one-at-a-time, but much of the complexity in constellation and DSM cost modeling arises from constellation systems level considerations that have not yet been examined. This paper constitutes a survey of the current state-of-the-art in cost estimating techniques with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority shortcomings within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications. To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities

  20. Estimation of mean grain size of seafloor sediments using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    De, C.; Chakraborty, B.

    The feasibility of an artificial neural network based approach is investigated to estimate the values of mean grain size of seafloor sediments using four dominant echo features, extracted from acoustic backscatter data. The acoustic backscatter data...

  1. Five Year Mean Surface Chlorophyll Estimates in the Northern Gulf of Mexico for 2005 through 2009

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These images were created by combining the mean surface chlorophyll estimates to produce seasonal representations for winter, spring, summer and fall. Winter...

  2. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  3. Stereological estimation of the mean and variance of nuclear volume from vertical sections

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1991-01-01

    The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear vv, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform...... probability in a physical disector and Cavalieri's direct estimator of volume, the unbiased, number-weighted mean nuclear volume, nuclear vN, of the same benign and malignant nuclear populations is also estimated. Having obtained estimates of nuclear volume in both the volume- and number distribution...... to the larger malignant nuclei. Finally, the variance in the volume distribution of nuclear volume is estimated by shape-independent estimates of the volume-weighted second moment of the nuclear volume, vv2, using both a manual and a computer-assisted approach. The working procedure for the description of 3-D...

  4. Estimating trends in alligator populations from nightlight survey data

    Science.gov (United States)

    Fujisaki, Ikuko; Mazzotti, Frank J.; Dorazio, Robert M.; Rice, Kenneth G.; Cherkiss, Michael; Jeffery, Brian

    2011-01-01

    Nightlight surveys are commonly used to evaluate the status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw accurate population inferences from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals showed either an increasing or an unclear trend. For small and large size class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.

  5. Estimation of average causal effect using the restricted mean residual lifetime as effect measure

    DEFF Research Database (Denmark)

    Mansourvar, Zahra; Martinussen, Torben

    2017-01-01

    with respect to their survival times. In observational studies where the factor of interest is not randomized, covariate adjustment is needed to take into account imbalances in confounding factors. In this article, we develop an estimator for the average causal treatment difference using the restricted mean...... residual lifetime as target parameter. We account for confounding factors using the Aalen additive hazards model. Large sample property of the proposed estimator is established and simulation studies are conducted in order to assess small sample performance of the resulting estimator. The method is also......Although mean residual lifetime is often of interest in biomedical studies, restricted mean residual lifetime must be considered in order to accommodate censoring. Differences in the restricted mean residual lifetime can be used as an appropriate quantity for comparing different treatment groups...

  6. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
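The Monte Carlo comparison of sampling designs can be sketched with a toy angler-count population (all numbers invented; the real study used directly observed angling populations and more designs). Sampling 4 of 16 hourly counts, systematic sampling spans the diurnal trend and so beats simple random sampling here, mirroring the study's SYS result.

```python
import random

random.seed(1)
# Toy hourly angler counts for one fishing day (invented numbers).
hours = [5, 8, 12, 20, 26, 30, 28, 25, 22, 18, 15, 12, 9, 6, 4, 2]
true_total = sum(hours)

def estimate(sample):
    # Expand the sampled-hour mean to a daily total.
    return sum(sample) / len(sample) * len(hours)

def mse(design, reps=20000):
    err2 = 0.0
    for _ in range(reps):
        if design == "srs":                  # simple random sampling
            s = random.sample(hours, 4)
        else:                                # systematic: every 4th hour
            start = random.randrange(4)
            s = hours[start::4]
        err2 += (estimate(s) - true_total) ** 2
    return err2 / reps

mse_srs, mse_sys = mse("srs"), mse("sys")
print(round(mse_srs, 1), round(mse_sys, 1))  # systematic is far more precise here
```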

  7. Stereological estimation of the mean and variance of nuclear volume from vertical sections

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1991-01-01

    The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear vv, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform...... probability in a physical disector and Cavalieri's direct estimator of volume, the unbiased, number-weighted mean nuclear volume, nuclear vN, of the same benign and malignant nuclear populations is also estimated. Having obtained estimates of nuclear volume in both the volume- and number distribution...... of volume, a detailed investigation of nuclear size variability is possible. Benign and malignant nuclear populations show approximately the same relative variability with regard to nuclear volume, and the presented data are compatible with a simple size transformation from the smaller benign nuclei...

  8. Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion

    Directory of Open Access Journals (Sweden)

    David Shilane

    2013-01-01

    Full Text Available The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.
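A loose sketch of the zero-removal idea follows; it is not the authors' exact multiplicative adjustment, and the data and choice of k are invented. Dropping a small, predetermined number of zeros before forming a normal-style interval counters the heavy zero inflation of an extremely dispersed sample.

```python
import math

def growth_ci(xs, k=1, z=1.96):
    """Normal-style CI for the mean after dropping up to k zeros."""
    xs = sorted(xs)
    zeros = sum(1 for x in xs if x == 0)
    trimmed = xs[min(k, zeros):]          # sorted, so the dropped items are zeros
    n = len(trimmed)
    m = sum(trimmed) / n
    var = sum((x - m) ** 2 for x in trimmed) / (n - 1)
    half = z * math.sqrt(var / n)
    return m - half, m + half

sample = [0, 0, 0, 0, 0, 0, 1, 2, 3, 14]  # zero-heavy, highly dispersed counts
lo, hi = growth_ci(sample, k=1)
print(round(lo, 2), round(hi, 2))
```

Note that no dispersion parameter is estimated, matching the paper's point that growth estimators avoid the nuisance parameter.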

  9. Age synthesis and estimation via faces: a survey.

    Science.gov (United States)

    Fu, Yun; Guo, Guodong; Huang, Thomas S

    2010-11-01

    Human age, as an important personal trait, can be directly inferred by distinct patterns emerging from the facial appearance. Derived from rapid advances in computer graphics and machine vision, computer-based age synthesis and estimation via faces have become particularly prevalent topics recently because of their explosively emerging real-world applications, such as forensic art, electronic customer relationship management, security control and surveillance monitoring, biometrics, entertainment, and cosmetology. Age synthesis is defined to rerender a face image aesthetically with natural aging and rejuvenating effects on the individual face. Age estimation is defined to label a face image automatically with the exact age (year) or the age group (year range) of the individual face. Because of their particularity and complexity, both problems are attractive yet challenging to computer-based application system designers. Large efforts from both academia and industry have been devoted to these problems in the last few decades. In this paper, we survey the complete state-of-the-art techniques in face image-based age synthesis and estimation. Existing models, popular algorithms, system performances, technical difficulties, popular face aging databases, evaluation protocols, and promising future directions are also provided with systematic discussions.

  10. A technical survey on tire-road friction estimation

    Institute of Scientific and Technical Information of China (English)

    Seyedmeysam KHALEGHIAN; Anahita EMAMI; Saied TAHERI

    2017-01-01

    Lack of driver's knowledge about abrupt changes in pavement friction, together with poor performance of the vehicle's stability, traction, and ABS controllers on low-friction surfaces, are among the most important factors in car crashes. Due to its direct relation to vehicle stability, accurate estimation of tire-road friction is of interest to all vehicle and tire companies. Many studies have been conducted in this field, and researchers have used different tools and proposed different algorithms. This literature survey introduces the different approaches that have been widely used to estimate friction or other related parameters, and covers the recent literature containing these methodologies. The emphasis of this review paper is on the algorithms and studies that are more popular and have been repeated several times. The focus is divided into two main groups: experiment-based and model-based approaches. Each of these main groups has several sub-categories, which are explained in the next few sections. Several summary tables are provided in which the overall features of each approach are reviewed, giving the reader a general picture of the different algorithms widely used in friction estimation studies.

  11. Estimation of Areal Mean Rainfall in Remote Areas Using B-SHADE Model

    Directory of Open Access Journals (Sweden)

    Tao Zhang

    2016-01-01

    Full Text Available This study presented a method to estimate areal mean rainfall (AMR using a Biased Sentinel Hospital Based Area Disease Estimation (B-SHADE model, together with biased rain gauge observations and Tropical Rainfall Measuring Mission (TRMM data, for remote areas with a sparse and uneven distribution of rain gauges. Based on the B-SHADE model, the best linear unbiased estimation of AMR could be obtained. A case study was conducted for the Three-River Headwaters region in the Tibetan Plateau of China, and its performance was compared with traditional methods. The results indicated that B-SHADE obtained the least estimation biases, with a mean error and root mean square error of −0.63 and 3.48 mm, respectively. For the traditional methods including arithmetic average, Thiessen polygon, and ordinary kriging, the mean errors were 7.11, −1.43, and 2.89 mm, which were up to 1027.1%, 127.0%, and 358.3%, respectively, greater than for the B-SHADE model. The root mean square errors were 10.31, 4.02, and 6.27 mm, which were up to 196.1%, 15.5%, and 80.0%, respectively, higher than for the B-SHADE model. The proposed technique can be used to extend the AMR record to the presatellite observation period, when only the gauge data are available.

  12. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    The methods of interval estimation of the sample mean are described, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method, and the spread method of the empirical characteristic distribution function. Numerical calculations of the mean intervals are carried out for sample sizes of 4, 5, and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small sample situations. (authors)
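The bootstrap approach for such small samples can be sketched with the standard percentile recipe; the data below are toy values, not the study's, and the record does not specify which bootstrap variant was implemented.

```python
import random

random.seed(7)
data = [4.1, 5.6, 4.9, 6.2, 5.0]          # n = 5, in the paper's size range
boots = []
for _ in range(10000):
    resample = [random.choice(data) for _ in data]   # sample with replacement
    boots.append(sum(resample) / len(resample))
boots.sort()
lo, hi = boots[249], boots[9749]          # 95% percentile interval
print(round(lo, 2), round(hi, 2))
```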

  13. Testing a statistical method of global mean paleotemperature estimations in a long climate simulation

    Energy Technology Data Exchange (ETDEWEB)

    Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2001-07-01

    Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods, the global mean temperature of the last 600 years has been recently estimated. In this work, this method of reconstruction is tested using data from a very long simulation with a climate model. This testing allows the errors of the estimates to be quantified as a function of the number of proxy records and the time scales at which the estimates are probably reliable. (orig.)
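This pseudoproxy-style test can be sketched as follows. The series length, noise levels, proxy model, and calibration-window size are all invented; the point is only the workflow: calibrate on a short "instrumental" window of simulated data, reconstruct the rest, and score the error against the known model truth.

```python
import math
import random

random.seed(3)
n = 600
# "Model world": a long temperature series with a slow cycle plus noise.
temp = [0.3 * math.sin(i / 30.0) + random.gauss(0, 0.2) for i in range(n)]
# Proxies respond linearly to temperature, with their own noise.
proxy = [2.0 * t + random.gauss(0, 0.3) for t in temp]

cal = range(n - 100, n)                  # last 100 "years" = instrumental era
mx = sum(proxy[i] for i in cal) / 100
my = sum(temp[i] for i in cal) / 100
b = (sum((proxy[i] - mx) * (temp[i] - my) for i in cal)
     / sum((proxy[i] - mx) ** 2 for i in cal))
a = my - b * mx                          # regression calibrated on 100 points

recon = [a + b * p for p in proxy[:n - 100]]
rmse = (sum((r - t) ** 2 for r, t in zip(recon, temp[:n - 100]))
        / (n - 100)) ** 0.5
print(round(rmse, 3))                    # error outside the calibration window
```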

  14. New a priori estimates for mean-field games with congestion

    KAUST Repository

    Evangelista, David; Gomes, Diogo A.

    2016-01-01

    We present recent developments in crowd dynamics models (e.g. pedestrian flow problems). Our formulation is given by a mean-field game (MFG) with congestion. We start by reviewing earlier models and results. Next, we develop our model. We establish new a priori estimates that give partial regularity of the solutions. Finally, we discuss numerical results.

  15. Comparisons of Means for Estimating Sea States from an Advancing Large Container Ship

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Andersen, Ingrid Marie Vincent; Koning, Jos

    2013-01-01

    to ship-wave interactions in a seaway. In the paper, sea state estimates are produced by three means: the wave buoy analogy, relying on shipboard response measurements, a wave radar system, and a system providing the instantaneous wave height. The presented results show that for the given data, recorded...

  16. New a priori estimates for mean-field games with congestion

    KAUST Repository

    Evangelista, David

    2016-01-06

    We present recent developments in crowd dynamics models (e.g. pedestrian flow problems). Our formulation is given by a mean-field game (MFG) with congestion. We start by reviewing earlier models and results. Next, we develop our model. We establish new a priori estimates that give partial regularity of the solutions. Finally, we discuss numerical results.

  17. Meaning

    Science.gov (United States)

    Harteveld, Casper

    The second world to be considered concerns Meaning. In contrast to Reality and Play, this world relates to the people, disciplines, and domains that are focused on creating a certain value. For example, if this value is about providing students knowledge about physics, it involves teachers, the learning sciences, and the domains education and physics. This level goes into the aspects and criteria that designers need to take into account from this perspective. The first aspect seems obvious when we talk of “games with a serious purpose.” They have a purpose and this needs to be elaborated on, for example in terms of what “learning objectives” it attempts to achieve. The subsequent aspect is not about what is being pursued but how. To attain a value, designers have to think about a strategy that they employ. In my case this concerned looking at the learning paradigms that have come into existence in the past century and see what they have to tell us about learning. This way, their principles can be translated into a game environment. This translation involves making the strategy concrete. Or, in other words, operationalizing the plan. This is the third aspect. In this level, I will further specifically explain how I derived requirements from each of the learning paradigms, like reflection and exploration, and how they can possibly be related to games. The fourth and final aspect is the context in which the game is going to be used. It matters who uses the game and when, where, and how the game is going to be used. When designers have looked at these aspects, they have developed a “value proposal” and the worth of it may be judged by criteria, like motivation, relevance, and transfer. But before I get to this, I first go into how we human beings are meaning creators and what role assumptions, knowledge, and ambiguity have in this. I will illustrate this with some silly jokes about doctors and Mickey Mouse, and with an illusion.

  18. Estimation of monthly-mean daily global solar radiation based on MODIS and TRMM products

    International Nuclear Information System (INIS)

    Qin, Jun; Chen, Zhuoqi; Yang, Kun; Liang, Shunlin; Tang, Wenjun

    2011-01-01

    Global solar radiation (GSR) is required in a large number of fields. Many parameterization schemes have been developed to estimate it from routinely measured meteorological variables, since GSR is directly measured at only a limited number of stations. Even so, meteorological stations are sparse, especially in remote areas. Satellite signals (radiance at the top of the atmosphere in most cases) can be used to estimate continuous GSR in space. However, many existing remote sensing products have a relatively coarse spatial resolution, and their inversion algorithms are too complicated to be mastered by experts in other research fields. In this study, an artificial neural network (ANN) is utilized to build the mathematical relationship between measured monthly-mean daily GSR and several high-level remote sensing products available to the public, including Moderate Resolution Imaging Spectroradiometer (MODIS) monthly averaged land surface temperature (LST), the number of days in which the LST retrieval is performed in 1 month, the MODIS enhanced vegetation index, and Tropical Rainfall Measuring Mission (TRMM) monthly precipitation. After training, GSR estimates from this ANN are verified against ground measurements at 12 radiation stations. Then, comparisons are performed among three GSR estimates: the one presented in this study, a surface data-based estimate, and a remote sensing product by the Japan Aerospace Exploration Agency (JAXA). Validation results indicate that the ANN-based method presented in this study can estimate monthly-mean daily GSR at a spatial resolution of about 5 km with high accuracy.

  19. Mean atmospheric temperature model estimation for GNSS meteorology using AIRS and AMSU data

    Directory of Open Access Journals (Sweden)

    Rata Suwantong

    2017-03-01

    In this paper, the problem of modeling the relationship between the mean atmospheric and surface air temperatures is addressed. In particular, the major goal is to estimate the model parameters at a regional scale in Thailand. To formulate the relationship between the two temperatures, a triply modulated cosine function was adopted to model the surface temperature as a periodic function, which was then converted to mean atmospheric temperature using a linear function. The parameters of the model were estimated using an extended Kalman filter. Traditionally, radiosonde data are used for this purpose; in this paper, satellite data from the atmospheric infrared sounder (AIRS) and advanced microwave sounding unit (AMSU) sensors were used instead, because they are open data with global coverage and high temporal resolution. The performance of the proposed model was tested against that of a global model via an accuracy assessment of the computed GNSS-derived PWV.
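
    The two-stage structure of such a model (periodic surface temperature, then a linear conversion to weighted-mean atmospheric temperature) can be sketched as follows. The cosine coefficients and the linear coefficients are illustrative placeholders: 70.2 and 0.72 are the widely used global values of Bevis et al. (1992), not the regional Thai parameters this paper estimates.

```python
import math

def surface_temp(day_of_year, mean=300.0, amp=8.0, phase=200.0):
    """Toy seasonal surface-temperature model (K): a single cosine term.
    The paper's triply modulated cosine lets the mean, amplitude and phase
    themselves vary in time; here they are fixed illustrative constants."""
    return mean + amp * math.cos(2.0 * math.pi * (day_of_year - phase) / 365.25)

def mean_atmospheric_temp(ts, a=70.2, b=0.72):
    """Linear Ts -> Tm conversion, Tm = a + b * Ts. The defaults are the
    global coefficients of Bevis et al. (1992); the paper re-estimates such
    coefficients regionally with an extended Kalman filter."""
    return a + b * ts

ts = surface_temp(100)          # surface air temperature on day 100
tm = mean_atmospheric_temp(ts)  # weighted-mean atmospheric temperature
```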

  20. A Class of Estimators for Finite Population Mean in Double Sampling under Nonresponse Using Fractional Raw Moments

    Directory of Open Access Journals (Sweden)

    Manzoor Khan

    2014-01-01

    This paper presents new classes of estimators for the finite population mean under double sampling in the presence of nonresponse, using information on fractional raw moments. The expressions for the mean square error of the proposed classes of estimators are derived up to the first degree of approximation. It is shown that a proposed class of estimators performs better than the usual mean estimator, ratio-type estimators, and the Singh and Kumar (2009) estimator. An empirical study is carried out to demonstrate the performance of a proposed class of estimators.

  1. Faculty Prayer in Catholic Schools: A Survey of Practices and Meaning

    Science.gov (United States)

    Mayotte, Gail

    2010-01-01

    This article presents a research study that utilized a web-based survey to gather data about the communal prayer experiences of faculty members in Catholic elementary and secondary schools in the United States and the meaning that such prayer holds to its participants. Key findings show that faculty prayer experiences take place readily, though…

  2. AUTOMATED UNSUPERVISED CLASSIFICATION OF THE SLOAN DIGITAL SKY SURVEY STELLAR SPECTRA USING k-MEANS CLUSTERING

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Almeida, J.; Allende Prieto, C., E-mail: jos@iac.es, E-mail: callende@iac.es [Instituto de Astrofisica de Canarias, E-38205 La Laguna, Tenerife (Spain)

    2013-01-20

    Large spectroscopic surveys require automated methods of analysis. This paper explores the use of k-means clustering as a tool for automated unsupervised classification of massive stellar spectral catalogs. The classification criteria are defined by the data and the algorithm, with no prior physical framework. We work with a representative set of stellar spectra associated with the Sloan Digital Sky Survey (SDSS) SEGUE and SEGUE-2 programs, which consists of 173,390 spectra from 3800 to 9200 Å sampled on 3849 wavelengths. We classify the original spectra as well as the spectra with the continuum removed. The second set contains only spectral lines and is less dependent on uncertainties of the flux calibration. The classification of the spectra with continuum renders 16 major classes. Roughly speaking, stars are split according to their colors, with enough finesse to distinguish dwarfs from giants of the same effective temperature, but with difficulty separating stars of different metallicities. There are classes corresponding to particular MK types, intrinsically blue stars, dust-reddened stars, stellar systems, and also classes collecting faulty spectra. Overall, there is no one-to-one correspondence between the classes we derive and the MK types. The classification of spectra without continuum renders 13 classes; the color separation is not as sharp, but it distinguishes stars of the same effective temperature and different metallicities. Some classes thus obtained present a fairly small range of physical parameters (200 K in effective temperature, 0.25 dex in surface gravity, and 0.35 dex in metallicity), so the classification can be used to estimate the main physical parameters of some stars at a minimum computational cost. We also analyze the outliers of the classification. Most of them turn out to be failures of the reduction pipeline, but there are also high-redshift QSOs, multiple stellar systems, dust-reddened stars, galaxies, and, finally, odd…

  3. Methods for estimating flow-duration and annual mean-flow statistics for ungaged streams in Oklahoma

    Science.gov (United States)

    Esralew, Rachel A.; Smith, S. Jerrod

    2010-01-01

    Flow statistics can be used to provide decision makers with surface-water information needed for activities such as water-supply permitting, flow regulation, and other water rights issues. Flow statistics could be needed at any location along a stream. Most often, streamflow statistics are needed at ungaged sites, where no flow data are available to compute the statistics. Methods are presented in this report for estimating flow-duration and annual mean-flow statistics for ungaged streams in Oklahoma. Flow statistics included the (1) annual (period of record), (2) seasonal (summer-autumn and winter-spring), and (3) 12 monthly duration statistics, including the 20th, 50th, 80th, 90th, and 95th percentile flow exceedances, and the annual mean-flow (mean of daily flows for the period of record). Flow statistics were calculated from daily streamflow information collected from 235 streamflow-gaging stations throughout Oklahoma and areas in adjacent states. A drainage-area ratio method is the preferred method for estimating flow statistics at an ungaged location that is on a stream near a gage. The method generally is reliable only if the drainage-area ratio of the two sites is between 0.5 and 1.5. Regression equations that relate flow statistics to drainage-basin characteristics were developed for the purpose of estimating selected flow-duration and annual mean-flow statistics for ungaged streams that are not near gaging stations on the same stream. Regression equations were developed from flow statistics and drainage-basin characteristics for 113 unregulated gaging stations. Separate regression equations were developed by using U.S. Geological Survey streamflow-gaging stations in regions with similar drainage-basin characteristics. These equations can increase the accuracy of regression equations used for estimating flow-duration and annual mean-flow statistics at ungaged stream locations in Oklahoma. Streamflow-gaging stations were grouped by selected drainage
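
    The drainage-area ratio method described above amounts to a one-line scaling of the gaged statistic, valid (per the report) only when the area ratio is between 0.5 and 1.5. A minimal sketch with hypothetical numbers (flow in cfs, areas in mi²):

```python
def drainage_area_ratio_estimate(q_gaged, area_gaged, area_ungaged):
    """Transfer a flow statistic from a gage to a nearby ungaged site on the
    same stream by scaling with the drainage-area ratio."""
    ratio = area_ungaged / area_gaged
    if not 0.5 <= ratio <= 1.5:
        raise ValueError(f"area ratio {ratio:.2f} outside the 0.5-1.5 range "
                         "recommended in the report")
    return q_gaged * ratio

# e.g. a median flow of 12.0 cfs at a gage draining 250 mi^2, transferred to
# an ungaged site upstream draining 200 mi^2:
q_ungaged = drainage_area_ratio_estimate(12.0, 250.0, 200.0)  # 9.6 cfs
```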

  4. OPTIMAL SHRINKAGE ESTIMATION OF MEAN PARAMETERS IN FAMILY OF DISTRIBUTIONS WITH QUADRATIC VARIANCE.

    Science.gov (United States)

    Xie, Xianchao; Kou, S C; Brown, Lawrence

    2016-03-01

    This paper discusses the simultaneous inference of mean parameters in a family of distributions with quadratic variance function. We first introduce a class of semi-parametric/parametric shrinkage estimators and establish their asymptotic optimality properties. Two specific cases, the location-scale family and the natural exponential family with quadratic variance function, are then studied in detail. We conduct a comprehensive simulation study to compare the performance of the proposed methods with existing shrinkage estimators. We also apply the method to real data and obtain encouraging results.
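
    The paper's semiparametric estimators generalize the classical shrinkage idea; as a point of reference, the positive-part James-Stein estimator for normal means with known common variance (a special case, not the paper's method) looks like this:

```python
import statistics

def james_stein(means, var=1.0):
    """Positive-part James-Stein estimator shrinking each sample mean
    toward the grand mean (normal means, known common variance).
    Illustrative only: the paper develops more general semiparametric
    shrinkage for families with quadratic variance functions."""
    p = len(means)
    grand = statistics.fmean(means)
    ss = sum((m - grand) ** 2 for m in means)
    # p - 3 (not p - 2) because one degree of freedom estimates the grand mean
    shrink = max(0.0, 1.0 - (p - 3) * var / ss) if ss > 0 else 0.0
    return [grand + shrink * (m - grand) for m in means]

observed = [1.2, 0.8, 2.5, -0.3, 1.1, 0.4, 1.9, 0.0]   # noisy unit-variance means
shrunk = james_stein(observed)
```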

  5. Estimating mountain basin-mean precipitation from streamflow using Bayesian inference

    Science.gov (United States)

    Henn, Brian; Clark, Martyn P.; Kavetski, Dmitri; Lundquist, Jessica D.

    2015-10-01

    Estimating basin-mean precipitation in complex terrain is difficult due to uncertainty in the topographical representativeness of precipitation gauges relative to the basin. To address this issue, we use Bayesian methodology coupled with a multimodel framework to infer basin-mean precipitation from streamflow observations, and we apply this approach to snow-dominated basins in the Sierra Nevada of California. Using streamflow observations, forcing data from lower-elevation stations, the Bayesian Total Error Analysis (BATEA) methodology and the Framework for Understanding Structural Errors (FUSE), we infer basin-mean precipitation, and compare it to basin-mean precipitation estimated using topographically informed interpolation from gauges (PRISM, the Parameter-elevation Regression on Independent Slopes Model). The BATEA-inferred spatial patterns of precipitation show agreement with PRISM in terms of the rank of basins from wet to dry but differ in absolute values. In some of the basins, these differences may reflect biases in PRISM, because some implied PRISM runoff ratios may be inconsistent with the regional climate. We also infer annual time series of basin precipitation using a two-step calibration approach. Assessment of the precision and robustness of the BATEA approach suggests that uncertainty in the BATEA-inferred precipitation is primarily related to uncertainties in hydrologic model structure. Despite these limitations, time series of inferred annual precipitation under different model and parameter assumptions are strongly correlated with one another, suggesting that this approach is capable of resolving year-to-year variability in basin-mean precipitation.

  6. An algorithm for the estimation of road traffic space mean speeds from double loop detector data

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Diaz, M.; Perez Perez, I.

    2016-07-01

    Most algorithms that analyze or forecast road traffic rely on many inputs, but in practice, calculations are usually limited by the available data and measurement equipment. Generally, some of these inputs are substituted by raw or even inappropriate estimations, which in some cases conflict with the fundamentals of traffic flow theory. This paper addresses one common example of these bad practices. Many traffic management centres depend on the data provided by double loop detectors, which supply, among other things, vehicle speeds. The common data treatment is to compute the arithmetic mean of these speeds over different aggregation periods (i.e. the time mean speeds). Time mean speed is not consistent with Edie’s generalized definitions of traffic variables, and therefore it is not the average speed which relates flow to density. This means that current practice begins with an error that can have negative effects in later studies and applications. The algorithm introduced in this paper easily enables the estimation of space mean speeds from the data provided by the loops. It is based on two key hypotheses: stationarity of traffic and log-normal distribution of the individual speeds in each aggregation interval. It could also be used in the case of transient traffic as part of a data fusion methodology. (Author)
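
    The core of the log-normal hypothesis can be sketched in a few lines: the space mean speed is the harmonic-type mean of the spot speeds, and for a log-normal sample it equals exp(mu - sigma²/2) of the log-speeds, which is always below the arithmetic (time) mean. This is only the underlying idea, not the paper's full algorithm; the speed data below are synthetic.

```python
import math
import random
import statistics

def space_mean_speed_lognormal(speeds):
    """Space mean speed under the hypothesis that individual speeds in the
    aggregation interval are log-normal: the harmonic mean of a log-normal
    sample is exp(mu - sigma^2 / 2) of the log-speeds."""
    logs = [math.log(v) for v in speeds]
    return math.exp(statistics.fmean(logs) - statistics.pvariance(logs) / 2.0)

random.seed(1)
speeds = [random.lognormvariate(math.log(90.0), 0.15) for _ in range(200)]  # km/h
time_mean = statistics.fmean(speeds)              # what loop data are usually averaged to
space_mean = space_mean_speed_lognormal(speeds)   # always <= the time mean
```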

  7. Testing Black Market vs. Official PPP: A Pooled Mean Group Estimation Approach

    OpenAIRE

    Goswami, Gour Gobinda; Hossain, Mohammad Zariab

    2013-01-01

    Testing purchasing power parity (PPP) using black market exchange rate data has gained popularity in recent times. It is claimed that black market exchange rate data support the PPP more often than official exchange rate data. In this study, to assess both the long run stability of the exchange rate and the short run dynamics, we employ Pooled Mean Group (PMG) Estimation, developed by Pesaran et al. (1999), on eight groups of countries based on different criteria. Using the famous Reinhart and ...

  8. Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2015-01-01

    In this work we consider the problem of feature enhancement for noise-robust automatic speech recognition (ASR). We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features, which is based on a minimum number of well-established, theoretically consistent… State-of-the-art MFCC feature enhancement algorithms within this class of algorithms, while theoretically suboptimal or based on theoretically inconsistent assumptions, perform close to optimally in the MMSE sense.

  9. A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, Robert J.; Kuhlman, Kristopher L

    2016-05-01

    We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain, computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptic model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and can reduce the number of simulations needed to achieve an acceptable estimate.
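
    The control-variate construction itself is compact: subtract a correlated quantity with known mean, scaled by the regression coefficient. A toy stand-in for the report's setting (estimating E[exp(U)] with U itself as the control, rather than a coarse-model PQI):

```python
import math
import random
import statistics

def control_variate_mean(n, seed=0):
    """Estimate E[exp(U)], U ~ Uniform(0,1), using U as a control variate
    with known mean 1/2. Illustrates the variance-reduction idea; the
    report applies the same construction to transport simulations."""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    xs = [math.exp(u) for u in us]
    mx, mu = statistics.fmean(xs), statistics.fmean(us)
    # optimal coefficient beta = Cov(X, U) / Var(U), estimated from the sample
    beta = (sum((x - mx) * (u - mu) for x, u in zip(xs, us))
            / sum((u - mu) ** 2 for u in us))
    return statistics.fmean(x - beta * (u - 0.5) for x, u in zip(xs, us))

estimate = control_variate_mean(2000)   # true value is e - 1 = 1.71828...
```

With the control variate, the estimator's residual variance is a small fraction of the naive Monte Carlo variance, which is exactly the coarse-mesh accuracy gain the report exploits.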

  10. Estimation of mean and median pO2 values for a composite EPR spectrum.

    Science.gov (United States)

    Ahmad, Rizwan; Vikram, Deepti S; Potter, Lee C; Kuppusamy, Periannan

    2008-06-01

    Electron paramagnetic resonance (EPR)-based oximetry is capable of quantifying oxygen content in samples. However, for a heterogeneous environment with multiple pO2 values, peak-to-peak linewidth of the composite EPR lineshape does not provide a reliable estimate of the overall pO2 in the sample. The estimate, depending on the heterogeneity, can be severely biased towards narrow components. To address this issue, we suggest a postprocessing method to recover the linewidth histogram which can be used in estimating meaningful parameters, such as the mean and median pO2 values. This information, although not as comprehensive as obtained by EPR spectral-spatial imaging, goes beyond what can be generally achieved with conventional EPR spectroscopy. Substantially shorter acquisition times, in comparison to EPR imaging, may prompt its use in clinically relevant models. For validation, simulation and EPR experiment data are presented.

  11. An Improved Weise’s Rule for Efficient Estimation of Stand Quadratic Mean Diameter

    Directory of Open Access Journals (Sweden)

    Róbert Sedmák

    2015-07-01

    The main objective of this study was to explore the accuracy of Weise’s rule of thumb applied to estimation of the quadratic mean diameter of a forest stand. Virtual stands of European beech (Fagus sylvatica L.) across a range of structure types were stochastically generated, and random sampling was simulated. We compared the bias and accuracy of stand quadratic mean diameter estimates employing different ranks of measured stems from a set of the 10 trees nearest to the sampling point. We propose several modifications of the original Weise’s rule based on the measurement and averaging of two different ranks centered on a target rank. In accordance with the original formulation of the empirical rule, we recommend measurement of the 6th stem in rank, corresponding to the 55% sample percentile of the diameter distribution, irrespective of mean diameter size and degree of diameter dispersion. The study also revealed that appropriate two-measurement modifications of Weise’s method, with the 4th and 8th ranks or the 3rd and 9th ranks averaged to the 6th central rank, should be preferred over the classic one-measurement estimation. The modified versions are characterised by improved accuracy (about 25%) without statistically significant bias and with measurement costs comparable to the classic Weise method.
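
    The rule is easy to simulate: the quadratic mean diameter is the square root of the mean squared diameter, and Weise's estimate measures only the 6th stem in ascending rank among 10 sampled trees (roughly the 55th percentile). A sketch on a synthetic normal stand (the stand parameters are invented for illustration):

```python
import math
import random

def quadratic_mean_diameter(diams):
    """dq = sqrt(mean of squared diameters): the diameter of the tree of
    mean basal area."""
    return math.sqrt(sum(d * d for d in diams) / len(diams))

random.seed(7)
stand = [random.gauss(32.0, 6.0) for _ in range(5000)]  # toy beech stand, diameters in cm

dq = quadratic_mean_diameter(stand)

# Weise-style estimate: at each of 400 sample points, take 10 stems but
# measure only the 6th in ascending diameter rank, then average.
sixth_ranks = []
for _ in range(400):
    sample = sorted(random.sample(stand, 10))
    sixth_ranks.append(sample[5])          # 6th stem in rank
weise_estimate = sum(sixth_ranks) / len(sixth_ranks)
```

Because dq exceeds the arithmetic mean diameter whenever diameters vary, measuring a stem slightly above the median (the ~55th percentile) tracks it well.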

  12. Estimating mean change in population salt intake using spot urine samples.

    Science.gov (United States)

    Petersen, Kristina S; Wu, Jason H Y; Webster, Jacqui; Grimes, Carley; Woodward, Mark; Nowson, Caryl A; Neal, Bruce

    2017-10-01

    Spot urine samples are easier to collect than 24-h urine samples and have been used with estimating equations to derive the mean daily salt intake of a population. Whether equations using data from spot urine samples can also be used to estimate change in mean daily population salt intake over time is unknown. We compared estimates of change in mean daily population salt intake based upon 24-h urine collections with estimates derived using equations based on spot urine samples. Paired and unpaired 24-h urine samples and spot urine samples were collected from individuals in two Australian populations, in 2011 and 2014. Estimates of change in daily mean population salt intake between 2011 and 2014 were obtained directly from the 24-h urine samples and by applying established estimating equations (Kawasaki, Tanaka, Mage, Toft, INTERSALT) to the data from spot urine samples. Differences between 2011 and 2014 were calculated using mixed models. A total of 1000 participants provided a 24-h urine sample and a spot urine sample in 2011, and 1012 did so in 2014 (paired samples n = 870; unpaired samples n = 1142). The participants were community-dwelling individuals living in the State of Victoria or the town of Lithgow in the State of New South Wales, Australia, with a mean age of 55 years in 2011. The mean (95% confidence interval) difference in population salt intake between 2011 and 2014 determined from the 24-h urine samples was -0.48 g/day (-0.74 to -0.21; P < 0.001). The corresponding difference estimated from the spot urine samples was -0.24 g/day (-0.42 to -0.06; P = 0.01) using the Tanaka equation, -0.42 g/day (-0.70 to -0.13; P = 0.004) using the Kawasaki equation, -0.51 g/day (-1.00 to -0.01; P = 0.046) using the Mage equation, -0.26 g/day (-0.42 to -0.10; P = 0.001) using the Toft equation, -0.20 g/day (-0.32 to -0.09; P = 0.001) using the INTERSALT equation and -0.27 g/day (-0.39 to -0.15; P  0.058). Separate analysis of the unpaired and paired data showed that detection of…

  13. Assessment of sampling strategies for estimation of site mean concentrations of stormwater pollutants.

    Science.gov (United States)

    McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana

    2018-02-01

    The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments were used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data were used to calculate the Event Mean Concentration (EMC) of each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter at each site. 17 different sampling strategies, including random and fixed strategies, were tested to estimate SMCs, which were compared with the measured SMCs. The ratios of estimated to measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that random sampling strategies were the most promising for reproducing the SMCs of TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences between taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in collecting more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments is needed to characterise the SMC of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated with the 90% confidence interval ratio of the estimated to measured SMCs (R2 = 0.49), which can be used to determine the sampling frequency needed to accurately estimate the SMCs of pollutants.
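
    The SMC definition used in this study is simply a volume-weighted mean of the per-event EMCs; with hypothetical event volumes and TSS concentrations:

```python
def site_mean_concentration(event_volumes, event_mean_concs):
    """SMC = volume-weighted average of per-event EMCs: total load over
    total runoff volume."""
    total_load = sum(v * c for v, c in zip(event_volumes, event_mean_concs))
    return total_load / sum(event_volumes)

# three storm events: runoff volume (m^3) and TSS EMC (mg/L), invented numbers
smc = site_mean_concentration([1200.0, 450.0, 800.0], [95.0, 160.0, 70.0])
```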

  14. American Community Survey (ACS) 5-Year Estimates for Coastal Geographies

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The American Community Survey (ACS) is an ongoing statistical survey that samples a small percentage of the population every year. These data have been apportioned...

  15. Variable selection and estimation for longitudinal survey data

    KAUST Repository

    Wang, Li; Wang, Suojin; Wang, Guannan

    2014-01-01

    There is wide interest in studying longitudinal surveys where sample subjects are observed successively over time. Longitudinal surveys have been used in many areas today, for example, in the health and social sciences, to explore relationships

  16. More recent robust methods for the estimation of mean and standard deviation of data

    International Nuclear Information System (INIS)

    Kanisch, G.

    2003-01-01

    Outliers in a data set result in biased values of the mean and standard deviation. One way to improve the estimation of a mean is to apply tests to identify outliers and to exclude them from the calculations. Tests according to Grubbs or to Dixon, which are frequently used in practice, especially within laboratory intercomparisons, are not very efficient in identifying outliers. For more than ten years now, so-called robust methods have been used more and more; these determine the mean and standard deviation by iteration, down-weighting values far from the mean and thereby diminishing the impact of outliers. In 1989 the Analytical Methods Committee of the Royal Society of Chemistry published such a robust method. Since 1993 the US Environmental Protection Agency has published a more efficient and quite versatile method, in which the mean and standard deviation are calculated by iteration and application of a special weight function that down-weights outlier candidates. In 2000, W. Cofino et al. published a very efficient robust method which works quite differently from the others: it applies methods taken from the basics of quantum mechanics, such as “wave functions” associated with each laboratory mean value, and matrix algebra (solving eigenvalue problems). In contrast to the other methods, it includes the individual measurement uncertainties. (orig.)
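
    The iterate-and-down-weight idea common to the AMC and EPA methods can be illustrated with a Huber-type winsorized mean; this sketch is representative of the family, not a reimplementation of either published algorithm:

```python
import statistics

def robust_mean(data, k=1.5, tol=1e-6, max_iter=100):
    """Huber-type iteratively reweighted mean: values beyond k robust
    standard deviations (MAD-based scale) of the current mean are clipped,
    shrinking the influence of outliers on the next iterate."""
    mu = statistics.median(data)
    mad = statistics.median([abs(x - mu) for x in data])
    s = 1.4826 * mad if mad > 0 else 1.0   # MAD-based robust scale
    for _ in range(max_iter):
        clipped = [min(max(x, mu - k * s), mu + k * s) for x in data]
        new_mu = statistics.fmean(clipped)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 25.0]   # one gross outlier
```

On this data the ordinary mean is pulled to 12.5 by the outlier, while the robust mean stays near 10.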

  17. Estimating the Spatial Distribution of Groundwater Age Using Synoptic Surveys of Environmental Tracers in Streams

    Science.gov (United States)

    Gardner, W. P.

    2017-12-01

    A model which simulates tracer concentration in surface water as a function of the age distribution of groundwater discharge is used to characterize groundwater flow systems at a variety of spatial scales. We develop the theory behind the model and demonstrate its application in several groundwater systems of local to regional scale. A 1-D stream transport model, which includes advection, dispersion, gas exchange, first-order decay and groundwater inflow, is coupled to a lumped-parameter model that calculates the concentration of environmental tracers in discharging groundwater as a function of the groundwater residence-time distribution. The lumped parameters, which describe the residence-time distribution, are allowed to vary spatially, and multiple environmental tracers can be simulated. This model allows us to calculate the longitudinal profile of tracer concentration in streams as a function of the spatially variable groundwater age distribution. By fitting model results to observations of stream chemistry and discharge, we can then estimate the spatial distribution of groundwater age. The volume of groundwater discharge to streams can be estimated using a subset of environmental tracers, applied tracers, synoptic stream gauging or other methods, and the age of groundwater can then be estimated from the previously calculated groundwater discharge and the observed environmental tracer concentrations. Synoptic surveys of SF6, CFCs, 3H and 222Rn, along with measured stream discharge, are used to estimate the groundwater inflow distribution and mean age for regional-scale surveys of the Berland River in west-central Alberta. We find that groundwater entering the Berland has observable age, and that the age estimated using our stream survey is of similar order to that from limited samples of groundwater wells in the region. Our results show that the stream can be used as an easily accessible location at which to constrain the regional-scale spatial distribution of groundwater age.
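
    The simplest lumped-parameter model of this kind is the exponential residence-time distribution, for which a decaying tracer with steady input obeys C = C_in / (1 + λτ) at steady state; inverting it gives the mean age from a single concentration ratio. A one-parameter sketch with invented tritium values (the paper fits spatially varying parameters against several tracers along the stream):

```python
import math

# decay constant of tritium (half-life 12.32 yr), the example tracer here
LAMBDA_3H = math.log(2) / 12.32   # 1/yr

def mean_age_exponential(c_measured, c_input, lam=LAMBDA_3H):
    """Invert the steady-state exponential residence-time model
    C = C_in / (1 + lam * tau) for the mean groundwater age tau (years)."""
    return (c_input / c_measured - 1.0) / lam

# hypothetical numbers: groundwater inflow carries 4 TU against an 8 TU input
tau = mean_age_exponential(c_measured=4.0, c_input=8.0)
```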

  18. ARK: Aggregation of Reads by K-Means for Estimation of Bacterial Community Composition.

    Science.gov (United States)

    Koslicki, David; Chatterjee, Saikat; Shahrivar, Damon; Walker, Alan W; Francis, Suzanna C; Fraser, Louise J; Vehkaperä, Mikko; Lan, Yueheng; Corander, Jukka

    2015-01-01

    Estimation of bacterial community composition from high-throughput sequenced 16S rRNA gene amplicons is a key task in microbial ecology. Since the sequence data from each sample typically consist of a large number of reads and are adversely impacted by different levels of biological and technical noise, accurate analysis of such large datasets is challenging. There has been a recent surge of interest in using compressed-sensing-inspired and convex-optimization-based methods to solve the estimation problem for bacterial community composition. These methods typically rely on summarizing the sequence data by frequencies of low-order k-mers and matching this information statistically with a taxonomically structured database. Here we show that the accuracy of the resulting community composition estimates can be substantially improved by aggregating the reads from a sample with an unsupervised machine learning approach prior to the estimation phase. The aggregation of reads is a pre-processing step in which a standard K-means clustering algorithm partitions a large set of reads into subsets at reasonable computational cost, providing several vectors of first-order statistics instead of only a single statistical summarization in terms of k-mer frequencies. The output of the clustering is then processed further to obtain the final estimate for each sample. The resulting method, called Aggregation of Reads by K-means (ARK), is based on a statistical argument via a mixture density formulation. ARK is found to improve the fidelity and robustness of several recently introduced methods, with only a modest increase in computational complexity. An open source, platform-independent implementation of the method in the Julia programming language is freely available at https://github.com/dkoslicki/ARK. A Matlab implementation is available at http://www.ee.kth.se/ctsoftware.
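
    The pre-processing step amounts to computing a k-mer frequency vector per read and clustering those vectors. A self-contained sketch with a hand-rolled Lloyd's algorithm and four toy reads (ARK itself uses standard K-means at scale, then estimates composition per cluster):

```python
import random
from collections import Counter
from itertools import product

def kmer_freqs(read, k=2):
    """Frequency vector of overlapping k-mers: the low-order summary
    statistics the composition estimators work on."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(read[i:i + k] for i in range(len(read) - k + 1))
    total = sum(counts.values()) or 1
    return [counts[m] / total for m in kmers]

def kmeans(vectors, n_clusters, n_iter=20, seed=0):
    """Minimal Lloyd's algorithm, standing in for the standard K-means
    step ARK uses to partition reads before per-cluster estimation."""
    rng = random.Random(seed)

    def nearest(v, centers):
        return min(range(len(centers)),
                   key=lambda j: sum((a - b) ** 2 for a, b in zip(v, centers[j])))

    centers = [list(v) for v in rng.sample(vectors, n_clusters)]
    for _ in range(n_iter):
        groups = [[] for _ in centers]
        for v in vectors:
            groups[nearest(v, centers)].append(v)
        for j, g in enumerate(groups):
            if g:  # recompute each centroid from its assigned vectors
                centers[j] = [sum(col) / len(g) for col in zip(*g)]
    return [nearest(v, centers) for v in vectors]

reads = ["ACGTACGTAC", "CGTACGTACG", "TTTTGTTTTG", "GTTTTGTTTT"]
labels = kmeans([kmer_freqs(r) for r in reads], n_clusters=2)
```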

  19. Nano-hardness estimation by means of Ar+ ion etching

    Energy Technology Data Exchange (ETDEWEB)

    Bartali, R., E-mail: bartali@fbk.eu; Micheli, V.; Gottardi, G.; Vaccari, A.; Safeen, M.K.; Laidani, N.

    2015-08-31

    When coatings are at the nanoscale, their mechanical properties cannot easily be estimated by conventional methods, due to tip shape, instrument resolution, roughness, and substrate effects. In this paper, we propose a semi-empirical method to evaluate the mechanical properties of thin films based on the sputtering rate induced by Ar+ ion bombardment. The Ar+ ion bombardment was produced by the ion gun of an Auger electron spectroscopy (AES) instrument. This procedure was applied to a series of coatings with different structures (carbon films) and a series of coatings with different densities (ZnO thin films). The coatings were deposited on silicon substrates by RF sputtering plasma. The results show that, as predicted by Insepov et al., there is a correlation between hardness and sputtering rate. Using reference materials and a simple power-law equation, estimation of the nano-hardness using an Ar+ beam is possible. - Highlights: • ZnO and carbon films were grown on silicon using PVD. • The growth temperature was room temperature. • The hardness of the coatings was estimated by nanoindentation. • The resistance of the materials to mechanical damage induced by an Ar+ ion gun (AES) was evaluated. • A power-law relation between hardness and erosion rate was found.

  20. A survey on OFDM channel estimation techniques based on denoising strategies

    Directory of Open Access Journals (Sweden)

    Pallaviram Sure

    2017-04-01

    Channel estimation forms the heart of any orthogonal frequency division multiplexing (OFDM) based wireless communication receiver. Frequency-domain pilot-aided channel estimation techniques are either least squares (LS) based or minimum mean square error (MMSE) based. LS-based techniques are computationally less complex and, unlike MMSE ones, do not require a priori knowledge of channel statistics (KCS). However, the mean square error (MSE) performance of the channel estimator is better with MMSE-based techniques than with LS-based techniques. To enhance the MSE performance of LS-based techniques, a variety of denoising strategies have been developed in the literature, which are applied to the LS-estimated channel impulse response (CIR). The advantage of denoising-threshold-based LS techniques is that they do not require KCS but still render near-optimal MMSE performance similar to MMSE-based techniques. In this paper, a detailed survey of various existing denoising strategies, with a comparative discussion of these strategies, is presented.
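
    The LS-plus-CIR-denoising pipeline the survey covers can be sketched end to end: divide received pilots by transmitted pilots, transform to the impulse response, suppress the noise-only tail, and transform back. This sketch zeroes taps beyond a known channel length rather than applying one of the surveyed adaptive thresholds, and all signal values are synthetic:

```python
import cmath
import random

def dft(x, inverse=False):
    """Naive O(n^2) DFT; sufficient for a small illustration."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[t] * cmath.exp(s * 2j * cmath.pi * k * t / n) for t in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def ls_estimate(rx, pilots):
    """Per-subcarrier least-squares estimate: H_LS[k] = Y[k] / X[k]."""
    return [y / x for y, x in zip(rx, pilots)]

def denoise_cir(h_ls, n_taps):
    """Transform the LS estimate to the impulse response and zero the tail
    beyond n_taps, discarding the noise-only taps."""
    cir = dft(h_ls, inverse=True)
    cir = [c if i < n_taps else 0.0 for i, c in enumerate(cir)]
    return dft(cir)

rng = random.Random(3)
n = 8
h_true = dft([1.0, 0.5] + [0.0] * (n - 2))   # frequency response of a 2-tap channel
pilots = [1.0 + 0.0j] * n
rx = [h * p + complex(rng.gauss(0, 0.2), rng.gauss(0, 0.2))
      for h, p in zip(h_true, pilots)]       # received pilots with complex noise
h_raw = ls_estimate(rx, pilots)
h_denoised = denoise_cir(h_raw, n_taps=2)
```

Because the true channel energy is confined to the first two taps, zeroing the tail removes that portion of the noise without touching the signal.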

  1. Estimation of muscle fatigue by ratio of mean frequency to average rectified value from surface electromyography.

    Science.gov (United States)

    Fernando, Jeffry Bonar; Yoshioka, Mototaka; Ozawa, Jun

    2016-08-01

    A new method to estimate muscle fatigue quantitatively from surface electromyography (EMG) is proposed. The ratio of mean frequency (MNF) to average rectified value (ARV) is used as the index of muscle fatigue, and fatigue is detected when MNF/ARV falls below a pre-determined or pre-calculated baseline. MNF/ARV gives a larger distinction between fatigued and non-fatigued muscle. Experimental results show the effectiveness of our method in estimating muscle fatigue more correctly compared to conventional methods. An early evaluation based on the initial value of MNF/ARV and the subjective time at which subjects start feeling fatigue also indicates the possibility of calculating the baseline from the initial value of MNF/ARV.
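
    Both ingredients of the index are standard EMG features: MNF is the centroid of the power spectrum and ARV the mean absolute amplitude. A sketch on a synthetic 80 Hz "EMG" tone (real surface EMG is broadband; this is only to make the computation concrete):

```python
import cmath
import math

def mean_frequency(signal, fs):
    """MNF: centroid of the one-sided power spectrum (naive DFT, small n)."""
    n = len(signal)
    power, freqs = [], []
    for k in range(1, n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        power.append(abs(coeff) ** 2)
        freqs.append(k * fs / n)
    return sum(f * p for f, p in zip(freqs, power)) / sum(power)

def arv(signal):
    """ARV: average rectified value, i.e. mean absolute amplitude."""
    return sum(abs(x) for x in signal) / len(signal)

fs = 1000.0                                                 # sampling rate, Hz
emg = [0.4 * math.sin(2 * math.pi * 80.0 * i / fs) for i in range(128)]
mnf = mean_frequency(emg, fs)
fatigue_index = mnf / arv(emg)   # the MNF/ARV index of the paper
```

During fatigue, MNF drops and ARV rises, so the ratio falls faster than either feature alone, which is the motivation for using it as the index.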

  2. Least mean square fourth based microgrid state estimation algorithm using the internet of things technology.

    Science.gov (United States)

    Rana, Md Masud

    2017-01-01

    This paper proposes an innovative internet of things (IoT) based communication framework for monitoring microgrid under the condition of packet dropouts in measurements. First of all, the microgrid incorporating the renewable distributed energy resources is represented by a state-space model. The IoT embedded wireless sensor network is adopted to sense the system states. Afterwards, the information is transmitted to the energy management system using the communication network. Finally, the least mean square fourth algorithm is explored for estimating the system states. The effectiveness of the developed approach is verified through numerical simulations.
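
    The least-mean-fourth (LMF) update at the core of the estimator minimizes the fourth power of the error, giving a cubed-error correction term. A minimal sketch on a toy 2-tap system identification task rather than the paper's full IoT microgrid state model:

```python
import random

def lmf_step(w, x, d, mu=0.05):
    """One least-mean-fourth update: stochastic steepest descent on E[e^4]
    yields the cubed-error correction w <- w + mu * e^3 * x."""
    e = d - sum(wi * xi for wi, xi in zip(w, x))
    return [wi + mu * e ** 3 * xi for wi, xi in zip(w, x)], e

# identify a 2-tap system [0.6, -0.2] from noiseless input/output pairs
rng = random.Random(5)
true_w = [0.6, -0.2]
w = [0.0, 0.0]
for _ in range(20000):
    x = [rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)]
    d = sum(ti * xi for ti, xi in zip(true_w, x))
    w, _ = lmf_step(w, x, d)
```

Compared with the LMS update (mu * e * x), the cubed error makes corrections small when the error is small, which is why LMF convergence flattens near the optimum.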

  3. Least mean square fourth based microgrid state estimation algorithm using the internet of things technology.

    Directory of Open Access Journals (Sweden)

    Md Masud Rana

    Full Text Available This paper proposes an innovative internet of things (IoT) based communication framework for monitoring a microgrid under the condition of packet dropouts in measurements. First of all, the microgrid incorporating the renewable distributed energy resources is represented by a state-space model. The IoT embedded wireless sensor network is adopted to sense the system states. Afterwards, the information is transmitted to the energy management system using the communication network. Finally, the least mean square fourth algorithm is explored for estimating the system states. The effectiveness of the developed approach is verified through numerical simulations.
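The core of the least mean square fourth (LMF) recursion is a cubed-error update (standard LMS uses the error itself). A minimal scalar-measurement sketch under invented dimensions and noise levels, with no IoT transport or packet-dropout handling:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states = 4
x_true = rng.standard_normal(n_states)   # unknown system state to track
w = np.zeros(n_states)                   # running LMF estimate
mu = 1e-3                                # step size; must be small for stability

for _ in range(20000):
    h = rng.standard_normal(n_states)                # measurement/regressor vector
    d = h @ x_true + 0.01 * rng.standard_normal()    # noisy scalar measurement
    e = d - h @ w                                    # a-priori estimation error
    w += mu * e**3 * h                               # LMF update: error cubed

err = np.linalg.norm(w - x_true)   # should be small after convergence
```

The cubed error makes the algorithm take large steps when the error is large and very gentle steps near convergence, which gives low misadjustment under light-tailed noise at the cost of a stricter stability requirement on the step size.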

  4. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability and the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown

  5. Estimates of the abundance of minke whales (Balaenoptera acutorostrata) from Faroese and Icelandic NASS shipboard surveys

    Directory of Open Access Journals (Sweden)

    Daniel G Pike

    2009-09-01

    Full Text Available North Atlantic Sightings Surveys for cetaceans were carried out in the Northeast and Central Atlantic in 1987, 1989, 1995 and 2001. Here we provide estimates of density and abundance for minke whales from the Faroese and Icelandic ship surveys. The estimates are not corrected for availability or perception biases. Double platform data collected in 2001 indicate that perception bias is likely considerable for this species. However, comparison of corrected estimates of density from aerial surveys with a ship survey estimate from the same area suggests that ship surveys can be nearly unbiased under optimal survey conditions with high searching effort. There were some regional changes in density over the period but no overall changes in density and abundance. Given the recent catch history for minke whales in this area, we would not expect to see changes in abundance due to exploitation that would be detectable with these surveys.

  6. How can streamflow and climate-landscape data be used to estimate baseflow mean response time?

    Science.gov (United States)

    Zhang, Runrun; Chen, Xi; Zhang, Zhicai; Soulsby, Chris; Gao, Man

    2018-02-01

    Mean response time (MRT) is a metric describing the propagation of catchment hydraulic behavior that reflects both hydro-climatic conditions and catchment characteristics. To provide a comprehensive understanding of catchment response over longer time scales for hydraulic processes, the MRT function for baseflow generation was derived using an instantaneous unit hydrograph (IUH) model that describes the subsurface response to effective rainfall inputs. IUH parameters were estimated based on the "match test" between the autocorrelation functions (ACFs) derived from the filtered baseflow time series and from the IUH parameters, under the GLUE framework. Regionalization of MRT was conducted using the MRT estimates and hydroclimate-landscape indices in 22 sub-basins of the Jinghe River Basin (JRB) in the Loess Plateau of northwest China. Results indicate there is strong equifinality in the determination of the best parameter sets, but the median values of the MRT estimates are relatively stable in the acceptable range of the parameters. MRTs vary markedly over the studied sub-basins, ranging from tens of days to more than a year. Climate, topography and geomorphology were identified as three first-order controls on recharge-baseflow response processes. Human activities involving the cultivation of permanent crops may elongate the baseflow MRT and hence increase the dynamic storage. Cross validation suggests the model can be used to estimate MRTs in ungauged catchments in similar regions throughout the Loess Plateau. The proposed method provides a systematic approach for MRT estimation and regionalization in terms of hydroclimate and catchment characteristics, which is helpful for sustainable water resources utilization and ecological protection in the Loess Plateau.

  7. Estimates of mean consequences and confidence bounds on the mean associated with low-probability seismic events in total system performance assessments

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James

    2007-01-01

    An approach is described to estimate mean consequences and confidence bounds on the mean of seismic events with low probability of breaching components of the engineered barrier system. The approach is aimed at complementing total system performance assessment models used to understand consequences of scenarios leading to radionuclide releases in geologic nuclear waste repository systems. The objective is to develop an efficient approach to estimate mean consequences associated with seismic events of low probability, employing data from a performance assessment model with a modest number of Monte Carlo realizations. The derived equations and formulas were tested with results from a specific performance assessment model. The derived equations appear to be one method to estimate mean consequences without having to use a large number of realizations. (authors)

  8. Mechanisms Controlling Global Mean Sea Surface Temperature Determined From a State Estimate

    Science.gov (United States)

    Ponte, R. M.; Piecuch, C. G.

    2018-04-01

    Global mean sea surface temperature (T¯) is a variable of primary interest in studies of climate variability and change. The temporal evolution of T¯ can be influenced by surface heat fluxes (F¯) and by diffusion (D¯) and advection (A¯) processes internal to the ocean, but quantifying the contribution of these different factors from data alone is prone to substantial uncertainties. Here we derive a closed T¯ budget for the period 1993-2015 based on a global ocean state estimate, which is an exact solution of a general circulation model constrained to most extant ocean observations through advanced optimization methods. The estimated average temperature of the top (10-m thick) level in the model, taken to represent T¯, shows relatively small variability at most time scales compared to F¯, D¯, or A¯, reflecting the tendency for largely balancing effects from all the latter terms. The seasonal cycle in T¯ is mostly determined by small imbalances between F¯ and D¯, with negligible contributions from A¯. While D¯ seems to simply damp F¯ at the annual period, a different dynamical role for D¯ at semiannual period is suggested by it being larger than F¯. At periods longer than annual, A¯ contributes importantly to T¯ variability, pointing to the direct influence of the variable ocean circulation on T¯ and mean surface climate.

  9. Estimating Horizontal Displacement between DEMs by Means of Particle Image Velocimetry Techniques

    Directory of Open Access Journals (Sweden)

    Juan F. Reinoso

    2015-12-01

    Full Text Available To date, digital terrain model (DTM) accuracy has been studied almost exclusively by computing its height variable. However, the largely ignored horizontal component bears a great influence on the positional accuracy of certain linear features, e.g., hydrological features. In an effort to fill this gap, we propose a means of measurement different from the geomatic approach, drawn from fluid mechanics (water and air flows) and aerodynamics. The particle image velocimetry (PIV) algorithm is proposed as an estimator of horizontal differences between digital elevation models (DEMs) in grid format. After applying a scale factor to the displacement estimated by the PIV algorithm, the mean error predicted is around one-seventh of the cell size of the DEM with the greatest spatial resolution, and around one-nineteenth of the cell size of the DEM with the least spatial resolution. Our methodology allows all kinds of DTMs to be compared once they are transformed into DEM format, while also allowing comparison of data from diverse capture methods, e.g., LiDAR versus photogrammetric data sources.
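Per interrogation window, the PIV-style displacement estimate reduces to locating a cross-correlation peak between the two grids. A minimal integer-cell sketch on a synthetic grid (real PIV adds windowing and subpixel peak interpolation, and the synthetic shift below is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
dem_a = rng.standard_normal((128, 128))        # stand-in for a DEM window
shift = (3, -5)                                # known displacement in cells
dem_b = np.roll(dem_a, shift, axis=(0, 1))     # horizontally shifted copy

# Circular cross-correlation via FFT, as in PIV interrogation windows
corr = np.fft.ifft2(np.fft.fft2(dem_a).conj() * np.fft.fft2(dem_b)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)

# Map the peak position to a signed displacement (wrap-around aware)
est = tuple((p + s // 2) % s - s // 2 for p, s in zip(peak, corr.shape))
# est recovers the imposed shift of (3, -5) cells
```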

  10. INCLUSION RATIO BASED ESTIMATOR FOR THE MEAN LENGTH OF THE BOOLEAN LINE SEGMENT MODEL WITH AN APPLICATION TO NANOCRYSTALLINE CELLULOSE

    Directory of Open Access Journals (Sweden)

    Mikko Niilo-Rämä

    2014-06-01

    Full Text Available A novel estimator for estimating the mean length of fibres is proposed for censored data observed in square-shaped windows. Instead of observing the fibre lengths, we observe the ratio between the intensity estimates of minus-sampling and plus-sampling. It is well-known that both intensity estimators are biased. In the current work, we derive the ratio of these biases as a function of the mean length assuming a Boolean line segment model with exponentially distributed lengths and uniformly distributed directions. Having the observed ratio of the intensity estimators, the inverse of the derived function is suggested as a new estimator for the mean length. For this estimator, an approximation of its variance is derived. The accuracies of the approximations are evaluated by means of simulation experiments. The novel method is compared to other methods and applied to real-world industrial data from nanocrystalline cellulose.

  11. Estimation of Finite Population Mean in Multivariate Stratified Sampling under Cost Function Using Goal Programming

    Directory of Open Access Journals (Sweden)

    Atta Ullah

    2014-01-01

    Full Text Available In practical utilization of the stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. The allocation of sample sizes becomes complicated when more than one characteristic is observed from each selected unit in a sample. In many real life situations, a linear cost function of the stratum sample size n_h is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
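For orientation, the classical optimum (Neyman-type) allocation under a linear cost function, the baseline that the paper's nonlinear-cost goal-programming formulation generalizes, can be sketched as follows; all strata figures are hypothetical:

```python
import numpy as np

N_h = np.array([500.0, 300.0, 200.0])   # stratum sizes
S_h = np.array([4.0, 2.0, 1.0])         # stratum standard deviations
c_h = np.array([1.0, 2.0, 4.0])         # per-unit sampling cost in each stratum
budget = 300.0                          # total linear-cost budget

# Optimum allocation: n_h proportional to N_h * S_h / sqrt(c_h),
# scaled so that the allocation exactly exhausts the budget
a_h = N_h * S_h / np.sqrt(c_h)
n_h = budget * a_h / np.sum(N_h * S_h * np.sqrt(c_h))
# variable, cheap-to-sample strata receive the largest samples
```

With travel costs the objective is no longer separable in n_h, which is why the paper resorts to integer nonlinear multiobjective programming instead of this closed form.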

  12. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  13. Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys

    Science.gov (United States)

    Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik

    2011-01-01

    The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...

  14. Using survey data on inflation expectations in the estimation of learning and rational expectations models

    NARCIS (Netherlands)

    Ormeño, A.

    2012-01-01

    Do survey data on inflation expectations contain useful information for estimating macroeconomic models? I address this question by using survey data in the New Keynesian model by Smets and Wouters (2007) to estimate and compare its performance when solved under the assumptions of Rational

  15. Estimating daily minimum, maximum, and mean near surface air temperature using hybrid satellite models across Israel.

    Science.gov (United States)

    Rosenfeld, Adar; Dorman, Michael; Schwartz, Joel; Novack, Victor; Just, Allan C; Kloog, Itai

    2017-11-01

    Meteorological stations measure air temperature (Ta) accurately with high temporal resolution, but usually suffer from limited spatial resolution due to their sparse distribution across rural, undeveloped or less populated areas. Remote sensing satellite-based measurements provide daily surface temperature (Ts) data in high spatial and temporal resolution and can improve the estimation of daily Ta. In this study we developed spatiotemporally resolved models which allow us to predict three daily parameters: Ta Max (daytime), 24 h mean, and Ta Min (nighttime) on a fine 1 km grid across the state of Israel. We used and compared both the Aqua and Terra MODIS satellites. We used linear mixed effect models, IDW (inverse distance weighted) interpolations and thin plate splines (using a smooth nonparametric function of longitude and latitude) to first calibrate between Ts and Ta in those locations where we have available data for both, and used that calibration to fill in neighboring cells without surface monitors or missing Ts. Out-of-sample ten-fold cross validation (CV) was used to quantify the accuracy of our predictions. Our model performance was excellent for both days with and without available Ts observations for both Aqua and Terra (CV Aqua R² results: min 0.966, mean 0.986, max 0.967; CV Terra R² results: min 0.965, mean 0.987, max 0.968). Our research shows that daily min, mean and max Ta can be reliably predicted using daily MODIS Ts data across Israel, with high accuracy even for days without Ta or Ts data. These predictions can be used as three separate Ta exposures in epidemiology studies for better diurnal exposure assessment. Copyright © 2017 Elsevier Inc. All rights reserved.
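Stripped of the mixed-effects and spline machinery, the calibrate-then-fill idea can be sketched with ordinary least squares on synthetic data; the linear Ts-Ta relation, noise level and cell counts below are assumptions, not the study's model:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical grid cells: satellite Ts available everywhere,
# station Ta observed only at a subset of monitored cells
ts = rng.uniform(10, 40, 200)
ta_true = 0.8 * ts + 2.0                        # assumed linear Ts-Ta relation
monitored = rng.choice(200, 50, replace=False)
ta_obs = ta_true[monitored] + rng.normal(0, 0.5, 50)   # noisy station readings

# Calibrate Ta ~ Ts where both exist (ordinary least squares)
A = np.column_stack([ts[monitored], np.ones(50)])
slope, intercept = np.linalg.lstsq(A, ta_obs, rcond=None)[0]

# Fill in cells without monitors using the fitted calibration
ta_pred = slope * ts + intercept
rmse = np.sqrt(np.mean((ta_pred - ta_true) ** 2))   # small prediction error
```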

  16. Estimation of mean glandular dose for patients who undergo mammography and studying the factors affecting it

    Science.gov (United States)

    Barzanje, Sana L. N. H.; Harki, Edrees M. Tahir Nury

    2017-09-01

    The objective of this study was to determine the mean glandular dose (MGD) during diagnostic mammography. The study was carried out in two hospitals in Hawler city in the Kurdistan region of Iraq; the exposure parameters kVp and mAs were recorded for 40 patients undergoing mammography. The MGD was estimated by multiplying the entrance surface dose (ESD) by the normalized glandular dose (Dn). The ESD was measured indirectly by measuring the output radiation (mGy/mAs) using a PalmRAD 907 (Geiger) detector. The results showed that the mean and standard deviation of MGD for Screen Film Mammography and Digital Mammography are (0.95±0.18) mGy and (0.99±0.26) mGy, respectively, and that there is a significant difference between the MGD for Screen Film Mammography and Digital Mammography views (p ≤ 0.05). The mean value and standard deviation of MGD for screen film mammography are (0.96±0.21) mGy for the CC projection and (1.03±0.3) mGy for the MLO projection, while for digital mammography they are (0.92±0.17) mGy for the CC projection and (0.98±0.2) mGy for the MLO projection. The effects of kVp and mAs on MGD were also studied, showing that, in general, as kVp and mAs increase the MGD increases accordingly in both mammography systems.
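The dose arithmetic described reduces to two multiplications; a minimal sketch with illustrative values (the output, mAs and Dn below are hypothetical, not the study's measurements):

```python
# ESD from measured tube output and exposure, then MGD via the
# tabulated normalized glandular dose coefficient Dn
output = 0.12          # measured output radiation, mGy/mAs (hypothetical)
mas = 80               # tube loading used for the exposure, mAs
dn = 0.20              # normalized glandular dose coefficient (tabulated)

esd = output * mas     # entrance surface dose: 9.6 mGy
mgd = esd * dn         # mean glandular dose: 1.92 mGy
```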

  17. Estimated rate of agricultural injury: the Korean Farmers’ Occupational Disease and Injury Survey

    OpenAIRE

    Chae, Hyeseon; Min, Kyungdoo; Youn, kanwoo; Park, Jinwoo; Kim, Kyungran; Kim, Hyocher; Lee, Kyungsuk

    2014-01-01

    Objectives This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. Methods The first Korean Farmers’ Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. ...

  18. Utility Estimation for Pediatric Vesicoureteral Reflux: Methodological Considerations Using an Online Survey Platform.

    Science.gov (United States)

    Tejwani, Rohit; Wang, Hsin-Hsiao S; Lloyd, Jessica C; Kokorowski, Paul J; Nelson, Caleb P; Routh, Jonathan C

    2017-03-01

    The advent of online task distribution has opened a new avenue for efficiently gathering community perspectives needed for utility estimation. Methodological consensus for estimating pediatric utilities is lacking, with disagreement over whom to sample, what perspective to use (patient vs parent) and whether instrument induced anchoring bias is significant. We evaluated what methodological factors potentially impact utility estimates for vesicoureteral reflux. Cross-sectional surveys using a time trade-off instrument were conducted via the Amazon Mechanical Turk® (https://www.mturk.com) online interface. Respondents were randomized to answer questions from child, parent or dyad perspectives on the utility of a vesicoureteral reflux health state and 1 of 3 "warm-up" scenarios (paralysis, common cold, none) before a vesicoureteral reflux scenario. Utility estimates and potential predictors were fitted to a generalized linear model to determine what factors most impacted utilities. A total of 1,627 responses were obtained. Mean respondent age was 34.9 years. Of the respondents 48% were female, 38% were married and 44% had children. Utility values were uninfluenced by child/personal vesicoureteral reflux/urinary tract infection history, income or race. Utilities were affected by perspective and were higher in the child group (34% lower in parent vs child, p pediatric conditions. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
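For context, a time trade-off response converts to a utility as the ratio of time traded; a one-line sketch with hypothetical numbers, not the study's estimates:

```python
# Time trade-off: the respondent states how many years x in full health
# they consider equivalent to t years in the health state; utility = x / t
t = 10.0     # years lived in the health state (hypothetical)
x = 9.2      # equivalent years in full health the respondent would accept

utility = x / t   # 0.92: the health state costs 8% of the time lived
```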

  19. Improvement Schemes for Indoor Mobile Location Estimation: A Survey

    Directory of Open Access Journals (Sweden)

    Jianga Shang

    2015-01-01

    Full Text Available Location estimation is significant in mobile and ubiquitous computing systems. The complexity and smaller scale of the indoor environment have a great impact on location estimation. The key to location estimation lies in the representation and fusion of uncertain information from multiple sources. The improvement of location estimation is a complicated and comprehensive issue. A lot of research has been done to address this issue. However, existing research typically focuses on certain aspects of the problem and specific methods. This paper reviews mainstream schemes for improving indoor location estimation from multiple levels and perspectives by combining existing works and our own working experiences. Initially, we analyze the error sources of common indoor localization techniques and provide a multilayered conceptual framework of improvement schemes for location estimation. This is followed by a discussion of probabilistic methods for location estimation, including Bayes filters, Kalman filters, extended Kalman filters, sigma-point Kalman filters, particle filters, and hidden Markov models. Then, we investigate the hybrid localization methods, including multimodal fingerprinting, triangulation fusing multiple measurements, combination of wireless positioning with pedestrian dead reckoning (PDR), and cooperative localization. Next, we focus on the location determination approaches that fuse spatial contexts, namely, map matching, landmark fusion, and spatial model-aided methods. Finally, we present the directions for future research.
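Among the probabilistic methods listed, the Kalman filter is the simplest to demonstrate. A minimal one-dimensional tracking sketch (random-walk position with noisy range-like measurements; the process and measurement variances are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
q, r = 0.01, 1.0                    # process and measurement noise variances
x_true, x_est, p = 0.0, 0.0, 1.0    # true state, estimate, estimate variance

raw_err, kf_err = [], []
for _ in range(500):
    x_true += rng.normal(0, np.sqrt(q))        # the position drifts
    z = x_true + rng.normal(0, np.sqrt(r))     # noisy measurement
    p += q                                     # predict: variance grows
    k = p / (p + r)                            # update: Kalman gain
    x_est += k * (z - x_est)
    p *= (1 - k)
    raw_err.append((z - x_true) ** 2)
    kf_err.append((x_est - x_true) ** 2)

# The filtered estimates beat the raw measurements on average,
# which is the fusion-of-uncertain-information payoff the survey describes
```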

  20. Aerial Survey as a Tool to Estimate Abundance and Describe Distribution of a Carcharhinid Species, the Lemon Shark, Negaprion brevirostris

    Directory of Open Access Journals (Sweden)

    S. T. Kessel

    2013-01-01

    Full Text Available Aerial survey provides an important tool to assess the abundance of both terrestrial and marine vertebrates. To date, limited work has tested the effectiveness of this technique for estimating the abundance of smaller shark species. In Bimini, Bahamas, the lemon shark (Negaprion brevirostris) shows high site fidelity to a shallow sandy lagoon, providing an ideal test species to determine the effectiveness of localised aerial survey techniques for a Carcharhinid species in shallow subtropical waters. Between September 2007 and September 2008, visual surveys were conducted from light aircraft following defined transects ranging in length between 4.4 and 8.8 km. Count results were corrected for "availability", "perception", and "survey intensity" to provide unbiased abundance estimates. The abundance of lemon sharks was greatest in the central area of the lagoon during high tide, shifting toward the eastern and western regions of the lagoon at low tide. Mean abundance of sharks was estimated at 49 (±8.6) individuals, and monthly abundance was significantly positively correlated with mean water temperature. The successful implementation of the aerial survey technique highlighted the potential of further employment for shark abundance assessments in shallow coastal marine environments.
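The chain of corrections described ("availability", "perception", and survey intensity or count bias) amounts to dividing the raw count by the product of the correction factors; a sketch with purely hypothetical values, not the Bimini estimates:

```python
# Correcting a raw aerial count for incomplete detection
raw_count = 24        # sharks counted on a transect (hypothetical)
availability = 0.80   # probability an animal is near enough to the surface
perception = 0.75     # probability an available animal is actually seen
count_bias = 0.80     # fraction of a detected group that gets counted

corrected = raw_count / (availability * perception * count_bias)
# 24 / 0.48 = 50 animals estimated present
```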

  1. Mean of Microaccelerations Estimate in the Small Spacecraft Internal Environment with the Use of Fuzzy Sets

    Science.gov (United States)

    Sedelnikov, A. V.

    2018-05-01

    Assessment of the parameters of rotary motion of the small spacecraft around its center of mass, and of microaccelerations, is carried out using measurements of current from silicon photocells. At the same time there is a problem of interpreting ambiguous telemetric data, since the current from two opposite sides of the small spacecraft is significant. A means of removing such uncertainty is considered, based on a fuzzy set; it is proposed to use the normality condition of the direction cosines as the membership function. An example of uncertainty removal for a prototype of the Aist small spacecraft is given. The offered approach can significantly increase the accuracy of the microacceleration estimate when using measurements of current from silicon photocells.

  2. Estimation of Genetic Effects from Generation Means in Maize (Zea mays L.)

    International Nuclear Information System (INIS)

    Ligeyo, D.O.; Ayiecho, P.O.

    1999-01-01

    Estimates of mean, additive, dominance, additive × additive, additive × dominance and dominance × dominance genetic effects were obtained for grain yield in six crosses from four inbred lines of maize. All the genetic effects contributed to the inheritance of yield; however, not all genetic effects were present in all crosses at all locations. Both additive and dominance genetic effects were responsible for the manifestation of variability in grain yield, though the dominance genetic effect was preponderant in all cases. In most cases additive × additive and additive × dominance effects were more important contributors to inheritance than dominance × dominance gene effects at all locations. In all cases the manifestation of the various genetic effects varied according to crosses and experimental sites.

  3. Mourning dove population trend estimates from Call-Count and North American Breeding Bird Surveys

    Science.gov (United States)

    Sauer, J.R.; Dolton, D.D.; Droege, S.

    1994-01-01

    The mourning dove (Zenaida macroura) Call-count Survey and the North American Breeding Bird Survey provide information on population trends of mourning doves throughout the continental United States. Because surveys are an integral part of the development of hunting regulations, a need exists to determine which survey provides more precise information. We estimated population trends from 1966 to 1988 by state and dove management unit, and assessed the relative efficiency of each survey. Estimates of population trend differ (P < 0.05) between surveys in 11 of 48 states; 9 of the 11 states with divergent results occur in the Eastern Management Unit. Differences were probably a consequence of smaller sample sizes in the Call-count Survey. The Breeding Bird Survey generally provided trend estimates with smaller variances than did the Call-count Survey. Although the Call-count Survey probably provides more within-route accuracy because of survey methods and timing, the Breeding Bird Survey has a larger sample size of survey routes and greater consistency of coverage in the Eastern Unit.

  4. Modeling Site Heterogeneity with Posterior Mean Site Frequency Profiles Accelerates Accurate Phylogenomic Estimation.

    Science.gov (United States)

    Wang, Huai-Chun; Minh, Bui Quang; Susko, Edward; Roger, Andrew J

    2018-03-01

    Proteins have distinct structural and functional constraints at different sites that lead to site-specific preferences for particular amino acid residues as the sequences evolve. Heterogeneity in the amino acid substitution process between sites is not modeled by commonly used empirical amino acid exchange matrices. Such model misspecification can lead to artefacts in phylogenetic estimation such as long-branch attraction. Although sophisticated site-heterogeneous mixture models have been developed to address this problem in both Bayesian and maximum likelihood (ML) frameworks, their formidable computational time and memory usage severely limits their use in large phylogenomic analyses. Here we propose a posterior mean site frequency (PMSF) method as a rapid and efficient approximation to full empirical profile mixture models for ML analysis. The PMSF approach assigns a conditional mean amino acid frequency profile to each site calculated based on a mixture model fitted to the data using a preliminary guide tree. These PMSF profiles can then be used for in-depth tree-searching in place of the full mixture model. Compared with widely used empirical mixture models with k classes, our implementation of PMSF in IQ-TREE (http://www.iqtree.org) speeds up the computation by approximately k/1.5-fold and requires a small fraction of the RAM. Furthermore, this speedup allows, for the first time, full nonparametric bootstrap analyses to be conducted under complex site-heterogeneous models on large concatenated data matrices. Our simulations and empirical data analyses demonstrate that PMSF can effectively ameliorate long-branch attraction artefacts. In some empirical and simulation settings PMSF provided more accurate estimates of phylogenies than the mixture models from which they derive.

  5. Methods for estimating the occurrence of polypharmacy by means of a prescription database

    DEFF Research Database (Denmark)

    Bjerrum, L; Rosholm, J U; Hallas, J

    1997-01-01

    to equal the amount of drug purchased, as measured in defined daily doses (DDD), thereby assuming a daily intake of one DDD. PP was defined as overlapping periods of consumption for different drugs. A Venn diagram was used to illustrate and compare this estimator of PP with two other indicators of multiple......-drug use: the number of drugs purchased in 3 months and the mean number of drugs used in 1 year. A receiver operating curve (ROC) was used to evaluate the possibility of predicting episodes of PP from the number of drugs purchased in 3 months. RESULTS: The proposed estimator of PP was robust towards...... for the first time in 1994 stabilised after approximately 6 months, resulting in an incidence of major PP of 0.2% and of minor PP of 1.2% per month. For individuals exposed to PP, the median number of days of exposure was 61 and 10.5% were exposed for more than 350 days of the year. Purchase of five or more...

  6. A spectral chart method for estimating the mean turbulent kinetic energy dissipation rate

    Science.gov (United States)

    Djenidi, L.; Antonia, R. A.

    2012-10-01

    We present an empirical but simple and practical spectral chart method for determining the mean turbulent kinetic energy dissipation rate ⟨ε⟩. Published spectra, including DNS spectra, point to the underlying dissipative-range scaling being valid also at small Reynolds numbers, provided effects due to inhomogeneities in the flow are negligible. The method avoids the difficulty associated with estimating time or spatial derivatives of the velocity fluctuations. It also avoids using the second hypothesis of K41, which implies the existence of a -5/3 inertial subrange only when the Taylor microscale Reynolds number R_λ is sufficiently large. The method is in fact applied to the lower wavenumber end of the dissipative range, thus avoiding most of the problems due to inadequate spatial resolution of the velocity sensors and noise associated with the higher wavenumber end of this range. The use of spectral data (30 ≤ R_λ ≤ 400) in both passive and active grid turbulence, a turbulent mixing layer and the turbulent wake of a circular cylinder indicates that the method is robust and should lead to reliable estimates of ⟨ε⟩ in flows or flow regions where the first similarity hypothesis should hold; this would exclude, for example, the region near a wall.

  7. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    Directory of Open Access Journals (Sweden)

    Nazelie Kassabian

    2014-06-01

    Full Text Available Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field to the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs; this is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough ratios of correlation distance to Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
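The LMMSE interpolation of DCs can be sketched in a few lines. The Gauss-Markov (exponential) correlation model follows the abstract, while the station geometry, variances and correlation distance below are invented for illustration:

```python
import numpy as np

# Reference-station positions (km along a line) and a rover position
stations = np.array([0.0, 20.0, 40.0, 60.0])
rover = 25.0
d_corr = 50.0              # correlation distance of the DC field (assumption)
sigma2, noise2 = 1.0, 0.1  # DC field variance and measurement-noise variance

# Gauss-Markov spatial correlation: C(d) = sigma2 * exp(-|d| / d_corr)
def cov(a, b):
    return sigma2 * np.exp(-np.abs(np.subtract.outer(a, b)) / d_corr)

C_yy = cov(stations, stations) + noise2 * np.eye(len(stations))
C_xy = cov(np.array([rover]), stations)       # rover-to-station covariances

# LMMSE weights and the error variance of the interpolated DC at the rover
w = C_xy @ np.linalg.inv(C_yy)
err_var = sigma2 - (w @ C_xy.T).item()        # strictly between 0 and sigma2
```

When d_corr shrinks relative to the station spacing, C_xy tends to zero, err_var approaches sigma2, and the estimator stops adding value, which mirrors the threshold behavior reported in the abstract.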

  8. Insights from Machine Learning for Evaluating Production Function Estimators on Manufacturing Survey Data

    OpenAIRE

    Arreola, José Luis Preciado; Johnson, Andrew L.

    2016-01-01

Organizations like census bureaus rely on non-exhaustive surveys to estimate industry population-level production functions. In this paper we propose selecting an estimator based on a weighting of its in-sample and predictive performance on actual application datasets. We compare Cobb-Douglas functional assumptions to existing nonparametric shape constrained estimators and a newly proposed estimator presented in this paper. For simulated data, we find that our proposed estimator has the lowes...

  9. Estimation and correction of visibility bias in aerial surveys of wintering ducks

    Science.gov (United States)

    Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.

    2008-01-01

Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1-100 decoys), and interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36-42%, and associated standard errors increased 38-55%, depending on species or group estimated. We deemed our method successful for integrating correction of visibility bias in an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.
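The correction logic described above can be sketched as follows. This is a simplified illustration: the group counts and detection probabilities are invented, and only the 78% counting-bias figure comes from the abstract; the actual study estimated detection via logistic regression and variances via bootstrap resampling of transects.

```python
import numpy as np

# Hypothetical survey data: observed group sizes with their estimated
# detection probabilities (e.g. from a logistic model fit to decoy trials).
counts = np.array([12, 45, 3, 80, 25], dtype=float)   # ducks counted per group
p_detect = np.array([0.55, 0.85, 0.35, 0.95, 0.75])   # Pr(group detected)
count_bias = 0.78    # observers counted ~78% of birds present (from abstract)

# Bias-corrected abundance: divide each observed count by the probability
# the group was seen and by the proportion of birds counted when seen.
corrected = counts / (p_detect * count_bias)
total_hat = corrected.sum()
print(f"raw total {counts.sum():.0f}, bias-corrected total {total_hat:.0f}")
```

Dividing by the joint probability of detection and counting inflates each group toward the number likely present, which is why the corrected estimates in the study rose by roughly a third or more.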

  10. Review of Estimation Methods for Landline and Cell Phone Surveys

    Science.gov (United States)

    Arcos, Antonio; del Mar Rueda, María; Trujillo, Manuel; Molina, David

    2015-01-01

    The rapid proliferation of cell phone use and the accompanying decline in landline service in recent years have resulted in substantial potential for coverage bias in landline random-digit-dial telephone surveys, which has led to the implementation of dual-frame designs that incorporate both landline and cell phone samples. Consequently,…

  11. Bathymetric survey and estimation of the water balance of Lake ...

    African Journals Online (AJOL)

    Quantification of the water balance components and bathymetric survey is very crucial for sustainable management of lake waters. This paper focuses on the bathymetry and the water balance of the crater Lake Ardibo, recently utilized for irrigation. The bathymetric map of the lake is established at a contour interval of 10 ...

  12. Mean temperature of the catch (MTC in the Greek Seas based on landings and survey data

    Directory of Open Access Journals (Sweden)

    Athanassios C. Tsikliras

    2015-04-01

Full Text Available The mean temperature of the catch (MTC), which is the average inferred temperature preference of the exploited species weighted by their annual catch, is an index that has been used for evaluating the effect of sea warming on marine ecosystems. In the present work, we examined the effect of sea surface temperature on the catch composition of the Greek Seas using the MTC applied to the official catch statistics (landings) for the period 1970-2010 (Aegean and Ionian Seas) and to experimental bottom trawl survey data for 1997-2014 (southern Aegean Sea). The MTC of the landings for the study period increased from 11.8 °C to 16.2 °C in the Aegean Sea and from 10.0 °C to 14.7 °C in the Ionian Sea. Overall, the rate of MTC increase was 1.01 °C per decade for the Aegean and 1.17 °C per decade for the Ionian Sea and was positively related to sea surface temperature anomalies in both areas. For the survey data, the increase of the MTC of the bottom trawl catch in the southern Aegean Sea was lower (0.51 °C per decade) but referred to a shorter time frame and included only demersal species. The change in MTC of official and survey catches indicates that the relative catch proportions of species preferring warmer waters and those preferring colder waters have changed in favour of the former, and that this change is linked to sea surface temperature increase, whether internally (through the Atlantic Multidecadal Oscillation) or externally (warming trend) driven.
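The MTC index itself is straightforward to compute. Here is a minimal sketch with invented species data; only the weighting formula follows the definition above.

```python
import numpy as np

# Illustrative annual landings (tonnes) and mean temperature preferences (°C)
# for four hypothetical species (values invented for the example).
catch = np.array([120.0, 80.0, 45.0, 200.0])
t_pref = np.array([14.5, 11.2, 19.8, 13.0])

# Mean temperature of the catch: catch-weighted average of the species'
# inferred temperature preferences for one year.
mtc = np.sum(t_pref * catch) / np.sum(catch)
```

Computed yearly and regressed against time, a rising MTC indicates that warm-water species make up a growing share of the catch.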

  13. bathymetric survey and estimation of the water balance of lake

    African Journals Online (AJOL)

    Preferred Customer

The average annual open water evaporation, estimated from Colorado Class-A Pan records and the Penman modified method, is 23.49 million cubic ... Therefore, the ∆S term in equation 2 can be replaced by the net unmeasured ground ... appears that the steady-state water balance is reasonable, because the residual value ...

  14. Evaluation of scale invariance in physiological signals by means of balanced estimation of diffusion entropy

    Science.gov (United States)

    Zhang, Wenqing; Qiu, Lu; Xiao, Qin; Yang, Huijie; Zhang, Qingjun; Wang, Jianyong

    2012-11-01

By means of the concept of the balanced estimation of diffusion entropy, we evaluate the reliable scale invariance embedded in different sleep stages and stride records. Segments corresponding to waking, light sleep, rapid eye movement (REM) sleep, and deep sleep stages are extracted from long-term electroencephalogram signals. For each stage the scaling exponent value is distributed over a considerably wide range, which tells us that the scaling behavior is subject- and sleep-cycle-dependent. The average of the scaling exponent values for waking segments is almost the same as that for REM segments (~0.8). The waking and REM stages have a significantly higher value of the average scaling exponent than that for light sleep stages (~0.7). For the stride series, the original diffusion entropy (DE) and the balanced estimation of diffusion entropy (BEDE) give almost the same results for detrended series. The evolutions of local scaling invariance show that the physiological states change abruptly, although in the experiments great efforts have been made to keep conditions unchanged. The global behavior of a single physiological signal may lose rich information on physiological states. Methodologically, the BEDE can evaluate with considerable precision the scale invariance in very short time series (~10²), while the original DE method sometimes may underestimate scale-invariance exponents or even fail in detecting scale-invariant behavior. The BEDE method is sensitive to trends in time series. The existence of trends may lead to an unreasonably high value of the scaling exponent and consequent mistaken conclusions.

  15. A spectral chart method for estimating the mean turbulent kinetic energy dissipation rate

    Energy Technology Data Exchange (ETDEWEB)

    Djenidi, L.; Antonia, R.A. [The University of Newcastle, School of Engineering, Newcastle, NSW (Australia)

    2012-10-15

We present an empirical but simple and practical spectral chart method for determining the mean turbulent kinetic energy dissipation rate ⟨ε⟩ in a variety of turbulent flows. The method relies on the validity of the first similarity hypothesis of Kolmogorov (C R (Doklady) Acad Sci URSS, NS 30:301-305, 1941) (or K41), which implies that spectra of velocity fluctuations scale on the kinematic viscosity ν and ⟨ε⟩ at large Reynolds numbers. However, the evidence, based on DNS spectra, points to this scaling being also valid at small Reynolds numbers, provided effects due to inhomogeneities in the flow are negligible. The method avoids the difficulty associated with estimating time or spatial derivatives of the velocity fluctuations. It also avoids using the second hypothesis of K41, which implies the existence of a -5/3 inertial subrange only when the Taylor microscale Reynolds number R_λ is sufficiently large. The method is in fact applied to the lower-wavenumber end of the dissipative range, thus avoiding most of the problems due to inadequate spatial resolution of the velocity sensors and noise associated with the higher-wavenumber end of this range. The use of spectral data (30 ≤ R_λ ≤ 400) in both passive and active grid turbulence, a turbulent mixing layer and the turbulent wake of a circular cylinder indicates that the method is robust and should lead to reliable estimates of ⟨ε⟩ in flows or flow regions where the first similarity hypothesis should hold; this would exclude, for example, the region near a wall. (orig.)

  16. Estimation of light commercial vehicles dynamics by means of HIL-testbench simulation

    Science.gov (United States)

    Groshev, A.; Tumasov, A.; Toropov, E.; Sereda, P.

    2018-02-01

A high level of vehicle active safety is impossible without driver-assistance electronic systems. The electronic stability control (ESC) system is one of them. Nowadays such systems are obligatory for installation on vehicles of several categories. Approval of the active safety level of vehicles with ESC is possible by means of high-speed road tests. The most frequently implemented tests are the “fish hook” and “sine with dwell” tests. Tests of this kind are provided by Global Technical Regulation No. 8, published by the United Nations Economic Commission for Europe, as well as by ECE 13-11. At the same time, road tests are not the only way to estimate vehicle dynamics. Modern software and hardware technologies allow imitating real tests with acceptable reliability and good convergence between real test data and simulation results. ECE 13-11, Annex 21, Appendix 1, “Use of the Dynamic Stability Simulation”, regulates requirements for a special simulation test bench that can be used not only for preliminary estimation of vehicle dynamics but also for official vehicle homologation. This paper describes the approach proposed by researchers from Nizhny Novgorod State Technical University n.a. R.E. Alekseev (NNSTU, Russia) with the support of engineers of the United Engineering Center GAZ Group, as well as specialists of the Gorky Automobile Plant. The idea of the approach is to use a special HIL (hardware-in-the-loop) test bench consisting of a real-time PC with real-time software and braking system components, including the electronic control unit (ECU) of the ESC system. The HIL test bench allows imitating vehicle dynamics under the conditions of the “fish hook” and “sine with dwell” tests. The paper describes the scheme and structure of the HIL test bench and some peculiarities that should be taken into account during HIL simulation.

  17. Estimation of total catch of silver kob Argyrosomus inodorus by recreational shore-anglers in Namibia using a roving-roving creel survey

    DEFF Research Database (Denmark)

    Kirchner, C.H.; Beyer, Jan

    1999-01-01

A statistical sampling method is described to estimate the annual catch of silver kob Argyrosomus inodorus by recreational shore-anglers in Namibia. The method is based on the theory of progressive counts and on-site roving interviews of anglers, with catch counts and measurements at interception, using data taken during a survey from 1 October 1995 to 30 September 1996. Two different methods of estimating daily catch were tested by sampling the same population of anglers using a complete and an incomplete survey. The mean rate estimator, calculated by the ratio of the means with progressive...

  18. Mean Green operators of deformable fiber networks embedded in a compliant matrix and property estimates

    Science.gov (United States)

    Franciosi, Patrick; Spagnuolo, Mario; Salman, Oguz Umut

    2018-04-01

Composites comprising included phases in a continuous matrix constitute a huge class of meta-materials, whose effective properties, whether mechanical, physical or coupled, can be selectively optimized by using appropriate phase arrangements and architectures. An important subclass is represented by “network-reinforced matrices,” that is, materials in which one or more of the embedded phases are co-continuous with the matrix in one or more directions. In this article, we present a method to study effective properties of simple such structures, from which more complex ones can be made accessible. Effective properties are shown, in the framework of linear elasticity, to be estimable using the global mean Green operator for the entire embedded fiber network, which is by definition sample-spanning. This network operator is obtained from that of infinite planar alignments of infinite fibers, of which the network can be seen as an interpenetrated set, with the fiber interactions being fully accounted for within the alignments. The mean operator of such alignments is given in exact closed form for isotropic elastic-like or dielectric-like matrices. We first exemplify how these operators relevantly provide, from classic homogenization frameworks, effective properties in the case of 1D fiber bundles embedded in an isotropic elastic-like medium. It is also shown that using infinite patterns with fully interacting elements over their whole influence range, at any element concentration, suppresses the dilute approximation limit of these frameworks. We finally present a construction method for a global operator of fiber networks described as interpenetrated sets of such bundles.

  19. What does it mean to manage sky survey data? A model to facilitate stakeholder conversations

    Science.gov (United States)

    Sands, Ashley E.; Darch, Peter T.

    2016-06-01

Astronomy sky surveys, while of great scientific value independently, can be deployed even more effectively when multiple sources of data are combined. Integrating discrete datasets is a non-trivial exercise despite investments in standard data formats and tools. Creating and maintaining data and associated infrastructures requires investments in technology and expertise. Combining data from multiple sources necessitates a common understanding of data, structures, and goals amongst relevant stakeholders. We present a model of Astronomy Stakeholder Perspectives on Data. The model is based on 80 semi-structured interviews with astronomers, computational astronomers, computer scientists, and others involved in the building or use of the Sloan Digital Sky Survey (SDSS) and Large Synoptic Survey Telescope (LSST). Interviewees were selected to ensure a range of roles, institutional affiliations, career stages, and levels of astronomy education. Interviewee explanations of data were analyzed to understand how perspectives on astronomy data varied by stakeholder. Interviewees described sky survey data either intrinsically or extrinsically. “Intrinsic” descriptions of data refer to data as an object in and of itself. Respondents with intrinsic perspectives view data management in one of three ways: (1) “Medium” - securing the zeros and ones from bit rot; (2) “Scale” - assuring that changes in state are documented; or (3) “Content” - ensuring the scientific validity of the images, spectra, and catalogs. “Extrinsic” definitions, in contrast, define data in relation to other forms of information. Respondents with extrinsic perspectives view data management in one of three ways: (1) “Source” - supporting the integrity of the instruments and documentation; (2) “Relationship” - retaining relationships between data and their analytical byproducts; or (3) “Use” - ensuring that data remain scientifically usable. This model shows how data management can

  20. Sample Loss and Survey Bias in Estimates of Social Security Beneficiaries: A Tale of Two Surveys.

    OpenAIRE

    John L. Czajka; James Mabli; Scott Cody

    2008-01-01

    Data from the Census Bureau’s Survey of Income and Program Participation (SIPP) and the Current Population Survey (CPS) provide information on current and potential beneficiaries served by Social Security Administration (SSA) programs. SSA also links administrative records to the records of survey respondents who provide Social Security numbers. These matched data expand the content of the SIPP and CPS files to fields available only through SSA and Internal Revenue Service records—such as l...

  1. A Pooled Mean Group Estimation of Capital Inflow and Growth in sub Saharan Africa

    Directory of Open Access Journals (Sweden)

    Chimere Okechukwu Iheonu

    2017-09-01

Full Text Available This study empirically analysed the impact of capital inflow on growth in sub-Saharan Africa, employing the Pooled Mean Group estimator over the years 1985 to 2015. The study utilised Foreign Direct Investment (FDI), Official Development Assistance and Foreign Aid (ODA), and Remittances (REM) as indicators of capital inflow. Short-run results indicate that the various forms of capital inflow do not have a significant impact on growth; while FDI and REM were negatively related to growth, ODA was positively related to growth. However, in the long run, FDI and ODA have a positive and significant impact on growth, while REM has a negative and significant impact on growth in sub-Saharan Africa. The study concludes that planning and legislative lags can be inferred from the insignificance of capital inflows on growth in the short run, while growth falls in the long run as a result of the increase in labour income earned in diaspora. The study recommends that in the short run economic policies should be tailored towards the development of technology-based services, while in the long run government should create schemes for citizens in diaspora to participate in, to encourage sector-specific economic activities, as well as targeting FDI and ODA as policy options to spur growth. Finally, specific capital inflow policy options should be employed, as not all forms of capital inflow precipitate growth.

  2. Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall

    Science.gov (United States)

    Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate

    2016-11-01

    The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.

  3. THE ASSESSMENT OF GEOTHERMAL POTENTIAL OF TURKEY BY MEANS OF HEAT FLOW ESTIMATION

    Directory of Open Access Journals (Sweden)

    UĞUR AKIN

    2014-12-01

Full Text Available In this study, the heat flow distribution of Turkey was investigated in the interest of exploring new geothermal fields in addition to known ones. For this purpose, the geothermal gradient was estimated from the Curie point depth map obtained from airborne magnetic data by means of the power spectrum method. By multiplying the geothermal gradient with thermal conductivity values, the heat flow map of Turkey was obtained. The average value in the heat flow map of Turkey was determined as 74 mW/m2. This points to the existence of geothermal energy resources larger than the world average. In terms of geothermal potential, the most significant region of Turkey is Aydin and its surroundings, with a value exceeding 200 mW/m2. On the contrary, the value decreases below 30 mW/m2 in the region bordered by Aksaray, Niğde, Karaman and Konya. The necessity of conducting detailed additional studies for the East Black Sea, East and Southeast Anatolia is also revealed.

  4. Results and evaluation of a survey to estimate Pacific walrus population size, 2006

    Science.gov (United States)

    Speckman, Suzann G.; Chernook, Vladimir I.; Burn, Douglas M.; Udevitz, Mark S.; Kochnev, Anatoly A.; Vasilev, Alexander; Jay, Chadwick V.; Lisovsky, Alexander; Fischbach, Anthony S.; Benter, R. Bradley

    2011-01-01

    In spring 2006, we conducted a collaborative U.S.-Russia survey to estimate abundance of the Pacific walrus (Odobenus rosmarus divergens). The Bering Sea was partitioned into survey blocks, and a systematic random sample of transects within a subset of the blocks was surveyed with airborne thermal scanners using standard strip-transect methodology. Counts of walruses in photographed groups were used to model the relation between thermal signatures and the number of walruses in groups, which was used to estimate the number of walruses in groups that were detected by the scanner but not photographed. We also modeled the probability of thermally detecting various-sized walrus groups to estimate the number of walruses in groups undetected by the scanner. We used data from radio-tagged walruses to adjust on-ice estimates to account for walruses in the water during the survey. The estimated area of available habitat averaged 668,000 km2 and the area of surveyed blocks was 318,204 km2. The number of Pacific walruses within the surveyed area was estimated at 129,000 with 95% confidence limits of 55,000 to 507,000 individuals. This value can be used by managers as a minimum estimate of the total population size.
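The core of a strip-transect abundance estimate is a density-times-area scale-up. The following is a bare-bones illustration: the counts and strip areas are invented, only the surveyed block area is taken from the abstract, and the thermal-detection, group-size, and in-water adjustments described above are omitted.

```python
# Illustrative strip-transect calculation (counts and strip areas invented).
counts = [14, 31, 9, 22]                 # walruses detected per strip
strip_areas = [55.0, 60.0, 48.0, 52.0]   # km^2 scanned per strip
block_area = 318_204.0                   # km^2 of surveyed blocks (abstract)

density = sum(counts) / sum(strip_areas)   # walruses per km^2 on ice
n_hat = density * block_area               # on-ice abundance in the blocks
print(f"estimated on-ice abundance: {n_hat:,.0f}")
```

The published estimate layered several corrections on top of this basic ratio, which is why its confidence interval is so wide relative to the point estimate.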

  5. Note on an Identity Between Two Unbiased Variance Estimators for the Grand Mean in a Simple Random Effects Model.

    Science.gov (United States)

    Levin, Bruce; Leu, Cheng-Shiun

    2013-01-01

    We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.

  6. Comparing two survey methods for estimating maternal and perinatal mortality in rural Cambodia.

    Science.gov (United States)

    Chandy, Hoeuy; Heng, Yang Van; Samol, Ha; Husum, Hans

    2008-03-01

We need solid estimates of maternal mortality rates (MMR) to monitor the impact of maternal care programs. Cambodian health authorities and WHO report the MMR in Cambodia at 450 per 100,000 live births. The figure is drawn from surveys where information is obtained by interviewing respondents about the survival of all their adult sisters (sisterhood method). The estimate is statistically imprecise, with 95% confidence intervals ranging from 260 to 620/100,000. The MMR estimate is also uncertain due to under-reporting; where 80-90% of women deliver at home, maternal fatalities may go undetected, especially where mortality is highest, in remote rural areas. The aim of this study was to attain more reliable MMR estimates by using survey methods other than the sisterhood method, prior to an intervention targeting rural obstetric emergencies. The study was carried out in rural Northwestern Cambodia, where access to health services is poor and poverty, endemic diseases, and land mines are widespread. Two survey methods were applied in two separate sectors: a community-based survey gathering data from public sources and a household survey gathering data directly from primary sources. There was no statistically significant difference between the two survey results for maternal deaths; both types of survey reported mortality rates around the public figure. The household survey reported a significantly higher perinatal mortality rate than the community-based survey, 8.6% versus 5.0%. The household survey also gave qualitative data important for a better understanding of the many problems faced by mothers giving birth in remote villages. There are detection failures in both surveys; the failure rate may be as high as 30-40%. PRINCIPAL CONCLUSION: Both survey methods are inaccurate, and therefore inappropriate for evaluation of short-term changes in mortality rates. Surveys based on primary informants yield qualitative information about mothers' hardships important for the design

  7. Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers and Improve Data-Limited Stock Assessments. This biosampling project...

  8. Generating Health Estimates by Zip Code: A Semiparametric Small Area Estimation Approach Using the California Health Interview Survey.

    Science.gov (United States)

    Wang, Yueyan; Ponce, Ninez A; Wang, Pan; Opsomer, Jean D; Yu, Hongjian

    2015-12-01

We propose a method to meet challenges in generating health estimates for granular geographic areas in which the survey sample size is extremely small. Our generalized linear mixed model predicts health outcomes using both individual-level and neighborhood-level predictors. The model's nonparametric smoothing function on neighborhood-level variables better captures the association between neighborhood environment and the outcome. Using 2011 to 2012 data from the California Health Interview Survey, we demonstrate an empirical application of this method to estimate the fraction of residents without health insurance for Zip Code Tabulation Areas (ZCTAs). Our method generated stable estimates of uninsurance for 1519 of 1765 ZCTAs (86%) in California. For some areas with great socioeconomic diversity across adjacent neighborhoods, such as Los Angeles County, the modeled uninsured estimates revealed much heterogeneity among geographically adjacent ZCTAs. The proposed method can increase the value of health surveys by providing modeled estimates for health data at a granular geographic level. It can account for variations in health outcomes at the neighborhood level as a result of both socioeconomic characteristics and geographic locations.

  9. Estimation of the Mean of a Univariate Normal Distribution When the Variance is not Known

    NARCIS (Netherlands)

    Danilov, D.L.; Magnus, J.R.

    2002-01-01

We consider the problem of estimating the first k coefficients in a regression equation with k + 1 variables. For this problem with known variance of innovations, the neutral Laplace weighted-average least-squares estimator was introduced in Magnus (2002). We investigate properties of this estimator in

  10. Estimation of the mean of a univariate normal distribution when the variance is not known

    NARCIS (Netherlands)

    Danilov, Dmitri

    2005-01-01

    We consider the problem of estimating the first k coefficients in a regression equation with k+1 variables. For this problem with known variance of innovations, the neutral Laplace weighted-average least-squares estimator was introduced in Magnus (2002). We generalize this estimator to the case

  11. A method to combine non-probability sample data with probability sample data in estimating spatial means of environmental variables

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    2003-01-01

In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the π estimator that uses the probability sample can be

  12. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    Science.gov (United States)

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  13. Estimating international interindustry linkages : Non-survey simulations of the Asian-Pacific economy

    NARCIS (Netherlands)

    Oosterhaven, J.; Stelder, T.M.

    2008-01-01

    This paper evaluates a recently published semi-survey international input-output table for nine East-Asian countries and the USA with four non-survey estimation alternatives. A new generalized RAS procedure is used with stepwise increasing information from both import and export statistics as

  14. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    Science.gov (United States)

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions; the corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC method performs well, although the Wan et al. method is best for estimating the standard deviation under normality. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics, such as the posterior mean and 95% credible interval when Bayesian analysis has been employed.
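A minimal version of the ABC idea can be sketched as follows, assuming normal data and using only a reported median, minimum, and maximum. All numbers (priors, tolerance, summary values) are invented for illustration; the published method is considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Reported summary statistics from a hypothetical study of size n:
n = 50
obs = np.array([10.1, 1.0, 19.2])   # median, minimum, maximum (assumed)

# Draw candidate (mu, sigma) pairs from broad uniform priors.
m = 200_000
mu = rng.uniform(0.0, 30.0, m)
sigma = rng.uniform(0.1, 15.0, m)

# Simulate one sample of size n per candidate and compute its summaries.
x = rng.normal(mu[:, None], sigma[:, None], (m, n))
sim = np.column_stack([np.median(x, axis=1), x.min(axis=1), x.max(axis=1)])

# ABC rejection step: keep candidates whose summaries lie close to the data.
keep = np.linalg.norm(sim - obs, axis=1) < 2.5
mu_hat = mu[keep].mean()
sigma_hat = sigma[keep].mean()
```

The accepted (mu, sigma) pairs approximate the posterior given the reported summaries, and their means serve as the plug-in estimates for the meta-analysis.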

  15. An estimation of the transformation value by means of the estimation function. Market Comparison Approach with abridged data chart

    Directory of Open Access Journals (Sweden)

    Maurizio d’Amato

    2015-06-01

    Full Text Available This essay suggests a re-elaboration of the Market Comparison Approach in order to set the value of properties subject to transformation. The essay focuses on identifying the property valuation following a certain transformation and is aimed at determining the land value by means of the extraction method. The outcome, based on trading data and a case study in the province of Bari, may also be applied to the valuation of properties under construction (investment property under construction) by means of the Future Value method.

  16. Estimating Single and Multiple Target Locations Using K-Means Clustering with Radio Tomographic Imaging in Wireless Sensor Networks

    Science.gov (United States)

    2015-03-26

    K-means clustering is an algorithm that has been used in data mining applications such as machine learning, pattern recognition, and hyper-spectral imagery. [Remainder of record is table-of-contents and acronym-list residue; recoverable headings: "Application of K-means Clustering", "Experiment Design"; acronyms: RTI, Radio Tomographic Imaging; WLAN, Wireless Local Area Networks; WSN, Wireless Sensor Network.]
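As a minimal illustration of the clustering step (not the thesis's RTI pipeline), Lloyd's k-means algorithm can be run on hypothetical high-attenuation voxel coordinates to recover two target locations; all data and names below are made up:

```python
import numpy as np

def kmeans(points, init, iters=100):
    """Plain Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    centroids = init.astype(float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(len(centroids))])
        if np.allclose(new, centroids):   # converged
            break
        centroids = new
    return centroids, labels

# Hypothetical voxel coordinates clustered around two targets at (2, 2) and (7, 5)
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([2, 2], 0.3, (50, 2)),
                 rng.normal([7, 5], 0.3, (50, 2))])
centers, labels = kmeans(pts, init=pts[[0, 50]])   # one seed point per cluster
```

The estimated target locations are the final centroids; in practice the input points would be thresholded voxels from the reconstructed RTI attenuation image.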

  17. Aerial surveys adjusted by ground surveys to estimate area occupied by black-tailed prairie dog colonies

    Science.gov (United States)

    Sidle, John G.; Augustine, David J.; Johnson, Douglas H.; Miller, Sterling D.; Cully, Jack F.; Reading, Richard P.

    2012-01-01

    Aerial surveys using line-intercept methods are one approach to estimate the extent of prairie dog colonies in a large geographic area. Although black-tailed prairie dogs (Cynomys ludovicianus) construct conspicuous mounds at burrow openings, aerial observers have difficulty discriminating between areas with burrows occupied by prairie dogs (colonies) versus areas of uninhabited burrows (uninhabited colony sites). Consequently, aerial line-intercept surveys may overestimate prairie dog colony extent unless adjusted by an on-the-ground inspection of a sample of intercepts. We compared aerial line-intercept surveys conducted over 2 National Grasslands in Colorado, USA, with independent ground-mapping of known black-tailed prairie dog colonies. Aerial line-intercepts adjusted by ground surveys using a single activity category adjustment overestimated colonies by ≥94% on the Comanche National Grassland and ≥58% on the Pawnee National Grassland. We present a ground-survey technique that involves 1) visiting on the ground a subset of aerial intercepts classified as occupied colonies plus a subset of intercepts classified as uninhabited colony sites, and 2) based on these ground observations, recording the proportion of each aerial intercept that intersects a colony and the proportion that intersects an uninhabited colony site. Where line-intercept techniques are applied to aerial surveys or remotely sensed imagery, this method can provide more accurate estimates of black-tailed prairie dog abundance and trends.
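The adjustment the authors describe amounts to weighting each aerial intercept by the ground-verified fraction that actually crosses an occupied colony. A toy sketch with made-up intercept lengths and occupied fractions:

```python
# Hypothetical aerial intercepts (km) and the ground-verified fraction of each
# intercept that crosses an occupied colony (0.0 = uninhabited colony site).
intercepts_km = [1.2, 0.8, 2.0, 0.5]
occupied_fraction = [0.9, 0.4, 1.0, 0.0]

raw_total = sum(intercepts_km)                                    # unadjusted aerial total
adjusted_total = sum(l * f for l, f in zip(intercepts_km, occupied_fraction))
overestimate_pct = 100 * (raw_total - adjusted_total) / adjusted_total
```

With these invented numbers the unadjusted aerial total overstates occupied extent by about 32%, the kind of bias the ground-survey step is designed to remove.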

  18. Generalized estimators of avian abundance from count survey data

    Directory of Open Access Journals (Sweden)

    Royle, J. A.

    2004-01-01

    Full Text Available I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data, and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be included, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts, and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.

  19. Adjusting forest density estimates for surveyor bias in historical tree surveys

    Science.gov (United States)

    Brice B. Hanberry; Jian Yang; John M. Kabrick; Hong S. He

    2012-01-01

    The U.S. General Land Office surveys, conducted between the late 1700s to early 1900s, provide records of trees prior to widespread European and American colonial settlement. However, potential and documented surveyor bias raises questions about the reliability of historical tree density estimates and other metrics based on density estimated from these records. In this...

  20. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
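The three scenarios studied by Wan et al. (2014) correspond to which order statistics a trial reports: C1 (minimum, median, maximum), C2 (quartiles and median), and C3 (all five). Their normal-theory approximations can be sketched as follows; the function name and argument layout are illustrative, and the C1 mean formula is Hozo et al.'s:

```python
from statistics import NormalDist

def wan_estimates(n, median, minimum=None, maximum=None, q1=None, q3=None):
    """Sample mean and SD from reported order statistics, using the
    normal-theory approximations of Wan et al. (2014)."""
    ppf = NormalDist().inv_cdf
    range_div = 2 * ppf((n - 0.375) / (n + 0.25))        # divisor for min-max spread
    iqr_div = 2 * ppf((0.75 * n - 0.125) / (n + 0.25))   # divisor for interquartile spread
    if minimum is not None and q1 is not None:            # C3: all five numbers
        mean = (minimum + 2 * q1 + 2 * median + 2 * q3 + maximum) / 8
        sd = (maximum - minimum) / (2 * range_div) + (q3 - q1) / (2 * iqr_div)
    elif minimum is not None:                             # C1: min, median, max
        mean = (minimum + 2 * median + maximum) / 4       # Hozo et al.'s mean formula
        sd = (maximum - minimum) / range_div
    else:                                                 # C2: q1, median, q3
        mean = (q1 + median + q3) / 3
        sd = (q3 - q1) / iqr_div
    return mean, sd
```

For n = 100 with median 0 and quartiles ±0.6745 (matching a standard normal), the C2 formulas return a mean of 0 and an SD close to 1.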

  1. Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality.

    Science.gov (United States)

    Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel

    2015-12-01

    Many people living in low and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data including many household sample surveys are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under five years child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA).

  2. Spatial estimation of mean temperature and precipitation in areas of scarce meteorological information

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, J.D. [Universidad Autonoma Chapingo, Chapingo (Mexico)]. E-mail: dgomez@correo.chapingo.mx; Etchevers, J.D. [Instituto de Recursos Naturales, Colegio de Postgraduados, Montecillo, Edo. de Mexico (Mexico); Monterroso, A.I. [departamento de Suelos, Universidad Autonoma Chapingo, Chapingo (Mexico); Gay, G. [Centro de Ciencias de la Atmosfera, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Campo, J. [Instituto de Ecologia, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Martinez, M. [Instituto de Recursos Naturales, Montecillo, Edo. de Mexico (Mexico)

    2008-01-15

    In regions of complex relief and scarce meteorological information it becomes difficult to implement techniques and models of numerical interpolation to elaborate reliable maps of climatic variables essential for the study of natural resources using the new tools of the geographic information systems. This paper presents a method for estimating annual and monthly mean values of temperature and precipitation, taking elements from simple interpolation methods and complementing them with some characteristics of more sophisticated methods. To determine temperature, simple linear regression equations were generated associating temperature with altitude of weather stations in the study region, which had been previously subdivided in accordance with humidity conditions and then applying such equations to the area's digital elevation model to obtain temperatures. The estimation of precipitation was based on the graphic method through the analysis of the meteorological systems that affect the regions of the study area throughout the year and considering the influence of mountain ridges on the movement of prevailing winds. Weather stations with data in nearby regions were analyzed according to their position in the landscape, exposure to humid winds, and false color associated with vegetation types. Weather station sites were used to reference the amount of rainfall; interpolation was attained using analogies with satellite images of false color to which a model of digital elevation was incorporated to find similar conditions within the study area.

  3. Local digital algorithms for estimating the mean integrated curvature of r-regular sets

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    , no asymptotically unbiased estimator of this type exists in dimension greater than or equal to three, while for stationary isotropic lattices, asymptotically unbiased estimators are plenty. Both results follow from a general formula that we state and prove, describing the asymptotic behavior of hit...

  4. Comparing cancer screening estimates: Behavioral Risk Factor Surveillance System and National Health Interview Survey.

    Science.gov (United States)

    Sauer, Ann Goding; Liu, Benmei; Siegel, Rebecca L; Jemal, Ahmedin; Fedewa, Stacey A

    2018-01-01

    Cancer screening prevalence from the Behavioral Risk Factor Surveillance System (BRFSS), designed to provide state-level estimates, and the National Health Interview Survey (NHIS), designed to provide national estimates, are used to measure progress in cancer control. A detailed description of the extent to which recent cancer screening estimates vary by key demographic characteristics has not been previously described. We examined national prevalence estimates for recommended breast, cervical, and colorectal cancer screening using data from the 2012 and 2014 BRFSS and the 2010 and 2013 NHIS. Treating the NHIS estimates as the reference, direct differences (DD) were calculated by subtracting NHIS estimates from BRFSS estimates. Relative differences were computed by dividing the DD by the NHIS estimates. Two-sample t-tests (2-tailed) were performed to test for statistically significant differences. BRFSS screening estimates were higher than those from NHIS for breast cancer screening (78.4% versus 72.5%; DD=5.9%). Although BRFSS estimates were generally higher than those from NHIS, each survey has a unique and important role in providing information to track cancer screening utilization among various populations. Awareness of these differences and their potential causes is important when comparing the surveys and determining the best application for each data source. Copyright © 2017 Elsevier Inc. All rights reserved.
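The direct and relative differences used here are straightforward to compute; treating NHIS as the reference, with the breast-screening figures quoted in the abstract:

```python
def compare_estimates(brfss_pct, nhis_pct):
    """Direct difference (percentage points) and relative difference (%),
    treating the NHIS estimate as the reference."""
    dd = brfss_pct - nhis_pct
    relative = 100 * dd / nhis_pct
    return dd, relative

dd, rel = compare_estimates(78.4, 72.5)   # breast screening: BRFSS vs. NHIS
```

This gives a DD of 5.9 percentage points and a relative difference of about 8.1%; the significance tests the authors report would additionally require the surveys' standard errors.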

  5. Methods to estimate annual mean spring discharge to the Snake River between Milner Dam and King Hill, Idaho

    Science.gov (United States)

    Kjelstrom, L.C.

    1995-01-01

    Many individual springs and groups of springs discharge water from volcanic rocks that form the north canyon wall of the Snake River between Milner Dam and King Hill. Previous estimates of annual mean discharge from these springs have been used to understand the hydrology of the eastern part of the Snake River Plain. Four methods that were used in previous studies or developed to estimate annual mean discharge since 1902 were (1) water-budget analysis of the Snake River; (2) correlation of water-budget estimates with discharge from 10 index springs; (3) determination of the combined discharge from individual springs or groups of springs by using annual discharge measurements of 8 springs, gaging-station records of 4 springs and 3 sites on the Malad River, and regression equations developed from 5 of the measured springs; and (4) a single regression equation that correlates gaging-station records of 2 springs with historical water-budget estimates. Comparisons made among the four methods of estimating annual mean spring discharges from 1951 to 1959 and 1963 to 1980 indicated that differences were about equivalent to a measurement error of 2 to 3 percent. The method that best demonstrates the response of annual mean spring discharge to changes in ground-water recharge and discharge is method 3, which combines the measurements and regression estimates of discharge from individual springs.

  6. Using Intelligent Techniques in Construction Project Cost Estimation: 10-Year Survey

    Directory of Open Access Journals (Sweden)

    Abdelrahman Osman Elfaki

    2014-01-01

    Full Text Available Cost estimation is the most important preliminary process in any construction project. Therefore, construction cost estimation has the lion’s share of the research effort in construction management. In this paper, we have analysed and studied proposals for construction cost estimation for the last 10 years. To implement this survey, we have proposed and applied a methodology that consists of two parts. The first part concerns data collection, for which we have chosen special journals as sources for the surveyed proposals. The second part concerns the analysis of the proposals. To analyse each proposal, the following four questions have been set. Which intelligent technique is used? How have data been collected? How are the results validated? And which construction cost estimation factors have been used? From the results of this survey, two main contributions have been produced. The first contribution is the defining of the research gap in this area, which has not been fully covered by previous proposals of construction cost estimation. The second contribution of this survey is the proposal and highlighting of future directions for forthcoming proposals, aimed ultimately at finding the optimal construction cost estimation. Moreover, we consider the second part of our methodology as one of our contributions in this paper. This methodology has been proposed as a standard benchmark for construction cost estimation proposals.

  7. Professionalism: A Core Competency, but What Does it Mean? A Survey of Surgery Residents.

    Science.gov (United States)

    Dilday, Joshua C; Miller, Elizabeth A; Schmitt, Kyle; Davis, Brian; Davis, Kurt G

    2017-10-27

    Professionalism is 1 of the 6 core competencies of the Accreditation Council of Graduate Medical Education. Despite its obvious importance, it is poorly defined in the literature and an understanding of its meaning has not been evaluated on surgical trainees. The American College of Surgeons (ACS) has previously published tenets of surgical professionalism. However, surgery residents may not share similar views on professionalism as those of the ACS. Surgical residents of all levels at 2 surgery residencies located in the same city were interviewed regarding their personal definitions, thoughts, and experiences regarding professionalism during their training. They were then queried regarding 20 points of professionalism as outlined by the ACS tenets of professionalism. The study utilized the surgery residencies at William Beaumont Army Medical Center and Texas Tech University Health Science Center in El Paso, Texas. All general surgery residents at each program were invited to participate in the study. Eighteen residents volunteered to take the survey and be interviewed. The definitions of professionalism centered on clinical competence. Surgery residents conveyed experiences with both professional and unprofessional behavior. Seven of the 20 ACS tenets of professionalism were unanimously agreed upon. There were key differences between resident definitions and those as outlined by the ACS. The least agreed upon ACS tenets of professionalism include professionalism education, public education, and public health. Surgical trainees express personal experiences in both professional and unprofessional behavior. Their definitions of professionalism are not as expansive as those of the ACS and seem to focus on patient and colleague interaction. Due to the lack of congruency, a tailored curriculum for professionalism based upon ACS tenets appears warranted. Published by Elsevier Inc.

  8. The importance of the chosen technique to estimate diffuse solar radiation by means of regression

    Energy Technology Data Exchange (ETDEWEB)

    Arslan, Talha; Altyn Yavuz, Arzu [Department of Statistics. Science and Literature Faculty. Eskisehir Osmangazi University (Turkey)], email: mtarslan@ogu.edu.tr, email: aaltin@ogu.edu.tr; Acikkalp, Emin [Department of Mechanical and Manufacturing Engineering. Engineering Faculty. Bilecik University (Turkey)], email: acikkalp@gmail.com

    2011-07-01

    The Ordinary Least Squares (OLS) method is one of the most frequently used for estimation of diffuse solar radiation. The data set must satisfy certain assumptions for the OLS method to work; the most important is that the error terms of the regression equation fitted by OLS follow a normal distribution. Utilizing an alternative, robust estimator to obtain parameter estimates is highly effective when normality fails due to the presence of outliers or some other factor. The purpose of this study is to investigate the importance of the chosen technique for the estimation of diffuse radiation. This study described alternative robust methods frequently used in applications and compared them with the OLS method. Comparing the data-set analysis of OLS with that of the M-regression (Huber, Andrews and Tukey) techniques, the study found that robust regression techniques are preferable to OLS because of the smoother explanation values.

  9. Enhanced Mean Dynamic Topography And Ocean Circulation Estimation Using Goce Preliminary Mode

    DEFF Research Database (Denmark)

    Knudsen, Per; Bingham, Rory; Andersen, Ole Baltazar

    2011-01-01

    have been combined with the recent DNSC08MSS mean sea surface model to construct a global GOCE satellite-only mean dynamic topography model. At first glance, the GOCE MDT displays the well-known features related to the major ocean current systems. A closer look, however, reveals that the improved

  10. Constrained parameter estimation for semi-supervised learning : The case of the nearest mean classifier

    NARCIS (Netherlands)

    Loog, M.

    2011-01-01

    A rather simple semi-supervised version of the equally simple nearest mean classifier is presented. However simple, the proposed approach is of practical interest as the nearest mean classifier remains a relevant tool in biomedical applications or other areas dealing with relatively high-dimensional

  11. Selection enhanced estimates of marker effects on means and variances of beef tenderness

    Science.gov (United States)

    Genetic marker associations from surveys of industry cattle populations have low frequencies of rare homozygous animals. Selection for calpain (CAPN1) and calpastatin (CAST) genetic markers was replicated in two cattle populations (Angus and MARC III) at the U.S. Meat Animal Research Center. These...

  12. Estimation of undernutrition and mean calorie intake in Africa: methodology, findings and implications for Africa's record

    NARCIS (Netherlands)

    van Wesenbeeck, C.F.A.; Keyzer, M.A.; Nube, M.

    2009-01-01

    Background: As poverty and hunger are basic yardsticks of underdevelopment and destitution, the need for reliable statistics in this domain is self-evident. While the measurement of poverty through surveys is relatively well documented in the literature, for hunger, information is much scarcer,

  13. Should total landings be used to correct estimated catch in numbers or mean-weight-at-age?

    DEFF Research Database (Denmark)

    Lewy, Peter; Lassen, H.

    1997-01-01

    Many ICES fish stock assessment working groups have practised Sum Of Products, SOP, correction. This correction stems from a comparison of total weights of the known landings and the SOP over age of catch in number and mean weight-at-age, which ideally should be identical. In case of SOP discrepancies some countries correct catch in numbers while others correct mean weight-at-age by a common factor, the ratio between landing and SOP. The paper shows that for three sampling schemes the SOP corrections are statistically incorrect and should not be made since the SOP is an unbiased estimate of the total landings. Calculation of the bias of estimated catch in numbers and mean weight-at-age shows that SOP corrections of either of these estimates may increase the bias. Furthermore, for five demersal and one pelagic North Sea species it is shown that SOP discrepancies greater than 2% from
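The SOP check itself is a one-line computation: sum catch-at-age numbers times mean weight-at-age and compare with the reported landings. The numbers below are invented purely to show the bookkeeping working groups performed (which this paper argues should not be turned into a correction):

```python
# Hypothetical catch-at-age numbers (thousands of fish) and mean weights (kg),
# with units chosen so the SOP and the landings are both in tonnes.
catch_numbers = [1200, 900, 400, 150]
mean_weight = [0.15, 0.30, 0.55, 0.80]

sop = sum(n * w for n, w in zip(catch_numbers, mean_weight))   # sum of products
landings = 700.0                                               # reported total landings
factor = landings / sop   # the common factor some groups applied to one series
```

Here SOP = 790 tonnes against 700 tonnes landed, a 13% discrepancy; applying `factor` to either the numbers or the weights is exactly the practice the paper shows can increase bias.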

  14. Grizzly Bear Noninvasive Genetic Tagging Surveys: Estimating the Magnitude of Missed Detections.

    Directory of Open Access Journals (Sweden)

    Jason T Fisher

    Full Text Available Sound wildlife conservation decisions require sound information, and scientists increasingly rely on remotely collected data over large spatial scales, such as noninvasive genetic tagging (NGT. Grizzly bears (Ursus arctos, for example, are difficult to study at population scales except with noninvasive data, and NGT via hair trapping informs management over much of grizzly bears' range. Considerable statistical effort has gone into estimating sources of heterogeneity, but detection error-arising when a visiting bear fails to leave a hair sample-has not been independently estimated. We used camera traps to survey grizzly bear occurrence at fixed hair traps and multi-method hierarchical occupancy models to estimate the probability that a visiting bear actually leaves a hair sample with viable DNA. We surveyed grizzly bears via hair trapping and camera trapping for 8 monthly surveys at 50 (2012 and 76 (2013 sites in the Rocky Mountains of Alberta, Canada. We used multi-method occupancy models to estimate site occupancy, probability of detection, and conditional occupancy at a hair trap. We tested the prediction that detection error in NGT studies could be induced by temporal variability within season, leading to underestimation of occupancy. NGT via hair trapping consistently underestimated grizzly bear occupancy at a site when compared to camera trapping. At best occupancy was underestimated by 50%; at worst, by 95%. Probability of false absence was reduced through successive surveys, but this mainly accounts for error imparted by movement among repeated surveys, not necessarily missed detections by extant bears. The implications of missed detections and biased occupancy estimates for density estimation-which form the crux of management plans-require consideration. 
We suggest hair-trap NGT studies should estimate and correct detection error using independent survey methods such as cameras, to ensure the reliability of the data upon which species

  15. Grizzly Bear Noninvasive Genetic Tagging Surveys: Estimating the Magnitude of Missed Detections.

    Science.gov (United States)

    Fisher, Jason T; Heim, Nicole; Code, Sandra; Paczkowski, John

    2016-01-01

    Sound wildlife conservation decisions require sound information, and scientists increasingly rely on remotely collected data over large spatial scales, such as noninvasive genetic tagging (NGT). Grizzly bears (Ursus arctos), for example, are difficult to study at population scales except with noninvasive data, and NGT via hair trapping informs management over much of grizzly bears' range. Considerable statistical effort has gone into estimating sources of heterogeneity, but detection error-arising when a visiting bear fails to leave a hair sample-has not been independently estimated. We used camera traps to survey grizzly bear occurrence at fixed hair traps and multi-method hierarchical occupancy models to estimate the probability that a visiting bear actually leaves a hair sample with viable DNA. We surveyed grizzly bears via hair trapping and camera trapping for 8 monthly surveys at 50 (2012) and 76 (2013) sites in the Rocky Mountains of Alberta, Canada. We used multi-method occupancy models to estimate site occupancy, probability of detection, and conditional occupancy at a hair trap. We tested the prediction that detection error in NGT studies could be induced by temporal variability within season, leading to underestimation of occupancy. NGT via hair trapping consistently underestimated grizzly bear occupancy at a site when compared to camera trapping. At best occupancy was underestimated by 50%; at worst, by 95%. Probability of false absence was reduced through successive surveys, but this mainly accounts for error imparted by movement among repeated surveys, not necessarily missed detections by extant bears. The implications of missed detections and biased occupancy estimates for density estimation-which form the crux of management plans-require consideration. We suggest hair-trap NGT studies should estimate and correct detection error using independent survey methods such as cameras, to ensure the reliability of the data upon which species management and
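The detection-error quantity at issue, the probability that a bear known (from the camera) to have visited actually leaves a viable hair sample, reduces to a conditional proportion in the simplest case. The detection histories below are invented, and the real analysis uses multi-method hierarchical occupancy models rather than this naive ratio:

```python
# Hypothetical monthly detection histories at one hair-trap site (1 = detected).
camera = [1, 0, 1, 1, 0, 1, 1, 0]   # camera trap: did a bear visit?
hair   = [0, 0, 1, 0, 0, 1, 0, 0]   # hair trap: was a viable DNA sample left?

visits = sum(camera)
hair_given_visit = sum(h for c, h in zip(camera, hair) if c)
p_sample = hair_given_visit / visits   # naive conditional detection probability
```

In this toy history the hair trap misses 60% of camera-confirmed visits; it is exactly this gap that makes hair-only surveys underestimate occupancy.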

  16. Estimating mean long-term hydrologic budget components for watersheds and counties: An application to the commonwealth of Virginia, USA

    Science.gov (United States)

    Sanford, Ward E.; Nelms, David L.; Pope, Jason P.; Selnick, David L.

    2015-01-01

    Mean long-term hydrologic budget components, such as recharge and base flow, are often difficult to estimate because they can vary substantially in space and time. Mean long-term fluxes were calculated in this study for precipitation, surface runoff, infiltration, total evapotranspiration (ET), riparian ET, recharge, base flow (or groundwater discharge) and net total outflow using long-term estimates of mean ET and precipitation and the assumption that the relative change in storage over that 30-year period is small compared to the total ET or precipitation. Fluxes of these components were first estimated on a number of real-time-gaged watersheds across Virginia. Specific conductance was used to distinguish and separate surface runoff from base flow. Specific-conductance (SC) data were collected every 15 minutes at 75 real-time gages for approximately 18 months between March 2007 and August 2008. Precipitation was estimated for 1971-2000 using PRISM climate data. Precipitation and temperature from the PRISM data were used to develop a regression-based relation to estimate total ET. The proportion of watershed precipitation that becomes surface runoff was related to physiographic province and rock type in a runoff regression equation. A new approach to estimate riparian ET using seasonal SC data gave results consistent with those from other methods. Component flux estimates from the watersheds were transferred to flux estimates for counties and independent cities using the ET and runoff regression equations. Only 48 of the 75 watersheds yielded sufficient data, and data from these 48 were used in the final runoff regression equation. Final results for the study are presented as component flux estimates for all counties and independent cities in Virginia. The method has the potential to be applied in many other states in the U.S. or in other regions or countries of the world where climate and stream flow data are plentiful.
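Under the study's steady-state assumption (negligible long-term storage change), the component fluxes close a simple budget. One plausible bookkeeping, with hypothetical fluxes in mm/yr and riparian ET treated as the part of total ET drawn from groundwater after recharge:

```python
# Hypothetical mean annual fluxes (mm/yr) for one watershed.
precip = 1100.0
total_et = 700.0          # total evapotranspiration
riparian_et = 40.0        # portion of total ET taken from groundwater near streams
surface_runoff = 120.0

infiltration = precip - surface_runoff
recharge = infiltration - (total_et - riparian_et)  # upland ET intercepted before the water table
base_flow = recharge - riparian_et                  # groundwater discharge to streams
net_outflow = surface_runoff + base_flow            # should equal precip - total_et
```

The final identity (net outflow = precipitation minus total ET) is what lets long-term ET and precipitation estimates anchor the whole budget.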

  17. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, effects of prior mean and variance are determined as a function of the amount of test data available.

  18. On the generalization of linear least mean squares estimation to quantum systems with non-commutative outputs

    Energy Technology Data Exchange (ETDEWEB)

    Amini, Nina H. [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); CNRS, Laboratoire des Signaux et Systemes (L2S) CentraleSupelec, Gif-sur-Yvette (France); Miao, Zibo; Pan, Yu; James, Matthew R. [Australian National University, ARC Centre for Quantum Computation and Communication Technology, Research School of Engineering, Canberra, ACT (Australia); Mabuchi, Hideo [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States)

    2015-12-15

    The purpose of this paper is to study the problem of generalizing the Belavkin-Kalman filter to the case where the classical measurement signal is replaced by a fully quantum non-commutative output signal. We formulate a least mean squares estimation problem that involves a non-commutative system as the filter processing the non-commutative output signal. We solve this estimation problem within the framework of non-commutative probability. Also, we find the necessary and sufficient conditions which make these non-commutative estimators physically realizable. These conditions are restrictive in practice. (orig.)

  19. Statistical properties of the anomalous scaling exponent estimator based on time-averaged mean-square displacement

    Science.gov (United States)

    Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis

    2017-08-01

    The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.
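    A minimal sketch of the estimator being analyzed, applied to ordinary Brownian motion (FBM with Hurst index H = 0.5, so the true scaling exponent is 2H = 1):

    ```python
    import numpy as np

    def tamsd(traj, lag):
        """Time-averaged mean-square displacement of a 1-D trajectory at a given lag."""
        disp = traj[lag:] - traj[:-lag]
        return np.mean(disp ** 2)

    def fit_exponent(traj, lags):
        """Estimate the anomalous scaling exponent as the slope of a
        linear fit of log TA-MSD against log lag."""
        lags = np.asarray(lags)
        log_msd = np.log([tamsd(traj, int(k)) for k in lags])
        slope, _intercept = np.polyfit(np.log(lags), log_msd, 1)
        return slope

    # A long Brownian trajectory should give an exponent close to 1.
    rng = np.random.default_rng(0)
    traj = np.cumsum(rng.standard_normal(100_000))
    alpha = fit_exponent(traj, np.arange(1, 21))
    ```

    For true FBM with H != 0.5 the same fit applies; the paper's contribution is characterizing the distribution of this estimator at finite trajectory length.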

  20. Comparison of in vivo and in sacco methods to estimate mean ...

    African Journals Online (AJOL)

    This method has some advantages, the most obvious being ... for the present feeding regime. Pilot experiments .... These values are very close to the observed value obtained at 8,52 h and 16,12 h using the in vivo method on the same two diets. Thus, when an estimate of the outflow of fermentable OM is included, the ...

  1. Variance estimates for transport in stochastic media by means of the master equation

    International Nuclear Information System (INIS)

    Pautz, S. D.; Franke, B. C.; Prinja, A. K.

    2013-01-01

    The master equation has been used to examine properties of transport in stochastic media. It has been shown previously that not only may the Levermore-Pomraning (LP) model be derived from the master equation for a description of ensemble-averaged transport quantities, but also that equations describing higher-order statistical moments may be obtained. We examine in greater detail the equations governing the second moments of the distribution of the angular fluxes, from which variances may be computed. We introduce a simple closure for these equations, as well as several models for estimating the variances of derived transport quantities. We revisit previous benchmarks for transport in stochastic media in order to examine the error of these new variance models. We find, not surprisingly, that the errors in these variance estimates are at least as large as the corresponding estimates of the average, and sometimes much larger. We also identify patterns in these variance estimates that may help guide the construction of more accurate models. (authors)

  2. On the precision of an estimator of Mean for Domains in Double ...

    African Journals Online (AJOL)

    The results show that there is a positive contribution to the variance of the estimator which varies from one stratum to another. This addition vanishes where the domain coincides with a stratum. The total sampling variance depends only on components of variance for the domain and is inversely related to the total sample ...

  3. Using cost-effectiveness estimates from survey data to guide commissioning: an application to home care.

    Science.gov (United States)

    Forder, Julien; Malley, Juliette; Towers, Ann-Marie; Netten, Ann

    2014-08-01

    The aim is to describe and trial a pragmatic method to produce estimates of the incremental cost-effectiveness of care services from survey data. The main challenge is in estimating the counterfactual; that is, what the patient's quality of life would be if they did not receive that level of service. A production function method is presented, which seeks to distinguish the variation in care-related quality of life in the data that is due to service use as opposed to other factors. A problem is that relevant need factors also affect the amount of service used and therefore any missing factors could create endogeneity bias. Instrumental variable estimation can mitigate this problem. This method was applied to a survey of older people using home care as a proof of concept. In the analysis, we were able to estimate a quality-of-life production function using survey data with the expected form and robust estimation diagnostics. The practical advantages with this method are clear, but there are limitations. It is computationally complex, and there is a risk of misspecification and biased results, particularly with IV estimation. One strategy would be to use this method to produce preliminary estimates, with a full trial conducted thereafter, if indicated. Copyright © 2013 John Wiley & Sons, Ltd.
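    The endogeneity problem and its IV remedy can be sketched with a toy simulation (variable names and coefficients are illustrative, not the paper's specification): an unobserved need factor drives both service use and quality of life, biasing OLS, while an instrument that shifts only service use recovers the causal effect.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000

    # Unobserved "need" raises service use but lowers quality of life, so a
    # naive OLS regression of y on x is biased.  The instrument z shifts
    # service use without affecting quality of life directly.
    need = rng.standard_normal(n)
    z = rng.standard_normal(n)                       # instrument
    x = z + need + rng.standard_normal(n)            # service use
    y = 0.5 * x - need + rng.standard_normal(n)      # true effect of x is 0.5

    beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)  # biased (~0.17 here)
    beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]  # IV/Wald estimator (~0.5)
    ```

    The just-identified Wald ratio shown here is the simplest instrumental-variable estimator; full applications would use two-stage least squares with covariates.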

  4. On the real-time estimation of the wheel-rail contact force by means of a new nonlinear estimator design model

    Science.gov (United States)

    Strano, Salvatore; Terzo, Mario

    2018-05-01

    The dynamics of railway vehicles are strongly influenced by the interaction between wheel and rail. This contact is affected by several conditioning factors, such as vehicle speed, wear, and adhesion level, and is, moreover, nonlinear. As a consequence, modelling and observing this phenomenon are complex tasks, but at the same time they constitute a fundamental step in estimating the adhesion level and in vehicle condition monitoring. This paper presents a novel technique for the real-time estimation of the wheel-rail contact forces, based on an estimator design model that accounts for the nonlinearities of the interaction by means of a fitting model designed to reproduce the contact mechanics over a wide range of slip and to be easily integrated into a complete model-based estimator for railway vehicles.

  5. Approximating the variance of estimated means for systematic random sampling, illustrated with data of the French Soil Monitoring Network

    NARCIS (Netherlands)

    Brus, D.J.; Saby, N.P.A.

    2016-01-01

    In France like in many other countries, the soil is monitored at the locations of a regular, square grid thus forming a systematic sample (SY). This sampling design leads to good spatial coverage, enhancing the precision of design-based estimates of spatial means and totals. Design-based

  6. Estimating space-time mean concentrations of nutrients in surface waters of variable depth

    NARCIS (Netherlands)

    Knotters, M.; Brus, D.J.

    2010-01-01

    A monitoring scheme has been designed to test whether the space-time mean concentration total Nitrogen (N-total) in the surface water in the Northern Frisian Woodlands (NFW, The Netherlands) complies with standards of the European Water Framework directive. Since in statistical testing for

  7. Estimating the North Atlantic mean dynamic topography and geostrophic currents with GOCE

    DEFF Research Database (Denmark)

    Bingham, Rory J.; Knudsen, Per; Andersen, Ole Baltazar

    2011-01-01

    Three GOCE gravity models were released in July 2010 based on two months of observations. Subsequently, two second generation models, based on 8 months of observations, were released in March 2011. This paper compares these five models in terms of the mean North Atlantic circulation that can be d...

  8. Low Complexity Sparse Bayesian Learning for Channel Estimation Using Generalized Mean Field

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri

    2014-01-01

    We derive low complexity versions of a wide range of algorithms for sparse Bayesian learning (SBL) in underdetermined linear systems. The proposed algorithms are obtained by applying the generalized mean field (GMF) inference framework to a generic SBL probabilistic model. In the GMF framework, we...

  9. Abrupt change in mean using block bootstrap and avoiding variance estimation

    Czech Academy of Sciences Publication Activity Database

    Peštová, Barbora; Pešta, M.

    2018-01-01

    Roč. 33, č. 1 (2018), s. 413-441 ISSN 0943-4062 Grant - others: GA ČR(CZ) GJ15-04774Y Institutional support: RVO:67985807 Keywords: Block bootstrap * Change in mean * Change point * Hypothesis testing * Ratio type statistics * Robustness Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.434, year: 2016

  10. Nitrogen Dynamics in the Westerschelde Estuary (Sw Netherlands) Estimated by Means of the Ecosystem Model Moses

    NARCIS (Netherlands)

    Soetaert, K.E.R.; Herman, P.M.J.

    1995-01-01

    A tentative nitrogen budget for the Westerschelde (SW Netherlands) is constructed by means of a simulation model with thirteen spatial compartments. Biochemical and chemical processes in the water column are dynamically modeled; fluxes of dissolved constituents across the water-bottom interface are

  11. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

    NARCIS (Netherlands)

    Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2016-01-01

    This paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main

  12. Expected shortfall estimation for apparently infinite-mean models of operational risk

    NARCIS (Netherlands)

    Cirillo, P.; Taleb, Nassim Nicholas

    2016-01-01

    Statistical analyses on actual data depict operational risk as an extremely heavy-tailed phenomenon, able to generate losses so extreme as to suggest the use of infinite-mean models. But no loss can actually destroy more than the entire value of a bank or of a company, and this upper bound should be

  13. Effect of co-operative fuzzy c-means clustering on estimates of three ...

    Indian Academy of Sciences (India)

    infinite isotropic elastic media in concise matrix ... hydrate and free gas accumulation. 2. AVA method ... wave propagation across the boundaries of hori- zontally .... Flow chart showing the sequence of steps in the present scheme of fuzzy c-mean clustering adapted for AVA ... porosity 0.38, OIL API 28.5, brine salinity 0.07, ...

  14. Estimating factors influencing the detection probability of semiaquatic freshwater snails using quadrat survey methods

    Science.gov (United States)

    Roesler, Elizabeth L.; Grabowski, Timothy B.

    2018-01-01

    Developing effective monitoring methods for elusive, rare, or patchily distributed species requires extra considerations, such as imperfect detection. Although detection is frequently modeled, the opportunity to assess it empirically is rare, particularly for imperiled species. We used Pecos assiminea (Assiminea pecos), an endangered semiaquatic snail, as a case study to test detection and accuracy issues surrounding quadrat searches. Quadrats (9 × 20 cm; n = 12) were placed in suitable Pecos assiminea habitat and randomly assigned a treatment, defined as the number of empty snail shells (0, 3, 6, or 9). Ten observers rotated through each quadrat, conducting 5-min visual searches for shells. The probability of detecting a shell when present was 67.4 ± 3.0%, but it decreased with increasing litter depth and with fewer shells present. The mean (± SE) observer accuracy was 25.5 ± 4.3%. Accuracy was positively correlated with the number of shells in the quadrat and negatively correlated with the number of times a quadrat was searched. The results indicate quadrat surveys likely underrepresent true abundance but accurately determine presence or absence. Understanding detection and accuracy of elusive, rare, or imperiled species improves density estimates and aids in monitoring and conservation efforts.
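    A back-of-envelope sketch of what the reported per-search detection probability implies for repeated searches (using the paper's ~67.4% point estimate; the independence of repeat searches is an assumption for illustration):

    ```python
    import math

    def detection_probability(detections, trials):
        """Point estimate and binomial standard error of the per-search
        detection probability from searches with known shell presence."""
        p = detections / trials
        se = math.sqrt(p * (1 - p) / trials)
        return p, se

    def cumulative_detection(p, searches):
        """Probability that at least one of several independent searches
        detects a shell that is actually present."""
        return 1 - (1 - p) ** searches

    # With per-search detection of ~0.674, three independent searches
    # would miss a present shell only about 3.5% of the time.
    p_star = cumulative_detection(0.674, 3)
    ```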

  15. Errors of Mean Dynamic Topography and Geostrophic Current Estimates in China's Marginal Seas from GOCE and Satellite Altimetry

    DEFF Research Database (Denmark)

    Jin, Shuanggen; Feng, Guiping; Andersen, Ole Baltazar

    2014-01-01

    The Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) and satellite altimetry can provide very detailed and accurate estimates of the mean dynamic topography (MDT) and geostrophic currents in China's marginal seas, using, for example, the newest high-resolution GOCE gravity field model GO-CONS-GCF-2-TIM-R4 and the new Centre National d'Etudes Spatiales mean sea surface model MSS_CNES_CLS_11 from satellite altimetry. However, errors and uncertainties of MDT and geostrophic current estimates from satellite observations are not generally quantified. In this paper, errors and uncertainties of MDT and geostrophic current estimates from satellite gravimetry and altimetry are investigated and evaluated in China's marginal seas. The cumulative error in MDT from GOCE is reduced from 22.75 to 9.89 cm when compared to the Gravity Recovery and Climate Experiment (GRACE) gravity field model ITG-Grace2010 results.

  16. A Theoretically Consistent Method for Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2014-01-01

    We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others. The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCC's), cepstral mean-subtracted MFCC's (CMS-MFCC's), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account other compressive non-linearities than the logarithmic which is usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non...

  17. The association of estimated salt intake with blood pressure in a Viet Nam national survey.

    Directory of Open Access Journals (Sweden)

    Paul N Jensen

    To evaluate the association of salt consumption with blood pressure in Viet Nam, a developing country with a high level of salt consumption. Analysis of a nationally representative sample of Vietnamese adults 25-65 years of age who were surveyed using the World Health Organization STEPwise approach to Surveillance protocol. Participants who reported acute illness, pregnancy, or current use of antihypertensive medications were excluded. Daily salt consumption was estimated from fasting mid-morning spot urine samples. Associations of salt consumption with systolic blood pressure and prevalent hypertension were assessed using adjusted linear and generalized linear models. Interaction terms were tested to assess differences by age, smoking, alcohol consumption, and rural/urban status. The analysis included 2,333 participants (mean age: 37 years, 46% male, 33% urban). The average estimated salt consumption was 10g/day. No associations of salt consumption with blood pressure or prevalent hypertension were observed at a national scale in men or women. The associations did not differ in subgroups defined by age, smoking, or alcohol consumption; however, associations differed between urban and rural participants (p-value for interaction of urban/rural status with salt consumption, p = 0.02), suggesting that higher salt consumption may be associated with higher systolic blood pressure in urban residents but lower systolic blood pressure in rural residents. Although there was no evidence of an association at a national level, associations of salt consumption with blood pressure differed between urban and rural residents in Viet Nam. The reasons for this differential association are not clear, and given the large rate of rural to urban migration experienced in Viet Nam, this topic warrants further investigation.

  18. The association of estimated salt intake with blood pressure in a Viet Nam national survey.

    Science.gov (United States)

    Jensen, Paul N; Bao, Tran Quoc; Huong, Tran Thi Thanh; Heckbert, Susan R; Fitzpatrick, Annette L; LoGerfo, James P; Ngoc, Truong Le Van; Mokdad, Ali H

    2018-01-01

    To evaluate the association of salt consumption with blood pressure in Viet Nam, a developing country with a high level of salt consumption. Analysis of a nationally representative sample of Vietnamese adults 25-65 years of age who were surveyed using the World Health Organization STEPwise approach to Surveillance protocol. Participants who reported acute illness, pregnancy, or current use of antihypertensive medications were excluded. Daily salt consumption was estimated from fasting mid-morning spot urine samples. Associations of salt consumption with systolic blood pressure and prevalent hypertension were assessed using adjusted linear and generalized linear models. Interaction terms were tested to assess differences by age, smoking, alcohol consumption, and rural/urban status. The analysis included 2,333 participants (mean age: 37 years, 46% male, 33% urban). The average estimated salt consumption was 10g/day. No associations of salt consumption with blood pressure or prevalent hypertension were observed at a national scale in men or women. The associations did not differ in subgroups defined by age, smoking, or alcohol consumption; however, associations differed between urban and rural participants (p-value for interaction of urban/rural status with salt consumption, p = 0.02), suggesting that higher salt consumption may be associated with higher systolic blood pressure in urban residents but lower systolic blood pressure in rural residents. Although there was no evidence of an association at a national level, associations of salt consumption with blood pressure differed between urban and rural residents in Viet Nam. The reasons for this differential association are not clear, and given the large rate of rural to urban migration experienced in Viet Nam, this topic warrants further investigation.

  19. Estimation of breeding values for mean and dispersion, their variance and correlation using double hierarchical generalized linear models.

    Science.gov (United States)

    Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L

    2012-12-01

    The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iterative reweighted least square (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. The IRWLS is applied on the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0·52 for IRWLS and -0·62 in Sorensen & Waagepetersen (2003).

  20. Fatigue life estimation of welded components considering welding residual stress relaxation and its mean stress effect

    International Nuclear Information System (INIS)

    Han, Seung Ho; Han, Jeong Woo; Shin, Byung Chun; Kim, Jae Hoon

    2003-01-01

    The fatigue life of welded joints is sensitive to welding residual stress and to the complexity of their geometric shapes. To predict fatigue life more reasonably, the effects of welding residual stress and its relaxation on fatigue strength should be considered quantitatively; these are often regarded as equivalent to the effects of mean stresses imposed by external loads. The hot-spot stress concept should also be adopted, as it reduces the dependence of fatigue strength on the particular welding detail. Considering the factors mentioned above, a fatigue life prediction model using the modified Goodman diagram was proposed. In this model, an equivalent stress was introduced, composed of the mean stress based on the hot-spot stress concept and the relaxed welding residual stress. Verification of the proposed model against real welding details showed that it can reasonably predict their fatigue lives.
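    The equivalent-stress idea can be sketched with the modified Goodman relation, treating the relaxed residual stress as an additional mean stress (all numerical values below are illustrative assumptions, not the paper's data):

    ```python
    def goodman_equivalent_amplitude(stress_amp, mean_stress, ultimate_strength):
        """Fully reversed stress amplitude equivalent, per the modified
        Goodman relation, to a cycle with the given amplitude and mean
        stress: sigma_ar = sigma_a / (1 - sigma_m / sigma_u)."""
        if mean_stress >= ultimate_strength:
            raise ValueError("mean stress must be below the ultimate strength")
        return stress_amp / (1.0 - mean_stress / ultimate_strength)

    # Illustrative values in MPa: the effective mean stress is the applied
    # mean stress plus the relaxed welding residual stress.
    sigma_a = 80.0             # hot-spot stress amplitude
    sigma_mean_applied = 40.0  # mean stress from external loads
    sigma_residual = 60.0      # relaxed welding residual stress
    sigma_u = 400.0            # ultimate tensile strength

    sigma_ar = goodman_equivalent_amplitude(
        sigma_a, sigma_mean_applied + sigma_residual, sigma_u
    )  # 80 / (1 - 100/400) ~ 106.7 MPa
    ```

    The equivalent amplitude then enters an S-N curve to estimate fatigue life; the paper's model additionally quantifies how much residual stress relaxes under load.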

  1. A method of estimating hydrogen in solid and liquid samples by means of neutron thermalisation

    International Nuclear Information System (INIS)

    Carter, D.H.; Sanders, J.E.

    1967-06-01

    The count-rate of a cadmium-covered Pu239 fission chamber placed in a reactor neutron flux increases when a hydrogen-containing material is inserted due to the thermalisation of epicadmium neutrons. This effect forms the basis of a non-destructive method of estimating hydrogen in solid or liquid samples, and trial experiments to demonstrate the principles have been made. The sensitivity is such that hydrogen down to 10 p.p.m. in a typical metal should be detected. A useful feature of the method is its very low response to elements other than hydrogen. (author)

  2. Mean consumption, poverty and inequality in rural India in the 60th round of the National Sample Survey.

    Science.gov (United States)

    Jha, Raghbendra; Gaiha, Raghav; Sharma, Anurag

    2010-01-01

    This article reports on mean consumption, poverty (all three FGT measures) and inequality during 2004 for rural India using National Sample Survey (NSS) data for the 60th Round. Mean consumption at the national level is much higher than the poverty line. However, the Gini coefficient is higher than in recent earlier rounds. The headcount ratio is 22.9 per cent. Mean consumption, all three measures of poverty and the Gini coefficient are computed at the level of 20 states and 63 agro-climatic zones in these 20 states. It is surmised that, despite impressive growth rates, deprivation is pervasive, pockets of severe poverty persist, and inequality is rampant.
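    The three FGT measures and the Gini coefficient mentioned above can be computed directly from unit-level consumption data; a minimal sketch (not the NSS estimation code, which would also apply survey weights):

    ```python
    def fgt(consumption, poverty_line, alpha):
        """Foster-Greer-Thorbecke poverty index.

        alpha = 0: headcount ratio; alpha = 1: poverty gap;
        alpha = 2: squared poverty gap (severity).
        """
        shortfalls = [
            ((poverty_line - c) / poverty_line) ** alpha
            for c in consumption if c < poverty_line
        ]
        return sum(shortfalls) / len(consumption)

    def gini(consumption):
        """Gini coefficient via the sorted-rank (mean difference) formula."""
        x = sorted(consumption)
        n = len(x)
        mean = sum(x) / n
        cum = sum((2 * i - n - 1) * xi for i, xi in enumerate(x, start=1))
        return cum / (n * n * mean)

    # Toy data: two of four households below a poverty line of 100.
    spend = [50, 80, 120, 200]
    headcount = fgt(spend, 100, 0)   # 0.5
    gap = fgt(spend, 100, 1)         # 0.175
    ```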

  3. Improving estimates of numbers of children with severe acute malnutrition using cohort and survey data

    DEFF Research Database (Denmark)

    Isanaka, Sheila; Boundy, Ellen O neal; Grais, Rebecca F

    2016-01-01

    Severe acute malnutrition (SAM) is reported to affect 19 million children worldwide. However, this estimate is based on prevalence data from cross-sectional surveys and can be expected to miss some children affected by an acute condition such as SAM. The burden of acute conditions is more...

  4. Improved sampling for airborne surveys to estimate wildlife population parameters in the African Savannah

    NARCIS (Netherlands)

    Khaemba, W.; Stein, A.

    2002-01-01

    Parameter estimates, obtained from airborne surveys of wildlife populations, often have large bias and large standard errors. Sampling error is one of the major causes of this imprecision and the occurrence of many animals in herds violates the common assumptions in traditional sampling designs like

  5. Can i just check...? Effects of edit check questions on measurement error and survey estimates

    NARCIS (Netherlands)

    Lugtig, Peter; Jäckle, Annette

    2014-01-01

    Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to

  6. Air kerma rate estimation by means of in-situ gamma spectrometry: A Bayesian approach

    International Nuclear Information System (INIS)

    Cabal, Gonzalo; Kluson, Jaroslav

    2008-01-01

    Full text: Bayesian inference is used to determine the air kerma rate based on a set of in situ environmental gamma spectra measurements performed with a NaI(Tl) scintillation detector. A natural advantage of such an approach is the possibility to quantify uncertainty not only in the air kerma rate estimate but also in the gamma spectrum, which is unfolded within the procedure. The measurements were performed using a 3'' x 3'' NaI(Tl) scintillation detector. The response matrices of the detection system were calculated using a Monte Carlo code. For the calculation of the spectra as well as the air kerma rate, the WinBUGS program was used. WinBUGS is dedicated software for Bayesian inference using Markov chain Monte Carlo (MCMC) methods. The results of these calculations are shown and compared with non-Bayesian approaches such as the Scofield-Gold iterative method and the maximum entropy method

  7. Quality control, mean glandular dose estimate and room shielding calculation in mammography

    International Nuclear Information System (INIS)

    Rakotomalala, H.M.

    2014-01-01

    This study focuses on the importance of radiation protection in mammography. Good control of the radiological risk depends on dose optimization, room shielding calculation, and the quality of the equipment. The work was carried out in three private medical centers, called A, B, and C. Dosimetry estimates were made on the equipment of the three centers, and the values were compared with the diagnostic reference levels established by the International Atomic Energy Agency (IAEA). Conformity control of the radiological devices was also performed with the mammographic quality control kit of the INSTN-Madagascar. Verification of the shielding of the room containing the mammography equipment was done by theoretical calculations using the method provided by NCRP 147. [fr]

  8. A quantitative model for estimating mean annual soil loss in cultivated land using 137Cs measurements

    International Nuclear Information System (INIS)

    Yang Hao; Zhao Qiguo; Du Mingyuan; Minami, Katsuyuki; Hatta, Tamao

    2000-01-01

    The radioisotope 137Cs has been widely used to determine rates of cultivated soil loss. Many calibration relationships (including both empirical relationships and theoretical models) have been employed to estimate erosion rates from the amount of 137Cs lost from the cultivated soil profile. However, important limitations restrict the reliability of these models, which consider only a uniform distribution of 137Cs in the plough layer and a fixed plough depth. As a result, erosion rates may be overestimated or underestimated. This article presents a quantitative model for the relation between the amount of 137Cs lost from the cultivated soil profile and the rate of soil erosion. The model is based on a mass balance and includes the following parameters: the remaining fraction of the surface enrichment layer (F_R), the thickness of the surface enrichment layer (H_s), the depth of the plough layer (H_p), the input fraction of the total 137Cs fallout deposition during a given year t (F_t), the radioactive decay of 137Cs (k), and the sampling year (t). The simulation results showed that erosion rates estimated using this model are very sensitive to changes in the values of the parameters F_R, H_s, and H_p. We also observed that the relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic, but complex. Although the model is an improvement over existing approaches to deriving calibration relationships for cultivated soil, it requires empirical information on local soil properties and the behavior of 137Cs in the soil profile. There is clearly still a need for more precise information on the latter aspect and, in particular, on the retention of 137Cs fallout in the top few millimeters of the soil profile and on the enrichment and depletion effects associated with soil redistribution (i.e. for determining accurate values of F_R and H_s). (author)
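    For contrast, one of the uniform-distribution calibrations that this kind of model improves upon is the simple proportional model, in which the fraction of 137Cs lost equals the fraction of plough-layer mass eroded since fallout onset. A hedged sketch (the 1963 onset year and unit bookkeeping follow the standard formulation; treat this as illustrative, not the paper's model):

    ```python
    def proportional_model_erosion(inventory_loss_pct, plough_depth_m,
                                   bulk_density_kg_m3, sample_year,
                                   onset_year=1963):
        """Annual soil loss (t ha^-1 yr^-1) under the simple proportional
        model: percentage 137Cs inventory loss is assumed proportional to
        the plough-layer mass eroded since fallout onset."""
        # 1 kg/m2 of soil equals 10 t/ha, so plough-layer mass per hectare:
        plough_mass_t_per_ha = 10.0 * bulk_density_kg_m3 * plough_depth_m
        years = sample_year - onset_year
        return plough_mass_t_per_ha * (inventory_loss_pct / 100.0) / years

    # Illustrative: 20% inventory loss, 0.20 m plough depth, bulk density
    # 1300 kg/m3, sampled in 2000 -> about 14 t/ha/yr.
    rate = proportional_model_erosion(20.0, 0.20, 1300.0, 2000)
    ```

    The article's point is precisely that such uniform-mixing calibrations ignore the surface enrichment layer (F_R, H_s) and can therefore over- or underestimate the true rate.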

  9. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information.

    Science.gov (United States)

    Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-04-30

    Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Comparison of NIS and NHIS/NIPRCS vaccination coverage estimates. National Immunization Survey. National Health Interview Survey/National Immunization Provider Record Check Study.

    Science.gov (United States)

    Bartlett, D L; Ezzati-Rice, T M; Stokley, S; Zhao, Z

    2001-05-01

    The National Immunization Survey (NIS) and the National Health Interview Survey (NHIS) produce national coverage estimates for children aged 19 months to 35 months. The NIS is a cost-effective, random-digit-dialing telephone survey that produces national and state-level vaccination coverage estimates. The National Immunization Provider Record Check Study (NIPRCS) is conducted in conjunction with the annual NHIS, which is a face-to-face household survey. As the NIS is a telephone survey, potential coverage bias exists as the survey excludes children living in nontelephone households. To assess the validity of estimates of vaccine coverage from the NIS, we compared 1995 and 1996 NIS national estimates with results from the NHIS/NIPRCS for the same years. Both the NIS and the NHIS/NIPRCS produce similar results. The NHIS/NIPRCS supports the findings of the NIS.

  11. Simple method to estimate mean heart dose from Hodgkin lymphoma radiation therapy according to simulation X-rays.

    Science.gov (United States)

    van Nimwegen, Frederika A; Cutter, David J; Schaapveld, Michael; Rutten, Annemarieke; Kooijman, Karen; Krol, Augustinus D G; Janus, Cécile P M; Darby, Sarah C; van Leeuwen, Flora E; Aleman, Berthe M P

    2015-05-01

    To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case-control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor-intensive representative CT-based method. This simpler method may produce a

  12. Simple Method to Estimate Mean Heart Dose From Hodgkin Lymphoma Radiation Therapy According to Simulation X-Rays

    Energy Technology Data Exchange (ETDEWEB)

    Nimwegen, Frederika A. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Cutter, David J. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Oxford Cancer Centre, Oxford University Hospitals NHS Trust, Oxford (United Kingdom); Schaapveld, Michael [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Rutten, Annemarieke [Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands); Kooijman, Karen [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Krol, Augustinus D.G. [Department of Radiation Oncology, Leiden University Medical Center, Leiden (Netherlands); Janus, Cécile P.M. [Department of Radiation Oncology, Erasmus MC Cancer Center, Rotterdam (Netherlands); Darby, Sarah C. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Leeuwen, Flora E. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Aleman, Berthe M.P., E-mail: b.aleman@nki.nl [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam (Netherlands)

    2015-05-01

    Purpose: To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Methods and Materials: Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case–control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. Results: According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Conclusion: Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor
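
    The dose reconstruction described in both records above reduces to a one-line calculation. A minimal sketch, with all numbers illustrative rather than taken from the study:

```python
# Hedged sketch of the simulation X-ray estimate described in the abstract:
# mean heart dose ~ (fraction of cardiac contour inside the radiation field)
#                   * prescribed mediastinal dose / correction factor,
# where the correction factor comes from comparison with CT-based dosimetry.
# The input values below are invented for illustration.

def mean_heart_dose(fraction_in_field, prescribed_dose_gy, correction):
    return fraction_in_field * prescribed_dose_gy / correction

dose = mean_heart_dose(fraction_in_field=0.9, prescribed_dose_gy=36.0,
                       correction=1.08)
print(round(dose, 1))
```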

  13. Water quality of storm runoff and comparison of procedures for estimating storm-runoff loads, volume, event-mean concentrations, and the mean load for a storm for selected properties and constituents for Colorado Springs, southeastern Colorado, 1992

    Science.gov (United States)

    Von Guerard, Paul; Weiss, W.B.

    1995-01-01

    The U.S. Environmental Protection Agency requires that municipalities that have a population of 100,000 or greater obtain National Pollutant Discharge Elimination System permits to characterize the quality of their storm runoff. In 1992, the U.S. Geological Survey, in cooperation with the Colorado Springs City Engineering Division, began a study to characterize the water quality of storm runoff and to evaluate procedures for the estimation of storm-runoff loads, volume and event-mean concentrations for selected properties and constituents. Precipitation, streamflow, and water-quality data were collected during 1992 at five sites in Colorado Springs. Thirty-five samples were collected, seven at each of the five sites. At each site, three samples were collected for permitting purposes; two of the samples were collected during rainfall runoff, and one sample was collected during snowmelt runoff. Four additional samples were collected at each site to obtain a large enough sample size to estimate storm-runoff loads, volume, and event-mean concentrations for selected properties and constituents using linear-regression procedures developed using data from the Nationwide Urban Runoff Program (NURP). Storm-water samples were analyzed for as many as 186 properties and constituents. The constituents measured include total-recoverable metals, volatile organic compounds, acid-base/neutral organic compounds, and pesticides. Storm runoff sampled had large concentrations of chemical oxygen demand and 5-day biochemical oxygen demand. Chemical oxygen demand ranged from 100 to 830 milligrams per liter, and 5-day biochemical oxygen demand ranged from 14 to 260 milligrams per liter. Total-organic carbon concentrations ranged from 18 to 240 milligrams per liter. Of the total-recoverable metals analyzed, lead and zinc had the largest concentrations. Concentrations of lead ranged from 23 to 350 micrograms per liter, and concentrations of zinc ranged from 110

  14. The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals.

    Science.gov (United States)

    Brand, Andrew; Bradley, Michael T

    2016-02-01

    Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving the precision of effect size estimation.
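
    Why typical cell sizes produce CI widths above 1 can be seen from the common large-sample variance approximation for d between two independent groups. A hedged sketch (this approximation is not necessarily the one used in the surveyed articles):

```python
from math import sqrt

# Hedged sketch: approximate 95% CI width for Cohen's d, two independent
# groups, using the common large-sample approximation
#   Var(d) ~ (n1 + n2)/(n1*n2) + d**2 / (2*(n1 + n2)).

def ci_width_d(d, n1, n2, z=1.96):
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return 2 * z * sqrt(var)

# With cell sizes around 20 per group, the width already exceeds 1:
w = ci_width_d(d=0.5, n1=20, n2=20)
print(round(w, 2))
```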

  15. On the estimation of the steam generator maintenance efficiency by the means of probabilistic fracture mechanics

    International Nuclear Information System (INIS)

    Cizelj, L.

    1994-10-01

    In this report, an original probabilistic model is proposed to assess the efficiency of a particular maintenance strategy in terms of tube failure probability. The model concentrates on axial through-wall cracks in the residual-stress-dominated tube expansion transition zone. It is based on recent developments in probabilistic fracture mechanics and accounts for scatter in material, geometry and crack propagation data. Special attention has been paid to modelling the uncertainties connected to the non-destructive examination technique (e.g., measurement errors, non-detection probability). First- and second-order reliability methods (FORM and SORM) have been implemented to calculate the failure probabilities. This is the first time that these methods have been applied to the reliability analysis of components containing stress-corrosion cracks. In order to predict the time development of the tube failure probabilities, an original linear elastic fracture mechanics based crack propagation model has been developed. It accounts for the residual and operating stresses together. Also, the model accounts for scatter in residual and operational stresses due to random variations in tube geometry and material data. Due to the lack of reliable crack velocity versus load data, the non-destructive examination records of crack propagation have been employed to estimate the velocities at the crack tips. (orig./GL) [de]
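
    The report uses FORM/SORM; the quantity those methods approximate, a failure probability P(crack length exceeds critical length), can be sketched more simply with plain Monte Carlo. All distribution parameters below are invented, not taken from the report:

```python
import random

# Hedged Monte Carlo sketch (a simpler stand-in for FORM/SORM): estimate
# P(a >= a_crit), where a is the crack length after growth and a_crit the
# critical length from fracture mechanics. Parameters are hypothetical.

def failure_probability(n=200_000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        a = rng.lognormvariate(0.5, 0.4)       # crack length, mm (assumed)
        a_crit = rng.normalvariate(8.0, 1.0)   # critical length, mm (assumed)
        if a >= a_crit:
            failures += 1
    return failures / n

p = failure_probability()
print(p)
```

    FORM/SORM reach comparable answers analytically, which matters when the target probability is too small to sample efficiently.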

  16. Trial latencies estimation of event-related potentials in EEG by means of genetic algorithms

    Science.gov (United States)

    Da Pelo, P.; De Tommaso, M.; Monaco, A.; Stramaglia, S.; Bellotti, R.; Tangaro, S.

    2018-04-01

    Objective. Event-related potentials (ERPs) are usually obtained by averaging, thus neglecting the trial-to-trial latency variability in cognitive electroencephalography (EEG) responses. As a consequence, the shape and the peak amplitude of the averaged ERP are smeared and reduced, respectively, when the single-trial latencies show a relevant variability. To date, the majority of the methodologies for single-trial latency inference are iterative schemes providing suboptimal solutions, the most commonly used being Woody’s algorithm. Approach. In this study, a global approach is developed by introducing a fitness function whose global maximum corresponds to the set of latencies which renders the trial signals as aligned as possible. A suitable genetic algorithm has been implemented to solve the optimization problem, characterized by new genetic operators tailored to the present problem. Main results. The results, on simulated trials, showed that the proposed algorithm performs better than Woody’s algorithm in all conditions, at the cost of an increased computational complexity (justified by the improved quality of the solution). Application of the proposed approach to real data trials resulted in an increased correlation between latencies and reaction times with respect to the output of the RIDE method. Significance. The above-mentioned results on simulated and real data indicate that the proposed method, providing a better estimate of single-trial latencies, will open the way to more accurate study of neural responses as well as to the issue of relating the variability of latencies to the proper cognitive and behavioural correlates.
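
    The paper's tailored genetic algorithm is not reproduced here, but the objective it maximizes can be sketched: a fitness that grows when per-trial latency shifts bring the trials into alignment. One simple such criterion (an assumption, not necessarily the paper's exact fitness) is the energy of the latency-corrected average:

```python
# Hedged sketch of an alignment fitness: the energy of the latency-corrected
# trial average, which is maximal when the chosen latencies align the trials.
# The GA that would search over latencies is omitted.

def fitness(trials, latencies, window):
    """trials: equal-length sample lists; latencies: one shift per trial."""
    avg = [0.0] * window
    for trial, lag in zip(trials, latencies):
        for i in range(window):
            avg[i] += trial[i + lag]
    avg = [v / len(trials) for v in avg]
    return sum(v * v for v in avg)  # aligned trials -> larger energy

# Two toy trials containing the same bump at different latencies:
bump = [0, 1, 2, 1, 0]
t1 = [0] * 2 + bump + [0] * 3   # bump starts at sample 2
t2 = [0] * 4 + bump + [0] * 1   # bump starts at sample 4
trials = [t1, t2]
aligned = fitness(trials, [2, 4], window=5)
misaligned = fitness(trials, [0, 0], window=5)
assert aligned > misaligned
```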

  17. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range

    OpenAIRE

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-01-01

    Background In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. Methods In this paper, we propose to improve the existing literature in ...
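
    The kind of conversion the abstract describes can be sketched for the scenario where a study reports the median m, minimum a, maximum b, and sample size n. The formulas below follow the estimators published in Wan et al. (2014); treat the exact constants as the paper's, not a general truth:

```python
from statistics import NormalDist

# Hedged sketch of the Wan et al. (2014) estimators for the scenario
# (median, min, max, n):
#   mean ~ (a + 2*m + b) / 4
#   sd   ~ (b - a) / (2 * Phi_inv((n - 0.375) / (n + 0.25)))

def estimate_mean_sd(a, m, b, n):
    mean = (a + 2 * m + b) / 4
    z = NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    sd = (b - a) / (2 * z)
    return mean, sd

mean, sd = estimate_mean_sd(a=10, m=20, b=34, n=25)
print(round(mean, 2), round(sd, 2))
```

    The sample size enters only through the denominator of the SD formula: larger trials make the observed range wider relative to the true SD, so the divisor grows with n.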

  18. Estimating the Impact of Means-tested Subsidies under Treatment Externalities with Application to Anti-Malarial Bednets

    DEFF Research Database (Denmark)

    Bhattacharya, Debopam; Dupas, Pascaline; Kanaya, Shin

    and its neighbors. Using experimental data from Kenya where subsidies were randomized, coupled with GPS-based location information, we show how to estimate aggregate ITN use resulting from means-tested subsidies in the presence of such spatial spillovers. Accounting for spillovers introduces infinite-dimensional estimated regressors corresponding to continuously distributed location coordinates and makes the inference problem novel. We show that even if individual ITN use unambiguously increases with increasing incidence of subsidy in the neighborhood, ignoring spillovers may over- or under-predict overall ITN use resulting from a specific targeting rule, depending on the resulting aggregate incidence of subsidy. Applying our method to the Kenyan data, we find that (i) individual ITN use rises with neighborhood subsidy-rates, (ii) under means-testing, predicted ITN use is a convex increasing function of the subsidy...

  19. Estimate of the global-scale joule heating rates in the thermosphere due to time mean currents

    International Nuclear Information System (INIS)

    Roble, R.G.; Matsushita, S.

    1975-01-01

    An estimate of the global-scale joule heating rates in the thermosphere is made based on derived global equivalent overhead electric current systems in the dynamo region during geomagnetically quiet and disturbed periods. The equivalent total electric field distribution is calculated from Ohm's law. The global-scale joule heating rates are calculated for various monthly average periods in 1965. The calculated joule heating rates maximize at high latitudes in the early evening and postmidnight sectors. During geomagnetically quiet times the daytime joule heating rates are considerably lower than heating by solar EUV radiation. However, during geomagnetically disturbed periods the estimated joule heating rates increase by an order of magnitude and can locally exceed the solar EUV heating rates. The results show that joule heating is an important and at times the dominant energy source at high latitudes. However, the global mean joule heating rates calculated near solar minimum are generally small compared to the global mean solar EUV heating rates. (auth)

  20. Meaning in life in the Federal Republic of Germany: results of a representative survey with the Schedule for Meaning in Life Evaluation (SMiLE)

    Directory of Open Access Journals (Sweden)

    Bausewein Claudia

    2007-11-01

    Background The construct "meaning-in-life" (MiL) has recently raised the interest of clinicians working in psycho-oncology and end-of-life care and has become a topic of scientific investigation. Difficulties regarding the measurement of MiL are related to the various theoretical and conceptual approaches and its inter-individual variability. Therefore the "Schedule for Meaning in Life Evaluation" (SMiLE), an individualized instrument for the assessment of MiL, was developed. The aim of this study was to evaluate MiL in a representative sample of the German population. Methods In the SMiLE, the respondents first indicate a minimum of three and a maximum of seven areas which provide meaning to their life before rating their current level of importance and satisfaction of each area. Indices of total weighting (IoW, range 20–100), total satisfaction (IoS, range 0–100), and total weighted satisfaction (IoWS, range 0–100) are calculated. Results In July 2005, 1,004 Germans were randomly selected and interviewed (inclusion rate, 85.3%). 3,521 areas of MiL were listed and assigned to 13 a-posteriori categories. The mean IoS was 81.9 ± 15.1, the mean IoW was 84.6 ± 11.9, and the mean IoWS was 82.9 ± 14.8. In youth (16–19 y/o), "friends" were most important for MiL; in young adulthood (20–29 y/o), "partnership"; in middle adulthood (30–39 y/o), "work"; during retirement (60–69 y/o), "health" and "altruism"; and in advanced age (70 y/o and older), "spirituality/religion" and "nature experience/animals". Conclusion This study is a first nationwide survey on individual MiL in a randomly selected, representative sample. The MiL areas of the age stages seem to correspond with Erikson's stages of psychosocial development.
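
    The three SMiLE indices can be sketched from per-area ratings. The rating scales assumed below (importance 1-5, satisfaction 0-7) and the rescaling to the published ranges are assumptions for illustration; the instrument's exact scales are not given in the abstract:

```python
# Hedged sketch of the SMiLE indices, ASSUMING importance is rated 1..5 and
# satisfaction 0..7 per area, rescaled so IoW lies in 20-100 and IoS/IoWS
# in 0-100. The actual SMiLE scales may differ.

def smile_indices(areas):
    """areas: list of (importance 1..5, satisfaction 0..7) tuples."""
    k = len(areas)
    iow = sum(imp for imp, _ in areas) / k * 20        # mean 1..5 -> 20..100
    ios = sum(sat for _, sat in areas) / k / 7 * 100   # mean 0..7 -> 0..100
    iows = (sum(imp * sat for imp, sat in areas)
            / sum(imp for imp, _ in areas)) / 7 * 100  # weighted 0..100
    return iow, ios, iows

# Three hypothetical areas, e.g. family, friends, work:
iow, ios, iows = smile_indices([(5, 6), (4, 7), (3, 4)])
print(round(iow), round(ios), round(iows))
```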

  1. Estimation of the sauter mean diameter for biodiesels by the mixture topological index

    Energy Technology Data Exchange (ETDEWEB)

    Shu, Qing; Gao, Jixian; Liao, Yuhui; Wang, Dezheng; Wang, Jinfu [Beijing Key Laboratory of Green Chemical Reaction Engineering and Technology, Department of Chemical Engineering, Tsinghua University, Beijing 100084 (China)

    2011-02-15

    A pure-component topological index was integrated with the modified Grunberg-Nissan or the modified Dalton-type mass-average equation to calculate the mean topological index χ_m,1 or χ_m,2 of five biodiesel fuels (peanut, canola, coconut, palm, and soybean oil). Then, χ_m,1 or χ_m,2 was respectively related to the SMD values of the biodiesel fuels (taken from the literature, at 313 K), and two regression equations were obtained. The regression equation derived from χ_m,1 has a higher predictive accuracy than the regression equation derived from χ_m,2; the deviations of these two regression equations were within 1.73% and 1.87%, respectively. Furthermore, the regression equation derived from the correlation of χ_m,1 and SMD was used to calculate the SMD values of biodiesel fuels at 353 K, and the deviations were within 0.78%. Three types of hypothetical biodiesel fuels were investigated to examine the effect of the molecular structure (carbon number and unsaturated bonds) on the SMD. A suitable material for the preparation of a biodiesel with atomization comparable to that of no. 1 or no. 2 diesel will be composed of components with a low carbon number and more unsaturated bonds. (author)

  2. Assessing damage cost estimation of urban pluvial flood risk as a mean of improving climate change adaptations investments

    DEFF Research Database (Denmark)

    Skovgård Olsen, Anders; Zhou, Qianqian; Linde, Jens Jørgen

    Estimating the expected annual damage (EAD) due to flooding in an urban area is of great interest for urban water managers and other stakeholders. It is a strong indicator for a given area showing how it will be affected by climate change and how much can be gained by implementing adaptation measures. This study investigates three different methods for estimating the EAD based on a log-linear relation between the damage costs and the return periods, one of which has been used in previous studies. The results show that, with an increased number of data points, there appears to be a shift in the log-linear relation, which could be attributed to the Danish design standards for drainage systems. Three different methods for estimating the EAD were tested, and the choice of method is less important than accounting for the log-linear shift. This then also means that the statistical approximation of the EAD used...
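
    A minimal numeric sketch of the idea: assume damage is log-linear in the return period T, D(T) = a + b·ln T, and integrate damage over the annual exceedance probability p = 1/T. Coefficients, return-period range, and units below are invented, and this is not necessarily any of the three methods tested in the study:

```python
from math import log

# Hedged sketch: EAD as the integral of the damage function over annual
# exceedance probability p = 1/T, with damage assumed log-linear in T:
#   D(T) = a + b * ln(T).  All parameter values are hypothetical.

def ead(a, b, t_min=1.0, t_max=1000.0, steps=100_000):
    """Trapezoidal integral of D(p) dp from p = 1/t_max to p = 1/t_min."""
    p_lo, p_hi = 1.0 / t_max, 1.0 / t_min
    dp = (p_hi - p_lo) / steps
    total = 0.0
    for i in range(steps):
        p0, p1 = p_lo + i * dp, p_lo + (i + 1) * dp
        d0 = max(0.0, a + b * log(1.0 / p0))   # clip negative damages
        d1 = max(0.0, a + b * log(1.0 / p1))
        total += 0.5 * (d0 + d1) * dp
    return total

val = ead(a=1.0, b=2.0)   # damage in, e.g., million DKK per event
print(round(val, 3))
```

    Frequent small events dominate the integral, which is why extending the fitted relation to short return periods changes the EAD appreciably.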

  3. Statistical Estimators Using Jointly Administrative and Survey Data to Produce French Structural Business Statistics

    Directory of Open Access Journals (Sweden)

    Brion Philippe

    2015-12-01

    Using as much administrative data as possible is a general trend among most national statistical institutes. Different kinds of administrative sources, from tax authorities or other administrative bodies, are very helpful material in the production of business statistics. However, these sources often have to be completed by information collected through statistical surveys. This article describes the way Insee has implemented such a strategy in order to produce French structural business statistics. The originality of the French procedure is that administrative and survey variables are used jointly for the same enterprises, unlike the majority of multisource systems, in which the two kinds of sources generally complement each other for different categories of units. The idea is to use, as much as possible, the richness of the administrative sources combined with the timeliness of a survey, even if the latter is conducted only on a sample of enterprises. One main issue is the classification of enterprises within the NACE nomenclature, which is a cornerstone variable in producing the breakdown of the results by industry. At a given date, two values of the corresponding code may coexist: the value of the register, not necessarily up to date, and the value resulting from the data collected via the survey, but only from a sample of enterprises. Using all this information together requires the implementation of specific statistical estimators combining some properties of the difference estimators with calibration techniques. This article presents these estimators, as well as their statistical properties, and compares them with those of other methods.
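
    Insee's actual estimators combine difference-estimator properties with calibration, which is more involved than can be shown here. A hedged sketch of the plain difference-estimator idea alone, with invented data: an administrative proxy x is known for every unit, the target variable y only for a survey sample, and the admin total is corrected by the sampled mean difference:

```python
# Hedged sketch of a simple difference estimator (NOT Insee's full
# calibration-based estimator): correct the administrative total of a proxy
# variable x by the survey-based mean difference between y and x.

def difference_estimate(x_total_admin, sample_y, sample_x, population_size):
    """x known for all units (admin source); y observed only in the sample."""
    n = len(sample_y)
    mean_diff = sum(y - x for y, x in zip(sample_y, sample_x)) / n
    return x_total_admin + population_size * mean_diff

# 1,000 enterprises; admin turnover proxy totals 50,000; survey of 4 units:
est = difference_estimate(50_000,
                          sample_y=[55, 40, 62, 43],
                          sample_x=[50, 38, 60, 40],
                          population_size=1_000)
print(est)
```

    The closer the admin proxy tracks the survey variable, the smaller the variance of the correction term, which is the appeal of using both sources jointly.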

  4. Forensic face recognition as a means to determine strength of evidence: A survey.

    Science.gov (United States)

    Zeinstra, C G; Meuwly, D; Ruifrok, A Cc; Veldhuis, R Nj; Spreeuwers, L J

    2018-01-01

    This paper surveys the literature on forensic face recognition (FFR), with a particular focus on the strength of evidence as used in a court of law. FFR is the use of biometric face recognition for several applications in forensic science. It includes scenarios of ID verification and open-set identification, investigation and intelligence, and evaluation of the strength of evidence. We present FFR from operational, tactical, and strategic perspectives. We discuss criticism of FFR and we provide an overview of research efforts from multiple perspectives that relate to the domain of FFR. Finally, we sketch possible future directions for FFR. Copyright © 2018 Central Police University.

  5. What does remediation and probation status mean? A survey of emergency medicine residency program directors.

    Science.gov (United States)

    Weizberg, Moshe; Smith, Jessica L; Murano, Tiffany; Silverberg, Mark; Santen, Sally A

    2015-01-01

    Emergency medicine (EM) residency program directors (PDs) nationwide place residents on remediation and probation. However, the Accreditation Council for Graduate Medical Education and the EM PDs have not defined these terms, and individual institutions must set guidelines defining a change in resident status from good standing to remediation or probation. The primary objective of this study was to determine if EM PDs follow a common process to guide actions when residents are placed on remediation and probation. An anonymous electronic survey was distributed to EM PDs via e-mail using SurveyMonkey to determine the current practice followed after residents are placed on remediation or probation. The survey queried four designations: informal remediation, formal remediation, informal probation, and formal probation. These designations were compared for deficits in the domains of medical knowledge (MK) and non-MK remediation. The survey asked what process for designation exists and what actions are triggered, specifically if documentation is placed in a resident's file, if the graduate medical education (GME) office is notified, if faculty are informed, or if resident privileges are limited. Descriptive data are reported. Eighty-one of 160 PDs responded. An official policy on remediation and/or probation was reported by 41 (50.6%) programs. The status of informal remediation is used by 73 (90.1%), 80 (98.8%) have formal remediation, 40 (49.4%) have informal probation, and 79 (97.5%) have formal probation. There was great variation among PDs in the management and definition of remediation and probation. Between 81 and 86% of programs place an official letter into the resident's file regarding formal remediation and probation. However, only about 50% notify the GME office when a resident is placed on formal remediation. There were no statistical differences between MK and non-MK remediation practices. There is significant variation among EM programs regarding the

  6. Estimation of fatality and injury risk by means of in-depth fatal accident investigation data.

    Science.gov (United States)

    Yannis, George; Papadimitriou, Eleonora; Dupont, Emmanuelle; Martensen, Heike

    2010-10-01

    In this article the factors affecting fatality and injury risk of road users involved in fatal accidents are analyzed by means of in-depth accident investigation data, with emphasis on parameters not extensively explored in previous research. A fatal accident investigation (FAI) database is used, which includes intermediate-level in-depth data for a harmonized representative sample of 1300 fatal accidents in 7 European countries. The FAI database offers improved potential for analysis, because it includes information on a number of variables that are seldom available, complete, or accurately recorded in road accident databases. However, the fact that only fatal accidents are examined requires for methodological adjustments, namely, the correction for two types of effects on a road user's baseline risk: "accident size" effects, and "relative vulnerability" effects. Fatality and injury risk can be then modeled through multilevel logistic regression models, which account for the hierarchical dependences of the road accident process. The results show that the baseline fatality risk of road users involved in fatal accidents decreases with accident size and increases with the vulnerability of the road user. On the contrary, accident size increases nonfatal injury risk of road users involved in fatal accidents. Other significant effects on fatality and injury risk in fatal accidents include road user age, vehicle type, speed limit, the chain of accident events, vehicle maneuver, and safety equipment. In particular, the presence and use of safety equipment such as seat belt, antilock braking system (ABS), and electronic stability program (ESP) are protection factors for car occupants, especially for those seated at the front seats. Although ABS and ESP systems are typically associated with positive effects on accident occurrence, the results of this research revealed significant related effects on accident severity as well. 
Moreover, accident consequences are more severe

  7. [Using an employee survey as a means of quality assurance in newborn hearing screening].

    Science.gov (United States)

    Depenbrock, A; Matulat, P; am Zehnhoff-Dinnesen, A

    2013-03-01

    Studies drawing information not only from technical data but also from surveying the human resources behind the universal newborn hearing screening (UNHS) appear to be a rarity. This study aims at showing how the state of both knowledge and practical skills among the screening staff are essential aspects of future quality management. A self-developed questionnaire was sent to hospital staff, addressing a total of 710 nurses who were registered as having undertaken a UNHS training course. Questions were aimed at aspects of organization, personal practical skills, current problems and possibilities for improvement. High rates of occupancy, lack of trained personnel, technical issues and background noise disturbances were considered to be factors that increased time pressure and slowed down procedures. Of the participants, 16% considered communicating a "refer" result to parents a difficult step, and 8% felt insecure when explaining the aims and procedures to parents. There was high interest in further training sessions. This survey served well to reveal areas for improvement in screening procedures and in meeting staff needs. Training sessions should outline practical aspects of conducting screening as well as professional, sensitive communication with parents.

  8. Regional scale net radiation estimation by means of Landsat and TERRA/AQUA imagery and GIS modeling

    Science.gov (United States)

    Cristóbal, J.; Ninyerola, M.; Pons, X.; Llorens, P.; Poyatos, R.

    2009-04-01

    Net radiation (Rn) is one of the most important variables for the estimation of the surface energy budget and is used for various applications including agricultural meteorology, climate monitoring and weather prediction. Moreover, net radiation is an essential input variable for potential as well as actual evapotranspiration modeling. Nowadays, radiometric measurements provided by remote sensing and GIS analysis are the technologies used to compute net radiation at regional scales in a feasible way. In this study we present a regional-scale estimation of the daily Rn on clear days in Catalonia (NE of the Iberian Peninsula), using a set of 22 Landsat images (17 Landsat-5 TM and 5 Landsat-7 ETM+) and 171 TERRA/AQUA MODIS images from the 2000 to 2007 period. The TERRA/AQUA MODIS images have been downloaded by means of the EOS Gateway. We have selected three different types of products which contain the remote sensing data used to model daily Rn: the daily LST product, the daily calibrated reflectances product and the daily atmospheric water vapour product. Landsat-5 TM images have been corrected by means of conventional techniques based on first-order polynomials, taking into account the effect of land surface relief using a Digital Elevation Model, obtaining an RMS of less than 30 m. Radiometric correction of the Landsat non-thermal bands has been done following the methodology proposed by Pons and Solé (1994), which reduces undesired artifacts due to the effects of the atmosphere or to differential illumination caused, in turn, by the time of day, the location on the Earth and the relief (some zones being more illuminated than others, shadows, etc.). Atmospheric correction of the Landsat thermal band has been carried out by means of a single-channel algorithm improvement developed by Cristóbal et al. (2009), and the land surface emissivity has been computed by means of the methodology proposed by Sobrino and Raissouni (2000). Rn has been estimated through the
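
    The satellite products listed above feed a standard surface radiation balance. A hedged sketch of that textbook balance (not necessarily the authors' exact formulation), with all input values invented for a clear-sky mid-latitude scene:

```python
# Hedged sketch of the standard surface radiation balance used in
# net-radiation mapping from satellite inputs (illustrative, not the
# authors' exact model):
#   Rn = (1 - albedo)*S_down + emissivity*L_down - emissivity*sigma*Ts**4

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiation(albedo, s_down, emissivity, l_down, ts_kelvin):
    shortwave = (1.0 - albedo) * s_down              # absorbed solar
    longwave = emissivity * l_down - emissivity * SIGMA * ts_kelvin**4
    return shortwave + longwave

# Hypothetical clear-sky values: albedo and Ts from the reflectance and LST
# products, L_down from the atmospheric water vapour product.
rn = net_radiation(albedo=0.2, s_down=800.0, emissivity=0.97,
                   l_down=300.0, ts_kelvin=300.0)
print(round(rn, 1))
```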

  9. Estimation of unsteady lift on a pitching airfoil from wake velocity surveys

    Science.gov (United States)

    Zaman, K. B. M. Q.; Panda, J.; Rumsey, C. L.

    1993-01-01

    The results of a joint experimental and computational study on the flowfield over a periodically pitched NACA0012 airfoil, and the resultant lift variation, are reported in this paper. The lift variation over a cycle of oscillation, and hence the lift hysteresis loop, is estimated from the velocity distribution in the wake measured or computed for successive phases of the cycle. Experimentally, the estimated lift hysteresis loops are compared with available data from the literature as well as with limited force balance measurements. Computationally, the estimated lift variations are compared with the corresponding variation obtained from the surface pressure distribution. Four analytical formulations for the lift estimation from wake surveys are considered and the relative successes of the four are discussed.
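
The abstract does not reproduce the four formulations; as a hedged sketch of one classical control-volume idea (not necessarily one of the paper's four), sectional lift can be recovered from the transverse velocity v(y) measured across the wake, l = -rho * U_inf * integral(v dy):

```python
# Trapezoidal integration of a hypothetical wake-survey velocity profile.
rho, u_inf = 1.2, 10.0                    # air density (kg/m3), freestream speed (m/s)
y = [-0.5, -0.25, 0.0, 0.25, 0.5]         # survey stations across the wake (m)
v = [0.0, -0.4, -0.8, -0.4, 0.0]          # transverse (downwash) velocity (m/s), made up

# integral of v dy by the trapezoid rule
integral = sum(0.5 * (v[i] + v[i + 1]) * (y[i + 1] - y[i])
               for i in range(len(y) - 1))
lift_per_span = -rho * u_inf * integral   # N per metre of span
```

The downwash behind a lifting section makes the integral negative, so the estimated sectional lift comes out positive.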

  10. Sense of meaning as a predictor of burnout in emergency physicians in Israel: a national survey

    OpenAIRE

    Ben-Itzhak, Shulamit; Dvash, Jonathan; Maor, Maya; Rosenberg, Noa; Halpern, Pinchas

    2015-01-01

    Objective Burnout is common in physicians and particularly acute in emergency physicians. Physician burnout may adversely affect physicians' lives and the quality of care they provide, but much remains unknown about its main contributing factors. The present study evaluated burnout rates and contributing factors in emergency physicians in Israel, specifically focusing on the role of a sense of meaning, which has received little attention in the literature concerning burnout in emergency physi...

  11. Sense of meaning as a predictor of burnout in emergency physicians in Israel: a national survey.

    Science.gov (United States)

    Ben-Itzhak, Shulamit; Dvash, Jonathan; Maor, Maya; Rosenberg, Noa; Halpern, Pinchas

    2015-12-01

    Burnout is common in physicians and particularly acute in emergency physicians. Physician burnout may adversely affect physicians' lives and the quality of care they provide, but much remains unknown about its main contributing factors. The present study evaluated burnout rates and contributing factors in emergency physicians in Israel, specifically focusing on the role of a sense of meaning, which has received little attention in the literature concerning burnout in emergency physicians. A multicenter study, involving a convenience sample of physicians working full-time in the emergency departments of 16 general hospitals in Israel, was conducted. Questionnaires were used to assess burnout, demographic characteristics, professional stress, emotional distress, satisfaction, and quality of professional life, and open-ended questions were used to evaluate subjective perception of job satisfaction. Seventy physicians completed the questionnaires; 71.4% reported significant burnout levels in at least one of the burnout measures, while 82% also reported medium or high levels of competency. Burnout levels were associated with work-life balance, work satisfaction, social support, depressive symptoms, stress, and preoccupying thoughts. Regression analysis yielded two significant factors associated with burnout: worry and a sense of existential meaning derived from work. In addition, 61%, 51%, and 17% of participants exhibited high emotional exhaustion, high depersonalization, and a low sense of personal accomplishment, respectively. These results indicate a high burnout rate in emergency physicians in Israel and highlight relevant positive and negative factors including the importance of addressing existential meaning in designing specific intervention programs to counter burnout.

  12. Empirical Estimates in Optimization Problems: Survey with Special Regard to Heavy Tails and Dependent Data

    Czech Academy of Sciences Publication Activity Database

    Kaňková, Vlasta

    2012-01-01

    Roč. 19, č. 30 (2012), s. 92-111 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150; GA ČR GAP402/10/1610 Institutional support: RVO:67985556 Keywords : Stochastic optimization * empirical estimates * thin and heavy tails * independent and weak dependent random samples Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/kankova-empirical estimates in optimization problems survey with special regard to heavy tails and dependent data.pdf

  13. On the choice of statistical models for estimating occurrence and extinction from animal surveys

    Science.gov (United States)

    Dorazio, R.M.

    2007-01-01

    In surveys of natural animal populations the number of animals that are present and available to be detected at a sample location is often low, resulting in few or no detections. Low detection frequencies are especially common in surveys of imperiled species; however, the choice of sampling method and protocol also may influence the size of the population that is vulnerable to detection. In these circumstances, probabilities of animal occurrence and extinction will generally be estimated more accurately if the models used in data analysis account for differences in abundance among sample locations and for the dependence between site-specific abundance and detection. Simulation experiments are used to illustrate conditions wherein these types of models can be expected to outperform alternative estimators of population site occupancy and extinction. © 2007 by the Ecological Society of America.

  14. Estimation of population doses from diagnostic medical examinations in Japan, 1974. III. Per caput mean marrow dose and leukemia significant dose

    Energy Technology Data Exchange (ETDEWEB)

    Hashizume, T; Maruyama, T; Kumamoto, Y [National Inst. of Radiological Sciences, Chiba (Japan)

    1976-03-01

    The mean per capita marrow dose and leukemia-significant dose from radiographic and fluoroscopic examinations in Japan have been estimated based on a 1974 nationwide survey of randomly sampled hospitals and clinics. To determine the mean marrow dose to an individual from a given exposure in a given type of examination, the active marrow in the whole body was divided into 119 parts for an adult and 103 for a child. Dosimetric points at which the individual marrow doses were determined were set up in the center of each marrow part. The individual marrow doses at the dosimetric points in the beams of practical diagnostic x-rays were calculated on the basis of the exposure data on the patients selected in the nationwide survey, using depth-dose curves experimentally determined for diagnostic x-rays. The mean individual marrow dose was averaged over the active marrow by summing, for each dosimetric point, the product of the fraction of active marrow exposed and the individual marrow dose at that point. The leukemia-significant dose was calculated by adopting a weighting factor, that is, a leukemia significance factor. The factor was determined from the shape of the time-incidence curve for radiation-induced leukemia from the Hiroshima A-bomb and from the survival statistics for the average population. The resultant mean per capita marrow dose from radiographic and fluoroscopic examinations was 37.0 and 70.0 mrad/person/year, respectively, with a total of 107.0 mrad/person/year. The leukemia-significant dose was 32.1 mrad/person/year for radiographic examinations and 61.2 mrad/person/year for fluoroscopic examinations, with a total of 93.3 mrad/person/year. These values were compared with those of 1960 and 1969.
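
The averaging step described above (summing, over dosimetric points, the fraction of active marrow exposed times the dose at that point) can be sketched with hypothetical fractions and doses:

```python
# Each entry is (fraction of total active marrow in this part,
#                dose at that part's dosimetric point, in mrad).
# Values are made up for illustration; the study used 119 parts for adults.
marrow_parts = [
    (0.10, 120.0),  # part in the primary beam
    (0.25, 40.0),   # part near the beam edge
    (0.40, 5.0),    # scatter-only region
    (0.25, 0.5),    # distant region
]

# Mean marrow dose = weighted average over the active marrow.
mean_marrow_dose = sum(f * d for f, d in marrow_parts)
```

A leukemia-significant dose would then apply an additional weighting factor to each examination's contribution before summing.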

  15. SURVEY ON ESTIMATING QALYS IN THE WESTERN REGION OF ROMANIA – THE CASE WITHOUT INTERVENTION

    OpenAIRE

    MARIUS IOAN PANTEA; DELIA GLIGOR

    2012-01-01

    Currently, assessing a population’s quality of life is considered one of the most important aspects in health interventions’ evaluation across most of the European countries. However, in Romania its utility is unfortunately overlooked. In this context, the paper aims at providing an accurate estimate of QALYs for healthcare investment projects, determining through a questionnaire survey the utilities associated to quality of life for five critical medical conditions and thus calculating the r...

  16. Estimating mortality from external causes using data from retrospective surveys: A validation study in Niakhar (Senegal)

    Directory of Open Access Journals (Sweden)

    Gilles Pison

    2018-03-01

    Full Text Available Background: In low- and middle-income countries (LMICs), data on causes of death is often inaccurate or incomplete. In this paper, we test whether adding a few questions about injuries and accidents to mortality questionnaires used in representative household surveys would yield accurate estimates of the extent of mortality due to external causes (accidents, homicides, or suicides). Methods: We conduct a validation study in Niakhar (Senegal), during which we compare reported survey data to high-quality prospective records of deaths collected by a health and demographic surveillance system (HDSS). Results: Survey respondents more frequently list the deaths of their adult siblings who die of external causes than the deaths of those who die from other causes. The specificity of survey data is high, but sensitivity is low. Among reported deaths, less than 60% of the deaths classified as due to external causes by the HDSS are also classified as such by survey respondents. Survey respondents report deaths due to road-traffic accidents better than deaths from suicides and homicides. Conclusions: Asking questions about deaths resulting from injuries and accidents during surveys might help measure mortality from external causes in LMICs, but the resulting data displays systematic bias in a rural population of Senegal. Future studies should (1) investigate whether similar biases also apply in other settings and (2) test new methods to further improve the accuracy of survey data on mortality from external causes. Contribution: This study helps strengthen the monitoring of sustainable development targets in LMICs by validating a simple approach for the measurement of mortality from external causes.
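
The sensitivity/specificity comparison against the HDSS reference amounts to a 2x2 validation table; the sketch below uses hypothetical counts chosen to mirror the reported high-specificity, low-sensitivity pattern:

```python
# Survey report vs. HDSS gold-standard cause classification (counts made up).
tp = 55   # HDSS: external cause, survey: external cause
fn = 45   # HDSS: external cause, survey: other cause (missed)
fp = 20   # HDSS: other cause,    survey: external cause (false alarm)
tn = 880  # HDSS: other cause,    survey: other cause

sensitivity = tp / (tp + fn)  # fraction of true external-cause deaths recovered
specificity = tn / (tn + fp)  # fraction of other-cause deaths correctly excluded
```

With these counts sensitivity is 0.55 (just under the "less than 60%" figure) while specificity is about 0.98.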

  17. Time-varying effect moderation using the structural nested mean model: estimation using inverse-weighted regression with residuals

    Science.gov (United States)

    Almirall, Daniel; Griffin, Beth Ann; McCaffrey, Daniel F.; Ramchand, Rajeev; Yuen, Robert A.; Murphy, Susan A.

    2014-01-01

    This article considers the problem of examining time-varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time-varying causal effects of interest in a conditional mean model for a continuous response given time-varying treatments and moderators. We present an easy-to-use estimator of the SNMM that combines an existing regression-with-residuals (RR) approach with an inverse-probability-of-treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time-varying causal effects if the time-varying moderators are also the sole time-varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time-varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time-varying confounders. We use a small simulation experiment to compare IPTW+RR versus the traditional regression approach and to compare small and large sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time-varying moderators and time-varying confounders. We illustrate the methodology in a case study to assess if time-varying substance use moderates treatment effects on future substance use. PMID:23873437
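
The IPTW ingredient of the proposed IPTW+RR estimator can be sketched in miniature: each observation is weighted by the inverse probability of the treatment actually received, given history (the propensities below are hypothetical, and the residual step of the RR part is omitted):

```python
# (treatment received, estimated propensity P(treated | history)) per record.
# In the real method these propensities come from a fitted treatment model
# at each time point; here they are made-up numbers.
records = [
    (1, 0.8),
    (0, 0.6),
    (1, 0.3),
    (0, 0.9),
]

# Inverse-probability-of-treatment weights: 1/p for the treated,
# 1/(1-p) for the untreated.
weights = [1.0 / p if a == 1 else 1.0 / (1.0 - p) for a, p in records]
```

The subsequent regression-with-residuals fit of the SNMM would then use these weights so that the auxiliary time-varying confounders are balanced.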

  18. Bayes allocation of the sample for estimation of the mean when each stratum has a Poisson distribution

    International Nuclear Information System (INIS)

    Wright, T.

    1983-01-01

    Consider a stratified population with L strata, so that a Poisson random variable is associated with each stratum. The parameter associated with the h-th stratum is θ_h, h = 1, 2, ..., L. Let ω_h be the known proportion of the population in the h-th stratum, h = 1, 2, ..., L. The authors want to estimate the parameter θ = Σ_{h=1}^{L} ω_h θ_h. We assume that prior information is available on θ_h and that it can be expressed in terms of a gamma distribution with parameters α_h and β_h, h = 1, 2, ..., L. We also assume that the prior distributions are independent. Using a squared error loss function, a Bayes allocation of total sample size with a cost constraint is given. The Bayes estimate using the Bayes allocation is shown to have an adjusted mean square error which is strictly less than the adjusted mean square error of the classical estimate using the classical allocation.
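
The conjugate Poisson-gamma update underlying such a Bayes estimate can be sketched as follows (weights, priors and counts are hypothetical; the gamma prior is parameterized by shape and scale, and the allocation rule itself is not reproduced here):

```python
def posterior_mean(alpha, beta, n, s):
    """Posterior mean of a Poisson rate theta_h under a Gamma(shape=alpha,
    scale=beta) prior, after observing n samples with total count s.
    Standard conjugate update: Gamma(alpha + s, beta / (1 + n * beta))."""
    return (alpha + s) * beta / (1.0 + n * beta)

# Hypothetical two-stratum example: (omega_h, alpha_h, beta_h, n_h, s_h).
strata = [
    (0.6, 2.0, 1.0, 10, 25),
    (0.4, 3.0, 0.5, 8, 12),
]

# Bayes estimate of theta = sum over strata of omega_h * E[theta_h | data].
theta_hat = sum(w * posterior_mean(a, b, n, s) for w, a, b, n, s in strata)
```

The paper's contribution is choosing the n_h (subject to a cost constraint) so that the resulting estimate minimizes the adjusted mean square error.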

  19. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  20. Self-reported physical activity among blacks: estimates from national surveys.

    Science.gov (United States)

    Whitt-Glover, Melicia C; Taylor, Wendell C; Heath, Gregory W; Macera, Caroline A

    2007-11-01

    National surveillance data provide population-level estimates of physical activity participation, but generally do not include detailed subgroup analyses, which could provide a better understanding of physical activity among subgroups. This paper presents a descriptive analysis of self-reported regular physical activity among black adults using data from the 2003 Behavioral Risk Factor Surveillance System (n=19,189), the 2004 National Health Interview Survey (n=4263), and the 1999-2004 National Health and Nutrition Examination Survey (n=3407). Analyses were conducted between January and March 2006. Datasets were analyzed separately to estimate the proportion of black adults meeting national physical activity recommendations overall and stratified by gender and other demographic subgroups. The proportion of black adults reporting regular physical activity ranged from 24% to 36%. Regular physical activity was highest among men; younger age groups; highest education and income groups; those who were employed and married; overweight, but not obese, men; and normal-weight women. This pattern was consistent across surveys. The observed physical activity patterns were consistent with national trends. The data suggest that older black adults and those with low education and income levels are at greatest risk for inactive lifestyles and may require additional attention in efforts to increase physical activity in black adults. The variability across datasets reinforces the need for objective measures in national surveys.

  1. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    Science.gov (United States)

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
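
The biomarker-based incidence approximation described above, plus a rough Poisson-approximation sample-size check, can be sketched as follows (all counts and the window duration are hypothetical, and this is not the paper's exact derivation, which additionally handles uncertainty in the window duration):

```python
# Cross-sectional incidence approximation:
# incidence ~ (count in biomarker-defined early stage)
#             / (count uninfected * mean duration of the early stage).
n_early = 30          # early-stage individuals found in the survey (hypothetical)
n_uninfected = 4000   # uninfected individuals in the survey (hypothetical)
mu_years = 0.5        # assumed mean duration of the early stage, years

incidence = n_early / (n_uninfected * mu_years)  # infections per person-year

# Rough survey size for a target precision: treat the early-stage count as
# Poisson, so its coefficient of variation is ~ 1/sqrt(expected count).
target_cv = 0.25
required_early = (1.0 / target_cv) ** 2            # expected early-stage detections
required_n = required_early / (incidence * mu_years)  # uninfected sample needed
```

Ignoring the uncertainty in mu_years, as this sketch does, is exactly the pitfall the paper warns leads to underpowered studies.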

  2. Estimation of Social Exclusion Indicators from Complex Surveys: The R Package laeken

    Directory of Open Access Journals (Sweden)

    Andreas Alfons

    2013-09-01

    Full Text Available Units sampled from finite populations typically come with different inclusion probabilities. Together with additional preprocessing steps of the raw data, this yields unequal sampling weights of the observations. Whenever indicators are estimated from such complex samples, the corresponding sampling weights have to be taken into account. In addition, many indicators suffer from a strong influence of outliers, which are a common problem in real-world data. The R package laeken is an object-oriented toolkit for the estimation of indicators from complex survey samples via standard or robust methods. In particular the most widely used social exclusion and poverty indicators are implemented in the package. A general calibrated bootstrap method to estimate the variance of indicators for common survey designs is included as well. Furthermore, the package contains synthetically generated close-to-reality data for the European Union Statistics on Income and Living Conditions and the Structure of Earnings Survey, which are used in the code examples throughout the paper. Even though the paper is focused on showing the functionality of package laeken, it also provides a brief mathematical description of the implemented indicator methodology.
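
The basic principle, that estimates from complex samples must use the sampling weights, can be sketched with a design-weighted mean (laeken itself is an R package; this is a plain-Python illustration with made-up values):

```python
# Each respondent carries a sampling weight (roughly the inverse of the
# inclusion probability): a respondent with weight 500 "represents" 500
# population units. Values below are hypothetical incomes and weights.
incomes = [12000.0, 30000.0, 55000.0, 20000.0]
weights = [500.0, 120.0, 80.0, 300.0]

# Design-weighted mean: sum(w_i * y_i) / sum(w_i).
weighted_mean = sum(w * y for w, y in zip(weights, incomes)) / sum(weights)
```

An unweighted mean of the same incomes would be 29,250, noticeably different: ignoring the weights biases the estimate toward the over-sampled units.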

  3. Simple algorithm to estimate mean-field effects from minor differential permeability curves based on the Preisach model

    International Nuclear Information System (INIS)

    Perevertov, Oleksiy

    2003-01-01

    The classical Preisach model (PM) of magnetic hysteresis requires that any minor differential permeability curve lies under minor curves with larger field amplitude. Measurements of ferromagnetic materials show that very often this is not true. By applying the classical PM formalism to measured minor curves one can discover that it leads to an oval-shaped region on each half of the Preisach plane where the calculations produce negative values in the Preisach function. Introducing an effective field, which differs from the applied one by a mean-field term proportional to the magnetization, usually solves this problem. Complex techniques exist to estimate the minimum necessary proportionality constant (the moving parameter). In this paper we propose a simpler way to estimate the mean-field effects for use in nondestructive testing, which is based on experience from the measurements of industrial steels. A new parameter (parameter of shift) is introduced, which monitors the mean-field effects. The relation between the shift parameter and the moving one was studied for a number of steels. From preliminary experiments no correlation was found between the shift parameter and the classical magnetic ones such as the coercive field, maximum differential permeability and remanent magnetization
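
The mean-field ("moving model") correction mentioned above can be sketched: the applied field H is replaced by an effective field H_eff = H + k*M before the Preisach analysis, where k is the moving parameter (the value of k and the (H, M) pairs below are hypothetical):

```python
# Hypothetical moving parameter and measured (H, M) points from a minor loop.
k = 2.5e-4  # field units per magnetization unit (made-up value)
measured = [
    (100.0, 2.0e5),
    (200.0, 4.5e5),
    (300.0, 6.0e5),
]

# Shift each point from applied field to effective field; the magnetization
# values themselves are unchanged.
corrected = [(h + k * m, m) for h, m in measured]
```

After this shift, the minor differential permeability curves should nest as the classical Preisach model requires, removing the negative-Preisach-function artifact.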

  4. Mean magnetic susceptibility regularized susceptibility tensor imaging (MMSR-STI) for estimating orientations of white matter fibers in human brain.

    Science.gov (United States)

    Li, Xu; van Zijl, Peter C M

    2014-09-01

    An increasing number of studies show that magnetic susceptibility in white matter fibers is anisotropic and may be described by a tensor. However, the limited head rotation possible for in vivo human studies leads to an ill-conditioned inverse problem in susceptibility tensor imaging (STI). Here we suggest the combined use of limiting the susceptibility anisotropy to white matter and imposing morphology constraints on the mean magnetic susceptibility (MMS) for regularizing the STI inverse problem. The proposed MMS regularized STI (MMSR-STI) method was tested using computer simulations and in vivo human data collected at 3T. The fiber orientation estimated from both the STI and MMSR-STI methods was compared to that from diffusion tensor imaging (DTI). Computer simulations show that the MMSR-STI method provides a more accurate estimation of the susceptibility tensor than the conventional STI approach. Similarly, in vivo data show that use of the MMSR-STI method leads to a smaller difference between the fiber orientation estimated from STI and DTI for most selected white matter fibers. The proposed regularization strategy for STI can improve estimation of the susceptibility tensor in white matter. © 2014 Wiley Periodicals, Inc.

  5. Surface geothermal exploration in the Canary Islands by means of soil CO_{2} degassing surveys

    Science.gov (United States)

    García-Merino, Marta; Rodríguez, Fátima; Padrón, Eleazar; Melián, Gladys; Asensio-Ramos, María; Barrancos, José; Hernández, Pedro A.; Pérez, Nemesio M.

    2017-04-01

    With the exception of the Teide fumaroles, there is no evidence of hydrothermal fluid discharges in the surficial environment of the Canary Islands, the only Spanish territory with potential high-enthalpy geothermal resources. Here we show the results of several diffuse CO2 degassing surveys carried out at five mining licenses in Tenerife and Gran Canaria with the aim of ranking the possible geothermal potential of these five mining licenses. The primary objective of the study was to reduce the uncertainty inherent in the selection of the areas with the highest geothermal potential for future exploration work. The yardstick used to classify the different areas was the contribution of volcano-hydrothermal CO2 to the diffuse CO2 degassing at each study area. Several hundred measurements of diffuse CO2 emission, soil CO2 concentration and isotopic composition were performed at each mining license. Based on three different endmembers (biogenic, atmospheric and deep-seated CO2) with different CO2 concentrations (100, 0.04 and 100%, respectively) and isotopic compositions (-24, -8 and -3 per mil vs. VPDB, respectively), a mass balance was applied to distinguish the contribution of each endmember to the soil CO2 at each sampling site. The percentage of the volcano-hydrothermal contribution to the current diffuse CO2 degassing was in the range 0-19%. The Abeque mining license, which comprises part of the north-west volcanic rift of Tenerife, seemed to show the highest geothermal potential, with an average of 19% of CO2 being released from deep sources, followed by Atidama (south east of Gran Canaria) and Garehagua (southern volcanic rift of Tenerife), with 17% and 12%, respectively.
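
The three-endmember mass balance can be set up as a small linear system using the endmember values quoted in the abstract; the measured soil-gas values below are hypothetical, and the exact balance equations used by the authors may differ:

```python
import numpy as np

# Endmember values from the abstract: biogenic, atmospheric, deep-seated CO2.
C = np.array([100.0, 0.04, 100.0])    # CO2 concentration (%)
d13C = np.array([-24.0, -8.0, -3.0])  # delta13C (per mil vs. VPDB)

# Hypothetical soil-gas measurement at one sampling site.
c_meas, d_meas = 30.0, -15.0

# Unknown fractions f = (f_bio, f_atm, f_deep) satisfy:
#   f_bio + f_atm + f_deep = 1                  (fractions sum to one)
#   sum(f_i * C_i) = C_meas                     (CO2 concentration balance)
#   sum(f_i * d13C_i * C_i) = d_meas * C_meas   (isotope balance, C-weighted)
A = np.array([np.ones(3), C, d13C * C])
b = np.array([1.0, c_meas, d_meas * c_meas])
f_bio, f_atm, f_deep = np.linalg.solve(A, b)
```

For this made-up sample the deep-seated fraction comes out around 13%, inside the 0-19% range the survey reports.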

  6. Estimating solar ultraviolet irradiance (290-385 nm) by means of the spectral parametric models: SPCTRAL2 and SMARTS2

    Directory of Open Access Journals (Sweden)

    I. Foyo-Moreno

    2000-11-01

    Full Text Available Since the discovery of the ozone depletion over Antarctica and the globally declining trend of stratospheric ozone concentration, public and scientific concern has been raised in recent decades. A very important consequence of this fact is increased broadband and spectral UV radiation in the environment and the biological effects and health risks that may take place in the near future. The absence of widespread measurements of this radiometric flux has led to the development and use of alternative estimation procedures such as parametric approaches. Parametric models compute the radiant energy using available atmospheric parameters. Some parametric models compute the global solar irradiance at surface level by addition of its direct beam and diffuse components. In the present work, we have developed a comparison between two cloudless-sky parametrization schemes. Both methods provide an estimation of the solar spectral irradiance that can be integrated spectrally within the limits of interest. For this test we have used data recorded at a radiometric station located at Granada (37.180°N, 3.580°W, 660 m a.m.s.l.), an inland location. The database includes hourly values of the relevant variables covering the years 1994-95. The performance of the models has been tested in relation to their predictive capability for global solar irradiance in the UV range (290-385 nm). After our study, it appears that information concerning aerosol radiative effects is fundamental in order to obtain a good estimation. The original version of SPCTRAL2 provides estimates of the experimental values with negligible mean bias deviation. This suggests not only the appropriateness of the model but also the suitability of the aerosol features fixed in it for Granada conditions. The SMARTS2 model offers increased flexibility concerning the selection of different aerosol models included in the code and provides the best results when the selected models are those

  8. Assessment of dietary intake of flavouring substances within the procedure for their safety evaluation: advantages and limitations of estimates obtained by means of a per capita method.

    Science.gov (United States)

    Arcella, D; Leclercq, C

    2005-01-01

    The procedure for the safety evaluation of flavourings adopted by the European Commission in order to establish a positive list of these substances is a stepwise approach which was developed by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) and amended by the Scientific Committee on Food. Within this procedure, a per capita amount based on industrial poundage data of flavourings is calculated to estimate the dietary intake by means of the maximised survey-derived daily intake (MSDI) method. This paper reviews the MSDI method in order to check whether it can provide conservative intake estimates, as needed at the first steps of a stepwise procedure. Scientific papers and opinions dealing with the MSDI method were reviewed. Concentration levels reported by the industry were compared with estimates obtained with the MSDI method. It appeared that, in some cases, these estimates could be orders of magnitude (up to 5) lower than those calculated considering concentration levels provided by the industry and regular consumption of flavoured foods and beverages. A critical review was performed of two studies which had been used to support the statement that MSDI is a conservative method for assessing exposure to flavourings among high consumers. Special attention was given to the factors that affect exposure at high percentiles, such as brand loyalty and portion sizes. It is concluded that these studies may not be suitable to validate the MSDI method used to assess intakes of flavours by European consumers, due to shortcomings in the assumptions made and in the data used. Exposure assessment is an essential component of risk assessment. The present paper suggests that the MSDI method is not sufficiently conservative. There is therefore a clear need either to use an alternative method to estimate exposure to flavourings in the procedure or to limit intakes to the levels at which the safety was assessed.
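
An MSDI-style per capita estimate spreads annual industry poundage over an assumed consumer population; the sketch below uses illustrative constants, not the official JECFA values, precisely to show why such an estimate can undershoot the intake of loyal high consumers:

```python
# Hypothetical inputs: annual poundage, assumed consumer population, and an
# assumed correction factor for incomplete industry reporting. None of these
# constants are the official ones.
annual_volume_kg = 5000.0
population = 32_000_000
coverage_factor = 0.6

# Per capita daily intake, mg/person/day:
# (kg -> mg conversion) / (people * coverage * days per year).
mg_per_person_per_day = (annual_volume_kg * 1e6) / (population
                                                    * coverage_factor * 365)

# A brand-loyal consumer eating a flavoured product daily at a given use
# level can exceed this average by orders of magnitude (hypothetical figures).
use_level_mg_per_kg = 50.0
daily_portion_kg = 0.2
loyal_consumer_mg_per_day = use_level_mg_per_kg * daily_portion_kg
```

Here the averaged MSDI-style figure is under 1 mg/day while the loyal consumer's intake is 10 mg/day, the kind of gap the paper's critique turns on.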

  9. Social Firms as a means of vocational recovery for people with mental illness: a UK survey.

    Science.gov (United States)

    Gilbert, Eleanor; Marwaha, Steven; Milton, Alyssa; Johnson, Sonia; Morant, Nicola; Parsons, Nicholas; Fisher, Adrian; Singh, Swaran; Cunliffe, Di

    2013-07-11

    Employment is associated with better quality of life and wellbeing in people with mental illness. Unemployment is associated with greater levels of psychological illness and is viewed as a core part of the social exclusion faced by people with mental illness. Social Firms offer paid employment to people with mental illness but are under-investigated in the UK. The aims of this phase of the Social Firms A Route to Recovery (SoFARR) project were to describe the availability and spread of Social Firms across the UK, to outline the range of opportunities Social Firms offer people with severe mental illness and to understand the extent to which they are employed within these firms. A UK national survey of Social Firms, other social enterprises and supported businesses was completed to understand the extent to which they provide paid employment for the mentally ill. A study-specific questionnaire was developed. It covered two broad areas asking employers about the nature of the Social Firm itself and about the employees with mental illness working there. We obtained returns from 76 Social Firms and social enterprises / supported businesses employing 692 people with mental illness. Forty per cent of Social Firms were in the south of England, 24% in the North and the Midlands, 18% in Scotland and 18% in Wales. Other social enterprises/supported businesses were similarly distributed. Trading activities were confined mainly to manufacturing, service industry, recycling, horticulture and catering. The number of employees with mental illness working in Social Firms and other social enterprises/supported businesses was small (median of 3 and 6.5 respectively). Over 50% employed people with schizophrenia or bipolar disorder, though the greatest proportion of employees with mental illness had depression or anxiety. Over two thirds of Social Firms liaised with mental health services and over a quarter received funding from the NHS or a mental health charity. Most workers with

  10. Social firms as a means of vocational recovery for people with mental illness: a UK survey

    Science.gov (United States)

    2013-01-01

    Background Employment is associated with better quality of life and wellbeing in people with mental illness. Unemployment is associated with greater levels of psychological illness and is viewed as a core part of the social exclusion faced by people with mental illness. Social Firms offer paid employment to people with mental illness but are under-investigated in the UK. The aims of this phase of the Social Firms A Route to Recovery (SoFARR) project were to describe the availability and spread of Social Firms across the UK, to outline the range of opportunities Social Firms offer people with severe mental illness and to understand the extent to which they are employed within these firms. Method A UK national survey of Social Firms, other social enterprises and supported businesses was completed to understand the extent to which they provide paid employment for the mentally ill. A study-specific questionnaire was developed. It covered two broad areas asking employers about the nature of the Social Firm itself and about the employees with mental illness working there. Results We obtained returns from 76 Social Firms and social enterprises / supported businesses employing 692 people with mental illness. Forty per cent of Social Firms were in the south of England, 24% in the North and the Midlands, 18% in Scotland and 18% in Wales. Other social enterprises/supported businesses were similarly distributed. Trading activities were confined mainly to manufacturing, service industry, recycling, horticulture and catering. The number of employees with mental illness working in Social Firms and other social enterprises/supported businesses was small (median of 3 and 6.5 respectively). Over 50% employed people with schizophrenia or bipolar disorder, though the greatest proportion of employees with mental illness had depression or anxiety. Over two thirds of Social Firms liaised with mental health services and over a quarter received funding from the NHS or a mental health

  11. Feasibility online survey to estimate physical activity level among the students studying professional courses: a cross-sectional online survey.

    Science.gov (United States)

    Sudha, Bhumika; Samuel, Asir John; Narkeesh, Kanimozhi

    2018-02-01

The aim of the study was to estimate the physical activity (PA) level among professional college students in North India. One hundred three professional college students aged 18-25 years were recruited by simple random sampling for this cross-sectional online survey. The survey was advertised on social networking sites (Facebook, WhatsApp) through the link www.surveymonkey.com/r/MG-588BY. The Short Form of the International Physical Activity Questionnaire was used for this survey study. The questionnaire included a total of 8 questions covering the previous 7 days, and distinguished 3 main categories: vigorous, moderate and high PA. Time spent at each activity level was multiplied by the metabolic equivalent of task (MET), previously set to 8.0 for vigorous activity, 4.0 for moderate activity, 3.3 for walking, and 1.5 for sitting. Multiplying the MET value by the number of days and minutes performed weekly gave the amount of each activity level, measured as MET-min/wk; summing the MET-minutes across activity levels gave the total MET-min/wk. A total of 100 students completed the study, and the professional courses showed differing PA levels. The total PA levels among professional college students (physiotherapy, dental, medical, nursing, lab technician, pharmacy, management, law and engineering) were 434.4 (0-7,866), 170.3 (0-1,129), 87.7 (0-445), 102.8 (0-180), 469 (0-1,164), 0 (0-0), 645 (0-1,836), 337 (0-1,890) and 396 (0-968) MET-min/wk, respectively. PA levels among professional college students in North India have thus been established.
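The scoring rule described above (MET value × days × minutes for each activity level, summed across levels) can be sketched as follows. The MET constants are those quoted in the abstract; the example respondent's days and minutes are invented for illustration.

```python
# Sketch of the IPAQ short-form scoring described in the abstract.
# MET constants are the quoted values; the respondent data are hypothetical.
MET = {"vigorous": 8.0, "moderate": 4.0, "walking": 3.3}

def met_min_per_week(activity, days, minutes_per_day):
    """MET-min/wk for one activity level: MET x days x minutes."""
    return MET[activity] * days * minutes_per_day

# Hypothetical respondent: 2 days x 30 min vigorous,
# 3 days x 20 min moderate, 5 days x 30 min walking.
total = (met_min_per_week("vigorous", 2, 30)
         + met_min_per_week("moderate", 3, 20)
         + met_min_per_week("walking", 5, 30))
print(total)  # 480 + 240 + 495 = 1215.0
```

Totals computed this way are what the abstract reports as median (range) MET-min/wk per course of study.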

  12. Estimation of mean time to failure of a near surface radioactive waste repository for PWR power stations

    International Nuclear Information System (INIS)

    Aguiar, Lais A. de; Frutuoso e Melo, P.F.; Alvim, Antonio C.M.

    2007-01-01

This work aims at estimating the mean time to failure (MTTF) of each barrier of a near surface radioactive waste repository. It is assumed that surface water infiltrates through the barriers, reaching the matrix where radionuclides are contained and releasing them to the environment. Radioactive wastes considered in this work are low- and medium-level wastes (produced during operation of a PWR nuclear power station) fixed in cement. The repository consists of 6 saturated porous media barriers (top cover, upper layer, packages, basis, repository walls and geosphere). It has been verified that the mean time to failure (MTTF) of each barrier increases for radionuclides having a higher retardation factor (Fr), and also that the MTTF for concrete is largest for nickel, while for the geosphere plutonium gives the largest MTTF. (author)

  13. Tracking Psychosocial Health in Adults with Epilepsy—Estimates from the 2010 National Health Interview Survey

    Science.gov (United States)

    Kobau, R; Cui, W; Kadima, N; Zack, MM; Sajatovic, M; Kaiboriboon, K; Jobst, B

    2015-01-01

Objective This study provides population-based estimates of psychosocial health among U.S. adults with epilepsy from the 2010 National Health Interview Survey. Methods Multinomial logistic regression was used to estimate the prevalence of the following measures of psychosocial health among adults with and those without epilepsy: 1) the Kessler-6 scale of Serious Psychological Distress; 2) cognitive limitation, the extent of impairments associated with psychological problems, and work limitation; 3) social participation; and 4) the Patient-Reported Outcomes Measurement Information System Global Health scale. Results Compared with adults without epilepsy, adults with epilepsy, especially those with active epilepsy, reported significantly worse psychological health, more cognitive impairment, difficulty in participating in some social activities, and reduced health-related quality of life (HRQOL). Conclusions These disparities in psychosocial health in U.S. adults with epilepsy serve as baseline national estimates of their HRQOL, consistent with Healthy People 2020 national objectives on HRQOL. PMID:25305435

  14. The influence of survey duration on estimates of food intakes and its relevance for public health nutrition and food safety issues.

    Science.gov (United States)

    Lambe, J; Kearney, J; Leclercq, C; Zunft, H F; De Henauw, S; Lamberg-Allardt, C J; Dunne, A; Gibney, M J

    2000-02-01

To examine the influence of food consumption survey duration on estimates of percentage consumers, mean total population intakes and intakes among consumers only, and to consider its relevance for public health nutrition and food safety issues. Prospective food consumption survey. A multicentre study in five centres in the European Union (Dublin, Ghent, Helsinki, Potsdam and Rome). Teenage subjects were recruited through schools; 948 (80%) out of 1180 subjects completed the survey. 14-day food diaries were used to collect the food consumption data. For mean total population intakes, 53% of the foods had slopes significantly different from 0 (P < 0.05); however, these differences were small, with 41% of foods having differences of <1 g/day and a further 35% having differences of 1-5 g/day. Estimates of percentage consumers based on 3 days and 14 days were 1.9 and 3.6 times the 1-day estimate, respectively. For 72% of foods, at least 50% of non-consumers on day 1 became consumers over the subsequent 13 days. Estimates of mean consumer-only intakes based on 3 days and 14 days were 53% and 32% of the 1-day value. In practical terms, survey duration influences estimates of percentage consumers and intakes among consumers only, but not mean total population intakes. Awareness of this influence is important for improved interpretation of dietary data for epidemiological studies, development of food-based dietary guidelines and estimation of food chemical intakes. The Institute of European Food Studies, a non-profit research organization based in Trinity College Dublin. European Journal of Clinical Nutrition (2000) 54, 166-173

  15. Prevalence estimates of chronic kidney disease in Canada: results of a nationally representative survey

    Science.gov (United States)

    Arora, Paul; Vasa, Priya; Brenner, Darren; Iglar, Karl; McFarlane, Phil; Morrison, Howard; Badawi, Alaa

    2013-01-01

    Background: Chronic kidney disease is an important risk factor for death and cardiovascular-related morbidity, but estimates to date of its prevalence in Canada have generally been extrapolated from the prevalence of end-stage renal disease. We used direct measures of kidney function collected from a nationally representative survey population to estimate the prevalence of chronic kidney disease among Canadian adults. Methods: We examined data for 3689 adult participants of cycle 1 of the Canadian Health Measures Survey (2007–2009) for the presence of chronic kidney disease. We also calculated the age-standardized prevalence of cardiovascular risk factors by chronic kidney disease group. We cross-tabulated the estimated glomerular filtration rate (eGFR) with albuminuria status. Results: The prevalence of chronic kidney disease during the period 2007–2009 was 12.5%, representing about 3 million Canadian adults. The estimated prevalence of stage 3–5 disease was 3.1% (0.73 million adults) and albuminuria 10.3% (2.4 million adults). The prevalence of diabetes, hypertension and hypertriglyceridemia were all significantly higher among adults with chronic kidney disease than among those without it. The prevalence of albuminuria was high, even among those whose eGFR was 90 mL/min per 1.73 m2 or greater (10.1%) and those without diabetes or hypertension (9.3%). Awareness of kidney dysfunction among adults with stage 3–5 chronic kidney disease was low (12.0%). Interpretation: The prevalence of kidney dysfunction was substantial in the survey population, including individuals without hypertension or diabetes, conditions most likely to prompt screening for kidney dysfunction. These findings highlight the potential for missed opportunities for early intervention and secondary prevention of chronic kidney disease. PMID:23649413
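The eGFR-by-albuminuria cross-tabulation mentioned above can be sketched as below, using the standard KDIGO eGFR stage cut-offs (mL/min per 1.73 m²). The participant records are invented, and stages 3a/3b are collapsed into a single stage 3 for brevity.

```python
# Hedged sketch of an eGFR-by-albuminuria cross-tabulation.
# Stage cut-offs follow the standard KDIGO eGFR categories;
# the participant records below are hypothetical.
from collections import Counter

def egfr_stage(egfr):
    """Map an eGFR value (mL/min per 1.73 m^2) to a CKD stage label."""
    if egfr >= 90: return "1"
    if egfr >= 60: return "2"
    if egfr >= 30: return "3"
    if egfr >= 15: return "4"
    return "5"

participants = [  # (eGFR, albuminuria present?) -- invented records
    (105, True), (95, False), (72, False), (55, True), (28, True),
]

table = Counter((egfr_stage(e), alb) for e, alb in participants)
for (stage, alb), n in sorted(table.items()):
    print(f"stage {stage}, albuminuria={alb}: {n}")
```

The survey's observation that albuminuria occurred even at eGFR ≥ 90 corresponds to non-empty stage-1 cells with albuminuria in such a table.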

  16. The Gender Wage Gap in Croatia – Estimating the Impact of Differing Rewards by Means of Counterfactual Distributions

    Directory of Open Access Journals (Sweden)

    Danijel Nestić

    2010-04-01

    Full Text Available The aim of this paper is to estimate the size of, changes in, and main factors contributing to gender-based wage differentials in Croatia. It utilizes microdata from the Labor Force Surveys of 1998 and 2008 and applies both OLS and quantile regression techniques to assess the gender wage gap across the wage distribution. The average unadjusted gender wage gap is found to be relatively low and declining. This paper argues that employed women in Croatia possess higher-quality labor market characteristics than men, especially in terms of education, but receive much lower rewards for these characteristics. The Machado-Mata decomposition technique is used to estimate the gender wage gap as the sole effect of differing rewards. The results suggest that due to differing rewards the gap exceeds 20 percent on average - twice the size of the unadjusted gap - and that it increased somewhat between 1998 and 2008. The gap is found to be the highest at the lower-to-middle part of the wage distribution.

  17. Estimating health expectancies from two cross-sectional surveys: The intercensal method

    Directory of Open Access Journals (Sweden)

    Michel Guillot

    2009-10-01

    Full Text Available Health expectancies are key indicators for monitoring the health of populations, as well as for informing debates about compression or expansion of morbidity. However, current methodologies for estimating them are not entirely satisfactory. They are either of limited applicability because of high data requirements (the multistate method or based on questionable assumptions (the Sullivan method. This paper proposes a new method, called the "intercensal" method, which relies on the multistate framework but uses widely available data. The method uses age-specific proportions "healthy" at two successive, independent cross-sectional health surveys, and, together with information on general mortality, solves for the set of transition probabilities that produces the observed sequence of proportions healthy. The system is solved by making realistic parametric assumptions about the age patterns of transition probabilities. Using data from the Health and Retirement Survey (HRS and from the National Health Interview Survey (NHIS, the method is tested against both the multistate method and the Sullivan method. We conclude that the intercensal approach is a promising framework for the indirect estimation of health expectancies.

  18. Assessment of distribution and abundance estimates for Mariana swiftlets (Aerodramus bartschi) via examination of survey methods

    Science.gov (United States)

    Johnson, Nathan C.; Haig, Susan M.; Mosher, Stephen M.

    2018-01-01

    We described past and present distribution and abundance data to evaluate the status of the endangered Mariana Swiftlet (Aerodramus bartschi), a little-known echolocating cave swiftlet that currently inhabits 3 of 5 formerly occupied islands in the Mariana archipelago. We then evaluated the survey methods used to attain these estimates via fieldwork carried out on an introduced population of Mariana Swiftlets on the island of O'ahu, Hawaiian Islands, to derive better methods for future surveys. We estimate the range-wide population of Mariana Swiftlets to be 5,704 individuals occurring in 15 caves on Saipan, Aguiguan, and Guam in the Marianas; and 142 individuals occupying one tunnel on O'ahu. We further confirm that swiftlets have been extirpated from Rota and Tinian and have declined on Aguiguan. Swiftlets have remained relatively stable on Guam and Saipan in recent years. Our assessment of survey methods used for Mariana Swiftlets suggests overestimates depending on the technique used. We suggest the use of night vision technology and other changes to more accurately reflect their distribution, abundance, and status.

  19. Simultaneous estimation of the in-mean and in-variance causal connectomes of the human brain.

    Science.gov (United States)

    Duggento, A; Passamonti, L; Guerrisi, M; Toschi, N

    2017-07-01

    In recent years, the study of the human connectome (i.e. of statistical relationships between non spatially contiguous neurophysiological events in the human brain) has been enormously fuelled by technological advances in high-field functional magnetic resonance imaging (fMRI) as well as by coordinated world wide data-collection efforts like the Human Connectome Project (HCP). In this context, Granger Causality (GC) approaches have recently been employed to incorporate information about the directionality of the influence exerted by a brain region on another. However, while fluctuations in the Blood Oxygenation Level Dependent (BOLD) signal at rest also contain important information about the physiological processes that underlie neurovascular coupling and associations between disjoint brain regions, so far all connectivity estimation frameworks have focused on central tendencies, hence completely disregarding so-called in-variance causality (i.e. the directed influence of the volatility of one signal on the volatility of another). In this paper, we develop a framework for simultaneous estimation of both in-mean and in-variance causality in complex networks. We validate our approach using synthetic data from complex ensembles of coupled nonlinear oscillators, and successively employ HCP data to provide the very first estimate of the in-variance connectome of the human brain.

20. CO2 emission and agricultural productivity in the Southeast Asian region: a pooled mean group estimation

    International Nuclear Information System (INIS)

    Islam, M.; Kazi, M.

    2014-01-01

Frequent natural calamities, extreme climatic events and unexpected seasonal changes are obvious manifestations of global warming. Carbon emissions by industrial units all over the world are believed to be the major contributor to global warming, which can lead to reduced agricultural productivity. This paper examines the impact of CO2 emission on agricultural productivity in Southeast Asian countries. It investigates the dynamic relationship between CO2 emission (along with other control variables) and agricultural output using a panel data set comprising data from Southeast Asian countries. Following the dynamic heterogeneous panel techniques developed by Pesaran and Shin (1999) for estimating short-run and long-run effects using an autoregressive distributed lag (ARDL) model in error correction form, the study estimated the empirical model with the pooled mean group (PMG) estimator. The study found that increased CO2 emission resulted in higher agricultural productivity, because farmers around the globe quickly adapt to climate change. In addition, use of submersible pumps and other capital machinery significantly increased agricultural yield and reduced dependency on human capital, while use of chemical fertilizers increased productivity in the short run but had a harmful impact in the long run. (author)

  1. The estimation of local marine dispersion of radionuclides from hydrographic survey data

    International Nuclear Information System (INIS)

    Maul, P.R.

    1985-05-01

    One of the most important stages in the assessment of the radiological impact of routine discharges of activity to the sea is the estimation of the local dispersion characteristics. Existing methods for defining the parameters required by the computer program CODAR2 are expanded to take into account the significance of the turbulence generated by the discharge, the effect of a shelving sea bed and the variation with time of the lateral dispersion coefficient. These methods also enable the importance of the timing of discharges and the variation of radionuclide concentrations along the coast to be considered. Calculations of local marine dispersion depend directly upon the information that is available from hydrographic surveys. Detailed consideration is given to the definition of model parameter values from data that are generally available from such surveys. The uncertainties involved in mathematical modelling and parameter specification suggest that the long term average radionuclide concentration in the vicinity of the release can be estimated to within a factor of 2 or 3, with estimates more likely to be greater than, rather than less than the actual value. This uncertainty will contribute to the net uncertainty in any radiological assessment of critical group exposure. (author)

  2. The transition to early fatherhood: National estimates based on multiple surveys

    Directory of Open Access Journals (Sweden)

    H. Elizabeth Peters

    2008-04-01

    Full Text Available This study provides systematic information about the prevalence of early male fertility and the relationship between family background characteristics and early parenthood across three widely used data sources: the 1979 and 1997 National Longitudinal Surveys of Youth and the 2002 National Survey of Family Growth. We provide descriptive statistics on early fertility by age, sex, race, cohort, and data set. Because each data set includes birth cohorts with varying early fertility rates, prevalence estimates for early male fertility are relatively similar across data sets. Associations between background characteristics and early fertility in regression models are less consistent across data sets. We discuss the implications of these findings for scholars doing research on early male fertility.

  3. Data Processing Procedures and Methodology for Estimating Trip Distances for the 1995 American Travel Survey (ATS)

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, H.-L.; Rollow, J.

    2000-05-01

    The 1995 American Travel Survey (ATS) collected information from approximately 80,000 U.S. households about their long distance travel (one-way trips of 100 miles or more) during the year of 1995. It is the most comprehensive survey of where, why, and how U.S. residents travel since 1977. ATS is a joint effort by the U.S. Department of Transportation (DOT) Bureau of Transportation Statistics (BTS) and the U.S. Department of Commerce Bureau of Census (Census); BTS provided the funding and supervision of the project, and Census selected the samples, conducted interviews, and processed the data. This report documents the technical support for the ATS provided by the Center for Transportation Analysis (CTA) in Oak Ridge National Laboratory (ORNL), which included the estimation of trip distances as well as data quality editing and checking of variables required for the distance calculations.

  4. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
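A minimal sketch of the multiplier estimator described above (population size = M / P), with a delta-method standard error added for illustration. The design-effect-inflated binomial variance for P is a common simplification, not necessarily the paper's exact variance formula, and all numbers here are hypothetical.

```python
# Multiplier-method population size estimate N = M / P with a
# delta-method CI. All inputs are hypothetical illustration values.
import math

M = 5000          # unique objects distributed (service multiplier)
p_hat = 0.25      # proportion reporting receipt in the RDS survey
n = 400           # RDS survey sample size
deff = 2.0        # assumed design effect of the RDS survey

N_hat = M / p_hat
# Simplified variance of P: design-effect-inflated binomial variance.
se_p = math.sqrt(deff * p_hat * (1 - p_hat) / n)
# Delta method for N = M / P: Var(N) ~ M^2 * Var(P) / P^4
se_N = M * se_p / p_hat**2
lo, hi = N_hat - 1.96 * se_N, N_hat + 1.96 * se_N
print(round(N_hat), round(lo), round(hi))
```

Consistent with the abstract, shrinking `p_hat` (a rarer object or service) or raising `deff` widens the interval, which is why the authors advise choices that push P upward.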

  5. Validation of the Maslach Burnout Inventory-Human Services Survey for Estimating Burnout in Dental Students.

    Science.gov (United States)

    Montiel-Company, José María; Subirats-Roig, Cristian; Flores-Martí, Pau; Bellot-Arcís, Carlos; Almerich-Silla, José Manuel

    2016-11-01

    The aim of this study was to examine the validity and reliability of the Maslach Burnout Inventory-Human Services Survey (MBI-HSS) as a tool for assessing the prevalence and level of burnout in dental students in Spanish universities. The survey was adapted from English to Spanish. A sample of 533 dental students from 15 Spanish universities and a control group of 188 medical students self-administered the survey online, using the Google Drive service. The test-retest reliability or reproducibility showed an Intraclass Correlation Coefficient of 0.95. The internal consistency of the survey was 0.922. Testing the construct validity showed two components with an eigenvalue greater than 1.5, which explained 51.2% of the total variance. Factor I (36.6% of the variance) comprised the items that estimated emotional exhaustion and depersonalization. Factor II (14.6% of the variance) contained the items that estimated personal accomplishment. The cut-off point for the existence of burnout achieved a sensitivity of 92.2%, a specificity of 92.1%, and an area under the curve of 0.96. Comparison of the total dental students sample and the control group of medical students showed significantly higher burnout levels for the dental students (50.3% vs. 40.4%). In this study, the MBI-HSS was found to be viable, valid, and reliable for measuring burnout in dental students. Since the study also found that the dental students suffered from high levels of this syndrome, these results suggest the need for preventive burnout control programs.
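The internal-consistency figure reported above (0.922) is a Cronbach's alpha. A minimal sketch of the statistic follows, computed on invented item scores rather than the study's data.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/variance of totals).
# The item-score matrix below is invented for illustration.
def cronbach_alpha(scores):
    """scores: list of respondents, each a list of k item scores."""
    k = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([r[i] for r in scores]) for i in range(k)]
    total_var = var([sum(r) for r in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

data = [[3, 4, 3], [2, 2, 3], [5, 4, 4], [1, 2, 1]]  # 4 respondents x 3 items
print(round(cronbach_alpha(data), 3))  # 0.917
```

Values near 0.9, as in the MBI-HSS validation, indicate that item scores covary strongly relative to their individual spread.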

  6. Uncertainties estimation in surveying measurands: application to lengths, perimeters and areas

    Science.gov (United States)

    Covián, E.; Puente, V.; Casero, M.

    2017-10-01

The present paper develops a series of methods for the estimation of uncertainty when measuring certain measurands of interest in surveying practice, such as point elevations given a planimetric position within a triangle mesh, 2D and 3D lengths (including perimeter enclosures), 2D areas (horizontal surfaces) and 3D areas (natural surfaces). The basis for the proposed methodology is the law of propagation of variance-covariance, which, applied to the corresponding model for each measurand, allows calculating the resulting uncertainty from known measurement errors. The methods are tested first in a small example, with a limited number of measurement points, and then in two real-life measurements. In addition, the proposed methods have been incorporated into commercial software used in the field of surveying engineering and focused on the creation of digital terrain models. The aim of this evolution is, firstly, to comply with the guidelines of the BIPM (Bureau International des Poids et Mesures), the international reference agency in the field of metrology, in relation to the determination and expression of uncertainty; and secondly, to improve the quality of the measurement by indicating the uncertainty associated with a given level of confidence. The conceptual and mathematical developments for the uncertainty estimation in the aforementioned cases were conducted by researchers from the AssIST group at the University of Oviedo, eventually resulting in several different mathematical algorithms implemented in the form of MATLAB code. Based on these prototypes, technicians incorporated the referred functionality into commercial software, developed in C++. As a result of this collaboration, in early 2016 a new version of this commercial software was made available, which will be the first, as far as the authors are aware, that incorporates the possibility of estimating the uncertainty for a given level of confidence when computing the aforementioned surveying
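As a minimal illustration of the law of propagation of variance-covariance applied to one of the measurands above, the 2D length between two surveyed points can be handled as follows. Coordinate errors are assumed independent here (a simplification: real surveys would carry the full covariance matrix), and the coordinates and sigmas are invented.

```python
# Propagation of variance for a 2D length d = sqrt(dx^2 + dy^2),
# assuming independent per-coordinate errors (no covariances).
import math

def length_2d_with_sigma(p1, s1, p2, s2):
    """Return (length, sigma_length) for points p = (x, y) with
    per-coordinate standard deviations s = (sx, sy)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    # Partials of d w.r.t. the coordinates are +-dx/d and +-dy/d,
    # so Var(d) = (dx/d)^2*(sx1^2+sx2^2) + (dy/d)^2*(sy1^2+sy2^2).
    var = ((dx / d) ** 2 * (s1[0] ** 2 + s2[0] ** 2)
           + (dy / d) ** 2 * (s1[1] ** 2 + s2[1] ** 2))
    return d, math.sqrt(var)

d, sd = length_2d_with_sigma((100.0, 200.0), (0.01, 0.01),
                             (130.0, 240.0), (0.01, 0.01))
print(round(d, 3), round(sd, 4))  # 50.0 m with sigma ~ 0.0141 m
```

Multiplying `sd` by a coverage factor (e.g. 1.96) gives the expanded uncertainty for a given confidence level, as the BIPM guidelines recommend.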

  7. Reliability of Nationwide Prevalence Estimates of Dementia: A Critical Appraisal Based on Brazilian Surveys.

    Directory of Open Access Journals (Sweden)

    Flávio Chaimowicz

Full Text Available The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia surveys, ascertained their risk of bias, and present the best estimate of occurrence of dementia in Brazil. We carried out an electronic search of PubMed, Latin-American databases, and a Brazilian thesis database for surveys focusing on dementia prevalence in Brazil. The systematic review was registered at PROSPERO (CRD42014008815). Among the 35 studies found, 15 analyzed population-based random samples. However, most of them utilized inadequate criteria for diagnostics. Six studies without these limitations were further analyzed to assess the risk of selection, attrition, outcome and population bias, as well as several statistical issues. All the studies presented moderate or high risk of bias in at least two domains due to the following features: high non-response, inaccurate cut-offs, and doubtful accuracy of the examiners. Two studies had limited external validity due to high rates of illiteracy or low income. The three studies with adequate generalizability and the lowest risk of bias presented a prevalence of dementia between 7.1% and 8.3% among subjects aged 65 years and older. However, after adjustment for accuracy of screening, the best available evidence points towards a figure between 15.2% and 16.3%. The risk of bias may strongly limit the generalizability of dementia prevalence estimates in developing countries. Extrapolations that have already been made for Brazil and Latin America were based on a prevalence that should have been adjusted for screening accuracy, or not used at all due to severe bias. Similar evaluations regarding other developing countries are needed in order to verify the scope of these limitations.

  8. Estimating the abundance of the Southern Hudson Bay polar bear subpopulation with aerial surveys

    Science.gov (United States)

    Obbard, Martyn E.; Stapleton, Seth P.; Middel, Kevin R.; Thibault, Isabelle; Brodeur, Vincent; Jutras, Charles

    2015-01-01

    The Southern Hudson Bay (SH) polar bear subpopulation occurs at the southern extent of the species’ range. Although capture–recapture studies indicate abundance was likely unchanged between 1986 and 2005, declines in body condition and survival occurred during the period, possibly foreshadowing a future decrease in abundance. To obtain a current estimate of abundance, we conducted a comprehensive line transect aerial survey of SH during 2011–2012. We stratified the study site by anticipated densities and flew coastal contour transects and systematically spaced inland transects in Ontario and on Akimiski Island and large offshore islands in 2011. Data were collected with double-observer and distance sampling protocols. We surveyed small islands in James Bay and eastern Hudson Bay and flew a comprehensive transect along the Québec coastline in 2012. We observed 667 bears in Ontario and on Akimiski Island and nearby islands in 2011, and we sighted 80 bears on offshore islands during 2012. Mark–recapture distance sampling and sight–resight models yielded an estimate of 860 (SE = 174) for the 2011 study area. Our estimate of abundance for the entire SH subpopulation (943; SE = 174) suggests that abundance is unlikely to have changed significantly since 1986. However, this result should be interpreted cautiously because of the methodological differences between historical studies (physical capture–recapture) and this survey. A conservative management approach is warranted given previous increases in duration of the ice-free season, which are predicted to continue in the future, and previously documented declines in body condition and vital rates.

  9. Estimating abundance of the Southern Hudson Bay polar bear subpopulation using aerial surveys, 2011 and 2012

    Science.gov (United States)

    Obbard, Martyn E.; Middel, Kevin R.; Stapleton, Seth P.; Thibault, Isabelle; Brodeur, Vincent; Jutras, Charles

    2013-01-01

    The Southern Hudson Bay (SH) polar bear subpopulation occurs at the southern extent of the species’ range. Although capture-recapture studies indicate that abundance remained stable between 1986 and 2005, declines in body condition and survival were documented during the period, possibly foreshadowing a future decrease in abundance. To obtain a current estimate of abundance, we conducted a comprehensive line transect aerial survey of SH during 2011–2012. We stratified the study site by anticipated densities and flew coastal contour transects and systematically spaced inland transects in Ontario and on Akimiski Island and large offshore islands in 2011. Data were collected with double-observer and distance sampling protocols. We also surveyed small islands in Hudson Bay and James Bay and flew a comprehensive transect along the Québec coastline in 2012. We observed 667 bears in Ontario and on Akimiski Island and nearby islands in 2011, and we sighted 80 bears on offshore islands during 2012. Mark-recapture distance sampling and sight–resight models yielded a model-averaged estimate of 868 (SE: 177) for the 2011 study area. Our estimate of abundance for the entire SH subpopulation (951; SE: 177) suggests that abundance has remained unchanged. However, this result should be interpreted cautiously because of the methodological differences between historical studies (physical capture) and this survey. A conservative management approach is warranted given the previous increases in the duration of the ice-free season, which are predicted to continue in the future, and previously documented declines in body condition and vital rates.
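
    As a rough illustration of the distance-sampling component of such an estimate (not the authors' actual mark-recapture distance sampling model, which also incorporates double-observer sight–resight data), a conventional line-transect estimator with a half-normal detection function can be sketched as follows; all function names and inputs are hypothetical:

```python
import math

def halfnormal_esw(distances_m):
    # MLE for a half-normal detection function g(x) = exp(-x^2 / (2 sigma^2))
    # with exact perpendicular distances: sigma_hat^2 = sum(x_i^2) / n.
    # Effective strip half-width (ESW) = integral of g = sigma * sqrt(pi / 2).
    n = len(distances_m)
    sigma2 = sum(x * x for x in distances_m) / n
    return math.sqrt(sigma2) * math.sqrt(math.pi / 2.0)

def abundance_estimate(distances_m, total_line_length_km, study_area_km2):
    # Conventional line-transect estimator: density D = n / (2 * L * ESW),
    # then abundance N = D * area surveyed.
    esw_km = halfnormal_esw(distances_m) / 1000.0
    density = len(distances_m) / (2.0 * total_line_length_km * esw_km)
    return density * study_area_km2
```

    In practice the detection function would be fit jointly with observer-specific sighting probabilities, and variance would come from a bootstrap or the model itself.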

  10. Estimating family planning coverage from contraceptive prevalence using national household surveys.

    Science.gov (United States)

    Barros, Aluisio J D; Boerma, Ties; Hosseinpoor, Ahmad R; Restrepo-Méndez, María C; Wong, Kerry L M; Victora, Cesar G

    2015-01-01

    Contraception is one of the most important health interventions currently available and yet, many women and couples still do not have reliable access to modern contraceptives. The best indicator for monitoring family planning is the proportion of women using contraception among those who need it. This indicator is frequently called demand for family planning satisfied and we argue that it should be called family planning coverage (FPC). This indicator is complex to calculate and requires a considerable number of questions to be included in a household survey. We propose a model that can predict FPC from a much simpler indicator - contraceptive use prevalence - for situations where it cannot be derived directly. Using 197 Multiple Indicator Cluster Surveys and Demographic and Health Surveys from 82 countries, we explored least-squares regression models that could be used to predict FPC. Non-linearity was expected in this situation and we used a fractional polynomial approach to find the best fitting model. We also explored the effect of calendar time and of wealth on the models explored. Given the high correlation between the variables involved in FPC, we managed to derive a relatively simple model that depends only on contraceptive use prevalence but explains 95% of the variability of the outcome, with high precision for the estimated regression line. We also show that the relationship between the two variables has not changed with time. A concordance analysis showed agreement between observed and fitted results within a range of ±9 percentage points. We show that it is possible to obtain fairly good estimates of FPC using only contraceptive prevalence as a predictor, a strategy that is useful in situations where it is not possible to estimate FPC directly.
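
    The fractional-polynomial fitting step described above can be sketched as follows. This is a minimal first-degree (FP1) illustration with ordinary least squares over the conventional candidate power set; the authors' actual model selection, variable definitions, and survey weighting are not reproduced here:

```python
import numpy as np

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # conventional FP1 candidate set

def fp_term(x, p):
    # Power 0 is conventionally interpreted as log(x).
    return np.log(x) if p == 0 else x ** p

def fit_fp1(x, y):
    """Pick the single fractional-polynomial power minimising residual SS."""
    best = None
    for p in POWERS:
        X = np.column_stack([np.ones_like(x), fp_term(x, p)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if best is None or sse < best[0]:
            best = (sse, p, beta)
    return best[1], best[2]

def predict_fp1(x, p, beta):
    return beta[0] + beta[1] * fp_term(x, p)
```

    Here x would be contraceptive use prevalence and y the directly measured family planning coverage, both as proportions.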

  11. Estimated rate of agricultural injury: the Korean Farmers’ Occupational Disease and Injury Survey

    Science.gov (United States)

    2014-01-01

    Objectives This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. Methods The first Korean Farmers’ Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. Logistic regression was performed to identify the relationship between the prevalence of agricultural injuries and the general characteristics of the study population. Results We estimated that 3.2% (±0.00) of Korean farmers suffered agricultural injuries that required an absence of more than 4 days. The injury rates among orchard farmers (5.4 ± 0.00) were higher than those of all non-orchard farmers. The odds ratio (OR) for agricultural injuries was significantly lower in females (OR: 0.45, 95% CI = 0.45–0.45) compared to males. However, the odds of injury among farmers aged 50–59 (OR: 1.53, 95% CI = 1.46–1.60), 60–69 (OR: 1.45, 95% CI = 1.39–1.51), and ≥70 (OR: 1.94, 95% CI = 1.86–2.02) were significantly higher compared to those younger than 50. In addition, the total number of years farmed, average number of months per year of farming, and average hours per day of farming were significantly associated with agricultural injuries. Conclusions Agricultural injury rates in this study were higher than rates reported by the existing compensation insurance data. Males and older farmers were at a greater risk of agricultural injuries; therefore, the prevention and management of agricultural injuries in this population is required. PMID:24808945

  12. Estimated rate of agricultural injury: the Korean Farmers' Occupational Disease and Injury Survey.

    Science.gov (United States)

    Chae, Hyeseon; Min, Kyungdoo; Youn, Kanwoo; Park, Jinwoo; Kim, Kyungran; Kim, Hyocher; Lee, Kyungsuk

    2014-01-01

    This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. The first Korean Farmers' Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. Logistic regression was performed to identify the relationship between the prevalence of agricultural injuries and the general characteristics of the study population. We estimated that 3.2% (±0.00) of Korean farmers suffered agricultural injuries that required an absence of more than 4 days. The injury rates among orchard farmers (5.4 ± 0.00) were higher than those of all non-orchard farmers. The odds ratio (OR) for agricultural injuries was significantly lower in females (OR: 0.45, 95% CI = 0.45-0.45) compared to males. However, the odds of injury among farmers aged 50-59 (OR: 1.53, 95% CI = 1.46-1.60), 60-69 (OR: 1.45, 95% CI = 1.39-1.51), and ≥70 (OR: 1.94, 95% CI = 1.86-2.02) were significantly higher compared to those younger than 50. In addition, the total number of years farmed, average number of months per year of farming, and average hours per day of farming were significantly associated with agricultural injuries. Agricultural injury rates in this study were higher than rates reported by the existing compensation insurance data. Males and older farmers were at a greater risk of agricultural injuries; therefore, the prevention and management of agricultural injuries in this population is required.
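
    For intuition, the reported odds ratios correspond to the kind of quantity computed below. This is an unadjusted 2×2-table sketch with a Wald confidence interval, not the logistic regression on nationwide survey data that the study actually used:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
         a = exposed & injured,   b = exposed & not injured
         c = unexposed & injured, d = unexposed & not injured"""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

    A multivariable logistic model additionally adjusts each OR for the other covariates (age, sex, farming hours, and so on).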

  13. Estimates of the mean alcohol concentration of the spirits, wine, and beer sold in the United States and per capita consumption: 1950 to 2002.

    Science.gov (United States)

    Kerr, William C; Greenfield, Thomas K; Tujague, Jennifer

    2006-09-01

    Estimates of per capita consumption of alcohol in the United States require estimates of the mean alcohol content by volume (%ABV) of the beer, wine, and spirits sold to convert beverage volume to gallons of pure alcohol. The mean %ABV of spirits is estimated for each year from 1950 to 2002 and for each state using the %ABV of major brands and sales of spirits types. The mean %ABV of beer and wine is extrapolated to cover this period based on previous estimates. These mean %ABVs are then applied to alcohol sales figures to calculate new yearly estimates of per capita consumption of beer, wine, spirits, and total alcohol for the United States population aged 15 and older. The mean %ABV for spirits is found to be lower than previous estimates and to vary considerably over time and across states. Resultant per capita consumption estimates indicate that more alcohol was consumed from beer and less from wine and spirits than found in previous estimates. Empirically based calculation of mean %ABV for beer, wine, and spirits sold in the United States results in different and presumably more accurate per capita consumption estimates than heretofore available. Utilization of the new estimates in aggregate time-series and cross-sectional models of alcohol consumption and related outcomes may improve the accuracy and precision of such models.
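
    The core conversion described above, a sales-weighted mean %ABV applied to beverage volume and divided by the population aged 15 and older, can be sketched as follows; the inputs are hypothetical, not the paper's data:

```python
def mean_abv(sales_by_type):
    """Volume-weighted mean ABV (as a fraction) from a mapping of
    beverage type -> (gallons sold, abv_fraction)."""
    total_vol = sum(v for v, _ in sales_by_type.values())
    return sum(v * abv for v, abv in sales_by_type.values()) / total_vol

def per_capita_ethanol(beverage_gallons, abv_fraction, population_15plus):
    # Gallons of pure alcohol per person aged 15 and older.
    return beverage_gallons * abv_fraction / population_15plus
```

    The study's contribution is essentially supplying better year- and state-specific values for the `abv_fraction` input.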

  14. Intelligence in Bali--A Case Study on Estimating Mean IQ for a Population Using Various Corrections Based on Theory and Empirical Findings

    Science.gov (United States)

    Rindermann, Heiner; te Nijenhuis, Jan

    2012-01-01

    A high-quality estimate of the mean IQ of a country requires giving a well-validated test to a nationally representative sample, which usually is not feasible in developing countries. So, we used a convenience sample and four corrections based on theory and empirical findings to arrive at a good-quality estimate of the mean IQ in Bali. Our study…

  15. Estimating Power Outage Cost based on a Survey for Industrial Customers

    Science.gov (United States)

    Yoshida, Yoshikuni; Matsuhashi, Ryuji

    A survey on power outage costs was conducted among industrial customers. 5139 factories, which are designated energy management factories in Japan, reported their power consumption and the loss of production value that a one-hour power outage on a summer weekday would cause. The median unit cost of a power outage across all sectors is estimated at 672 yen/kWh. The sector of services for amusement and hobbies and the sector of manufacture of information and communication electronics equipment have relatively high unit costs of power outage. The direct damage cost from a power outage across all sectors reaches 77 billion yen. We then used input-output analysis to estimate the indirect damage cost caused by the propagation of production halts. The indirect damage cost across all sectors reaches 91 billion yen. The sector of wholesale and retail trade has the largest direct damage cost. The sector of manufacture of transportation equipment has the largest indirect damage cost.
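
    The indirect-damage step can be illustrated with the standard Leontief input-output identity x = (I − A)⁻¹ f, where A is the technical-coefficients matrix and f the direct shock. This is a minimal sketch with a hypothetical two-sector economy, not the paper's actual tables:

```python
import numpy as np

def indirect_loss(A, direct_loss):
    """Total output loss x solves x = A x + f for a direct shock f
    (Leontief inverse); return only the indirect component x - f."""
    n = A.shape[0]
    total = np.linalg.solve(np.eye(n) - A, direct_loss)
    return total - direct_loss
```

    With real data, `direct_loss` would be the sector-by-sector production value lost during the outage itself, and A the national input-output coefficients.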

  16. A survey of kernel-type estimators for copula and their applications

    Science.gov (United States)

    Sumarjaya, I. W.

    2017-10-01

    Copulas have been widely used to model nonlinear dependence structures. Their main applications include areas such as finance, insurance, hydrology, and rainfall modeling, to name but a few. The flexibility of copulas allows researchers to model dependence structures beyond the Gaussian distribution. Basically, a copula is a function that couples a multivariate distribution function to its one-dimensional marginal distribution functions. In general, there are three approaches to copula estimation: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas, such as the mirror-reflection kernel, the beta kernel, the transformation method, and the local likelihood transformation method. We then apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit with variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
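
    As a baseline for the kernel estimators surveyed above, the (non-smoothed) empirical copula can be computed from rank-transformed pseudo-observations; the kernel methods in the article are essentially smoothed versions of this. A minimal sketch:

```python
import numpy as np

def empirical_copula(x, y, u, v):
    """Empirical copula C_n(u, v) from paired samples (x_i, y_i):
    the fraction of points whose normalised ranks are <= (u, v)."""
    n = len(x)
    # Rank transform to pseudo-observations in (0, 1].
    rx = np.argsort(np.argsort(x)) + 1
    ry = np.argsort(np.argsort(y)) + 1
    return float(np.mean((rx / n <= u) & (ry / n <= v)))
```

    For perfectly comonotonic data this reproduces the upper Fréchet bound min(u, v); kernel smoothing trades this step-function behaviour for a continuous estimate, at the cost of boundary-bias corrections such as mirror reflection.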

  17. Survey of engineering computational methods and experimental programs for estimating supersonic missile aerodynamic characteristics

    Science.gov (United States)

    Sawyer, W. C.; Allen, J. M.; Hernandez, G.; Dillenius, M. F. E.; Hemsch, M. J.

    1982-01-01

    This paper presents a survey of engineering computational methods and experimental programs used for estimating the aerodynamic characteristics of missile configurations. Emphasis is placed on those methods which are suitable for preliminary design of conventional and advanced concepts. An analysis of the technical approaches of the various methods is made in order to assess their suitability to estimate longitudinal and/or lateral-directional characteristics for different classes of missile configurations. Some comparisons between the predicted characteristics and experimental data are presented. These comparisons are made for a large variation in flow conditions and model attitude parameters. The paper also presents known experimental research programs developed for the specific purpose of validating analytical methods and extending the capability of data-base programs.

  18. Empirical model for mean temperature for Indian zone and estimation of precipitable water vapor from ground based GPS measurements

    Directory of Open Access Journals (Sweden)

    C. Suresh Raju

    2007-10-01

    Full Text Available Estimation of precipitable water (PW) in the atmosphere from ground-based Global Positioning System (GPS) measurements essentially involves modeling the zenith hydrostatic delay (ZHD) in terms of surface pressure (Ps) and subtracting it from the corresponding values of zenith tropospheric delay (ZTD) to estimate the zenith wet (non-hydrostatic) delay (ZWD). This further involves establishing an appropriate model connecting PW and ZWD, which in its simplest case is assumed to be similar to that for ZHD. But when temperature variations are large, the variation of the proportionality constant connecting PW and ZWD must be accounted for to estimate PW accurately. For this purpose a water-vapor-weighted mean temperature (Tm) has been defined in many investigations, which has to be modeled on a regional basis. For estimating PW over the Indian region from GPS data, a region-specific model for Tm in terms of surface temperature (Ts) is developed using radiosonde measurements from eight India Meteorological Department (IMD) stations spread over the subcontinent within a latitude range of 8.5°–32.6° N. Following a similar procedure, Tm-based models are also evolved for each of these stations, and the features of these site-specific models are compared with those of the region-specific model. The applicability of the region-specific and site-specific Tm-based models in retrieving PW from GPS data recorded at the IGS sites Bangalore and Hyderabad is tested by comparing the retrieved values of PW with those estimated from the altitude profile of water vapor measured using radiosonde. The values of ZWD estimated at 00:00 UTC and 12:00 UTC are used to test the validity of the models by estimating the PW using the models and comparing it with those obtained from radiosonde data. The region-specific Tm-based model is found to be on par with, if not better than, a
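
    A hedged sketch of the retrieval chain described above. The Tm coefficients are the widely quoted global (Bevis-style) values, not the India-specific ones fitted in the paper, and the 1e8 factor lumps the unit conversions for ZWD and PW both expressed in millimetres:

```python
def mean_temperature(ts_kelvin):
    # Global linear Tm(Ts) model; the paper fits region- and
    # site-specific versions of these two coefficients.
    return 70.2 + 0.72 * ts_kelvin

def pw_from_zwd(zwd_mm, tm_kelvin):
    # Dimensionless conversion factor Pi in PW = Pi * ZWD.
    k3 = 3.739e5      # K^2 / hPa (refractivity constant)
    k2_prime = 22.1   # K / hPa   (refractivity constant)
    rv = 461.5        # J / (kg K), gas constant of water vapor
    rho_w = 1000.0    # kg / m^3
    pi_factor = 1.0e8 / (rho_w * rv * (k3 / tm_kelvin + k2_prime))
    return pi_factor * zwd_mm
```

    For typical surface temperatures Pi comes out near 0.16, i.e. PW is roughly one sixth of the wet delay, which is why errors in Tm propagate only weakly into PW.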

  19. Comparison between geodetic and oceanographic approaches to estimate mean dynamic topography for vertical datum unification: evaluation at Australian tide gauges

    Science.gov (United States)

    Filmer, M. S.; Hughes, C. W.; Woodworth, P. L.; Featherstone, W. E.; Bingham, R. J.

    2018-04-01

    The direct method of vertical datum unification requires estimates of the ocean's mean dynamic topography (MDT) at tide gauges, which can be sourced from either geodetic or oceanographic approaches. To assess the suitability of different types of MDT for this purpose, we evaluate 13 physics-based numerical ocean models and six MDTs computed from observed geodetic and/or ocean data at 32 tide gauges around the Australian coast. We focus on the viability of numerical ocean models for vertical datum unification, classifying the 13 ocean models used as either independent (do not contain assimilated geodetic data) or non-independent (do contain assimilated geodetic data). We find that the independent and non-independent ocean models deliver similar results. Maximum differences among ocean models and geodetic MDTs reach >150 mm at several Australian tide gauges and are considered anomalous at the 99% confidence level. These differences appear to be of geodetic origin, but without additional independent information, or formal error estimates for each model, some of these errors remain inseparable. Our results imply that some ocean models have standard deviations of differences with other MDTs (using geodetic and/or ocean observations) at Australian tide gauges, and with levelling between some Australian tide gauges, of ~±50 mm. This indicates that they should be considered as an alternative to geodetic MDTs for the direct unification of vertical datums. They can also be used as diagnostics for errors in geodetic MDT in coastal zones, but the inseparability problem remains, where the error cannot be discriminated between the geoid model or altimeter-derived mean sea surface.
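
    The geodetic MDT used above is simply the height of the mean sea surface over the geoid, and the study's anomaly screening compares MDT estimates at a common tide gauge. A minimal sketch with hypothetical names and a 150 mm illustrative threshold:

```python
def mdt_at_gauge(mean_sea_surface_m, geoid_height_m):
    """Geodetic MDT: height of the mean sea surface above the geoid."""
    return mean_sea_surface_m - geoid_height_m

def anomalous(mdt_a_m, mdt_b_m, threshold_m=0.15):
    """Flag a pair of MDT estimates at the same gauge whose difference
    exceeds the chosen screening threshold (here 150 mm)."""
    return abs(mdt_a_m - mdt_b_m) > threshold_m
```

    An oceanographic MDT, by contrast, comes from a numerical model's sea-surface height field, which is what makes the comparison between the two approaches informative.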

  20. Statistical estimates of absenteeism attributable to seasonal and pandemic influenza from the Canadian Labour Force Survey.

    Science.gov (United States)

    Schanzer, Dena L; Zheng, Hui; Gilmore, Jason

    2011-04-12

    As many respiratory viruses are responsible for influenza like symptoms, accurate measures of the disease burden are not available and estimates are generally based on statistical methods. The objective of this study was to estimate absenteeism rates and hours lost due to seasonal influenza and compare these estimates with estimates of absenteeism attributable to the two H1N1 pandemic waves that occurred in 2009. Key absenteeism variables were extracted from Statistics Canada's monthly labour force survey (LFS). Absenteeism and the proportion of hours lost due to own illness or disability were modelled as a function of trend, seasonality and proxy variables for influenza activity from 1998 to 2009. Hours lost due to the H1N1/09 pandemic strain were elevated compared to seasonal influenza, accounting for a loss of 0.2% of potential hours worked annually. In comparison, an estimated 0.08% of hours worked annually were lost due to seasonal influenza illnesses. Absenteeism rates due to influenza were estimated at 12% per year for seasonal influenza over the 1997/98 to 2008/09 seasons, and 13% for the two H1N1/09 pandemic waves. Employees who took time off due to a seasonal influenza infection took an average of 14 hours off. For the pandemic strain, the average absence was 25 hours. This study confirms that absenteeism due to seasonal influenza has typically ranged from 5% to 20%, with higher rates associated with multiple circulating strains. Absenteeism rates for the 2009 pandemic were similar to those occurring for seasonal influenza. Employees took more time off due to the pandemic strain than was typical for seasonal influenza.
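
    The modelling approach described above, absenteeism as a function of trend, seasonality, and an influenza-activity proxy, can be sketched with ordinary least squares on monthly data. Variable names and the dummy coding below are illustrative, not Statistics Canada's actual specification:

```python
import numpy as np

def fit_absenteeism(months, flu_proxy, hours_lost_pct):
    """OLS: hours lost ~ trend + month-of-year dummies + influenza proxy.
    Returns the coefficient on the proxy, i.e. the hours attributable
    to influenza per unit of proxy activity."""
    t = np.arange(len(months), dtype=float)
    dummies = np.zeros((len(months), 11))  # December as the baseline month
    for i, m in enumerate(months):
        if m < 12:
            dummies[i, m - 1] = 1.0
    X = np.column_stack([np.ones_like(t), t, dummies, np.asarray(flu_proxy)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(hours_lost_pct), rcond=None)
    return float(beta[-1])
```

    The attributable burden is then the fitted proxy coefficient multiplied by observed influenza activity, summed over the season.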

  1. Statistical estimates of absenteeism attributable to seasonal and pandemic influenza from the Canadian Labour Force Survey

    Directory of Open Access Journals (Sweden)

    Zheng Hui

    2011-04-01

    Full Text Available Abstract Background As many respiratory viruses are responsible for influenza-like symptoms, accurate measures of the disease burden are not available and estimates are generally based on statistical methods. The objective of this study was to estimate absenteeism rates and hours lost due to seasonal influenza and compare these estimates with estimates of absenteeism attributable to the two H1N1 pandemic waves that occurred in 2009. Methods Key absenteeism variables were extracted from Statistics Canada's monthly labour force survey (LFS). Absenteeism and the proportion of hours lost due to own illness or disability were modelled as a function of trend, seasonality and proxy variables for influenza activity from 1998 to 2009. Results Hours lost due to the H1N1/09 pandemic strain were elevated compared to seasonal influenza, accounting for a loss of 0.2% of potential hours worked annually. In comparison, an estimated 0.08% of hours worked annually were lost due to seasonal influenza illnesses. Absenteeism rates due to influenza were estimated at 12% per year for seasonal influenza over the 1997/98 to 2008/09 seasons, and 13% for the two H1N1/09 pandemic waves. Employees who took time off due to a seasonal influenza infection took an average of 14 hours off. For the pandemic strain, the average absence was 25 hours. Conclusions This study confirms that absenteeism due to seasonal influenza has typically ranged from 5% to 20%, with higher rates associated with multiple circulating strains. Absenteeism rates for the 2009 pandemic were similar to those occurring for seasonal influenza. Employees took more time off due to the pandemic strain than was typical for seasonal influenza.

  2. Statistical estimates of absenteeism attributable to seasonal and pandemic influenza from the Canadian Labour Force Survey

    Science.gov (United States)

    2011-01-01

    Background As many respiratory viruses are responsible for influenza like symptoms, accurate measures of the disease burden are not available and estimates are generally based on statistical methods. The objective of this study was to estimate absenteeism rates and hours lost due to seasonal influenza and compare these estimates with estimates of absenteeism attributable to the two H1N1 pandemic waves that occurred in 2009. Methods Key absenteeism variables were extracted from Statistics Canada's monthly labour force survey (LFS). Absenteeism and the proportion of hours lost due to own illness or disability were modelled as a function of trend, seasonality and proxy variables for influenza activity from 1998 to 2009. Results Hours lost due to the H1N1/09 pandemic strain were elevated compared to seasonal influenza, accounting for a loss of 0.2% of potential hours worked annually. In comparison, an estimated 0.08% of hours worked annually were lost due to seasonal influenza illnesses. Absenteeism rates due to influenza were estimated at 12% per year for seasonal influenza over the 1997/98 to 2008/09 seasons, and 13% for the two H1N1/09 pandemic waves. Employees who took time off due to a seasonal influenza infection took an average of 14 hours off. For the pandemic strain, the average absence was 25 hours. Conclusions This study confirms that absenteeism due to seasonal influenza has typically ranged from 5% to 20%, with higher rates associated with multiple circulating strains. Absenteeism rates for the 2009 pandemic were similar to those occurring for seasonal influenza. Employees took more time off due to the pandemic strain than was typical for seasonal influenza. PMID:21486453

  3. How social processes distort measurement: the impact of survey nonresponse on estimates of volunteer work in the United States.

    Science.gov (United States)

    Abraham, Katharine G; Presser, Stanley; Helms, Sara

    2009-01-01

    The authors argue that both the large variability in survey estimates of volunteering and the fact that survey estimates do not show the secular decline common to other social capital measures are caused by the greater propensity of those who do volunteer work to respond to surveys. Analyses of the American Time Use Survey (ATUS)--the sample for which is drawn from the Current Population Survey (CPS)--together with the CPS volunteering supplement show that CPS respondents who become ATUS respondents report much more volunteering in the CPS than those who become ATUS nonrespondents. This difference is replicated within subgroups. Consequently, conventional adjustments for nonresponse cannot correct the bias. Although nonresponse leads to estimates of volunteer activity that are too high, it generally does not affect inferences about the characteristics of volunteers.

  4. Estimation of urban residential electricity demand in China using household survey data

    International Nuclear Information System (INIS)

    Zhou, Shaojie; Teng, Fei

    2013-01-01

    This paper uses annual urban household survey data of Sichuan Province from 2007 to 2009 to estimate the income and price elasticities of residential electricity demand, along with the effects of lifestyle-related variables. The empirical results show that in the urban area of Sichuan province, the residential electricity demand is price- and income-inelastic, with price and income elasticities ranging from −0.35 to −0.50 and from 0.14 to 0.33, respectively. Such lifestyle-related variables as demographic variables, dwelling size and holdings of home appliances, are also important determinants of residential electricity demand, especially the latter. These results are robust to a variety of sensitivity tests. The research findings imply that urban residential electricity demand continues to increase with the growth of income. The empirical results have important policy implications for the Multistep Electricity Price, which has been adopted in some cities and is expected to be promoted nationwide, and for the installation of energy-efficient home appliances. - Highlights: • We estimate price and income elasticities in China using household survey data. • The current study is the first such study in China at this level. • Both price and income are inelastic. • Behavioral factors have an important impact on electricity consumption
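
    The elasticity estimates above come from a demand model that is log-linear in price and income, so the coefficients are read directly as elasticities. A minimal constant-elasticity OLS sketch (the paper's actual specification additionally includes lifestyle covariates and uses repeated household survey waves):

```python
import numpy as np

def estimate_elasticities(electricity_kwh, price, income):
    """Log-log demand: ln q = b0 + b1 ln p + b2 ln y.
    b1 and b2 are the price and income elasticities."""
    X = np.column_stack([np.ones(len(price)), np.log(price), np.log(income)])
    beta, *_ = np.linalg.lstsq(X, np.log(electricity_kwh), rcond=None)
    return float(beta[1]), float(beta[2])
```

    An elasticity of −0.35 then means a 10% price increase reduces consumption by about 3.5%, which is what "price-inelastic" summarises.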

  5. Application of airborne gamma spectrometric survey data to estimating terrestrial gamma-ray dose rates: An example in California

    International Nuclear Information System (INIS)

    Wollenberg, H.A.; Revzan, K.L.; Smith, A.R.

    1992-01-01

    The authors examine the applicability of radioelement data from the National Aerial Radiometric Reconnaissance (NARR) to estimate terrestrial gamma-ray absorbed dose rates, by comparing dose rates calculated from aeroradiometric surveys of U, Th, and K concentrations in 1° × 2° quadrangles with dose rates calculated from a radiogeologic data base and the distribution of lithologies in California. Gamma-ray dose rates increase generally from north to south following lithological trends. Low values of 25–30 nGy/h occur in the northernmost quadrangles where low-radioactivity basaltic and ultramafic rocks predominate. Dose rates then increase southward due to the preponderance of clastic sediments and basic volcanics of the Franciscan Formation and Sierran metamorphics in north central and central California, and to increasing exposure southward of the Sierra Nevada batholith, Tertiary marine sedimentary rocks, intermediate to acidic volcanics, and granitic rocks of the Coast Ranges. High values, up to 100 nGy/h, occur in southeastern California, due primarily to the presence of high-radioactivity Precambrian and pre-Cenozoic metamorphic rocks. Lithology-based estimates of mean dose rates in the quadrangles generally match those from aeroradiometric data, with statewide means of 63 and 60 nGy/h, respectively. These are intermediate between a population-weighted global average of 51 nGy/h and a weighted continental average of 70 nGy/h, based on the global distribution of rock types. The concurrence of lithologically- and aeroradiometrically-determined dose rates in California, with its varied geology and topography encompassing settings representative of the continents, indicates that the NARR data are applicable to estimates of terrestrial absorbed dose rates from natural gamma emitters.
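
    Converting radioelement concentrations to an absorbed dose rate in air uses linear coefficients of the kind applied above; the values below are the commonly quoted UNSCEAR-style conversion factors, which may differ slightly from those the authors used:

```python
def dose_rate_ngy_per_h(k_pct, u_ppm, th_ppm):
    """Absorbed dose rate in air 1 m above ground from soil radioelements.
    Coefficients (nGy/h per % K, per ppm U, per ppm Th) are the widely
    used UNSCEAR-style factors, assuming secular equilibrium."""
    return 13.08 * k_pct + 5.67 * u_ppm + 2.49 * th_ppm
```

    Typical crustal concentrations of a few ppm U and Th and 1-2% K land in the tens of nGy/h, consistent with the 25-100 nGy/h range reported for California.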

  6. THE NEXT GENERATION VIRGO CLUSTER SURVEY. XV. THE PHOTOMETRIC REDSHIFT ESTIMATION FOR BACKGROUND SOURCES

    Energy Technology Data Exchange (ETDEWEB)

    Raichoor, A.; Mei, S.; Huertas-Company, M.; Licitra, R. [GEPI, Observatoire de Paris, CNRS, Université Paris Diderot, 61 Avenue de l' Observatoire, F-75014 Paris (France); Erben, T.; Hildebrandt, H. [Argelander-Institut für Astronomie, University of Bonn, Auf dem Hügel 71, D-53121 Bonn (Germany); Ilbert, O.; Boissier, S.; Boselli, A. [Aix Marseille Université, CNRS, Laboratoire d' Astrophysique de Marseille, UMR 7326, F-13388 Marseille (France); Ball, N. M.; Côté, P.; Ferrarese, L.; Gwyn, S. D. J.; Kavelaars, J. J. [Herzberg Institute of Astrophysics, National Research Council of Canada, Victoria, BC V9E 2E7 (Canada); Chen, Y.-T. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 106, Taiwan (China); Cuillandre, J.-C. [Canada-France-Hawaïi Telescope Corporation, Kamuela, HI 96743 (United States); Duc, P. A. [Laboratoire AIM Paris-Saclay, CEA/IRFU/SAp, CNRS/INSU, Université Paris Diderot, F-91191 Gif-sur-Yvette Cedex (France); Durrell, P. R. [Department of Physics and Astronomy, Youngstown State University, Youngstown, OH 44555 (United States); Guhathakurta, P. [UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Lançon, A., E-mail: anand.raichoor@obspm.fr [Observatoire Astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, 11 rue de l' Université, F-67000 Strasbourg (France); and others

    2014-12-20

    The Next Generation Virgo Cluster Survey (NGVS) is an optical imaging survey covering 104 deg² centered on the Virgo cluster. Currently, the complete survey area has been observed in the u*giz bands and one third in the r band. We present the photometric redshift estimation for the NGVS background sources. After a dedicated data reduction, we perform accurate photometry, with special attention to precise color measurements through point-spread function homogenization. We then estimate the photometric redshifts with the Le Phare and BPZ codes. We add a new prior that extends to i_AB = 12.5 mag. When using the u*griz bands, our photometric redshifts for 15.5 mag ≤ i ≲ 23 mag or z_phot ≲ 1 galaxies have a bias |Δz| < 0.02, less than 5% outliers, a scatter σ_outl.rej., and an individual error on z_phot that increases with magnitude (from 0.02 to 0.05 and from 0.03 to 0.10, respectively). When using the u*giz bands over the same magnitude and redshift range, the lack of the r band increases the uncertainties in the 0.3 ≲ z_phot ≲ 0.8 range (–0.05 < Δz < –0.02, σ_outl.rej. ∼ 0.06, 10%–15% outliers, and z_phot.err. ∼ 0.15). We also present a joint analysis of the photometric redshift accuracy as a function of redshift and magnitude. We assess the quality of our photometric redshifts by comparison to spectroscopic samples and by verifying that the angular auto- and cross-correlation function w(θ) of the entire NGVS photometric redshift sample across redshift bins is in agreement with the expectations.
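
    The accuracy statistics quoted above (bias, scatter, outlier fraction) are conventionally computed from Δz = (z_phot − z_spec)/(1 + z_spec). A minimal sketch using the robust NMAD as the scatter measure; the paper's σ_outl.rej. is a scatter after outlier rejection, so this is an analogue, not a reproduction:

```python
import numpy as np

def photoz_metrics(z_spec, z_phot, outlier_cut=0.15):
    """Bias, NMAD scatter, and outlier fraction of photometric redshifts,
    using the conventional dz = (z_phot - z_spec) / (1 + z_spec)."""
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    outliers = np.abs(dz) > outlier_cut
    nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
    return float(np.mean(dz)), float(nmad), float(np.mean(outliers))
```

    Evaluating these metrics in bins of magnitude and redshift gives exactly the kind of joint accuracy analysis the abstract describes.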

  7. THE NEXT GENERATION VIRGO CLUSTER SURVEY. XV. THE PHOTOMETRIC REDSHIFT ESTIMATION FOR BACKGROUND SOURCES

    International Nuclear Information System (INIS)

    Raichoor, A.; Mei, S.; Huertas-Company, M.; Licitra, R.; Erben, T.; Hildebrandt, H.; Ilbert, O.; Boissier, S.; Boselli, A.; Ball, N. M.; Côté, P.; Ferrarese, L.; Gwyn, S. D. J.; Kavelaars, J. J.; Chen, Y.-T.; Cuillandre, J.-C.; Duc, P. A.; Durrell, P. R.; Guhathakurta, P.; Lançon, A.

    2014-01-01

    The Next Generation Virgo Cluster Survey (NGVS) is an optical imaging survey covering 104 deg² centered on the Virgo cluster. Currently, the complete survey area has been observed in the u*giz bands and one third in the r band. We present the photometric redshift estimation for the NGVS background sources. After a dedicated data reduction, we perform accurate photometry, with special attention to precise color measurements through point-spread function homogenization. We then estimate the photometric redshifts with the Le Phare and BPZ codes. We add a new prior that extends to i_AB = 12.5 mag. When using the u*griz bands, our photometric redshifts for 15.5 mag ≤ i ≲ 23 mag or z_phot ≲ 1 galaxies have a bias |Δz| < 0.02, less than 5% outliers, a scatter σ_outl.rej., and an individual error on z_phot that increases with magnitude (from 0.02 to 0.05 and from 0.03 to 0.10, respectively). When using the u*giz bands over the same magnitude and redshift range, the lack of the r band increases the uncertainties in the 0.3 ≲ z_phot ≲ 0.8 range (–0.05 < Δz < –0.02, σ_outl.rej. ∼ 0.06, 10%-15% outliers, and z_phot.err. ∼ 0.15). We also present a joint analysis of the photometric redshift accuracy as a function of redshift and magnitude. We assess the quality of our photometric redshifts by comparison to spectroscopic samples and by verifying that the angular auto- and cross-correlation function w(θ) of the entire NGVS photometric redshift sample across redshift bins is in agreement with the expectations.

  8. Integrating national surveys to estimate small area variations in poor health and limiting long-term illness in Great Britain.

    Science.gov (United States)

    Moon, Graham; Aitken, Grant; Taylor, Joanna; Twigg, Liz

    2017-08-28

    This study aims to address, for the first time, the challenges of constructing small area estimates of health status using linked national surveys. The study also seeks to assess the concordance of these small area estimates with data from national censuses. The setting is population-level health status in England, Scotland and Wales. A linked integrated dataset of 23 374 survey respondents (16+ years) from the 2011 waves of the Health Survey for England (n=8603), the Scottish Health Survey (n=7537) and the Welsh Health Survey (n=7234) was analysed. The outcomes were the population prevalence of poorer self-rated health and of limiting long-term illness. A multilevel small area estimation modelling approach was used to estimate the prevalence of these outcomes for middle super output areas in England and Wales and intermediate zones in Scotland. The estimates were then compared with matched measures from the contemporaneous 2011 UK Census. There was a strong positive association between the small area estimates and matched census measures for all three countries for both poorer self-rated health (r=0.828, 95% CI 0.821 to 0.834) and limiting long-term illness (r=0.831, 95% CI 0.824 to 0.837), although systematic differences were evident, and small area estimation tended to indicate higher prevalences than census data. Despite strong concordance, variations in the small area prevalences of poorer self-rated health and limiting long-term illness evident in census data cannot be replicated perfectly using small area estimation with linked national surveys. This reflects a lack of harmonisation between surveys over question wording and design. The nature of small area estimates as 'expected values' also needs to be better understood. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
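    The concordance reported above is a Pearson correlation with a confidence interval; a standard way to attach a CI to r is the Fisher z-transform. A sketch on synthetic area-level data (the prevalences below are invented, not the study's estimates):

```python
import math
import numpy as np

def pearson_with_ci(x, y):
    """Pearson r with an approximate 95% CI via the Fisher z-transform."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r = float(np.corrcoef(x, y)[0, 1])
    z, se = math.atanh(r), 1.0 / math.sqrt(n - 3)   # Fisher transform
    return r, math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)

# toy: small-area estimates vs matched census measures for 1000 areas
rng = np.random.default_rng(0)
census = rng.uniform(0.05, 0.30, 1000)            # census prevalence
sae = census + rng.normal(0.02, 0.03, 1000)       # estimates run slightly higher
r, lo, hi = pearson_with_ci(sae, census)
```

    The systematic +0.02 offset in the toy data mimics the reported tendency of small area estimation to run above the census: a high r is compatible with a consistent level difference.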

  9. Binational Arsenic Exposure Survey: Methodology and Estimated Arsenic Intake from Drinking Water and Urinary Arsenic Concentrations

    Directory of Open Access Journals (Sweden)

    Robin B. Harris

    2012-03-01

    The Binational Arsenic Exposure Survey (BAsES) was designed to evaluate probable arsenic exposures in selected areas of southern Arizona and northern Mexico, two regions with known elevated levels of arsenic in groundwater reserves. This paper describes the methodology of BAsES and the relationship between estimated arsenic intake from beverages and arsenic output in urine. Households from eight communities were selected for their varying groundwater arsenic concentrations in Arizona, USA, and Sonora, Mexico. Adults responded to questionnaires and provided dietary information. A first morning urine void and water from all household drinking sources were collected. Associations between urinary arsenic concentration (total, organic, inorganic) and estimated level of arsenic consumed from water and other beverages were evaluated through crude associations and by random effects models. Median estimated total arsenic intake from beverages among participants from Arizona communities ranged from 1.7 to 14.1 µg/day compared to 0.6 to 3.4 µg/day among those from Mexico communities. In contrast, median urinary inorganic arsenic concentrations were greatest among participants from Hermosillo, Mexico (6.2 µg/L), whereas a high of 2.0 µg/L was found among participants from Ajo, Arizona. Estimated arsenic intake from drinking water was associated with urinary total arsenic concentration (p < 0.001), urinary inorganic arsenic concentration (p < 0.001), and urinary sum of species (p < 0.001). Urinary arsenic concentrations increased between 7% and 12% for each one percent increase in arsenic consumed from drinking water. Variability in arsenic intake from beverages and urinary arsenic output yielded counterintuitive results. Estimated intake of arsenic from all beverages was greatest among Arizonans, yet participants in Mexico had higher urinary total and inorganic arsenic concentrations. Other contributors to urinary arsenic concentrations should be evaluated.

  10. A citizen science based survey method for estimating the density of urban carnivores

    Science.gov (United States)

    Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W.; Mill, Aileen C.; Smith, Graham C.; Tolhurst, Bryony A.

    2018-01-01

    Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS, to determine the relative density of two urban carnivores in England, Great Britain. We determined the density of red fox (Vulpes vulpes) social groups in 14 approximately 1 km² suburban areas in 8 different towns and cities, and of Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km⁻², which was double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating the unreliability of the national data for determining actual densities or extrapolating a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km⁻²). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density; however, publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However this transferability is contingent on

  11. Nationwide survey of dental radiographic examination and estimation of collective effective dose in Japan, 1999

    International Nuclear Information System (INIS)

    Iwai, Kazuo; Satomi, Chieko; Kawashima, Shoji; Hashimoto, Koji; Nishizawa, Kanae; Maruyama, Takashi

    2005-01-01

    A nationwide survey of dental X-ray examination in Japan was performed in 1999, and the effective exposure dose due to dental X-ray examination was estimated. In Japan, most dental X-ray equipment is operated at a tube voltage of 60 kV and a tube current of 10 mA. Dental film in speed group D is most frequently used for dental X-ray examination. Fifty percent or more of dental clinics processed the films automatically. Seventy-five percent of dental clinics performed dental X-ray examinations in a separate X-ray room. The number of dental X-ray examinations in 1999 in Japan was estimated to be 82,301,000 for intra-oral radiography and 12,336,000 for panoramic radiography. The collective effective exposure dose in 1999 was estimated at 905.5 man·Sv for intra-oral radiography and 128.9 man·Sv for panoramic radiography. (author)
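    The collective dose arithmetic here is simply examinations × mean effective dose per examination, summed over modalities. A sketch that back-derives the per-exam doses implied by the reported totals (the per-exam values are inferred for illustration, not stated in the survey):

```python
# reported examination counts and collective doses (Japan, 1999)
n_intraoral = 82_301_000
n_panoramic = 12_336_000

# per-exam effective doses implied by the reported collective doses (Sv)
dose_intraoral = 905.5 / n_intraoral    # ~1.1e-5 Sv per intra-oral exam
dose_panoramic = 128.9 / n_panoramic    # ~1.0e-5 Sv per panoramic exam

# collective dose (man.Sv) = examinations x mean effective dose per exam
collective = n_intraoral * dose_intraoral + n_panoramic * dose_panoramic
```

    The two modalities together give the survey's total of roughly 1034 man·Sv for 1999.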

  12. Using stable isotopes to estimate and compare mean residence times in contrasting geologic catchments (Attert River, NW Luxembourg)

    Science.gov (United States)

    Martínez-Carreras, N.; Fenicia, F.; Frentress, J.; Wrede, S.; Pfister, L.

    2012-04-01

    In recent years, stable isotopes have been increasingly used to characterize important aspects of catchment hydrological functioning, such as water storage dynamics, flow pathways and water sources. These characteristics are often synthesized by the Mean Residence Time (MRT), a simple catchment descriptor that employs the relation between the distinct stable isotopic signatures in the rainfall input and the streamflow output of a catchment, which are significantly dampened through sub-surface propagation. In this preliminary study, MRT was estimated in the Attert River catchment (NW Luxembourg), where previous studies have shown that lithology exerts a major control on runoff generation. The Attert catchment lies at the transition zone of contrasting bedrock lithology: the northern part is characterized by Devonian schist of the Ardennes massif, while sedimentary deposits of sandstone and marls dominate in the south of the catchment. As a consequence of differing lithologic characteristics, hydrological processes change across scales. The schistose catchments exhibit a delayed shallow groundwater component, the sandstone catchments have a slow-responding year-round groundwater component, whereas flashy runoff regimes prevail in the marly catchments. Under these circumstances, the MRTs are expected to vary significantly according to lithology, and to provide additional understanding of internal catchment processes and their scale dependencies. In order to test this, bi-weekly monitoring of the stable water isotope composition (oxygen-18 and deuterium) of rainfall and discharge has been carried out since 2007 in 10 nested sub-catchments ranging in size from 0.4 to 247 km² in the Attert catchment. MRT was estimated using different lumped convolution integral models and sine wave functions with varying transit time distributions (TTDs). TTDs were evaluated through calibration. Further research efforts will deal with the application of conceptual models to simulate and compare TTD, using
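    For the sine-wave variant of the convolution approach, the MRT follows from the damping of the seasonal isotope amplitude between rainfall and streamflow: for an exponential TTD, A_stream/A_rain = (1 + (ωτ)²)^(−1/2), so τ = ω⁻¹√((A_rain/A_stream)² − 1). A sketch on synthetic bi-weekly δ¹⁸O series (the amplitudes are invented, not Attert data):

```python
import numpy as np

def fit_sine_amplitude(t_days, y, period=365.25):
    """Least-squares fit of y ~ a*sin(wt) + b*cos(wt) + c; returns amplitude."""
    w = 2 * np.pi / period
    X = np.column_stack([np.sin(w * t_days), np.cos(w * t_days),
                         np.ones_like(t_days)])
    a, b, c = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.hypot(a, b))

def mrt_exponential(A_in, A_out, period=365.25):
    """MRT for an exponential TTD from seasonal amplitude damping:
    A_out/A_in = (1 + (w*tau)^2)^(-1/2)  =>  tau = sqrt((A_in/A_out)^2 - 1)/w."""
    w = 2 * np.pi / period
    return np.sqrt((A_in / A_out) ** 2 - 1) / w

# synthetic bi-weekly d18O: rain amplitude 3 permil, stream damped to 1 permil
t = np.arange(0.0, 4 * 365.0, 14.0)
rain = -9.0 + 3.0 * np.sin(2 * np.pi * t / 365.25)
stream = -9.0 + 1.0 * np.sin(2 * np.pi * t / 365.25 - 0.5)
tau = mrt_exponential(fit_sine_amplitude(t, rain), fit_sine_amplitude(t, stream))
# tau ~ sqrt(3^2 - 1) * 365.25 / (2*pi) ~ 164 days
```

    Stronger damping (a smaller stream amplitude) yields a longer τ, which is why the groundwater-fed sandstone catchments are expected to show the largest MRTs.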

  13. Integrating K-means Clustering with Kernel Density Estimation for the Development of a Conditional Weather Generation Downscaling Model

    Science.gov (United States)

    Chen, Y.; Ho, C.; Chang, L.

    2011-12-01

    In previous decades, climate change caused by global warming has increased the occurrence frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely known tools, which show possible weather conditions under pre-defined CO2 emission scenarios announced by the IPCC. Because the study area of GCMs is the entire earth, the grid sizes of GCMs are much larger than the basin scale. To overcome the gap, a statistical downscaling technique can transform regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators and weather typing. The first two categories describe the relationships between the weather factors and precipitation based on deterministic algorithms, such as linear or nonlinear regression and ANN, and stochastic approaches, such as Markov chain theory and statistical distributions, respectively. In weather typing, the method has the ability to cluster weather factors, which are high-dimensional and continuous variables, into weather types, which are a limited number of discrete states. In this study, the proposed downscaling model integrates the weather type, using the K-means clustering algorithm, and the weather generator, using kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps, a calibration step and a synthesis step. Three sub-steps were used in the calibration step. First, weather factors, such as pressures, humidities and wind speeds, obtained from NCEP, and the precipitation observed at rainfall stations, were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types. Third, the Markov chain transition matrices and the
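    The two ingredients named above can be sketched with standard tools: K-means to discretize the weather factors into weather types, and a kernel density estimate of precipitation fitted per type for later synthesis. Everything below is synthetic and illustrative (the factor-precipitation relationship is assumed), and the `seed`/`minit='++'` options require a reasonably recent scipy:

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# toy daily 'weather factors' (e.g. pressure and humidity anomalies)
factors = rng.standard_normal((1000, 2))
# toy precipitation, heavier when the humidity factor is high
precip = np.exp(factors[:, 1] + 0.3 * rng.standard_normal(1000))

# calibration: cluster weather factors into 4 discrete weather types
centroids, labels = kmeans2(factors, 4, minit='++', seed=3)

# weather generator: one kernel density model of precipitation per type
kdes = {w: gaussian_kde(precip[labels == w]) for w in np.unique(labels)}

# synthesis: given a day's weather type, draw precipitation values
sample = kdes[labels[0]].resample(5).ravel()
```

    In the full model a Markov chain transition matrix between weather types would drive the day-to-day sequence, with the per-type KDE supplying the precipitation amounts.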

  14. Basin Visual Estimation Technique (BVET) and Representative Reach Approaches to Wadeable Stream Surveys: Methodological Limitations and Future Directions

    Science.gov (United States)

    Lance R. Williams; Melvin L. Warren; Susan B. Adams; Joseph L. Arvai; Christopher M. Taylor

    2004-01-01

    Basin Visual Estimation Techniques (BVET) are used to estimate abundance for fish populations in small streams. With BVET, independent samples are drawn from natural habitat units in the stream rather than sampling "representative reaches." This sampling protocol provides an alternative to traditional reach-level surveys, which are criticized for their lack...

  15. Estimating Classification Errors under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    NARCIS (Netherlands)

    Boeschoten, Laura; Oberski, Daniel; De Waal, Ton

    2017-01-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible

  16. An emperor penguin population estimate: the first global, synoptic survey of a species from space.

    Science.gov (United States)

    Fretwell, Peter T; Larue, Michelle A; Morin, Paul; Kooyman, Gerald L; Wienecke, Barbara; Ratcliffe, Norman; Fox, Adrian J; Fleming, Andrew H; Porter, Claire; Trathan, Phil N

    2012-01-01

    Our aim was to estimate the population of emperor penguins (Aptenodytes forsteri) using a single synoptic survey. We examined the whole continental coastline of Antarctica using a combination of medium resolution and Very High Resolution (VHR) satellite imagery to identify emperor penguin colony locations. Where colonies were identified, VHR imagery was obtained in the 2009 breeding season. The remotely-sensed images were then analysed using a supervised classification method to separate penguins from snow, shadow and guano. Actual counts of penguins from eleven ground-truthing sites were used to convert these classified areas into numbers of penguins using a robust regression algorithm. We found four new colonies and confirmed the location of three previously suspected sites, giving a total number of emperor penguin breeding colonies of 46. We estimated the breeding population of emperor penguins at each colony during 2009 and provide a population estimate of ~238,000 breeding pairs (compared with the last previously published count of 135,000-175,000 pairs). Based on published values of the relationship between breeders and non-breeders, this translates to a total population of ~595,000 adult birds. There is a growing consensus in the literature that global and regional emperor penguin populations will be affected by changing climate, a driver thought to be critical to their future survival. However, a complete understanding is severely limited by the lack of detailed knowledge about much of their ecology, and importantly a poor understanding of their total breeding population. To address the second of these issues, our work now provides a comprehensive estimate of the total breeding population that can be used in future population models and will provide a baseline for long-term research.
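    The calibration step above, converting classified penguin-pixel area into bird counts via robust regression against ground counts, can be sketched with a Theil-Sen estimator (scipy's `theilslopes`); the calibration numbers below are invented, not the paper's ground-truthing data:

```python
import numpy as np
from scipy.stats import theilslopes

# hypothetical ground-truthing: classified area (m^2) vs actual penguin counts
area = np.array([120., 340., 560., 800., 1050., 1300.,
                 1600., 1900., 2300., 2700., 3200.])
counts = 2.1 * area + np.array([30., -40., 55., -25., 60., -80.,
                                45., -30., 70., -50., 20.])

# robust (Theil-Sen) fit: the median of pairwise slopes resists outliers
slope, intercept, lo_slope, hi_slope = theilslopes(counts, area)

# apply the calibration to an unvisited colony's classified area
est_count = slope * 1500.0 + intercept
```

    A robust fit matters here because misclassified guano or shadow pixels at a few ground-truth sites would otherwise pull an ordinary least-squares calibration off target.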

  17. SU-F-T-244: Radiotherapy Risk Estimation Based On Expert Group Survey

    Energy Technology Data Exchange (ETDEWEB)

    Koo, J; Yoon, M [Korea University, Seoul (Korea, Republic of); Chung, W; Chung, M; Kim, D [Kyung Hee University Hospital at Gangdong, Gangdong-gu, Seoul (Korea, Republic of)

    2016-06-15

    Purpose: To evaluate the reliability of the RPN (Risk Priority Number) decided by an expert group and to provide preliminary data for adapting FMEA in Korea. Methods: 1163 incidents reported in ROSIS over 11 years were used as the real data for comparison and were categorized into 146 items. The questionnaire was composed of the 146 items, and respondents had to evaluate the ‘occurrence (O)’, ‘severity (S)’ and ‘detectability (D)’ of each item on a scale from 1 to 10 according to the proposed AAPM TG-100 rating scales. 19 medical physicists from 19 different organizations in Korea participated in the survey. Because the ROSIS items were not spread evenly enough to be classified into 10 grades, a 1–5 scale was chosen instead of 1–10, and the survey results were also rescaled to 5 grades for comparison. Results: The average O, S and D were 1.77, 3.50 and 2.13, respectively, and the item with the highest RPN (32) in the survey was ‘patient movement during treatment’. When comparing items ranked in the top 10 of the survey (O) and of the ROSIS database, two items overlapped, and ‘Simulation’ and ‘Treatment’ were the most frequently ranked RT processes in the top 10 of the survey and of ROSIS, respectively. Cronbach's α for each RT process ranged from 0.74 to 0.99, with p < 0.001. When comparing O*D, the average difference was 1.4. Conclusion: This work indicates the deviation between actual risk and expectation. Considering that the respondents were Korean, that ROSIS is mainly composed of incidents that happened in European countries, and that some of the top 10 ROSIS items do not apply to radiotherapy procedures in Korea, the deviation could have come from procedural differences. Moreover, if the expert group had included experts from various fields, the expectations might have been more accurate. Therefore, further research on radiotherapy risk estimation is needed.
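    The two statistics at the heart of this study are easy to make concrete: the FMEA Risk Priority Number is just O × S × D, and Cronbach's α measures inter-rater consistency from a raters × items score matrix. A sketch with simulated ratings (the scores are invented; on a 1-5 scale an RPN of 32 arises from, e.g., O = 2, S = 4, D = 4):

```python
import numpy as np

def rpn(occurrence, severity, detectability):
    """Risk Priority Number used in FMEA: RPN = O x S x D."""
    return occurrence * severity * detectability

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_raters x n_items) score matrix."""
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# e.g. an item rated O=2, S=4, D=4 on the 1-5 scale gives RPN = 32
top_rpn = rpn(2, 4, 4)

# toy ratings: 19 raters on 6 items sharing a common latent rating
rng = np.random.default_rng(4)
common = rng.normal(3.0, 1.0, size=(19, 1))
ratings = common + 0.5 * rng.standard_normal((19, 6))
alpha = cronbach_alpha(ratings)
```

    High α (near 1) indicates the raters score the items consistently, as reported for the per-process values of 0.74-0.99.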

  18. SU-F-T-244: Radiotherapy Risk Estimation Based On Expert Group Survey

    International Nuclear Information System (INIS)

    Koo, J; Yoon, M; Chung, W; Chung, M; Kim, D

    2016-01-01

    Purpose: To evaluate the reliability of the RPN (Risk Priority Number) decided by an expert group and to provide preliminary data for adapting FMEA in Korea. Methods: 1163 incidents reported in ROSIS over 11 years were used as the real data for comparison and were categorized into 146 items. The questionnaire was composed of the 146 items, and respondents had to evaluate the ‘occurrence (O)’, ‘severity (S)’ and ‘detectability (D)’ of each item on a scale from 1 to 10 according to the proposed AAPM TG-100 rating scales. 19 medical physicists from 19 different organizations in Korea participated in the survey. Because the ROSIS items were not spread evenly enough to be classified into 10 grades, a 1–5 scale was chosen instead of 1–10, and the survey results were also rescaled to 5 grades for comparison. Results: The average O, S and D were 1.77, 3.50 and 2.13, respectively, and the item with the highest RPN (32) in the survey was ‘patient movement during treatment’. When comparing items ranked in the top 10 of the survey (O) and of the ROSIS database, two items overlapped, and ‘Simulation’ and ‘Treatment’ were the most frequently ranked RT processes in the top 10 of the survey and of ROSIS, respectively. Cronbach's α for each RT process ranged from 0.74 to 0.99, with p < 0.001. When comparing O*D, the average difference was 1.4. Conclusion: This work indicates the deviation between actual risk and expectation. Considering that the respondents were Korean, that ROSIS is mainly composed of incidents that happened in European countries, and that some of the top 10 ROSIS items do not apply to radiotherapy procedures in Korea, the deviation could have come from procedural differences. Moreover, if the expert group had included experts from various fields, the expectations might have been more accurate. Therefore, further research on radiotherapy risk estimation is needed.

  19. Sampling errors associated with soil composites used to estimate mean Ra-226 concentrations at an UMTRA remedial-action site

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Baker, K.R.; Nelson, R.A.; Miller, R.H.; Miller, M.L.

    1987-07-01

    The decision whether to take additional remedial action (removal of soil) from regions contaminated by uranium mill tailings involves collecting 20 plugs of soil from each 10-m by 10-m plot in the region and analyzing a 500-g portion of the mixed soil for ²²⁶Ra. A soil sampling study was conducted in the windblown mill-tailings flood plain area at Shiprock, New Mexico, to evaluate whether reducing the number of soil plugs to 9 would have any appreciable impact on remedial-action decisions. The results of the Shiprock study are described and used in this paper to develop a simple model of the standard deviation of ²²⁶Ra measurements on composite samples formed from 21 or fewer plugs. This model is used to predict, as a function of the number of soil plugs per composite, the percent accuracy with which the mean ²²⁶Ra concentration in surface soil can be estimated, and the probability of making incorrect remedial-action decisions on the basis of statistical tests. 8 refs., 15 figs., 9 tabs.
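    The core of the composite-sample model is that averaging n plugs shrinks the composite-to-composite standard deviation roughly like 1/√n, which is what drives the 21-plug vs 9-plug comparison. A Monte Carlo sketch under an assumed plug-level distribution (all numbers invented, not Shiprock data):

```python
import numpy as np

rng = np.random.default_rng(5)

def composite_sd(n_plugs, plug_sd=60.0, plot_mean=150.0, n_sim=20000):
    """Monte Carlo SD of a composite Ra-226 measurement, treating the
    composite as the mean of n_plugs independent plug concentrations."""
    plugs = rng.normal(plot_mean, plug_sd, size=(n_sim, n_plugs))
    return plugs.mean(axis=1).std(ddof=1)

sd21 = composite_sd(21)   # ~ 60 / sqrt(21) ~ 13.1
sd9 = composite_sd(9)     # ~ 60 / sqrt(9)  = 20.0
# dropping from 21 to 9 plugs inflates the SD by ~ sqrt(21/9) ~ 1.53
```

    In practice an additive measurement-error variance (independent of n) would be included as well, which dampens the 1/√n gain and is what a fitted model like the paper's would capture.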

  20. Is the digestive gland of Mytilus galloprovincialis a tissue of choice for estimating cadmium exposure by means of metallothioneins?

    International Nuclear Information System (INIS)

    Raspor, Biserka; Dragun, Zrinka; Erk, Marijana; Ivankovic, Dusica; Pavicic, Jasenka

    2004-01-01

    A study performed over 12 months with caged mussels Mytilus galloprovincialis in the coastal marine zone, which is under urban pressure, reveals a temporal variation of digestive gland mass, which causes 'biological dilution' of cytosolic metallothionein (MT) and trace metal (Cd, Cu, Zn, Fe, Mn) concentrations. The dilution effect was corrected by expressing the cytosolic MT and metal concentrations as the tissue content. Consequently, the changes of the average digestive gland mass coincide with the changes of MT and trace metal contents. From February to June, MT contents are nearly twice and trace metal contents nearly three times higher than those of the other months. The period of increased average digestive gland mass, of MT and trace metal contents probably overlaps with the sexual maturation of mussels (gametogenesis) and enhanced food availability. Since natural factors contribute more to the MT content than the sublethal levels of Cd, the digestive gland of M. galloprovincialis is not considered as a tissue of choice for estimating Cd exposure by means of MTs

  1. How does subsurface retain and release stored water? An explicit estimation of young water fraction and mean transit time

    Science.gov (United States)

    Ameli, Ali; McDonnell, Jeffrey; Laudon, Hjalmar; Bishop, Kevin

    2017-04-01

    The stable isotopes of water have served science well as hydrological tracers and have demonstrated that there is often a large component of "old" water in stream runoff. It has been more problematic to define the full transit time distribution of that stream water. Non-linear mixing of previous precipitation signals that are stored for extended periods and slowly travel through the subsurface before reaching the stream results in a large range of possible transit times. It is difficult to find tracers that can represent this, especially if all one has is data on the precipitation input and the stream runoff. In this paper, we explicitly characterize this "old water" displacement using a novel quasi-steady physically-based flow and transport model in the well-studied S-Transect hillslope in Sweden, where the concentration of hydrological tracers in the subsurface and stream has been measured. We explore how the subsurface conductivity profile impacts the characteristics of old water displacement, and then test these scenarios against the observed dynamics of conservative hydrological tracers in both the stream and the subsurface. This work explores the efficiency of convolution-based approaches in the estimation of the stream "young water" fraction and time-variant mean transit times. We also suggest how celerity and velocity differ with landscape structure.

  2. Conversion factors for estimating release rate of gaseous radioactivity by an aerial survey

    International Nuclear Information System (INIS)

    Saito, Kimiaki; Moriuchi, Shigeru

    1988-02-01

    Conversion factors necessary for estimating the release rate of gaseous radioactivity by an aerial survey are presented. The conversion factors were determined by calculation assuming a Gaussian plume model, as a function of atmospheric stability, down-wind distance and flight height. First, the conversion factors for plumes emitting mono-energy gamma rays were calculated; then, conversion factors were constructed through convolution for the radionuclides essential in a nuclear reactor accident, and for mixtures of these radionuclides considering the elapsed time after shutdown. These conversion factors are shown in figures, and polynomial expressions of the conversion factors as a function of height have been determined with the least-squares method. A user can easily obtain proper conversion factors from the data shown here. (author)
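    The last step, expressing a conversion factor as a polynomial in flight height, is a plain least-squares fit. A sketch with a made-up height dependence (loosely Gaussian-plume-shaped; the σ_z, heights, and degree are assumptions, not the report's tabulated factors):

```python
import numpy as np

# toy conversion factor vs flight height, shaped like the vertical profile
# of relative plume concentration with an assumed spread sigma_z
heights = np.linspace(30.0, 300.0, 10)     # flight heights, m
sigma_z = 120.0                            # assumed vertical spread, m
factor = np.exp(-0.5 * (heights / sigma_z) ** 2)

# polynomial expression of the conversion factor as a function of height
coeffs = np.polyfit(heights, factor, deg=3)
max_err = np.max(np.abs(np.polyval(coeffs, heights) - factor))
```

    Storing only the polynomial coefficients per atmospheric-stability class is what lets a user evaluate a proper conversion factor at any flight height without interpolating figures.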

  3. Comparison of administrative and survey data for estimating vitamin A supplementation and deworming coverage of children under five years of age in Sub-Saharan Africa.

    Science.gov (United States)

    Janmohamed, Amynah; Doledec, David

    2017-07-01

    To compare administrative coverage data with results from household coverage surveys for vitamin A supplementation (VAS) and deworming campaigns conducted during 2010-2015 in 12 African countries. Paired t-tests examined differences between administrative and survey coverage for 52 VAS and 34 deworming dyads. Independent t-tests measured VAS and deworming coverage differences between data sources for door-to-door and fixed-site delivery strategies, and VAS coverage differences between the 6- to 11-month and 12- to 59-month age groups. For VAS, administrative coverage was higher than survey estimates in 47 of 52 (90%) campaign rounds, with a mean difference of 16.1% (95% CI: 9.5-22.7; P < 0.001). For deworming, administrative coverage exceeded survey estimates in 31 of 34 (91%) comparisons, with a mean difference of 29.8% (95% CI: 16.9-42.6; P < 0.001). Mean ± SD differences in coverage between administrative and survey data were 12.2% ± 22.5% for the door-to-door delivery strategy and 25.9% ± 24.7% for the fixed-site model (P = 0.06). For deworming, mean ± SD differences in coverage between data sources were 28.1% ± 43.5% and 33.1% ± 17.9% for door-to-door and fixed-site distribution, respectively (P = 0.64). VAS administrative coverage was higher than survey estimates in 37 of 49 (76%) comparisons for the 6- to 11-month age group and 45 of 48 (94%) comparisons for the 12- to 59-month age group. Reliance on health facility data alone for calculating VAS and deworming coverage may mask low coverage and prevent measures to improve programmes. Countries should periodically validate administrative coverage estimates with population-based methods. © 2017 John Wiley & Sons Ltd.
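    The headline comparison is a paired t-test on administrative-minus-survey coverage across campaign rounds. A sketch on simulated paired coverage (the numbers are synthetic, chosen to mimic a roughly 16-point gap; administrative coverage can legitimately exceed 100%):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(6)
n_rounds = 52
survey = rng.uniform(60.0, 90.0, n_rounds)          # survey coverage, %
admin = survey + rng.normal(16.0, 10.0, n_rounds)   # admin runs higher
t_stat, p_value = ttest_rel(admin, survey)          # paired t-test
mean_diff = (admin - survey).mean()
```

    Pairing within each campaign round is what makes the test sensitive: it compares the two data sources for the same campaign rather than pooling across very different campaigns.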

  4. The impact of the mode of survey administration on estimates of daily smoking for mobile phone only users

    Directory of Open Access Journals (Sweden)

    Joseph Hanna

    2017-04-01

    Background: Over the past decade, there have been substantial changes in landline and mobile phone ownership, with a substantial increase in the proportion of mobile-only households. Estimates of daily smoking rates for the mobile phone only (MPO) population have been found to be substantially higher than for the rest of the population, and telephone surveys that use a dual sampling frame (landline and mobile phones) are now considered best practice. Smoking is seen as an undesirable behaviour; measuring such behaviours using an interviewer may lead to lower estimates in telephone-based surveys compared to self-administered approaches. This study aims to assess whether the higher daily smoking estimates observed for the mobile phone only population can be explained by administrative features of surveys, after accounting for differences between the phone ownership population groups. Methods: Data on New South Wales (NSW) residents aged 18 years or older from the NSW Population Health Survey (PHS), a telephone survey, and the National Drug Strategy Household Survey (NDSHS), a self-administered survey, were combined, with weights adjusted to match the 2013 population. Design-adjusted prevalence estimates and odds ratios were calculated using survey analysis procedures available in SAS 9.4. Results: Both the PHS and NDSHS gave the same estimates for daily smoking (12%) and similar estimates for MPO users (20% and 18%, respectively). Pooled data showed that daily smoking was 19% for MPO users, compared to 10% for dual phone owners and 12% for landline phone only users. Prevalence estimates for MPO users across both surveys were consistently higher than for other phone ownership groups. Differences in estimates for the MPO population compared to other phone ownership groups persisted even after adjustment for the mode of collection and demographic factors.
Conclusions: Daily smoking rates were consistently higher for the mobile phone only population and this was

  5. Estimation of lead, cadmium and nickel content by means of Atomic Absorption Spectroscopy in dry fruit bodies of some macromycetes growing in Poland. II.

    Directory of Open Access Journals (Sweden)

    Jan Grzybek

    2014-08-01

    The content of lead, cadmium and nickel in the dry fruit bodies of 34 species of macromycetes, collected in Poland from 72 natural habitats, was estimated by means of Atomic Absorption Spectroscopy (AAS).

  6. Estimating survival probabilities by exposure levels: utilizing vital statistics and complex survey data with mortality follow-up.

    Science.gov (United States)

    Landsman, V; Lou, W Y W; Graubard, B I

    2015-05-20

    We present a two-step approach for estimating hazard rates and, consequently, survival probabilities, by levels of a general categorical exposure. The resulting estimator utilizes three sources of data: vital statistics data and census data are used at the first step to estimate the overall hazard rate for a given combination of gender and age group, and cohort data, constructed from a nationally representative complex survey with linked mortality records, are used at the second step to divide the overall hazard rate by exposure levels. We present an explicit expression for the resulting estimator and consider two methods for variance estimation that account for the complex multistage sample design: (1) the leave-one-out jackknife method, and (2) the Taylor linearization method, which provides an analytic formula for the variance estimator. The methods are illustrated with smoking and all-cause mortality data from the US National Health Interview Survey Linked Mortality Files, and the proposed estimator is compared with a previously studied crude hazard rate estimator that uses survey data only. The advantages of the two-step approach and possible extensions of the proposed estimator are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
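    Of the two variance methods mentioned, the leave-one-out jackknife is the easier to sketch. For the sample mean it reproduces s²/n exactly, which makes a handy self-check (toy data; the real estimator would be recomputed on each replicate with survey weights and strata):

```python
import numpy as np

def jackknife_variance(data, estimator):
    """Leave-one-out jackknife variance of a statistic of the data."""
    data = np.asarray(data, float)
    n = len(data)
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    return (n - 1) / n * ((loo - loo.mean()) ** 2).sum()

rng = np.random.default_rng(7)
x = rng.exponential(scale=2.0, size=200)

var_mean = jackknife_variance(x, np.mean)
analytic = x.var(ddof=1) / len(x)   # jackknife is exact for the sample mean
```

    The same recipe applies to any smooth estimator (such as a hazard-rate ratio) where no closed-form variance is available, which is its appeal for complex multistage designs.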

  7. The estimation of geometry and motion of a surface from image sequences by means of linearisation of a parametric model

    NARCIS (Netherlands)

    Korsten, Maarten J.; Houkes, Z.

    1990-01-01

    A method is given to estimate the geometry and motion of a moving body surface from image sequences. To this aim, a parametric model of the surface is used in order to reformulate the problem as one of parameter estimation. After linearization of the model, standard linear estimation methods can be

  8. Multiple imputation to account for missing data in a survey: estimating the prevalence of osteoporosis.

    Science.gov (United States)

    Kmetic, Andrew; Joseph, Lawrence; Berger, Claudie; Tenenhouse, Alan

    2002-07-01

    Nonresponse bias is a concern in any epidemiologic survey in which a subset of selected individuals declines to participate. We reviewed multiple imputation, a widely applicable and easy to implement Bayesian methodology to adjust for nonresponse bias. To illustrate the method, we used data from the Canadian Multicentre Osteoporosis Study, a large cohort study of 9423 randomly selected Canadians, designed in part to estimate the prevalence of osteoporosis. Although subjects were randomly selected, only 42% of individuals who were contacted agreed to participate fully in the study. The study design included a brief questionnaire for those invitees who declined further participation in order to collect information on the major risk factors for osteoporosis. These risk factors (which included age, sex, previous fractures, family history of osteoporosis, and current smoking status) were then used to estimate the missing osteoporosis status for nonparticipants using multiple imputation. Both ignorable and nonignorable imputation models are considered. Our results suggest that selection bias in the study is of concern, but only slightly, in very elderly (age 80+ years), both women and men. Epidemiologists should consider using multiple imputation more often than is current practice.
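The final step of multiple imputation, pooling the m completed-data analyses, follows Rubin's rules: average the point estimates and add the between-imputation variance to the average within-imputation variance. The numbers below are invented for illustration and are not the CaMos results.

```python
# Rubin's rules for combining estimates across m imputed datasets.
# Hypothetical prevalence estimates; not the osteoporosis study's figures.
from statistics import mean, variance

def rubin_combine(estimates, variances):
    """Pooled point estimate and total variance across m imputations."""
    m = len(estimates)
    q_bar = mean(estimates)          # pooled point estimate
    u_bar = mean(variances)          # average within-imputation variance
    b = variance(estimates)          # between-imputation (sample) variance
    total = u_bar + (1 + 1 / m) * b  # Rubin's total variance
    return q_bar, total

est = [0.158, 0.149, 0.162, 0.155, 0.151]   # m = 5 imputed prevalences
var = [4.0e-5, 3.8e-5, 4.1e-5, 3.9e-5, 4.0e-5]
q, t = rubin_combine(est, var)
```

The total variance exceeds the average within-imputation variance, which is exactly how the method propagates uncertainty about the missing values.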

  9. Survey of food radioactivity and estimation of internal dose from ingestion in China

    International Nuclear Information System (INIS)

    Zhang Jingyuan; Zhu Hongda; Han Peizhen

    1988-01-01

    In order to provide necessary bases for establishing 'Radionuclide Concentration Limits in Foodstuffs', a survey of radionuclide contents in Chinese food and an estimation of internal dose from ingestion were carried out with the cooperation of 30 radiation protection establishments during the period 1982-1986. Activity concentrations of 22 radionuclides were determined in 14 categories (27 kinds) of Chinese food. In the light of the three principal types of Chinese diet, food samples were collected from normal radiation background areas in 14 provinces or autonomous regions and from three elevated natural background areas. Annual intake by ingestion and the resultant committed dose equivalents to the general public for 15 radionuclides in these areas were estimated. In normal background areas the total annual intake of the 15 radionuclides by the public (adult males) is about 4.2 × 10⁴ Bq, and the resultant total committed dose equivalent is about 3.43 × 10⁻⁴ Sv. In the two elevated natural background areas the public's annual intake and resulting committed dose equivalents for some natural radionuclides are much higher than those in normal areas, although no obvious radiocontamination was discovered. The relative contribution of each food category and each radionuclide to the totals is discussed.

  10. Computer Programs for Obtaining and Analyzing Daily Mean Streamflow Data from the U.S. Geological Survey National Water Information System Web Site

    Science.gov (United States)

    Granato, Gregory E.

    2009-01-01

    Research Council, 2004). The USGS maintains the National Water Information System (NWIS), a distributed network of computers and file servers used to store and retrieve hydrologic data (Mathey, 1998; U.S. Geological Survey, 2008). NWISWeb is an online version of this database that includes water data from more than 24,000 streamflow-gaging stations throughout the United States (U.S. Geological Survey, 2002, 2008). Information from NWISWeb is commonly used to characterize streamflows at gaged sites and to help predict streamflows at ungaged sites. Five computer programs were developed for obtaining and analyzing streamflow from the National Water Information System (NWISWeb). The programs were developed as part of a study by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, to develop a stochastic empirical loading and dilution model. The programs were developed because reliable, efficient, and repeatable methods are needed to access and process streamflow information and data. The first program is designed to facilitate the downloading and reformatting of NWISWeb streamflow data. The second program is designed to facilitate graphical analysis of streamflow data. The third program is designed to facilitate streamflow-record extension and augmentation to help develop long-term statistical estimates for sites with limited data. The fourth program is designed to facilitate statistical analysis of streamflow data. The fifth program is a preprocessor to create batch input files for the U.S. Environmental Protection Agency DFLOW3 program for calculating low-flow statistics. These computer programs were developed to facilitate the analysis of daily mean streamflow data for planning-level water-quality analyses but also are useful for many other applications pertaining to streamflow data and statistics. These programs and the associated documentation are included on the CD-ROM accompanying this report. 
This report and the appendixes on the

  11. Relationship between mean daily energy intake and frequency of consumption of out-of-home meals in the UK National Diet and Nutrition Survey.

    Science.gov (United States)

    Goffe, Louis; Rushton, Stephen; White, Martin; Adamson, Ashley; Adams, Jean

    2017-09-22

    Out-of-home meals have been characterised as delivering excessively large portions that can lead to high energy intake. Regular consumption is linked to weight gain and diet related diseases. Consumption of out-of-home meals is associated with socio-demographic and anthropometric factors, but the relationship between habitual consumption of such meals and mean daily energy intake has not been studied in both adults and children in the UK. We analysed adult and child data from waves 1-4 of the UK National Diet and Nutrition Survey using generalized linear modelling. We investigated whether individuals who report a higher habitual consumption of meals out in a restaurant or café, or takeaway meals at home had a higher mean daily energy intake, as estimated by a four-day food diary, whilst adjusting for key socio-demographic and anthropometric variables. Adults who ate meals out at least weekly had a higher mean daily energy intake consuming 75-104 kcal more per day than those who ate these meals rarely. The equivalent figures for takeaway meals at home were 63-87 kcal. There was no association between energy intake and frequency of consumption of meals out in children. Children who ate takeaway meals at home at least weekly consumed 55-168 kcal more per day than those who ate these meals rarely. Additionally, in children, there was an interaction with socio-economic position, where greater frequency of consumption of takeaway meals was associated with higher mean daily energy intake in those from less affluent households than those from more affluent households. Higher habitual consumption of out-of-home meals is associated with greater mean daily energy intake in the UK. More frequent takeaway meal consumption in adults and children is associated with greater daily energy intake and this effect is greater in children from less affluent households. Interventions seeking to reduce energy content through reformulation or reduction of portion sizes in restaurants

  12. Estimating the mean and variance of measurements from serial radioactive decay schemes with emphasis on 222Rn and its short-lived progeny

    International Nuclear Information System (INIS)

    Inkret, W.C.; Borak, T.B.; Boes, D.C.

    1990-01-01

    Classically, the mean and variance of radioactivity measurements are estimated from Poisson distributions. However, the random distribution of observed events is not Poisson when the half-life is short compared with the interval of observation or when more than one event can be associated with a single initial atom. Procedures were developed to estimate the mean and variance of single measurements of serial radioactive processes. Results revealed that observations from the three consecutive alpha emissions beginning with 222Rn are positively correlated. Since the Poisson estimator ignores covariance terms, it underestimates the true variance of the measurement. The reverse is true for mixtures of radon daughters only. (author)
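The point about ignored covariance terms is just the variance-of-a-sum identity: for correlated counts, Var(X₁+X₂+X₃) equals the sum of the variances plus twice the sum of the pairwise covariances. The sketch below uses invented numbers, not ²²²Rn decay-chain physics, to show the direction of the bias.

```python
# A Poisson-style estimator sums only the individual variances; when the
# three alpha-count contributions are positively correlated (decays sharing
# initial atoms), the covariance terms it drops are positive, so it
# understates the true variance. Numbers are invented for illustration.

def total_variance(variances, covariances):
    """variances: [v1, v2, v3]; covariances: [cov12, cov13, cov23]."""
    return sum(variances) + 2 * sum(covariances)

variances   = [100.0, 80.0, 60.0]
covariances = [15.0, 10.0, 12.0]          # positive pairwise covariances

poisson_estimate = sum(variances)          # ignores covariance terms: 240
true_variance = total_variance(variances, covariances)
underestimate = true_variance - poisson_estimate
```

With negative covariances (the radon-daughter-mixture case the abstract mentions) the sign of the bias flips, and the Poisson estimator overstates the variance instead.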

  13. Salton Trough regional deformation estimated from combined trilateration and survey-mode GPS data

    Science.gov (United States)

    Anderson, G.; Agnew, D.C.; Johnson, H.O.

    2003-01-01

    The Salton Trough in southeastern California, United States, has one of the highest seismicity and deformation rates in southern California, including 20 earthquakes M 6 or larger since 1892. From 1972 through 1987, the U.S. Geological Survey (USGS) measured a 41-station trilateration network in this region. We remeasured 37 of the USGS baselines using survey-mode Global Positioning System methods from 1995 through 1999. We estimate the Salton Trough deformation field over a nearly 30-year period through combined analysis of baseline length time series from these two datasets. Our primary result is that strain accumulation has been steady over our observation span, at a resolution of about 0.05 µstrain/yr at 95% confidence, with no evidence for significant long-term strain transients despite the occurrence of seven large regional earthquakes during our observation period. Similar to earlier studies, we find that the regional strain field is consistent with 0.5 ± 0.03 µstrain/yr total engineering shear strain along an axis oriented 311.6° ± 23° east of north, approximately parallel to the strike of the major regional faults, the San Andreas and San Jacinto (all uncertainties in the text and tables are standard deviations unless otherwise noted). We also find that (1) the shear strain rate near the San Jacinto fault is at least as high as it is near the San Andreas fault, (2) the areal dilatation near the southeastern Salton Sea is significant, and (3) one station near the southeastern Salton Sea moved anomalously during the period 1987.95-1995.11.

  14. Contraceptive failure rates: new estimates from the 1995 National Survey of Family Growth.

    Science.gov (United States)

    Fu, H; Darroch, J E; Haas, T; Ranjit, N

    1999-01-01

    Unintended pregnancy remains a major public health concern in the United States. Information on pregnancy rates among contraceptive users is needed to guide medical professionals' recommendations and individuals' choices of contraceptive methods. Data were taken from the 1995 National Survey of Family Growth (NSFG) and the 1994-1995 Abortion Patient Survey (APS). Hazards models were used to estimate method-specific contraceptive failure rates during the first six months and during the first year of contraceptive use for all U.S. women. In addition, rates were corrected to take into account the underreporting of induced abortion in the NSFG. Corrected 12-month failure rates were also estimated for subgroups of women by age, union status, poverty level, race or ethnicity, and religion. When contraceptive methods are ranked by effectiveness over the first 12 months of use (corrected for abortion underreporting), the implant and injectables have the lowest failure rates (2-3%), followed by the pill (8%), the diaphragm and the cervical cap (12%), the male condom (14%), periodic abstinence (21%), withdrawal (24%) and spermicides (26%). In general, failure rates are highest among cohabiting and other unmarried women, among those with an annual family income below 200% of the federal poverty level, among black and Hispanic women, among adolescents and among women in their 20s. For example, adolescent women who are not married but are cohabiting experience a failure rate of about 31% in the first year of contraceptive use, while the 12-month failure rate among married women aged 30 and older is only 7%. Black women have a contraceptive failure rate of about 19%, and this rate does not vary by family income; in contrast, overall 12-month rates are lower among Hispanic women (15%) and white women (10%), but vary by income, with poorer women having substantially greater failure rates than more affluent women. 
Levels of contraceptive failure vary widely by method, as well as by
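The six- and twelve-month failure rates discussed above follow the standard life-table relation: a constant per-month failure probability q accumulates as 1 − (1 − q)^t. The monthly probability below is invented to illustrate the arithmetic; it is not an NSFG estimate.

```python
# Cumulative life-table failure rate from a constant monthly failure
# probability: F(t) = 1 - (1 - q)^t. The value of q is hypothetical.

def cumulative_failure(monthly_q, months):
    surviving = (1 - monthly_q) ** months   # probability of no failure yet
    return 1 - surviving

q = 0.0069  # illustrative monthly failure probability
six_month = cumulative_failure(q, 6)
twelve_month = cumulative_failure(q, 12)
```

Note that the twelve-month rate is less than twice the six-month rate, because only users who have not yet failed remain at risk in later months.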

  15. Photometric redshifts for the next generation of deep radio continuum surveys - II. Gaussian processes and hybrid estimates

    Science.gov (United States)

    Duncan, Kenneth J.; Jarvis, Matt J.; Brown, Michael J. I.; Röttgering, Huub J. A.

    2018-04-01

    Building on the first paper in this series (Duncan et al. 2018), we present a study investigating the performance of Gaussian process photometric redshift (photo-z) estimates for galaxies and active galactic nuclei detected in deep radio continuum surveys. A Gaussian process redshift code is used to produce photo-z estimates targeting specific subsets of both the AGN population - infrared, X-ray and optically selected AGN - and the general galaxy population. The new estimates for the AGN population are found to perform significantly better at z > 1 than the template-based photo-z estimates presented in our previous study. Our new photo-z estimates are then combined with template estimates through hierarchical Bayesian combination to produce a hybrid consensus estimate that outperforms both of the individual methods across all source types. Photo-z estimates for radio sources that are X-ray sources or optical/IR AGN are significantly improved in comparison to previous template-only estimates - with outlier fractions and robust scatter reduced by up to a factor of ˜4. The ability of our method to combine the strengths of the two input photo-z techniques and the large improvements we observe illustrate its potential for enabling future exploitation of deep radio continuum surveys for both the study of galaxy and black hole co-evolution and for cosmological studies.

  16. The Impact of Survey and Response Modes on Current Smoking Prevalence Estimates Using TUS-CPS: 1992-2003

    Directory of Open Access Journals (Sweden)

    Julia Soulakova

    2009-12-01

    Full Text Available This study identified whether survey administration mode (telephone or in-person) and respondent type (self or proxy) result in discrepant prevalence of current smoking in the adult U.S. population, while controlling for key sociodemographic characteristics and longitudinal changes of smoking prevalence over the 11-year period from 1992-2003. We used a multiple logistic regression analysis with replicate weights to model the current smoking status logit as a function of a number of covariates. The final model included individual- and family-level sociodemographic characteristics, survey attributes, and multiple two-way interactions of survey mode and respondent type with other covariates. The respondent type is a significant predictor of current smoking prevalence, and the magnitude of the difference depends on the age, sex, and education of the person whose smoking status is being reported. Furthermore, the survey mode has significant interactions with survey year, sex, and age. We conclude that using an overall unadjusted estimate of the current smoking prevalence may result in underestimating the current smoking rate when conducting proxy or telephone interviews, especially for some sub-populations, such as young adults. We propose that estimates could be improved if more detailed information regarding the respondent type and survey administration mode characteristics were considered in addition to commonly used survey year and sociodemographic characteristics. This information is critical given that future surveillance is moving toward more complex designs. Thus, adjustment of estimates should be contemplated when comparing current smoking prevalence results within a given survey series with major changes in methodology over time and between different surveys using various modes and respondent types.

  17. Estimating the size of illicit tobacco consumption in Brazil: findings from the global adult tobacco survey.

    Science.gov (United States)

    Iglesias, Roberto Magno; Szklo, André Salem; Souza, Mirian Carvalho de; de Almeida, Liz Maria

    2017-01-01

    Brazil experienced a large decline in smoking prevalence between 2008 and 2013. Tax rate increases since 2007 and a new tobacco tax structure in 2012 may have played an important role in this decline. However, continuous tax rate increases pushed up cigarette prices over personal income growth and, therefore, some consumers, especially lower income individuals, may have migrated to cheaper illicit cigarettes. To use tobacco surveillance data to estimate the size of illicit tobacco consumption before and after excise tax increases. We defined a threshold price and compared it with purchasing prices obtained from two representative surveys conducted in 2008 and 2013 to estimate the proportion of illicit cigarette use among daily smokers. A generalised linear model was specified to understand whether the absolute difference in proportions over time differed by sociodemographic groups and consumption levels. Our findings were validated using an alternative method. Total proportion of illicit daily consumption increased from 16.6% to 31.1% between 2008 and 2013. We observed a pattern of unadjusted absolute decreases in cigarette smoking prevalence and increases in the proportion of illicit consumption, irrespective of gender, age, educational level, area of residence and amount of cigarettes consumed. The strategy of raising taxes has increased government revenues, reduced smoking prevalence and resulted in an increased illicit trade. Surveillance data can be used to provide information on illicit tobacco trade to help in the implementation of WHO Framework Convention on Tobacco Control (FCTC) article 15 and the FCTC Protocol to Eliminate Illicit Trade in Tobacco Products. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
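The threshold-price idea reduces to a simple classification rule: any reported purchase price below a plausible legal minimum (covering tax and a production floor) is flagged as likely illicit, and the flagged share estimates illicit consumption. Prices and threshold below are invented, not the Brazilian survey data.

```python
# Threshold-price classification of survey-reported cigarette purchases.
# Prices and the threshold are hypothetical illustration values.

def illicit_share(prices, threshold):
    """Fraction of reported prices falling below the legal-minimum threshold."""
    flagged = sum(1 for p in prices if p < threshold)
    return flagged / len(prices)

reported_prices = [5.0, 2.5, 6.0, 3.0, 4.5, 2.0, 5.5, 3.2, 4.8, 2.8]
threshold = 3.5   # hypothetical minimum plausible legal price
share = illicit_share(reported_prices, threshold)
```

In practice the estimate would be computed with survey weights and restricted to daily smokers, but the classification step is as shown.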

  18. Assessing camera trap survey feasibility for estimating Blastocerus dichotomus (Cetartiodactyla, Cervidae) demographic parameters

    Directory of Open Access Journals (Sweden)

    Pedro Henrique F. Peres

    2017-11-01

    Full Text Available ABSTRACT Demographic information is the basis for evaluating and planning conservation strategies for an endangered species. However, in numerous situations there are methodological or financial limitations to obtaining such information for some species. The marsh deer, an endangered Neotropical cervid, is a challenging species from which to obtain biological information. To help achieve such aims, the study evaluated the applicability of camera traps for obtaining demographic information on the marsh deer compared to the traditional aerial census method. Fourteen camera traps were installed for three months on the Capão da Cruz floodplain, in the state of São Paulo, and ten helicopter flyovers were made along a 13-kilometer trajectory to detect resident marsh deer. In addition to counting deer, the study aimed to identify the sex, age group and individual identity of the antlered males recorded. Population estimates were performed using the capture-mark-recapture method with the camera trap data and by the distance sampling method for aerial observation data. The costs and field efforts expended for both methodologies were calculated and compared. Twenty independent photographic records and 42 sightings were obtained and generated estimates of 0.98 and 1.06 ind/km², respectively. In contrast to the aerial census, camera traps allowed us to individually identify branch-antlered males, determine the sex ratio and detect fawns in the population. The cost of camera traps was 78% lower but required 20 times more field effort. Our analysis indicates that camera traps present a superior cost-benefit ratio compared to aerial surveys, since they are more informative, cheaper and offer simpler logistics. Their application extends the possibilities of studying a greater number of populations in long-term monitoring.
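Once individuals (here, branch-antlered males) can be told apart in photographs, a two-session capture-mark-recapture abundance estimate is possible. The sketch below uses Chapman's bias-corrected variant of the Lincoln-Petersen estimator with invented counts, not the Capão da Cruz data.

```python
# Chapman's variant of the Lincoln-Petersen capture-mark-recapture
# estimator: N ≈ (M+1)(C+1)/(R+1) - 1. Counts are invented for illustration.

def chapman_estimate(marked, captured, recaptured):
    """marked: individuals identified in session 1; captured: session-2 total;
    recaptured: session-2 individuals already known from session 1."""
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

n_hat = chapman_estimate(marked=12, captured=15, recaptured=7)
```

The estimator assumes a closed population and equal catchability between sessions, assumptions a three-month camera-trap deployment can only approximate.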

  19. Use of recent geoid models to estimate mean dynamic topography and geostrophic currents in South Atlantic and Brazil Malvinas confluence

    Directory of Open Access Journals (Sweden)

    Alexandre Bernardino Lopes

    2012-03-01

    Full Text Available The use of geoid models to estimate the Mean Dynamic Topography (MDT) was stimulated by the launching of the GRACE satellite system, since its models present unprecedented precision and space-time resolution. In the present study, besides the DNSC08 mean sea level model, the following geoid models were used with the objective of computing the MDTs: EGM96, EIGEN-5C and EGM2008. In the method adopted, geostrophic currents for the South Atlantic were computed based on the MDTs. It was found that the degree and order of the geoid models directly affect the determination of the MDT and currents. The presence of noise in the MDT requires the use of efficient filtering techniques, such as the filter based on Singular Spectrum Analysis, which presents significant advantages in relation to conventional filters. Geostrophic currents resulting from the geoid models were compared with the HYCOM hydrodynamic numerical model. In conclusion, results show that MDTs and respective geostrophic currents calculated with the EIGEN-5C and EGM2008 models are similar to the results of the numerical model, especially regarding the main large-scale features such as boundary currents and the retroflection at the Brazil-Malvinas Confluence.
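The step from an MDT field to surface geostrophic currents is the textbook geostrophic balance, u = −(g/f) ∂η/∂y and v = (g/f) ∂η/∂x, where η is the dynamic topography and f the Coriolis parameter. The gradient values below are invented and this sketch does not reproduce the paper's filtered EGM2008/EIGEN-5C computation.

```python
# Surface geostrophic velocity from a mean-dynamic-topography gradient.
# MDT slopes (m per m) are hypothetical illustration values.
import math

def geostrophic_velocity(deta_dx, deta_dy, lat_deg, g=9.81):
    omega = 7.2921e-5                              # Earth's rotation rate, rad/s
    f = 2 * omega * math.sin(math.radians(lat_deg))  # Coriolis parameter
    u = -(g / f) * deta_dy                         # eastward component
    v = (g / f) * deta_dx                          # northward component
    return u, v

# Hypothetical slopes at a Southern Hemisphere latitude (~38°S)
u, v = geostrophic_velocity(deta_dx=2.0e-7, deta_dy=-1.0e-6, lat_deg=-38.0)
```

Because f changes sign across the equator, the same MDT slope drives oppositely directed currents in the two hemispheres, which is why the latitude enters explicitly.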

  20. [Estimating child mortality using the previous child technique, with data from health centers and household surveys: methodological aspects].

    Science.gov (United States)

    Aguirre, A; Hill, A G

    1988-01-01

    2 trials of the previous child or preceding birth technique in Bamako, Mali, and Lima, Peru, gave very promising results for measurement of infant and early child mortality using data on survivorship of the 2 most recent births. In the Peruvian study, another technique was tested in which each woman was asked about her last 3 births. The preceding birth technique described by Brass and Macrae has rapidly been adopted as a simple means of estimating recent trends in early childhood mortality. The questions formulated and the analysis of results are direct when the mothers are visited at the time of birth or soon after. Several technical aspects of the method believed to introduce unforeseen biases have now been studied and found to be relatively unimportant. But the problems arising when the data come from a nonrepresentative fraction of the total fertile-aged population have not been resolved. The analysis based on data from 5 maternity centers including 1 hospital in Bamako, Mali, indicated some practical problems and the information obtained showed the kinds of subtle biases that can result from the effects of selection. The study in Lima tested 2 abbreviated methods for obtaining recent early childhood mortality estimates in countries with deficient vital registration. The basic idea was that a few simple questions added to household surveys on immunization or diarrheal disease control for example could produce improved child mortality estimates. The mortality estimates in Peru were based on 2 distinct sources of information in the questionnaire. All women were asked their total number of live born children and the number still alive at the time of the interview. The proportion of deaths was converted into a measure of child survival using a life table. Then each woman was asked for a brief history of the 3 most recent live births. Dates of birth and death were noted in month and year of occurrence. 
The interviews took only slightly longer than the basic survey

  1. ROV advanced magnetic survey for revealing archaeological targets and estimating medium magnetization

    Science.gov (United States)

    Eppelbaum, Lev

    2013-04-01

    Magnetic survey is one of the most widely applied geophysical methods for searching for and localizing objects with contrasting magnetic properties (for instance, in Israel detailed magnetic survey has been successfully applied at more than 60 archaeological sites (Eppelbaum, 2010, 2011; Eppelbaum et al., 2011, 2010)). However, a land magnetic survey at comparatively large archaeological sites (with observation grids of 0.5 x 0.5 or 1 x 1 m) may occupy 5-10 days. At the same time the new Remote Operation Vehicle (ROV) generation - small and maneuverable vehicles - can fly at levels of a few meters (and even one meter) over the earth's surface (following the relief forms or flying straight). With such an ROV, precise magnetic field measurements (at a frequency of 20-25 observations per second) may be completed within 10-30 minutes, moreover at different levels over the earth's surface. Such geophysical investigations should have an extremely low exploitation cost. Finally, measurements of geophysical fields at different observation levels could provide new unique geophysical-archaeological information (Eppelbaum, 2005; Eppelbaum and Mishne, 2011). The developed interpretation methodology for advanced analysis of magnetic anomalies (Khesin et al., 1996; Eppelbaum et al., 2001; Eppelbaum et al., 2011) may be successfully applied to ROV magnetic surveys for the delineation of archaeological objects and estimation of the averaged magnetization of the geological medium.
This methodology includes: (1) non-conventional procedure for elimination of secondary effect of magnetic temporary variations, (2) calculation of rugged relief influence by the use of a correlation method, (3) estimation of medium magnetization, (4) application of various informational and wavelet algorithms for revealing low anomalous effects against the strong noise background, (5) advanced procedures for magnetic anomalies quantitative analysis (they are applicable in conditions of rugged relief, inclined magnetization, and an unknown level of the total

  2. Estimating flood discharge using witness movies in post-flood hydrological surveys

    Science.gov (United States)

    Le Coz, Jérôme; Hauet, Alexandre; Le Boursicaud, Raphaël; Pénard, Lionel; Bonnifait, Laurent; Dramais, Guillaume; Thollet, Fabien; Braud, Isabelle

    2015-04-01

    The estimation of streamflow rates based on post-flood surveys is of paramount importance for the investigation of extreme hydrological events. Major uncertainties usually arise from the absence of information on the flow velocities and from the limited spatio-temporal resolution of such surveys. Nowadays, after each flood occurring in populated areas, home movies taken from bridges, river banks or even drones are shared by witnesses through Internet platforms like YouTube. Provided that some topography data and additional information are collected, image-based velocimetry techniques can be applied to some of these movie materials, in order to estimate flood discharges. As a contribution to recent post-flood surveys conducted in France, we developed and applied a method for estimating velocities and discharges based on the Large Scale Particle Image Velocimetry (LSPIV) technique. Since the seminal work of Fujita et al. (1998), LSPIV applications to river flows were reported by a number of authors and LSPIV can now be considered a mature technique. However, its application to non-professional movies taken by flood witnesses remains challenging and required some practical developments. The different steps to apply LSPIV analysis to a flood home movie are as follows: (i) select a video of interest; (ii) contact the author for agreement and extra information; (iii) conduct a field topography campaign to georeference Ground Control Points (GCPs), water level and cross-sectional profiles; (iv) preprocess the video before LSPIV analysis: correct lens distortion, align the images, etc.; (v) orthorectify the images to correct perspective effects and know the physical size of pixels; (vi) proceed with the LSPIV analysis to compute the surface velocity field; and (vii) compute discharge according to a user-defined velocity coefficient. Two case studies in French mountainous rivers during extreme floods are presented. The movies were collected on YouTube and field topography
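Step (vii) of the workflow above, converting LSPIV surface velocities to a discharge, is a velocity-area integration scaled by the user-defined velocity coefficient (a value near 0.85 is a common depth-averaging assumption). All data below are invented illustration values.

```python
# Velocity-area discharge from LSPIV surface velocities: sum over verticals
# of alpha * v_surface * (subsection width * depth). Values are invented.

def discharge(surface_velocities, widths, depths, alpha=0.85):
    """Discharge in m^3/s; alpha converts surface to depth-averaged velocity."""
    return sum(alpha * v * w * d
               for v, w, d in zip(surface_velocities, widths, depths))

v_surf = [1.8, 2.4, 2.1]   # m/s, LSPIV surface velocity per vertical
width  = [5.0, 6.0, 5.0]   # m, subsection widths across the section
depth  = [1.2, 1.8, 1.4]   # m, surveyed depths
q = discharge(v_surf, width, depth)
```

The velocity coefficient is the dominant user-controlled uncertainty here, which is why the abstract singles it out as a user-defined choice.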

  3. Hospital and clinic survey estimates of medical x-ray exposures in Hiroshima and Nagasaki, (1)

    International Nuclear Information System (INIS)

    Sawada, Shozo; Land, C.E.; Otake, Masanori; Russell, W.J.; Takeshita, Kenji.

    1980-11-01

    All large hospitals and 40% of the small hospitals and clinics in Hiroshima and Nagasaki cities were surveyed for the X-ray examinations they performed during a 2-week period in 1974. The frequency and type of X-ray examinations received by members of the RERF Adult Health Study (AHS) and the RERF Life Span Study (LSS) extended, excluding AHS (Non-AHS), were compared with the general population in each city. Radiologic exposures of patients at hospitals and clinics were most frequent among the general populations. The number of patients, examinations, and exposures per caput per year in each population were estimated. Since the age distribution differed among the three populations, comparisons were made only after correcting for age. On a per caput per year basis exposure frequency was relatively high in the AHS and low in the general populations, a reflection of the greater number of patients in the AHS than in the general populations. Non-AHS males in Nagasaki had a higher X-ray examination rate than did the AHS subjects. The others in the Non-AHS did not differ appreciably from the general populations. There was no difference among these groups according to body sites examined. (author)

  4. Estimating the robustness of contingent valuation estimates of WTP to survey mode and treatment of protest responses.

    Science.gov (United States)

    John Loomis; Armando Gonzalez-Caban; Joseph Champ

    2011-01-01

    Over the past four decades the contingent valuation method (CVM) has become a technique frequently used by economists to estimate willingness-to-pay (WTP) for improvements in environmental quality and protection of natural resources. The CVM was originally applied to estimate recreation use values (Davis, 1963; Hammack and Brown, 1974) and air quality (Brookshire et al....

  5. Longitudinal Weight Calibration with Estimated Control Totals for Cross Sectional Survey Data: Theory and Application

    Science.gov (United States)

    Qing, Siyu

    2014-01-01

    The National Science Foundation (NSF) Survey of Doctorate Recipients (SDR) collects information on a sample of individuals in the United States with PhD degrees. A significant portion of the sampled individuals appear in multiple survey years and can be linked across time. Survey weights in each year are created and adjusted for oversampling and…

  6. Estimating disease prevalence from two-phase surveys with non-response at the second phase

    Science.gov (United States)

    Gao, Sujuan; Hui, Siu L.; Hall, Kathleen S.; Hendrie, Hugh C.

    2010-01-01

    In this paper we compare several methods for estimating population disease prevalence from data collected by two-phase sampling when there is non-response at the second phase. The traditional weighting type estimator requires the missing completely at random assumption and may yield biased estimates if the assumption does not hold. We review two approaches and propose one new approach to adjust for non-response assuming that the non-response depends on a set of covariates collected at the first phase: an adjusted weighting type estimator using estimated response probability from a response model; a modelling type estimator using predicted disease probability from a disease model; and a regression type estimator combining the adjusted weighting type estimator and the modelling type estimator. These estimators are illustrated using data from an Alzheimer’s disease study in two populations. Simulation results are presented to investigate the performances of the proposed estimators under various situations. PMID:10931514
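The adjusted weighting-type estimator described in this abstract is a form of inverse-probability weighting: each second-phase respondent is weighted by the inverse of the product of their phase-two selection probability and their estimated response probability. A minimal illustration (all arrays below are hypothetical toy values, not data from the study):

```python
import numpy as np

def adjusted_weighted_prevalence(disease, phase2_prob, response_prob, responded):
    """Adjusted weighting-type prevalence estimator (illustrative sketch).

    Each second-phase respondent gets weight 1 / (selection probability x
    estimated response probability); non-respondents get weight zero.
    """
    w = responded / (phase2_prob * response_prob)
    return np.sum(w * disease) / np.sum(w)

# Toy data for 6 subjects sampled at phase two.
disease       = np.array([1, 0, 1, 0, 0, 1])               # disease status
phase2_prob   = np.array([0.5, 0.5, 0.2, 0.2, 0.5, 0.2])   # selection probabilities
response_prob = np.array([0.9, 0.8, 0.7, 0.9, 0.8, 0.7])   # from a fitted response model
responded     = np.array([1, 1, 1, 0, 1, 1])               # response indicator

print(adjusted_weighted_prevalence(disease, phase2_prob, response_prob, responded))
```

In practice the response probabilities would come from a fitted response model (e.g. logistic regression on the phase-one covariates) rather than being supplied directly.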

  7. Multimorbidity in Australia: Comparing estimates derived using administrative data sources and survey data.

    Directory of Open Access Journals (Sweden)

    Sanja Lujic

    Full Text Available Estimating multimorbidity (presence of two or more chronic conditions) using administrative data is becoming increasingly common. We investigated (1) the concordance of identification of chronic conditions and multimorbidity using self-report survey and administrative datasets; (2) characteristics of people with multimorbidity ascertained using different data sources; and (3) whether the same individuals are classified as multimorbid using different data sources. Baseline survey data for 90,352 participants of the 45 and Up Study, a cohort study of residents of New South Wales, Australia, aged 45 years and over, were linked to prior two-year pharmaceutical claims and hospital admission records. Concordance of eight self-report chronic conditions (reference) with claims and hospital data were examined using sensitivity (Sn), positive predictive value (PPV), and kappa (κ). The characteristics of people classified as multimorbid were compared using logistic regression modelling. Agreement was found to be highest for diabetes in both hospital and claims data (κ = 0.79, 0.78; Sn = 79%, 72%; PPV = 86%, 90%). The prevalence of multimorbidity was highest using self-report data (37.4%), followed by claims data (36.1%) and hospital data (19.3%). Combining all three datasets identified a total of 46 683 (52%) people with multimorbidity, with half of these identified using a single dataset only, and up to 20% identified on all three datasets. Characteristics of persons with and without multimorbidity were generally similar. However, the age gradient was more pronounced and people speaking a language other than English at home were more likely to be identified as multimorbid by administrative data. Different individuals, with different combinations of conditions, are identified as multimorbid when different data sources are used. As such, caution should be applied when ascertaining morbidity from a single data source as the agreement between self-report and administrative

  8. Multimorbidity in Australia: Comparing estimates derived using administrative data sources and survey data.

    Science.gov (United States)

    Lujic, Sanja; Simpson, Judy M; Zwar, Nicholas; Hosseinzadeh, Hassan; Jorm, Louisa

    2017-01-01

    Estimating multimorbidity (presence of two or more chronic conditions) using administrative data is becoming increasingly common. We investigated (1) the concordance of identification of chronic conditions and multimorbidity using self-report survey and administrative datasets; (2) characteristics of people with multimorbidity ascertained using different data sources; and (3) whether the same individuals are classified as multimorbid using different data sources. Baseline survey data for 90,352 participants of the 45 and Up Study, a cohort study of residents of New South Wales, Australia, aged 45 years and over, were linked to prior two-year pharmaceutical claims and hospital admission records. Concordance of eight self-report chronic conditions (reference) with claims and hospital data were examined using sensitivity (Sn), positive predictive value (PPV), and kappa (κ). The characteristics of people classified as multimorbid were compared using logistic regression modelling. Agreement was found to be highest for diabetes in both hospital and claims data (κ = 0.79, 0.78; Sn = 79%, 72%; PPV = 86%, 90%). The prevalence of multimorbidity was highest using self-report data (37.4%), followed by claims data (36.1%) and hospital data (19.3%). Combining all three datasets identified a total of 46 683 (52%) people with multimorbidity, with half of these identified using a single dataset only, and up to 20% identified on all three datasets. Characteristics of persons with and without multimorbidity were generally similar. However, the age gradient was more pronounced and people speaking a language other than English at home were more likely to be identified as multimorbid by administrative data. Different individuals, with different combinations of conditions, are identified as multimorbid when different data sources are used. As such, caution should be applied when ascertaining morbidity from a single data source as the agreement between self-report and administrative data

  9. Comparison of Paper-and-Pencil versus Web Administration of the Youth Risk Behavior Survey (YRBS): Risk Behavior Prevalence Estimates

    Science.gov (United States)

    Eaton, Danice K.; Brener, Nancy D.; Kann, Laura; Denniston, Maxine M.; McManus, Tim; Kyle, Tonja M.; Roberts, Alice M.; Flint, Katherine H.; Ross, James G.

    2010-01-01

    The authors examined whether paper-and-pencil and Web surveys administered in the school setting yield equivalent risk behavior prevalence estimates. Data were from a methods study conducted by the Centers for Disease Control and Prevention (CDC) in spring 2008. Intact classes of 9th- or 10th-grade students were assigned randomly to complete a…

  10. Violence and Drug Use in Rural Teens: National Prevalence Estimates from the 2003 Youth Risk Behavior Survey

    Science.gov (United States)

    Johnson, Andrew O.; Mink, Michael D.; Harun, Nusrat; Moore, Charity G.; Martin, Amy B.; Bennett, Kevin J.

    2008-01-01

    Objectives: The purpose of this study was to compare national estimates of drug use and exposure to violence between rural and urban teens. Methods: Twenty-eight dependent variables from the 2003 Youth Risk Behavior Survey were used to compare violent activities, victimization, suicidal behavior, tobacco use, alcohol use, and illegal drug use…

  11. Economic Impact of Childhood Psychiatric Disorder on Public Sector Services in Britain: Estimates from National Survey Data

    Science.gov (United States)

    Snell, Tom; Knapp, Martin; Healey, Andrew; Guglani, Sacha; Evans-Lacko, Sara; Fernandez, Jose-Luis; Meltzer, Howard; Ford, Tamsin

    2013-01-01

    Background: Approximately one in ten children aged 5-15 in Britain has a conduct, hyperactivity or emotional disorder. Methods: The British Child and Adolescent Mental Health Surveys (BCAMHS) identified children aged 5-15 with a psychiatric disorder, and their use of health, education and social care services. Service costs were estimated for each…

  12. Methods for estimating private forest ownership statistics: revised methods for the USDA Forest Service's National Woodland Owner Survey

    Science.gov (United States)

    Brenton J. ​Dickinson; Brett J. Butler

    2013-01-01

    The USDA Forest Service's National Woodland Owner Survey (NWOS) is conducted to better understand the attitudes and behaviors of private forest ownerships, which control more than half of US forestland. Inferences about the populations of interest should be based on theoretically sound estimation procedures. A recent review of the procedures disclosed an error in...

  13. Using interview-based recall surveys to estimate cod Gadus morhua and eel Anguilla anguilla harvest in Danish recreational fishing

    DEFF Research Database (Denmark)

    Sparrevohn, Claus Reedtz; Storr-Paulsen, Marie

    2012-01-01

    Using interview-based recall surveys to estimate cod Gadus morhua and eel Anguilla anguilla harvest in Danish recreational fishing. – ICES Journal of Marine Science, 69: 323–330.Marine recreational fishing is a popular outdoor activity in Denmark, practised by both anglers and passive gear fishers....... However, the impact on the targeted stocks is unknown, so to estimate the 2009 harvest of cod Gadus morhua and eel Anguilla anguilla, two separate interview-based surveys were initiated and carried out in 2009/2010. The first recall survey exclusively targeted fishers who had been issued......, in certain areas, the recreational harvest of cod accounted for more than 30% of the total yield. The majority (81%) of the recreational cod harvest was taken by anglers. Eels, however, are almost exclusively caught with passive gear (fykenets) and a total of 104 t year−1 was harvested, which corresponds...

  14. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    Science.gov (United States)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of its precision interval.

  15. Estimating disability prevalence among adults by body mass index: 2003-2009 National Health Interview Survey.

    Science.gov (United States)

    Armour, Brian S; Courtney-Long, Elizabeth; Campbell, Vincent A; Wethington, Holly R

    2012-01-01

    Obesity is associated with adverse health outcomes in people with and without disabilities; however, little is known about disability prevalence among people who are obese. The purpose of this study was to determine the prevalence and type of disability among obese adults in the United States. We analyzed pooled data from sample adult modules of the 2003-2009 National Health Interview Survey (NHIS) to obtain national prevalence estimates of disability, disability type, and obesity by using 30 questions that screened for activity limitations, vision and hearing impairment, and cognitive, movement, and emotional difficulties. We stratified disability prevalence by category of body mass index (BMI, measured as kg/m²): underweight, less than 18.5; normal weight, 18.5 to 24.9; overweight, 25.0 to 29.9; and obese, 30.0 or higher. Among the 25.3% of adult men and 24.6% of women in our pooled sample who were obese, 35.2% and 46.9%, respectively, reported a disability. In contrast, 26.7% of men and 26.8% of women of normal weight reported a disability. Disability prevalence was much higher among obese women than among obese men (46.9% vs 35.2%, P < .001). Movement difficulties were the most common disabilities among obese men and women, affecting 25.3% of men and 37.9% of women. This research contributes to the literature on obesity by including disability as a demographic in characterizing people by body mass index. Because of the high prevalence of disability among those who are obese, public health programs should consider the needs of those with disabilities when designing obesity prevention and treatment programs.
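The BMI cut-points used for stratification are straightforward to encode; the helper below is a hypothetical sketch of that classification, not the authors' code:

```python
def bmi_category(weight_kg, height_m):
    """Classify BMI (kg/m^2) using the cut-points quoted in the NHIS analysis:
    underweight < 18.5, normal weight 18.5-24.9, overweight 25.0-29.9,
    obese >= 30.0."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    elif bmi < 25.0:
        return "normal weight"
    elif bmi < 30.0:
        return "overweight"
    return "obese"

print(bmi_category(95.0, 1.75))  # BMI ~31.0 -> "obese"
```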

  16. Prevalence of Multiple Chronic Conditions Among US Adults: Estimates From the National Health Interview Survey, 2010

    Science.gov (United States)

    Schiller, Jeannine S.

    2013-01-01

    Preventing and ameliorating chronic conditions has long been a priority in the United States; however, the increasing recognition that people often have multiple chronic conditions (MCC) has added a layer of complexity with which to contend. The objective of this study was to present the prevalence of MCC and the most common MCC dyads/triads by selected demographic characteristics. We used respondent-reported data from the 2010 National Health Interview Survey (NHIS) to study the US adult civilian noninstitutionalized population aged 18 years or older (n = 27,157). We categorized adults as having 0 to 1, 2 to 3, or 4 or more of the following chronic conditions: hypertension, coronary heart disease, stroke, diabetes, cancer, arthritis, hepatitis, weak or failing kidneys, chronic obstructive pulmonary disease, or current asthma. We then generated descriptive estimates and tested for significant differences. Twenty-six percent of adults have MCC; the prevalence of MCC has increased from 21.8% in 2001 to 26.0% in 2010. The prevalence of MCC increased significantly with age, and was significantly higher among women than among men and among non-Hispanic white and non-Hispanic black adults than among Hispanic adults. The most common dyad identified was arthritis and hypertension, and the combination of arthritis, hypertension, and diabetes was the most common triad. The findings of this study contribute information to the field of MCC research. The NHIS can be used to identify population subgroups most likely to have MCC and potentially lead to clinical guidelines for people with more common MCC combinations. PMID:23618545

  17. Estimation of Geographic Variation in Human Papillomavirus Vaccine Uptake in Men and Women: An Online Survey Using Facebook Recruitment

    Science.gov (United States)

    Hughes, John; Oakes, J Michael; Pankow, James S; Kulasingam, Shalini L

    2014-01-01

    Background Federally funded surveys of human papillomavirus (HPV) vaccine uptake are important for pinpointing geographically based health disparities. Although national and state level data are available, local (ie, county and postal code level) data are not due to small sample sizes, confidentiality concerns, and cost. Local level HPV vaccine uptake data may be feasible to obtain by targeting specific geographic areas through social media advertising and recruitment strategies, in combination with online surveys. Objective Our goal was to use Facebook-based recruitment and online surveys to estimate local variation in HPV vaccine uptake among young men and women in Minnesota. Methods From November 2012 to January 2013, men and women were recruited via a targeted Facebook advertisement campaign to complete an online survey about HPV vaccination practices. The Facebook advertisements were targeted to recruit men and women by location (25 mile radius of Minneapolis, Minnesota, United States), age (18-30 years), and language (English). Results Of the 2079 men and women who responded to the Facebook advertisements and visited the study website, 1003 (48.2%) enrolled in the study and completed the survey. The average advertising cost per completed survey was US $1.36. Among those who reported their postal code, 90.6% (881/972) of the participants lived within the previously defined geographic study area. Receipt of 1 dose or more of HPV vaccine was reported by 65.6% women (351/535), and 13.0% (45/347) of men. These results differ from previously reported Minnesota state level estimates (53.8% for young women and 20.8% for young men) and from national estimates (34.5% for women and 2.3% for men). Conclusions This study shows that recruiting a representative sample of young men and women based on county and postal code location to complete a survey on HPV vaccination uptake via the Internet is a cost-effective and feasible strategy. This study also highlights the need

  18. Estimation of geographic variation in human papillomavirus vaccine uptake in men and women: an online survey using facebook recruitment.

    Science.gov (United States)

    Nelson, Erik J; Hughes, John; Oakes, J Michael; Pankow, James S; Kulasingam, Shalini L

    2014-09-01

    Federally funded surveys of human papillomavirus (HPV) vaccine uptake are important for pinpointing geographically based health disparities. Although national and state level data are available, local (ie, county and postal code level) data are not due to small sample sizes, confidentiality concerns, and cost. Local level HPV vaccine uptake data may be feasible to obtain by targeting specific geographic areas through social media advertising and recruitment strategies, in combination with online surveys. Our goal was to use Facebook-based recruitment and online surveys to estimate local variation in HPV vaccine uptake among young men and women in Minnesota. From November 2012 to January 2013, men and women were recruited via a targeted Facebook advertisement campaign to complete an online survey about HPV vaccination practices. The Facebook advertisements were targeted to recruit men and women by location (25 mile radius of Minneapolis, Minnesota, United States), age (18-30 years), and language (English). Of the 2079 men and women who responded to the Facebook advertisements and visited the study website, 1003 (48.2%) enrolled in the study and completed the survey. The average advertising cost per completed survey was US $1.36. Among those who reported their postal code, 90.6% (881/972) of the participants lived within the previously defined geographic study area. Receipt of 1 dose or more of HPV vaccine was reported by 65.6% women (351/535), and 13.0% (45/347) of men. These results differ from previously reported Minnesota state level estimates (53.8% for young women and 20.8% for young men) and from national estimates (34.5% for women and 2.3% for men). This study shows that recruiting a representative sample of young men and women based on county and postal code location to complete a survey on HPV vaccination uptake via the Internet is a cost-effective and feasible strategy. 
This study also highlights the need for local estimates to assess the variation in HPV

  19. Objective malignancy grading of squamous cell carcinoma of the lung. Stereologic estimates of mean nuclear size are of prognostic value, independent of clinical stage of disease

    DEFF Research Database (Denmark)

    Ladekarl, M; Bæk-Hansen, T; Henrik-Nielsen, R

    1995-01-01

    a projection microscope and a simple test system in fields of vision systematically selected from the whole tumor area of one routine section, five quantitative histopathologic variables were estimated: the mean nuclear volume, the mean nuclear profile area, the density of nuclear profiles, the volume fraction...... of nuclei to tissue, and the number of mitotic profiles per 10(3) nuclear profiles. For each patient, information was recorded regarding sex, age at diagnosis, and clinical stage of disease. RESULTS: Single-factor analyses showed that a favorable prognosis was associated with early clinical stages (Stages I...... and II) and young age (P stage, age, and mean nuclear...

  1. AFSC/RACE/GAP/Palsson: Gulf of Alaska and Aleutian Islands Biennial Bottom Trawl Survey estimates of catch per unit effort, biomass, population at length, and associated tables

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The GOA/AI Bottom Trawl Estimate database contains abundance estimates for the Alaska Biennial Bottom Trawl Surveys conducted in the Gulf of Alaska and the Aleutian...

  2. Influence of heart motion on cardiac output estimation by means of electrical impedance tomography: a case study

    International Nuclear Information System (INIS)

    Proença, Martin; Braun, Fabian; Rapin, Michael; Solà, Josep; Lemay, Mathieu; Adler, Andy; Grychtol, Bartłomiej; Bohm, Stephan H; Thiran, Jean-Philippe

    2015-01-01

    Electrical impedance tomography (EIT) is a non-invasive imaging technique that can measure cardiac-related intra-thoracic impedance changes. EIT-based cardiac output estimation relies on the assumption that the amplitude of the impedance change in the ventricular region is representative of stroke volume (SV). However, other factors such as heart motion can significantly affect this ventricular impedance change. In the present case study, a magnetic resonance imaging-based dynamic bio-impedance model fitting the morphology of a single male subject was built. Simulations were performed to evaluate the contribution of heart motion and its influence on EIT-based SV estimation. Myocardial deformation was found to be the main contributor to the ventricular impedance change (56%). However, motion-induced impedance changes showed a strong correlation (r = 0.978) with left ventricular volume. We explained this by the quasi-incompressibility of blood and myocardium. As a result, EIT achieved excellent accuracy in estimating a wide range of simulated SV values (error distribution of 0.57 ± 2.19 ml (1.02 ± 2.62%) and correlation of r = 0.996 after a two-point calibration was applied to convert impedance values to millilitres). As the model was based on one single subject, the strong correlation found between motion-induced changes and ventricular volume remains to be verified in larger datasets. (paper)
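The two-point calibration mentioned at the end is simply a linear map fixed by two reference measurements. A sketch under assumed values (the impedance amplitudes and stroke volumes below are hypothetical, for illustration only):

```python
def two_point_calibration(x1, y1, x2, y2):
    """Return a linear map f(x) = a*x + b passing through two reference
    points, e.g. impedance amplitudes (x) with known stroke volumes (y)."""
    a = (y2 - y1) / (x2 - x1)
    b = y1 - a * x1
    return lambda x: a * x + b

# Hypothetical reference pair: impedance amplitudes 0.8 and 1.6 (a.u.)
# correspond to stroke volumes of 40 and 80 ml.
to_ml = two_point_calibration(0.8, 40.0, 1.6, 80.0)
print(to_ml(1.2))  # -> 60.0 ml for an intermediate amplitude
```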

  3. Surveying Drifting Icebergs and Ice Islands: Deterioration Detection and Mass Estimation with Aerial Photogrammetry and Laser Scanning

    Directory of Open Access Journals (Sweden)

    Anna J. Crawford

    2018-04-01

    Full Text Available Icebergs and ice islands (large, tabular icebergs) are challenging targets to survey due to their size, mobility, remote locations, and potentially difficult environmental conditions. Here, we assess the precision and utility of aerial photography surveying with structure-from-motion multi-view stereo photogrammetry processing (SfM) and vessel-based terrestrial laser scanning (TLS) for iceberg deterioration detection and mass estimation. For both techniques, we determine the minimum amount of change required to reliably resolve iceberg deterioration, the deterioration detection threshold (DDT), using triplicate surveys of two iceberg survey targets. We also calculate their relative uncertainties for iceberg mass estimation. The quality of the deployed Global Positioning System (GPS) units that were used for drift correction and scale assignment was a major determinant of point cloud precision. When dual-frequency GPS receivers were deployed, DDT values of 2.5 and 0.40 m were calculated for the TLS and SfM point clouds, respectively. In contrast, values of 6.6 and 3.4 m were calculated when tracking beacons with lower-quality GPS were used. The SfM dataset was also more precise when used for iceberg mass estimation, and we recommend further development of this technique for iceberg-related end-uses.

  4. Statistical estimates of absenteeism attributable to seasonal and pandemic influenza from the Canadian Labour Force Survey

    OpenAIRE

    Zheng Hui; Schanzer Dena L; Gilmore Jason

    2011-01-01

    Abstract Background As many respiratory viruses are responsible for influenza-like symptoms, accurate measures of the disease burden are not available and estimates are generally based on statistical methods. The objective of this study was to estimate absenteeism rates and hours lost due to seasonal influenza and to compare these estimates with estimates of absenteeism attributable to the two H1N1 pandemic waves that occurred in 2009. Methods Key absenteeism variables were extracted from Statis...

  5. Use of tritium for estimation of groundwater mean residence time, a case study of the Ain Al-Samak Karst springs (Central Syria)

    International Nuclear Information System (INIS)

    Kattan, Z.

    2003-01-01

    This work is an attempt to estimate the mean residence time of groundwater in the Ain Al-Tanour and Ain Al-Samak springs, which are the major karst springs in the Upper Orontes Basin (Central Syria). The estimate, based on a mathematical modelling approach, used tritium as a natural radioisotope tracer and a tool for groundwater age dating. By adopting a completely mixed reservoir model, linked with an exponential time distribution function, the mean residence time (turnover time) of these two springs was evaluated to be about 50 years. This result is in good agreement with a previous estimate obtained for the Figeh main spring, which belongs to the same aquifer (Cenomanian-Turonian complex) in the Damascus Basin. On the basis of this evaluation, a value of about 800 million m³ was obtained for the maximum groundwater reservoir size.
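The completely mixed reservoir model weights an input of transit time τ by the exponential distribution g(τ) = exp(−τ/T)/T, where T is the mean residence time; for tritium, each input slice also decays radioactively in transit (half-life about 12.32 years). A minimal sketch with a hypothetical input history and a truncated convolution window:

```python
import math

TRITIUM_LAMBDA = math.log(2) / 12.32  # decay constant for a ~12.32-year half-life

def exponential_model_output(c_in, mrt, dt=1.0):
    """Output concentration of a completely mixed reservoir with mean
    residence time `mrt`, i.e. an exponential transit-time distribution,
    including radioactive decay of tritium in transit.

    c_in: annual input concentrations (TU), oldest first; returns the
    output concentration for the final year (window is truncated to len(c_in)).
    """
    n = len(c_in)
    total = 0.0
    for i, c in enumerate(c_in):
        tau = (n - 1 - i) * dt                  # transit time of this input slice
        g = math.exp(-tau / mrt) / mrt          # exponential transit-time weight
        total += c * g * math.exp(-TRITIUM_LAMBDA * tau) * dt
    return total

# Hypothetical input history: a bomb-peak-like pulse decaying to ~10 TU.
history = [100.0] + [10.0] * 49
print(exponential_model_output(history, mrt=50.0))
```

In practice the modelled output for candidate values of T is compared against measured spring tritium, and the best-fitting T is reported as the mean residence time.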

  6. Multifactor Screener in the 2000 National Health Interview Survey Cancer Control Supplement: Uses of Screener Estimates

    Science.gov (United States)

    Dietary intake estimates derived from the Multifactor Screener are rough estimates of usual intake of fruits and vegetables, fiber, calcium, servings of dairy, and added sugar. These estimates are not as accurate as those from more detailed methods (e.g., 24-hour recalls).

  7. Using cross-sectional surveys to estimate the number of severely malnourished children needing to be enrolled in specific treatment programmes

    DEFF Research Database (Denmark)

    Dale, Nancy M; Myatt, Mark; Prudhon, Claudine

    2017-01-01

    OBJECTIVE: When planning severe acute malnutrition (SAM) treatment services, estimates of the number of children requiring treatment are needed. Prevalence surveys, used with population estimates, can directly estimate the number of prevalent cases but not the number of subsequent incident cases...... in different contexts. DESIGN: Observational study, with J estimated by correlating expected numbers of children to be treated, based on prevalence surveys, population estimates and assumed coverage, with the observed numbers of SAM patients treated. SETTING: Survey and programme data from six African...

  8. Estimation of mean tree stand volume using high-resolution aerial RGB imagery and digital surface model, obtained from sUAV and Trestima mobile application

    Directory of Open Access Journals (Sweden)

    G. K. Rybakov

    2017-06-01

    Full Text Available This study considers a remote sensing technique for mean volume estimation based on very high-resolution (VHR) aerial RGB imagery obtained using a small-sized unmanned aerial vehicle (sUAV) and a high-resolution photogrammetric digital surface model (DSM), as well as an innovative technology for field measurements (Trestima). The study area covers approx. 220 ha of forestland in Finland. The work covers the entire process from remote sensing and field data acquisition to statistical analysis and wall-to-wall forest volume mapping. The study showed that the VHR aerial imagery and the high-resolution DSM produced with the sUAV have good prospects for forest inventory. For the sUAV-based estimation of forest variables such as height, basal area and mean volume, the root mean square error was 6.6 %, 22.6 % and 26.7 %, respectively. Application of Trestima for estimating the mean volume of the standing forest showed only minor differences from the existing Forest Management Plan in all the selected forest compartments. At the same time, the results of the study confirmed that the technologies and tools applied in this work could be a reliable and potentially cost-effective means of forest data acquisition with high potential for operational use.
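The relative RMSE figures quoted for height, basal area and mean volume are conventionally computed as the RMSE of the estimates divided by the mean of the field observations. A minimal sketch with hypothetical stand volumes (not the study's data):

```python
import math

def rmse_percent(observed, predicted):
    """Relative RMSE (%) of predictions against field observations,
    as commonly reported for remote-sensing forest inventories."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)

# Hypothetical stand volumes (m3/ha) from field plots vs. sUAV estimates.
obs  = [200.0, 250.0, 300.0]
pred = [180.0, 260.0, 330.0]
print(rmse_percent(obs, pred))
```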

  9. Mobile Phone Surveys for Collecting Population-Level Estimates in Low- and Middle-Income Countries: A Literature Review.

    Science.gov (United States)

    Gibson, Dustin G; Pereira, Amanda; Farrenkopf, Brooke A; Labrique, Alain B; Pariyo, George W; Hyder, Adnan A

    2017-05-05

    National and subnational level surveys are important for monitoring disease burden, prioritizing resource allocation, and evaluating public health policies. As mobile phone access and ownership become more common globally, mobile phone surveys (MPSs) offer an opportunity to supplement traditional public health household surveys. The objective of this study was to systematically review the current landscape of MPSs to collect population-level estimates in low- and middle-income countries (LMICs). Primary and gray literature from 7 online databases were systematically searched for studies that deployed MPSs to collect population-level estimates. Titles and abstracts were screened on primary inclusion and exclusion criteria by two research assistants. Articles that met primary screening requirements were read in full and screened for secondary eligibility criteria. Articles included in review were grouped into the following three categories by their survey modality: (1) interactive voice response (IVR), (2) short message service (SMS), and (3) human operator or computer-assisted telephone interviews (CATI). Data were abstracted by two research assistants. The conduct and reporting of the review conformed to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. A total of 6625 articles were identified through the literature review. Overall, 11 articles were identified that contained 19 MPS (CATI, IVR, or SMS) surveys to collect population-level estimates across a range of topics. MPSs were used in Latin America (n=8), the Middle East (n=1), South Asia (n=2), and sub-Saharan Africa (n=8). Nine articles presented results for 10 CATI surveys (10/19, 53%). Two articles discussed the findings of 6 IVR surveys (6/19, 32%). Three SMS surveys were identified from 2 articles (3/19, 16%). Approximately 63% (12/19) of MPS were delivered to mobile phone numbers collected from previously administered household surveys. The majority of MPS (11

  10. Estimating mast production: an evaluation of visual surveys and comparison with seed traps using white oaks

    Science.gov (United States)

    Roger W. Perry; Ronald E. Thill

    1999-01-01

    Perry and Thill compared five types of visual mast surveys with seed trap data from 105 white oaks (Quercus alba L.) during 1996-1997 in the Ouachita Mountains of Arkansas. They also evaluated these visual survey methods for their usefulness in detecting differences in acorn density among areas. Indices derived from all five methods were highly...

  11. An optimally weighted estimator of the linear power spectrum disentangling the growth of density perturbations across galaxy surveys

    International Nuclear Information System (INIS)

    Sorini, D.

    2017-01-01

    Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of data. In particular, signals coming from different distances encode information from different epochs. This is known as the "light-cone effect" and is going to have a higher impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method by Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking into account the light-cone effect. An analytic expression for the estimator is provided, and it is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator presented recovers the fiducial linear power spectrum at present time within 5% accuracy up to k ∼ 0.80 h Mpc⁻¹ and within 10% up to k ∼ 0.94 h Mpc⁻¹, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of the data from future large-scale surveys, like Euclid.
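In the Feldman, Kaiser and Peacock scheme that this work generalises, each galaxy is weighted by w(r) = 1/(1 + n̄(r)P₀), where n̄(r) is the expected number density and P₀ a fiducial power-spectrum amplitude; this choice minimises the variance of the estimated power spectrum. A minimal sketch with hypothetical densities:

```python
def fkp_weights(nbar, p0):
    """FKP (Feldman-Kaiser-Peacock 1994) minimum-variance weights:
    w(r) = 1 / (1 + nbar(r) * P0), where nbar is the expected galaxy
    number density and P0 a fiducial power-spectrum amplitude."""
    return [1.0 / (1.0 + n * p0) for n in nbar]

# Hypothetical number densities (h^3 Mpc^-3) and fiducial P0 (h^-3 Mpc^3):
# denser regions are down-weighted relative to sparse ones.
nbar = [1e-4, 3e-4, 5e-4]
print(fkp_weights(nbar, p0=10000.0))  # -> [0.5, 0.25, 0.166...]
```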

  12. ESTIMATING PHOTOMETRIC REDSHIFTS OF QUASARS VIA THE k-NEAREST NEIGHBOR APPROACH BASED ON LARGE SURVEY DATABASES

    International Nuclear Information System (INIS)

    Zhang Yanxia; Ma He; Peng Nanbo; Zhao Yongheng; Wu Xuebing

    2013-01-01

    We apply one of the lazy learning methods, the k-nearest neighbor (kNN) algorithm, to estimate the photometric redshifts of quasars based on various data sets from the Sloan Digital Sky Survey (SDSS), the UKIRT Infrared Deep Sky Survey (UKIDSS), and the Wide-field Infrared Survey Explorer (WISE): the SDSS sample, the SDSS-UKIDSS sample, the SDSS-WISE sample, and the SDSS-UKIDSS-WISE sample. The influence of the k value and of different input patterns on the performance of kNN is discussed; the optimal k differs with the input pattern and the data set. The best result is obtained for the SDSS-UKIDSS-WISE sample. The experimental results generally show that the more bands contribute information, the better the performance of photometric redshift estimation with kNN. The results also demonstrate that kNN using multiband data can effectively avoid the catastrophic failures of photometric redshift estimation encountered by many machine learning methods. Compared with various other methods of estimating the photometric redshifts of quasars, kNN based on a KD-tree shows superior accuracy.

  13. ESTIMATING PHOTOMETRIC REDSHIFTS OF QUASARS VIA THE k-NEAREST NEIGHBOR APPROACH BASED ON LARGE SURVEY DATABASES

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yanxia; Ma He; Peng Nanbo; Zhao Yongheng [Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, 100012 Beijing (China); Wu Xuebing, E-mail: zyx@bao.ac.cn [Department of Astronomy, Peking University, 100871 Beijing (China)

    2013-08-01

    We apply one of the lazy learning methods, the k-nearest neighbor (kNN) algorithm, to estimate the photometric redshifts of quasars based on various data sets from the Sloan Digital Sky Survey (SDSS), the UKIRT Infrared Deep Sky Survey (UKIDSS), and the Wide-field Infrared Survey Explorer (WISE): the SDSS sample, the SDSS-UKIDSS sample, the SDSS-WISE sample, and the SDSS-UKIDSS-WISE sample. The influence of the k value and of different input patterns on the performance of kNN is discussed; the optimal k differs with the input pattern and the data set. The best result is obtained for the SDSS-UKIDSS-WISE sample. The experimental results generally show that the more bands contribute information, the better the performance of photometric redshift estimation with kNN. The results also demonstrate that kNN using multiband data can effectively avoid the catastrophic failures of photometric redshift estimation encountered by many machine learning methods. Compared with various other methods of estimating the photometric redshifts of quasars, kNN based on a KD-tree shows superior accuracy.
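
    The core of the kNN approach in the two records above is simple enough to sketch directly. The example below is a minimal hand-rolled kNN regressor on synthetic "colours", not the SDSS/UKIDSS/WISE data or the KD-tree implementation the paper uses:

```python
import numpy as np

# Minimal k-nearest-neighbour photometric-redshift estimator: the predicted
# redshift of a query object is the mean spectroscopic redshift of its k
# nearest neighbours in colour space.  Data here are synthetic stand-ins.
def knn_photoz(train_X, train_z, query, k=3):
    """Predict redshift as the mean z of the k nearest training points."""
    d = np.linalg.norm(train_X - query, axis=1)   # Euclidean distance in colour space
    nearest = np.argsort(d)[:k]
    return train_z[nearest].mean()

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))                # 4 synthetic colours
z = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)      # redshift tracks first colour
pred = knn_photoz(X, z, X[17], k=3)                 # query with a training point
```

A KD-tree, as used in the paper, only accelerates the neighbour search; the prediction rule is the same.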

  14. Estimation of ecosystem respiration and its components by means of stable isotopes and improved closed-chamber methods

    DEFF Research Database (Denmark)

    Brændholt, Andreas

    Ecosystem respiration (Reco) is the second largest flux of CO2 between the biosphere and the atmosphere. It consists of several components, such as plant respiration and soil respiration (Rsoil), each of which may respond differently to abiotic factors, and thus to global climate change. Rsoil...... and abiotic factors, and before estimating Rsoil fluxes over longer time scales. The work also shows that artificial turbulent air mixing may provide a method to overcome the issue with overestimated fluxes, allowing for measurements even at low atmospheric turbulence. Furthermore, the results show...

  15. Validation of the Social and Emotional Health Survey for Five Sociocultural Groups: Multigroup Invariance and Latent Mean Analyses

    Science.gov (United States)

    You, Sukkyung; Furlong, Michael; Felix, Erika; O'Malley, Meagan

    2015-01-01

    Social-emotional health influences youth developmental trajectories and there is growing interest among educators to measure the social-emotional health of the students they serve. This study replicated the psychometric characteristics of the Social Emotional Health Survey (SEHS) with a diverse sample of high school students (Grades 9-12; N =…

  16. Estimating micro area behavioural risk factor prevalence from large population-based surveys: a full Bayesian approach

    Directory of Open Access Journals (Sweden)

    L. Seliske

    2016-06-01

    Full Text Available Abstract Background An important public health goal is to decrease the prevalence of key behavioural risk factors, such as tobacco use and obesity. Survey information is often available at the regional level, but heterogeneity within large geographic regions cannot be assessed. Advanced spatial analysis techniques are demonstrated to produce sensible micro area estimates of behavioural risk factors that enable identification of areas with high prevalence. Methods A spatial Bayesian hierarchical model was used to estimate the micro area prevalence of current smoking and excess bodyweight for the Erie-St. Clair region in southwestern Ontario. Estimates were mapped for male and female respondents of five cycles of the Canadian Community Health Survey (CCHS). The micro areas were 2006 Census Dissemination Areas, with an average population of 400–700 people. Two individual-level models were specified: one controlled for survey cycle and age group (model 1), and one controlled for survey cycle, age group and micro area median household income (model 2). Post-stratification was used to derive micro area behavioural risk factor estimates weighted to the population structure. SaTScan analyses were conducted on the granular, postal-code level CCHS data to corroborate findings of elevated prevalence. Results Current smoking was elevated in two urban areas for both sexes (Sarnia and Windsor), and an additional small community (Chatham) for males only. Areas of excess bodyweight were prevalent in an urban core (Windsor) among males, but not females. Precision of the posterior post-stratified current smoking estimates was improved in model 2, as indicated by narrower credible intervals and a lower coefficient of variation. For excess bodyweight, both models had similar precision. Aggregation of the micro area estimates to CCHS design-based estimates validated the findings. Conclusions This is among the first studies to apply a full Bayesian model to complex
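
    The post-stratification step described above (weighting model-based prevalence to each micro area's population structure) can be sketched as follows. The age-group prevalences and population counts are invented for illustration, not CCHS results:

```python
import numpy as np

# Post-stratification sketch: modelled smoking prevalence per age group is
# weighted by each micro area's age structure to give an area-level estimate.
age_prevalence = np.array([0.25, 0.20, 0.12])   # modelled prevalence by age group
area_pop = np.array([
    [150, 300, 250],   # area A: population counts per age group
    [400, 200, 100],   # area B
])
weights = area_pop / area_pop.sum(axis=1, keepdims=True)   # age shares per area
area_estimates = weights @ age_prevalence                  # one estimate per area
```

Area B, with its younger (higher-prevalence) structure, ends up with the higher post-stratified estimate even though both areas share the same age-group prevalences.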

  17. Computing the Deflection of the Vertical for Improving Aerial Surveys: A Comparison between EGM2008 and ITALGEO05 Estimates

    Directory of Open Access Journals (Sweden)

    Riccardo Barzaghi

    2016-07-01

    Full Text Available Recent studies on the influence of the anomalous gravity field in GNSS/INS applications have shown that neglecting the impact of the deflection of the vertical in aerial surveys induces horizontal and vertical errors in the measurement of an object that is part of the observed scene; these errors can vary from a few tens of centimetres to over one meter. The works reported in the literature refer to vertical deflection values based on global geopotential model estimates. In this paper we compared this approach with one based on local gravity data and collocation methods. In particular, the values of ξ and η, the two mutually perpendicular components of the deflection of the vertical vector (in the north and east directions, respectively), were computed by collocation in the framework of the Remove-Compute-Restore technique, applied to the gravity database used for estimating the ITALGEO05 geoid. Following this approach, these values have been computed at different altitudes that are relevant in aerial surveys. The (ξ, η) values were then also estimated using the high-degree EGM2008 global geopotential model and compared with those obtained in the previous computation. The analysis of the differences between the two estimates has shown that the (ξ, η) global geopotential model estimate can be reliably used in aerial navigation applications that require the use of sensors connected to a GNSS/INS system only above a given height (e.g., 3000 m in this paper) that must be defined by simulations.
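
    A back-of-the-envelope check makes the quoted error magnitudes plausible: a deflection angle θ ignored over flying height h displaces a ground point horizontally by roughly h·tan(θ). The deflection value and altitude below are illustrative, not the paper's data:

```python
import math

# Horizontal positioning error from neglecting a vertical deflection of
# `deflection_arcsec` at flying height `height_m`:  err ≈ h * tan(theta).
def horizontal_error(height_m, deflection_arcsec):
    theta = math.radians(deflection_arcsec / 3600.0)   # arcseconds -> radians
    return height_m * math.tan(theta)

err = horizontal_error(3000.0, 10.0)   # a 10-arcsecond deflection at 3000 m
# order of 0.15 m, consistent with the tens-of-centimetres figures cited
```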

  18. Daily estimates of the migrating tide and zonal mean temperature in the mesosphere and lower thermosphere derived from SABER data

    Science.gov (United States)

    Ortland, David A.

    2017-04-01

    Satellites provide a global view of the structure in the fields that they measure. In the mesosphere and lower thermosphere, the dominant features in these fields at low zonal wave number are contained in the zonal mean, quasi-stationary planetary waves, and tide components. Due to the nature of the satellite sampling pattern, stationary, diurnal, and semidiurnal components are aliased and spectral methods are typically unable to separate the aliased waves over short time periods. This paper presents a data processing scheme that is able to recover the daily structure of these waves and the zonal mean state. The method is validated by using simulated data constructed from a mechanistic model, and then applied to Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) temperature measurements. The migrating diurnal tide extracted from SABER temperatures for 2009 has a seasonal variability with peak amplitude (20 K at 95 km) in February and March and minimum amplitude (less than 5 K at 95 km) in early June and early December. Higher frequency variability includes a change in vertical structure and amplitude during the major stratospheric warming in January. The migrating semidiurnal tide extracted from SABER has variability on a monthly time scale during January through March, minimum amplitude in April, and largest steady amplitudes from May through September. Modeling experiments were performed that show that much of the variability on seasonal time scales in the migrating tides is due to changes in the mean flow structure and the superposition of the tidal responses to water vapor heating in the troposphere and ozone heating in the stratosphere and lower mesosphere.
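
    When sampling is regular and dense enough, the diurnal and semidiurnal components can be separated by ordinary linear least squares, which is the baseline idea behind the tidal extraction above. The series below is synthetic and evenly sampled at 8 points per day; the real difficulty the paper addresses is that satellite sampling aliases these components, so a dedicated scheme is needed:

```python
import numpy as np

# Least-squares separation of diurnal (1 cycle/day) and semidiurnal
# (2 cycles/day) harmonics from a regularly sampled temperature series.
t = np.arange(0, 10, 0.125)                     # time in days, 8 samples/day
truth = 20 * np.sin(2 * np.pi * t) + 5 * np.sin(4 * np.pi * t + 1.0)

A = np.column_stack([
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),   # diurnal basis
    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),   # semidiurnal basis
])
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)
diurnal_amp = np.hypot(coef[0], coef[1])        # recovers the 20 K amplitude
semidiurnal_amp = np.hypot(coef[2], coef[3])    # recovers the 5 K amplitude
```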

  19. Estimation of illicit drug use in the main cities of Colombia by means of urban wastewater analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bijlsma, Lubertus; Botero-Coy, Ana M. [Research Institute for Pesticides and Water (IUPA), University Jaume I, Castellón (Spain); Rincón, Rolando J. [Chemistry Department, Faculty of Sciences, University Antonio Nariño (Colombia); Peñuela, Gustavo A. [Grupo GDCON, Facultad de Ingeniería, Universidad de Antioquia, 70 # 52-21, Medellin (Colombia); Hernández, Félix, E-mail: felix.hernandez@uji.es [Research Institute for Pesticides and Water (IUPA), University Jaume I, Castellón (Spain)

    2016-09-15

    Wastewater-based epidemiology (WBE) relies on the principle that traces of compounds to which a population is exposed, or which it consumes, are excreted unchanged or as metabolites in urine and/or feces, and ultimately end up in the sewer network. Measuring target metabolic residues, i.e. biomarkers, in raw urban wastewater allows the exposure to or use of substances of interest in a community to be identified. To date, the most popular application of WBE is the estimation of illicit drug use, and studies have been made mainly across Europe, allowing drug use to be estimated and compared across many European cities. However, until now a comprehensive study applying WBE to the most frequently consumed illicit drugs had not been performed in South American countries. In this work, we applied this approach to samples from Colombia, selecting two of the most populated cities: Bogotá and Medellin. Several biomarkers were selected to estimate the use of cocaine, cannabis, amphetamine, methamphetamine, MDMA (ecstasy), heroin and ketamine. Composite samples (24-h) were collected at the corresponding municipal wastewater treatment plants. Sample treatment was performed on location by applying solid-phase extraction (SPE). Before SPE, the samples were spiked with appropriate isotope-labeled internal standards. In parallel, samples spiked with the analytes under study at two concentration levels were also processed for quality control. Analysis of influent wastewater was made by liquid chromatography-tandem mass spectrometry with a triple quadrupole analyzer. Data shown in this paper reveal a high use of cocaine by the population of the selected Colombian cities, particularly Medellin, while the use of other illicit drugs was low. The relevance of using quality control samples, particularly in collaborative studies such as those presented in this work, where research groups from different countries participate and where the samples had to be shipped overseas, is highlighted in this

  20. Estimation of illicit drug use in the main cities of Colombia by means of urban wastewater analysis

    International Nuclear Information System (INIS)

    Bijlsma, Lubertus; Botero-Coy, Ana M.; Rincón, Rolando J.; Peñuela, Gustavo A.; Hernández, Félix

    2016-01-01

    Wastewater-based epidemiology (WBE) relies on the principle that traces of compounds to which a population is exposed, or which it consumes, are excreted unchanged or as metabolites in urine and/or feces, and ultimately end up in the sewer network. Measuring target metabolic residues, i.e. biomarkers, in raw urban wastewater allows the exposure to or use of substances of interest in a community to be identified. To date, the most popular application of WBE is the estimation of illicit drug use, and studies have been made mainly across Europe, allowing drug use to be estimated and compared across many European cities. However, until now a comprehensive study applying WBE to the most frequently consumed illicit drugs had not been performed in South American countries. In this work, we applied this approach to samples from Colombia, selecting two of the most populated cities: Bogotá and Medellin. Several biomarkers were selected to estimate the use of cocaine, cannabis, amphetamine, methamphetamine, MDMA (ecstasy), heroin and ketamine. Composite samples (24-h) were collected at the corresponding municipal wastewater treatment plants. Sample treatment was performed on location by applying solid-phase extraction (SPE). Before SPE, the samples were spiked with appropriate isotope-labeled internal standards. In parallel, samples spiked with the analytes under study at two concentration levels were also processed for quality control. Analysis of influent wastewater was made by liquid chromatography-tandem mass spectrometry with a triple quadrupole analyzer. Data shown in this paper reveal a high use of cocaine by the population of the selected Colombian cities, particularly Medellin, while the use of other illicit drugs was low. The relevance of using quality control samples, particularly in collaborative studies such as those presented in this work, where research groups from different countries participate and where the samples had to be shipped overseas, is highlighted in this
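
    The WBE back-calculation behind studies like the two records above follows a standard form: metabolite concentration × flow gives a daily load, which is scaled by a molar-mass ratio and an assumed excretion fraction, then normalised per 1000 inhabitants. All numeric inputs below (concentration, flow, population, excretion fraction) are hypothetical, not the Colombian measurements:

```python
# Illustrative back-calculation of cocaine use from its urinary metabolite
# benzoylecgonine (BE).  Every input value here is an assumption.
C_ng_per_L = 1500.0          # measured BE concentration in influent (assumed)
flow_L_per_day = 2.0e8       # daily wastewater flow (assumed)
population = 500_000         # population served by the plant (assumed)
excretion_fraction = 0.35    # fraction of a dose excreted as BE (assumed)
mw_ratio = 303.35 / 289.33   # molar mass cocaine / BE, metabolite -> parent

load_mg_day = C_ng_per_L * flow_L_per_day / 1e6          # ng/L * L/day -> mg/day
use_mg_day = load_mg_day * mw_ratio / excretion_fraction # parent-drug consumption
per_1000 = use_mg_day / population * 1000                # mg/day per 1000 inhabitants
```

The excretion fraction dominates the uncertainty of such estimates, which is one reason the record stresses quality-control samples in multi-site comparisons.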

  1. A New Model of the Mean Albedo of the Earth: Estimation and Validation from the GRACE Mission and SLR Satellites.

    Science.gov (United States)

    Deleflie, F.; Sammuneh, M. A.; Coulot, D.; Pollet, A.; Biancale, R.; Marty, J. C.

    2017-12-01

    This talk provides new results of a study that we began last year, which was the subject of a poster by the same authors presented during AGU FM 2016, entitled « Mean Effect of the Albedo of the Earth on Artificial Satellite Trajectories: an Update Over 2000-2015 ». The emissivity of the Earth, split into a part in the visible domain (albedo) and a part in the infrared domain (thermal emissivity), is at the origin of non-gravitational perturbations of artificial satellite trajectories. The amplitudes and periods of these perturbations can be investigated if precise orbits can be computed, and they reveal some characteristics of the space environment in which the satellite is orbiting. Analyzing the perturbations is hence a way to characterize how the energy from the Sun is re-emitted by the Earth. When carried out over a long period of time, such an approach makes it possible to quantify the variations of the global radiation budget of the Earth. In addition to the preliminary results presented last year, we draw an assessment of the validity of the mean model based on the orbits of the GRACE mission and, to a certain extent, of some of the SLR satellite orbits. The accelerometric data of the GRACE satellites are used to evaluate the accuracy of the models accounting for non-gravitational forces, in particular those induced by the albedo and the thermal emissivity. Three data sets are used to investigate the mean effects on the orbit perturbations: Stephens tables (Stephens, 1980), ECMWF (European Centre for Medium-Range Weather Forecasts) data sets and CERES (Clouds and the Earth's Radiant Energy System) data sets (publicly available). From the trajectography point of view, based on post-fit residual analysis, we analyze which data set leads to the lowest residual level, to define which one appears the most suitable for deriving a new « mean albedo model » from accelerometric data sets of the GRACE mission. The period of investigation covers the full GRACE

  2. Cystic echinococcosis in marketed offal of sheep in Basrah, Iraq: Abattoir-based survey and a probabilistic model estimation of the direct economic losses due to hydatid cyst.

    Science.gov (United States)

    Abdulhameed, Mohanad F; Habib, Ihab; Al-Azizz, Suzan A; Robertson, Ian

    2018-02-01

    Cystic echinococcosis (CE) is a highly endemic parasitic zoonosis in Iraq with substantial impacts on livestock productivity and human health. The objectives of this study were to study the abattoir-based occurrence of CE in marketed offal of sheep in Basrah province, Iraq, and to estimate, using a probabilistic modelling approach, the direct economic losses due to hydatid cysts. Based on detailed visual meat inspection, results from an active abattoir survey in this study revealed detection of hydatid cysts in 7.3% (95% CI: 5.4; 9.6) of 631 examined sheep carcasses. Post-mortem lesions of hydatid cyst were concurrently present in the livers and lungs of more than half (54.3% (25/46)) of the positive sheep. Direct economic losses due to hydatid cysts in marketed offal were estimated using data from government reports, the one abattoir survey completed in this study, and expert opinions of local veterinarians and butchers. A Monte-Carlo simulation model was developed in a spreadsheet utilizing Latin Hypercube sampling to account for uncertainty in the input parameters. The model estimated the average annual economic losses associated with hydatid cysts in the liver and lungs of sheep marketed for human consumption in Basrah at US$72,470 (90% confidence interval (CI): ±11,302). The mean proportion of annual losses in meat product value (carcasses and offal) due to hydatid cysts in the liver and lungs of sheep marketed in Basrah province was estimated as 0.42% (90% CI: ±0.21). These estimates suggest that CE is responsible for considerable livestock-associated monetary losses in the south of Iraq. These findings can be used to inform different regional CE control program options in Iraq.
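
    The Latin Hypercube Monte Carlo approach mentioned above can be sketched briefly. Latin Hypercube sampling draws one value from each of n equally probable strata of each input distribution, then combines the shuffled strata, giving better coverage than plain random sampling. The input distributions below are invented placeholders, not the study's parameters:

```python
import numpy as np

# Minimal Latin Hypercube Monte Carlo for an annual-loss model of the form
# loss = (animals marketed) x (cyst prevalence) x (loss per case).
rng = np.random.default_rng(42)

def lhs_uniform(low, high, n):
    """n Latin Hypercube samples from Uniform(low, high): one per stratum."""
    strata = (np.arange(n) + rng.uniform(0, 1, n)) / n   # one draw per 1/n stratum
    rng.shuffle(strata)                                  # decorrelate the inputs
    return low + (high - low) * strata

n = 10_000
n_sheep = lhs_uniform(280_000, 320_000, n)      # sheep marketed per year (assumed)
prevalence = lhs_uniform(0.054, 0.096, n)       # prevalence range, cf. the 95% CI above
loss_per_case = lhs_uniform(2.0, 5.0, n)        # US$ lost offal value per case (assumed)

annual_loss = n_sheep * prevalence * loss_per_case
mean_loss = annual_loss.mean()                  # point estimate of annual losses
```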

  3. Reproducing Electric Field Observations during Magnetic Storms by means of Rigorous 3-D Modelling and Distortion Matrix Co-estimation

    Science.gov (United States)

    Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey

    2015-04-01

    Electric fields induced in the conducting Earth during magnetic storms drive currents in power transmission grids, telecommunication lines or buried pipelines. These geomagnetically induced currents (GIC) can cause severe service disruptions. The prediction of GIC is thus of great importance for the public and for industry. A key step in the prediction of the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we developed a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a model of the magnetospheric source. The latter is described by low-degree spherical harmonics; its temporal evolution is derived from observatory magnetic data. Time series of the electric field can be computed for every location on Earth's surface. The actual electric field, however, is known to be perturbed by galvanic effects, arising from very local near-surface heterogeneities or topography, which cannot be included in the conductivity model. Galvanic effects are commonly accounted for with a real-valued, time-independent distortion matrix, which linearly relates measured and computed electric fields. Using data of various magnetic storms that occurred between 2000 and 2003, we estimated distortion matrices for observatory sites onshore and on the ocean bottom. Strong correlations between modelled and measured fields validate our method. The distortion matrix estimates prove to be reliable, as they are accurately reproduced for different magnetic storms. We further show that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of electric field time series during magnetic storms. Since the required computational resources are negligible, our approach is suitable for a real-time prediction of GIC. For this purpose, a reliable forecast of the source field, e.g. based on data from satellites
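
    The distortion-matrix co-estimation above amounts to fitting a real 2×2 matrix D that linearly maps the modelled horizontal electric field onto the measured one, E_meas ≈ D·E_model, over a storm time series. A minimal least-squares sketch with synthetic fields (the real estimation uses observatory data and a 3-D conductivity model):

```python
import numpy as np

# Estimate a real, time-independent 2x2 distortion matrix D from paired
# modelled and measured (Ex, Ey) time series via linear least squares.
rng = np.random.default_rng(1)
D_true = np.array([[1.2, -0.3],
                   [0.1,  0.8]])                  # synthetic "galvanic" distortion
E_model = rng.normal(size=(2, 500))               # modelled field time series
E_meas = D_true @ E_model + 0.01 * rng.normal(size=(2, 500))   # measured + noise

# Solve E_meas.T ~ E_model.T @ D.T, then transpose back.
D_T, *_ = np.linalg.lstsq(E_model.T, E_meas.T, rcond=None)
D_est = D_T.T
```

Because D is assumed time-independent, an estimate from one storm can be checked against other storms, which is how the record validates its matrices.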

  4. The Influence of Survey Methodology in Estimating Prevalence Rates of Childhood Sexual Abuse Among Navy Recruits

    National Research Council Canada - National Science Library

    Olson, Cheryl B; Stander, Valerie A; Merril, Lex L

    2000-01-01

    ...% of the participants self-defined themselves as abused. Despite these differences in abuse rates, data from the SHIP survey, from SRB operational definitions, and from SRB self-definitions all independently accounted for variability in participants...

  5. Use of Bayesian networks classifiers for long-term mean wind turbine energy output estimation at a potential wind energy conversion site

    Energy Technology Data Exchange (ETDEWEB)

    Carta, Jose A. [Department of Mechanical Engineering, University of Las Palmas de Gran Canaria, Campus de Tafira s/n, 35017 Las Palmas de Gran Canaria, Canary Islands (Spain); Velazquez, Sergio [Department of Electronics and Automatics Engineering, University of Las Palmas de Gran Canaria, Campus de Tafira s/n, 35017 Las Palmas de Gran Canaria, Canary Islands (Spain); Matias, J.M. [Department of Statistics, University of Vigo, Lagoas Marcosende, 36200 Vigo (Spain)

    2011-02-15

    Due to the interannual variability of wind speed, a feasibility analysis for the installation of a Wind Energy Conversion System at a particular site requires estimation of the long-term mean wind turbine energy output. A method is proposed in this paper which, based on probabilistic Bayesian networks (BNs), enables estimation of the long-term mean wind speed histogram for a site where few measurements of the wind resource are available. For this purpose, the proposed method allows the use of multiple reference stations with a long history of wind speed and wind direction measurements. That is to say, the model proposed in this paper is able to incorporate and make use of regional information about the wind resource. With the estimated long-term wind speed histogram and the power curve of a wind turbine, it is possible to use the method of bins to determine the long-term mean energy output for that wind turbine. The intelligent system employed, the knowledge base of which is a joint probability function of all the model variables, uses efficient calculation techniques for conditional probabilities to perform the reasoning. This enables automatic model learning and inference to be performed efficiently based on the available evidence. The proposed model is applied in this paper to wind speeds and wind directions recorded at four weather stations located in the Canary Islands (Spain). Ten years of mean hourly wind speed and direction data are available for these stations. One of the conclusions reached is that the BN with three reference stations gave smaller errors between the real and estimated long-term mean wind turbine energy output than two measure-correlate-predict algorithms that were evaluated, both of which use a linear regression between the candidate station and one reference station. (author)

  6. Use of Bayesian networks classifiers for long-term mean wind turbine energy output estimation at a potential wind energy conversion site

    International Nuclear Information System (INIS)

    Carta, Jose A.; Velazquez, Sergio; Matias, J.M.

    2011-01-01

    Due to the interannual variability of wind speed, a feasibility analysis for the installation of a Wind Energy Conversion System at a particular site requires estimation of the long-term mean wind turbine energy output. A method is proposed in this paper which, based on probabilistic Bayesian networks (BNs), enables estimation of the long-term mean wind speed histogram for a site where few measurements of the wind resource are available. For this purpose, the proposed method allows the use of multiple reference stations with a long history of wind speed and wind direction measurements. That is to say, the model proposed in this paper is able to incorporate and make use of regional information about the wind resource. With the estimated long-term wind speed histogram and the power curve of a wind turbine, it is possible to use the method of bins to determine the long-term mean energy output for that wind turbine. The intelligent system employed, the knowledge base of which is a joint probability function of all the model variables, uses efficient calculation techniques for conditional probabilities to perform the reasoning. This enables automatic model learning and inference to be performed efficiently based on the available evidence. The proposed model is applied in this paper to wind speeds and wind directions recorded at four weather stations located in the Canary Islands (Spain). Ten years of mean hourly wind speed and direction data are available for these stations. One of the conclusions reached is that the BN with three reference stations gave smaller errors between the real and estimated long-term mean wind turbine energy output than two measure-correlate-predict algorithms that were evaluated, both of which use a linear regression between the candidate station and one reference station.
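
    The "method of bins" referred to in the two records above is a straightforward sum: expected output is the probability of each wind-speed bin times the turbine power at that bin's centre. The histogram and power curve below are invented for illustration, not the Canary Islands data:

```python
import numpy as np

# Method of bins: mean power = sum over bins of P(bin) * power(bin centre).
bin_centres = np.array([2.0, 5.0, 8.0, 11.0, 14.0])       # wind speed, m/s
probability = np.array([0.30, 0.30, 0.20, 0.15, 0.05])    # long-term histogram
power_kw = np.array([0.0, 150.0, 600.0, 1200.0, 1500.0])  # turbine power curve, kW

mean_power_kw = np.sum(probability * power_kw)            # expected power output
annual_energy_mwh = mean_power_kw * 8760 / 1000.0         # hours/year -> MWh
```

The whole point of the BN in the record is to produce a reliable long-term `probability` vector from short on-site records plus reference stations; the bin sum itself is trivial.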

  7. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali.

    Science.gov (United States)

    Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco

    2012-10-12

    Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard error of VC and ICC estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.

  8. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    Directory of Open Access Journals (Sweden)

    Minetti Andrea

    2012-10-01

    Full Text Available Abstract Background Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard error of VC and ICC estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
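
    The cluster-bootstrap idea used in the two records above (resample whole clusters, not individuals, to respect the survey design) can be sketched on synthetic data. The 10 × 15 layout mirrors the design discussed; the coverage value is invented:

```python
import numpy as np

# Bootstrap a cluster survey: resample the 10 clusters with replacement and
# look at the spread of the vaccination-coverage (VC) estimate.
rng = np.random.default_rng(7)
clusters = rng.binomial(1, 0.85, size=(10, 15))   # 10 clusters x 15 children, 1 = vaccinated

def bootstrap_vc(clusters, n_boot=2000):
    n = clusters.shape[0]
    idx = rng.integers(0, n, size=(n_boot, n))    # resampled cluster indices
    return clusters[idx].mean(axis=(1, 2))        # VC estimate per replicate

vc_boot = bootstrap_vc(clusters)
se = vc_boot.std()            # bootstrap standard error of the VC estimate
```

Repeating this with fewer children per cluster (10 × 3) is how the study shows the standard error becoming unstable as sample size shrinks.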

  9. Methodological issues in the estimation of parental time – Analysis of measures in a Canadian time-use survey

    OpenAIRE

    Cara B. Fedick; Shelley Pacholok; Anne H. Gauthier

    2005-01-01

    Extensive small scale studies have documented that when people assume the role of assisting a person with impairments or an older person, care activities account for a significant portion of their daily routines. Nevertheless, little research has investigated the problem of measuring the time that carers spend in care-related activities. This paper contrasts two different measures of care time – an estimated average weekly hours question in the 1998 Australian Survey of Disability, Ageing and...

  10. Effect of Antihypertensive Therapy on SCORE-Estimated Total Cardiovascular Risk: Results from an Open-Label, Multinational Investigation—The POWER Survey

    Directory of Open Access Journals (Sweden)

    Guy De Backer

    2013-01-01

    Full Text Available Background. High blood pressure is a substantial risk factor for cardiovascular disease. Design & Methods. The Physicians' Observational Work on patient Education according to their vascular Risk (POWER) survey was an open-label investigation of eprosartan-based therapy (EBT) for control of high blood pressure in primary care centers in 16 countries. A prespecified element of this research was appraisal of the impact of EBT on estimated 10-year risk of a fatal cardiovascular event as determined by the Systematic Coronary Risk Evaluation (SCORE) model. Results. SCORE estimates of CVD risk were obtained at baseline from 12,718 patients in 15 countries (6504 men) and from 9577 patients at 6 months. During EBT mean (±SD) systolic/diastolic blood pressures declined from 160.2 ± 13.7/94.1 ± 9.1 mmHg to 134.5 ± 11.2/81.4 ± 7.4 mmHg. This was accompanied by a 38% reduction in mean SCORE-estimated CVD risk and an improvement in SCORE risk classification of one category or more in 3506 patients (36.6%). Conclusion. Experience in POWER affirms that (a) effective pharmacological control of blood pressure is feasible in the primary care setting and is accompanied by a reduction in total CVD risk and (b) the SCORE instrument is effective in this setting for the monitoring of total CVD risk.

  11. Effect of Antihypertensive Therapy on SCORE-Estimated Total Cardiovascular Risk: Results from an Open-Label, Multinational Investigation—The POWER Survey

    Science.gov (United States)

    De Backer, Guy; Petrella, Robert J.; Goudev, Assen R.; Radaideh, Ghazi Ahmad; Rynkiewicz, Andrzej; Pathak, Atul

    2013-01-01

    Background. High blood pressure is a substantial risk factor for cardiovascular disease. Design & Methods. The Physicians' Observational Work on patient Education according to their vascular Risk (POWER) survey was an open-label investigation of eprosartan-based therapy (EBT) for control of high blood pressure in primary care centers in 16 countries. A prespecified element of this research was appraisal of the impact of EBT on estimated 10-year risk of a fatal cardiovascular event as determined by the Systematic Coronary Risk Evaluation (SCORE) model. Results. SCORE estimates of CVD risk were obtained at baseline from 12,718 patients in 15 countries (6504 men) and from 9577 patients at 6 months. During EBT mean (±SD) systolic/diastolic blood pressures declined from 160.2 ± 13.7/94.1 ± 9.1 mmHg to 134.5 ± 11.2/81.4 ± 7.4 mmHg. This was accompanied by a 38% reduction in mean SCORE-estimated CVD risk and an improvement in SCORE risk classification of one category or more in 3506 patients (36.6%). Conclusion. Experience in POWER affirms that (a) effective pharmacological control of blood pressure is feasible in the primary care setting and is accompanied by a reduction in total CVD risk and (b) the SCORE instrument is effective in this setting for the monitoring of total CVD risk. PMID:23997946

  12. THE EVOLUTION OF ANNUAL MEAN TEMPERATURE AND PRECIPITATION QUANTITY VARIABILITY BASED ON ESTIMATED CHANGES BY THE REGIONAL CLIMATIC MODELS

    Directory of Open Access Journals (Sweden)

    Paula Furtună

    2013-03-01

    Full Text Available Climatic changes represent one of the major challenges of our century, and are forecast according to climate scenarios and models, which represent plausible and concrete images of future climatic conditions. The results of comparing climate models with regard to future water resources and temperature trends can become a useful instrument for decision makers in choosing the most effective economic, social and ecological measures. The aim of this article is the analysis of temperature and pluviometric variability at the grid point closest to Cluj-Napoca, based on data provided by six different regional climate models (RCMs). Analysed over 30-year periods (2001-2030, 2031-2060 and 2061-2090), the mean temperature has an ascending general trend, with great variability between periods. The precipitation, expressed through percentage deviation, shows a descending general trend, which is more emphasized during 2031-2060 and 2061-2090.

  13. Detection of uterine MMG contractions using a multiple change point estimator and the K-means cluster algorithm.

    Science.gov (United States)

    La Rosa, Patricio S; Nehorai, Arye; Eswaran, Hari; Lowery, Curtis L; Preissl, Hubert

    2008-02-01

    We propose a single channel two-stage time-segment discriminator of uterine magnetomyogram (MMG) contractions during pregnancy. We assume that the preprocessed signals are piecewise stationary having distribution in a common family with a fixed number of parameters. Therefore, at the first stage, we propose a model-based segmentation procedure, which detects multiple change-points in the parameters of a piecewise constant time-varying autoregressive model using a robust formulation of the Schwarz information criterion (SIC) and a binary search approach. In particular, we propose a test statistic that depends on the SIC, derive its asymptotic distribution, and obtain closed-form optimal detection thresholds in the sense of the Neyman-Pearson criterion; therefore, we control the probability of false alarm and maximize the probability of change-point detection in each stage of the binary search algorithm. We compute and evaluate the relative energy variation [root mean squares (RMS)] and the dominant frequency component [first order zero crossing (FOZC)] in discriminating between time segments with and without contractions. The former consistently detects a time segment with contractions. Thus, at the second stage, we apply an unsupervised K-means cluster algorithm to classify the detected time segments using the RMS values. We apply our detection algorithm to real MMG records obtained from ten patients admitted to the hospital for contractions with gestational ages between 31 and 40 weeks. We evaluate the performance of our detection algorithm by computing the detection and false alarm rates, using the patients' feedback as a reference. We also analyze the fusion of the decision signals from all the sensors as in the parallel distributed detection approach.
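
    The second-stage classification can be sketched with a plain 1-D K-means over per-segment RMS values. Everything below (the synthetic signal, the change-point locations, and all parameters) is invented for illustration; it shows only the clustering idea, not the paper's SIC-based segmentation or its MMG data.

```python
import numpy as np

def segment_rms(signal, change_points):
    """Split a signal at detected change points and compute each segment's RMS."""
    bounds = [0, *change_points, len(signal)]
    return np.array([np.sqrt(np.mean(signal[a:b] ** 2))
                     for a, b in zip(bounds[:-1], bounds[1:])])

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Plain 1-D K-means; relabels clusters so 1 = higher-mean (contraction-like)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        new = np.array([values[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    order = np.argsort(centers)            # smallest center -> label 0, etc.
    remap = np.empty(k, dtype=int)
    remap[order] = np.arange(k)
    return remap[labels]

# Synthetic record: quiet baseline with a high-energy burst in the middle
rng = np.random.default_rng(1)
sig = np.concatenate([0.1 * rng.standard_normal(200),
                      1.0 * rng.standard_normal(100),
                      0.1 * rng.standard_normal(200)])
rms = segment_rms(sig, change_points=[200, 300])
labels = kmeans_1d(rms)
print(labels)  # middle segment falls in the high-RMS cluster
```

    In the one-dimensional case the cluster boundary is just a threshold on RMS, which is why a simple two-cluster K-means suffices here.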

  14. Modelling shallow landslide susceptibility by means of a subsurface flow path connectivity index and estimates of soil depth spatial distribution

    Directory of Open Access Journals (Sweden)

    C. Lanni

    2012-11-01

    Full Text Available Topographic index-based hydrological models have gained wide use to describe the hydrological control on the triggering of rainfall-induced shallow landslides at the catchment scale. A common assumption in these models is that a spatially continuous water table occurs simultaneously across the catchment. However, during a rainfall event isolated patches of subsurface saturation form above an impeding layer and their hydrological connectivity is a necessary condition for lateral flow initiation at a point on the hillslope.

    Here, a new hydrological model is presented, which allows us to account for the concept of hydrological connectivity while keeping the simplicity of the topographic index approach. A dynamic topographic index is used to describe the transient lateral flow that is established at a hillslope element when the rainfall amount exceeds a threshold value allowing for (a) development of a perched water table above an impeding layer, and (b) hydrological connectivity between the hillslope element and its own upslope contributing area. A spatially variable soil depth is the main control of hydrological connectivity in the model. The hydrological model is coupled with the infinite slope stability model and with a scaling model for the rainfall frequency–duration relationship to determine the return period of the critical rainfall needed to cause instability on three catchments located in the Italian Alps, where a survey of soil depth spatial distribution is available. The model is compared with a quasi-dynamic model in which the dynamic nature of the hydrological connectivity is neglected. The results show a better performance of the new model in predicting observed shallow landslides, implying that soil depth spatial variability and connectivity bear a significant control on shallow landsliding.
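
    The slope-stability side of such a coupling typically rests on the classical infinite-slope model. A minimal sketch follows, assuming a standard Mohr-Coulomb formulation with a perched water table; the soil parameters are generic illustrative values, not those of the Italian Alps catchments.

```python
import math

def factor_of_safety(z, m, beta_deg, c=5e3, phi_deg=30.0,
                     gamma=18e3, gamma_w=9.81e3):
    """Infinite-slope factor of safety (Mohr-Coulomb) with a perched water
    table occupying a fraction m of the soil depth z.
    z [m], beta_deg slope angle, c cohesion [Pa], phi_deg friction angle,
    gamma soil unit weight [N/m^3], gamma_w water unit weight [N/m^3]."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Saturation lowers stability: the same 1 m soil column on a 35 degree slope,
# dry (m = 0) versus fully saturated (m = 1)
fs_dry = factor_of_safety(z=1.0, m=0.0, beta_deg=35.0)
fs_wet = factor_of_safety(z=1.0, m=1.0, beta_deg=35.0)
print(round(fs_dry, 2), round(fs_wet, 2))
```

    The critical rainfall in a model of this kind is the rainfall that raises the saturated fraction m until the factor of safety drops to 1.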

  15. Estimates of reservoir methane emissions based on a spatially balanced probabilistic-survey

    Science.gov (United States)

    Global estimates of methane (CH4) emissions from reservoirs are poorly constrained, partly due to the challenges of accounting for intra-reservoir spatial variability. Reservoir-scale emission rates are often estimated by extrapolating from measurements made at a few locations; h...

  16. Influence of the level of fit of a density probability function to wind-speed data on the WECS mean power output estimation

    International Nuclear Information System (INIS)

    Carta, Jose A.; Ramirez, Penelope; Velazquez, Sergio

    2008-01-01

    Static methods which are based on statistical techniques to estimate the mean power output of a WECS (wind energy conversion system) have been widely employed in the scientific literature related to wind energy. In the static method which we use in this paper, for a given wind regime probability distribution function and a known WECS power curve, the mean power output of a WECS is obtained by resolving the integral, usually using numerical evaluation techniques, of the product of these two functions. In this paper an analysis is made of the influence of the level of fit between an empirical probability density function of a sample of wind speeds and the probability density function of the adjusted theoretical model on the relative error ε made in the estimation of the mean annual power output of a WECS. The mean power output calculated through the use of a quasi-dynamic or chronological method, that is to say using time-series of wind speed data and the power versus wind speed characteristic of the wind turbine, serves as the reference. The suitability of the distributions is judged from the adjusted R² statistic (Rₐ²). Hourly mean wind speeds recorded at 16 weather stations located in the Canarian Archipelago, an extensive catalogue of wind-speed probability models and two wind turbines of 330 and 800 kW rated power are used in this paper. Among the general conclusions obtained, the following can be pointed out: (a) that the Rₐ² statistic might be useful as an initial gross indicator of the relative error made in the mean annual power output estimation of a WECS when a probabilistic method is employed; (b) the relative errors tend to decrease, in accordance with a trend line defined by a second-order polynomial, as Rₐ² increases.
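
    The static method described, integrating the product of a wind-speed density and a power curve, can be sketched as follows. The 330 kW rating echoes one of the turbines in the study, but the Weibull parameters and the cut-in/rated/cut-out speeds of the idealized power curve are invented for illustration.

```python
import numpy as np

def weibull_pdf(v, k, c):
    """Weibull wind-speed density with shape k and scale c [m/s]."""
    return (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

def power_curve(v, cut_in=3.0, rated_v=13.0, cut_out=25.0, rated_p=330.0):
    """Idealized power curve [kW]: cubic ramp to rated speed, then rated
    power up to cut-out, zero elsewhere."""
    ramp = rated_p * (v**3 - cut_in**3) / (rated_v**3 - cut_in**3)
    return np.where((v >= cut_in) & (v < rated_v), ramp,
                    np.where((v >= rated_v) & (v <= cut_out), rated_p, 0.0))

# Static estimate: numerically integrate pdf x power curve (trapezoidal rule)
v = np.linspace(0.0, 30.0, 3001)
f = power_curve(v) * weibull_pdf(v, k=2.0, c=8.0)
mean_power = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(v)))
print(round(mean_power, 1))  # mean power output in kW
```

    The quasi-dynamic reference described in the abstract would instead average the power curve evaluated over the measured hourly wind-speed time series; the relative error ε compares that average with the integral above.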

  17. Using heat as a tracer to estimate spatially distributed mean residence times in the hyporheic zone of a riffle-pool sequence

    Science.gov (United States)

    Naranjo, Ramon C.

    2013-01-01

    Biochemical reactions that occur in the hyporheic zone are highly dependent on the time that solutes are in contact with sediments of the riverbed. In this investigation, we developed a 2-D longitudinal flow and solute-transport model to estimate the spatial distribution of mean residence time in the hyporheic zone. The flow model was calibrated using observations of temperature and pressure, and the mean residence times were simulated using the age-mass approach for steady-state flow conditions. The approach used in this investigation includes the mixing of different ages and flow paths of water through advection and dispersion. Uncertainty of flow and transport parameters was evaluated using standard Monte Carlo and the generalized likelihood uncertainty estimation method. Results of parameter estimation support the presence of a low-permeable zone in the riffle area that induced horizontal flow at a shallow depth within the riffle area. This establishes shallow and localized flow paths and limits deep vertical exchange. For the optimal model, mean residence times were found to be relatively long (9–40 days). The uncertainty of hydraulic conductivity resulted in a mean interquartile range (IQR) of 13 days across all piezometers and was reduced by 24% with the inclusion of temperature and pressure observations. To a lesser extent, uncertainty in streambed porosity and dispersivity resulted in a mean IQR of 2.2 and 4.7 days, respectively. Alternative conceptual models demonstrate the importance of accounting for the spatial distribution of hydraulic conductivity in simulating mean residence times in a riffle-pool sequence.

  18. Influence of the level of fit of a density probability function to wind-speed data on the WECS mean power output estimation

    Energy Technology Data Exchange (ETDEWEB)

    Carta, Jose A. [Department of Mechanical Engineering, University of Las Palmas de Gran Canaria, Campus de Tafira s/n, 35017 Las Palmas de Gran Canaria, Canary Islands (Spain); Ramirez, Penelope; Velazquez, Sergio [Department of Renewable Energies, Technological Institute of the Canary Islands, Pozo Izquierdo Beach s/n, 35119 Santa Lucia, Gran Canaria, Canary Islands (Spain)

    2008-10-15

    Static methods which are based on statistical techniques to estimate the mean power output of a WECS (wind energy conversion system) have been widely employed in the scientific literature related to wind energy. In the static method which we use in this paper, for a given wind regime probability distribution function and a known WECS power curve, the mean power output of a WECS is obtained by resolving the integral, usually using numerical evaluation techniques, of the product of these two functions. In this paper an analysis is made of the influence of the level of fit between an empirical probability density function of a sample of wind speeds and the probability density function of the adjusted theoretical model on the relative error ε made in the estimation of the mean annual power output of a WECS. The mean power output calculated through the use of a quasi-dynamic or chronological method, that is to say using time-series of wind speed data and the power versus wind speed characteristic of the wind turbine, serves as the reference. The suitability of the distributions is judged from the adjusted R² statistic (Rₐ²). Hourly mean wind speeds recorded at 16 weather stations located in the Canarian Archipelago, an extensive catalogue of wind-speed probability models and two wind turbines of 330 and 800 kW rated power are used in this paper. Among the general conclusions obtained, the following can be pointed out: (a) that the Rₐ² statistic might be useful as an initial gross indicator of the relative error made in the mean annual power output estimation of a WECS when a probabilistic method is employed; (b) the relative errors tend to decrease, in accordance with a trend line defined by a second-order polynomial, as Rₐ² increases. (author)

  19. A Survey of Methods for Computing Best Estimates of Endoatmospheric and Exoatmospheric Trajectories

    Science.gov (United States)

    Bernard, William P.

    2018-01-01

    Beginning with the mathematical prediction of planetary orbits in the early seventeenth century up through the most recent developments in sensor fusion methods, many techniques have emerged that can be employed on the problem of endo- and exoatmospheric trajectory estimation. Although early methods were ad hoc, the twentieth century saw the emergence of many systematic approaches to estimation theory that produced a wealth of useful techniques. The broad genesis of estimation theory has resulted in an equally broad array of mathematical principles, methods and vocabulary. Among the fundamental ideas and methods that are briefly touched on are batch and sequential processing, smoothing, estimation, and prediction, sensor fusion, sensor fusion architectures, data association, Bayesian and non-Bayesian filtering, the family of Kalman filters, models of the dynamics of the phases of a rocket's flight, and asynchronous, delayed, and asequent data. Along the way, a few trajectory estimation issues are addressed and much of the vocabulary is defined.

  20. COMPARING H{alpha} AND H I SURVEYS AS MEANS TO A COMPLETE LOCAL GALAXY CATALOG IN THE ADVANCED LIGO/VIRGO ERA

    Energy Technology Data Exchange (ETDEWEB)

    Metzger, Brian D. [Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08542 (United States); Kaplan, David L. [Physics Department, University of Wisconsin-Milwaukee, Milwaukee, WI 53211 (United States); Berger, Edo, E-mail: bmetzger@astro.princeton.edu, E-mail: kaplan@uwm.edu, E-mail: eberger@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2013-02-20

    Identifying the electromagnetic counterparts of gravitational wave (GW) sources detected by upcoming networks of advanced ground-based interferometers will be challenging, due in part to the large number of unrelated astrophysical transients within the ~10-100 deg² sky localizations. A potential way to greatly reduce the number of such false positives is to limit detailed follow-up to only those candidates near galaxies within the GW sensitivity range of ~200 Mpc for binary neutron star mergers. Such a strategy is currently hindered by the fact that galaxy catalogs are grossly incomplete within this volume. Here, we compare two methods for completing the local galaxy catalog: (1) a narrowband Hα imaging survey and (2) an H I emission line radio survey. Using Hα fluxes, stellar masses (M⋆), and star formation rates (SFRs) from galaxies in the Sloan Digital Sky Survey (SDSS), combined with H I data from the GALEX Arecibo SDSS Survey and the Herschel Reference Survey, we estimate that an Hα survey with a luminosity sensitivity of L_Hα = 10⁴⁰ erg s⁻¹ at 200 Mpc could achieve a completeness of f_SFR^Hα ≈ 75% with respect to total SFR, but only f_M⋆^Hα ≈ 33% with respect to M⋆ (due to lack of sensitivity to early-type galaxies). These numbers are significantly lower than those achieved by an idealized spectroscopic survey due to the loss of Hα flux resulting from resolving out nearby galaxies and the inability to correct for the underlying stellar continuum. An H I survey with sensitivity similar to the proposed WALLABY survey on ASKAP could achieve f_SFR^HI ≈ 80% and f_M⋆^HI ≈ 50%, somewhat higher than that of the Hα survey. Finally, both Hα and H I surveys should achieve ≳50% completeness with respect to the host galaxies of

  1. Estimation of solar irradiance maps in Extremadura (Spain) by means of meteorological parameters; Mapas de radiacion solar de Extremadura estimada a partir de otros parametros meteorologicos

    Energy Technology Data Exchange (ETDEWEB)

    Ramiro, A.; Nunez, M.; Reyes, J. J.; Gonzalez, J. F.; Sabio, E.; Gonzalez-Garcia, C. M.; Ganan, J.; Roman, S.

    2004-07-01

    In a previous work, we found correlation expressions that permit estimation of the mean monthly values of daily diffuse and direct solar irradiation on a horizontal surface as a function of some weather parameters. In this work, the incident radiation on a horizontal surface has been estimated in thirty zones of Extremadura by means of weather data from existing stations located in these zones and their orography. The weather data used were the monthly average values of the maximum temperatures and the sunshine fraction, obtained from measurements carried out at the weather stations during the period 1985-2002. The results are presented as interactive maps in ArcView, associated with a conventional database. (Author)
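
    Sunshine-fraction correlations of the kind described are commonly written in the Angström-Prescott form. The sketch below assumes that form with the generic textbook coefficients (a = 0.25, b = 0.50); the paper's fitted Extremadura coefficients are not reproduced in the abstract, so these values and the example inputs are purely illustrative.

```python
def angstrom_prescott(sunshine_fraction, h0, a=0.25, b=0.50):
    """Angstrom-Prescott correlation H/H0 = a + b*(S/S0): monthly-mean daily
    global irradiation H on a horizontal surface, from the relative sunshine
    duration S/S0 and the extraterrestrial irradiation H0 [MJ/m^2/day].
    a and b are site-fitted coefficients."""
    return (a + b * sunshine_fraction) * h0

# A bright summer month: relative sunshine S/S0 = 0.8, H0 = 40 MJ/m^2/day
h = angstrom_prescott(0.8, 40.0)
print(h)  # → 26.0
```

    Mapping such point estimates across thirty zones, as the paper does, then reduces to evaluating the fitted correlation at each station and interpolating spatially.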

  2. An agent-based approach with collaboration among agents. Estimation of wholesale electricity price on PJM and artificial data generated by a mean reverting model

    International Nuclear Information System (INIS)

    Sueyoshi, Toshiyuki

    2010-01-01

    This study examines the performance of MAIS (Multi-Agent Intelligent Simulator) equipped with various learning capabilities. In addition to the learning capabilities, the proposed MAIS incorporates collaboration among agents. The proposed MAIS is applied to estimate a dynamic change of wholesale electricity price in PJM (Pennsylvania-New Jersey-Mainland) and an artificial data set generated by a mean reverting model. Using such different types of data sets, the methodological validity of MAIS is confirmed by comparing it with other well-known alternatives in computer science. This study finds that the MAIS needs to incorporate both the mean reverting model and the collaboration behavior among agents in order to enhance its estimation capability. The MAIS discussed in this study will provide research on energy economics with a new numerical capability that can investigate a dynamic change of not only wholesale electricity price but also speculation and learning process of traders. (author)

  3. Estimating the oligoelement requirements of children subject to exclusively parenteral nutrition by means of neutron activation analysis

    International Nuclear Information System (INIS)

    Maziere, B.; Gros, J.; Comar, D.

    1979-01-01

    Because of the rich and varied food he eats, deficiencies in oligoelements of dietary origin are very rarely found in man. However, several cases of zinc and copper deficiency have been reported in adults and children subject to prolonged entirely parenteral nutrition. In the present case ten children (eight infants of less than 18 months and two children aged between 2 and 8 years) fed exclusively by intracardiac catheter on a reconstituted diet were studied. The serum concentrations of copper, manganese, selenium and zinc in the children fed on this artificial diet were measured by neutron activation and gamma spectrometry, both with and without chemical separation. The values obtained in the young patients and in controls of the same age were compared. The result of these comparisons and a study of the kinetics of serum concentrations in the patients (one analysis every 20 days for 90 days) enabled us to determine that there was a balanced intake of copper, an excess of manganese and a considerable deficiency in zinc and selenium. In view of these observations, the diet was modified and it was established that the serum oligoelement content followed changes in oligoelement intake. Thus the serum concentrations of selenium and zinc were restored in a few weeks - completely in the case of selenium with an intake three times higher (3 μg/kg/24 h) and incompletely in the case of zinc with the intake doubled (50 μg/kg/24 h). On the basis of these results and kinetic data on the mineral metabolism, we have been able to estimate the copper, manganese, selenium and zinc requirements of children undergoing parenteral nutrition. (author)

  4. Landuse effects on runoff generating processes in tussock grassland indicated by mean transit time estimation using tritium

    Science.gov (United States)

    Stewart, M. K.; Fahey, B. D.

    2010-02-01

    The east Otago uplands of New Zealand's South Island have long been studied because of the environmental consequences of converting native tussock grasslands to other land covers, notably forestry and pasture for stock grazing. Early studies showed that afforestation substantially reduced annual water yield, stream peak flows, and 7-day low flows, mainly as a consequence of increased interception. Tritium measurements have indicated that surprisingly old water is present in catchments GH1 and GH2, and the small headwater wetland and catchment (GH5). The old water contributes strongly to baseflow (and therefore also to quickflow). The data have been simulated assuming the presence of two types of water in the baseflow, young water from shallow aquifers connecting hillside regolith with the stream, and old water from deep bedrock aquifers, respectively. The mean transit time of the young water is of the order of months, while that of the old water is 25-26 years as revealed by the presence of tritium originating from the bomb-peak in NZ rainfall in late 1960s and early 1970s. Such a long transit time indicates slow release from groundwater reservoirs within the bedrock, which constitute by far the larger of the water stores. Comparison of the results from catchments GH1 (tussock) and GH2 (pine forest) suggests that about equal quantities of water (85 mm annually) are contributed from the deep aquifers in the two catchments, although runoff from the shallow aquifers has been strongly reduced by afforestation in GH2.

  5. An estimate of the veteran population in England: based on data from the 2007 Adult Psychiatric Morbidity Survey.

    Science.gov (United States)

    Woodhead, Charlotte; Sloggett, Andy; Bray, Issy; Bradbury, Jason; McManus, Sally; Meltzer, Howard; Brugha, Terry; Jenkins, Rachel; Greenberg, Neil; Wessely, Simon; Fear, Nicola

    2009-01-01

    The health and well-being of military veterans has recently generated much media and political interest. Estimating the current and future size of the veteran population is important to the planning and allocation of veteran support services. Data from a 2007 nationally representative residential survey of England (the Adult Psychiatric Morbidity Survey) were extrapolated to the whole population to estimate the number of veterans currently residing in private households in England. This population was projected forward in two ten-year blocks up to 2027 using a current life table. It was estimated that in 2007, 3,771,534 (95% CI: 2,986,315-4,910,205) veterans were living in residential households in England. By 2027, this figure was predicted to decline by 50.4 per cent, mainly due to large reductions in the number of veterans in the older age groups (65-74 and 75+ years). Approximately three to five million veterans are currently estimated to be living in the community in England. As the proportion of National Service veterans reduces with time, the veteran population is expected to halve over the next 20 years.
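
    The projection step, carrying age bands forward in ten-year blocks with survival probabilities, can be sketched as follows. The cohort counts and survival values below are invented for illustration and are not the survey's life-table figures; with these toy numbers the 20-year decline happens to come out near half, in the spirit of the abstract's result.

```python
def project_population(age_groups, ten_year_survival, steps=2):
    """Carry a population forward in ten-year blocks: each age band is
    thinned by its survival probability and shifted up one band; the
    oldest (open-ended) band absorbs its own survivors."""
    groups = list(age_groups)
    for _ in range(steps):
        survivors = [g * s for g, s in zip(groups, ten_year_survival)]
        aged = [0.0] * len(groups)
        for i, n in enumerate(survivors):
            aged[min(i + 1, len(groups) - 1)] += n
        groups = aged
    return groups

# Hypothetical veteran counts (thousands) in bands 45-54, 55-64, 65-74, 75+
start = [900.0, 1200.0, 1000.0, 670.0]
proj = project_population(start, [0.95, 0.90, 0.75, 0.35], steps=2)
remaining = sum(proj) / sum(start)
print(round(remaining, 2))  # fraction of the cohort remaining after 20 years
```

    A real projection would use age- and sex-specific survival from a current national life table, but the mechanics are the same.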

  6. The estimation of patients' views on organizational aspects of a general dental practice by general dental practitioners: a survey study

    Directory of Open Access Journals (Sweden)

    Truin Gert-Jan

    2011-10-01

    Full Text Available Abstract Background Considering the changes in dental healthcare, such as the increasing assertiveness of patients, the introduction of new dental professionals, and regulated competition, it becomes more important that general dental practitioners (GDPs) take patients' views into account. The aim of the study was to compare patients' views on organizational aspects of general dental practices with those of GDPs and with GDPs' estimation of patients' views. Methods In a survey study, patients and GDPs provided their views on organizational aspects of a general dental practice. In a second, separate survey, GDPs were invited to estimate patients' views on 22 organizational aspects of a general dental practice. Results For 4 of the 22 aspects, patients and GDPs had the same views, and GDPs estimated patients' views reasonably well: 'Dutch-speaking GDP', 'guarantee on treatment', 'treatment by the same GDP', and 'reminder of routine oral examination'. For 2 aspects ('quality assessment' and 'accessibility for disabled patients'), patients and GDPs had the same standards, although the GDPs underestimated the patients' standards. Patients had higher standards than GDPs for 7 aspects and lower standards than GDPs for 8 aspects. Conclusion On most aspects GDPs and patients have different views, except for socially desirable aspects. Given the increasing assertiveness of patients, it is startling that GDPs estimated only half of the patients' views correctly. The findings of the study can assist GDPs in adapting their organizational services to better meet the preferences of their patients and to improve communication with patients.

  7. Determination of mean recency period for estimation of HIV type 1 Incidence with the BED-capture EIA in persons infected with diverse subtypes.

    Science.gov (United States)

    Parekh, Bharat S; Hanson, Debra L; Hargrove, John; Branson, Bernard; Green, Timothy; Dobbs, Trudy; Constantine, Niel; Overbaugh, Julie; McDougal, J Steven

    2011-03-01

    The IgG capture BED enzyme immunoassay (BED-CEIA) was developed to detect recent HIV-1 infection for the estimation of HIV-1 incidence from cross-sectional specimens. The mean time interval between seroconversion and reaching a specified assay cutoff value [referred to here as the mean recency period (ω)], an important parameter for incidence estimation, is determined for some HIV-1 subtypes, but testing in more cohorts and new statistical methods suggest the need for a revised estimation of ω in different subtypes. A total of 2927 longitudinal specimens from 756 persons with incident HIV infections who had been enrolled in 17 cohort studies was tested by the BED-CEIA. The ω was determined using two statistical approaches: (1) linear mixed effects regression (ω(1)) and (2) a nonparametric survival method (ω(2)). Recency periods varied among individuals and by population. At an OD-n cutoff of 0.8, ω(1) was 176 days (95% CL 164-188 days) whereas ω(2) was 162 days (95% CL 152-172 days) when using a comparable subset of specimens (13 cohorts). When method 2 was applied to all available data (17 cohorts), ω(2) ranged from 127 days (Thai AE) to 236 days (subtypes AG, AD) with an overall ω(2) of 197 days (95% CL 173-220). About 70% of individuals reached a threshold OD-n of 0.8 by 197 days (mean ω) and 95% of people reached 0.8 OD-n by 480 days. The determination of ω with more data and new methodology suggests that ω of the BED-CEIA varies between different subtypes and/or populations. These estimates for ω may affect incidence estimates in various studies.

  8. A wide range survey meter for estimating γ- and β-dose rates

    International Nuclear Information System (INIS)

    Jones, A.R.

    1980-09-01

    A survey meter has been developed to measure β-dose rates in the range 0.1 - 100 rad/h (1 mGy/h - 1 Gy/h) and γ-dose rates in the range 1 mrad/h - 100 rad/h (10 μGy/h-1 Gy/h). It also provides an audible warning of high γ-dose rates and an audible and visible warning when a predetermined γ-dose is exceeded. The report describes the design of the survey meter and presents data measured on the performance of an engineering prototype. Factors which affect performance and have been investigated are temperature, battery voltage (and type of battery), GM counter counting loss, direction of incident radiation, and energy of γ-rays. Finally, the application and calibration of the survey meter are discussed. (auth)

  9. Cost-effective sampling of 137Cs-derived net soil redistribution: part 1 – estimating the spatial mean across scales of variation

    International Nuclear Information System (INIS)

    Li, Y.; Chappell, A.; Nyamdavaa, B.; Yu, H.; Davaasuren, D.; Zoljargal, K.

    2015-01-01

    … redistribution across scales of variation.
    • Cost-effective sampling was compared using a case study from the Chinese Loess Plateau.
    • We recommend estimating the spatial mean using an innovative sampling design.

  10. Runoff generating processes in adjacent tussock grassland and pine plantation catchments as indicated by mean transit time estimation using tritium

    Directory of Open Access Journals (Sweden)

    M. K. Stewart

    2010-06-01

    Full Text Available The east Otago uplands of New Zealand's South Island have long been studied because of the environmental consequences of converting native tussock grasslands to other land covers, notably forestry and pasture for stock grazing. Early studies showed that afforestation substantially reduced annual water yield, stream peak flows, and 7-day low flows, mainly as a consequence of increased interception. Tritium measurements have indicated that surprisingly old water is present in catchments GH1 and GH2, and the small headwater wetland and catchment (GH5), and contributes strongly to baseflow. The data have been simulated assuming the presence of two types of water in the baseflow, young water from shallow aquifers connecting hillside regolith with the stream, and old water from deep bedrock aquifers, respectively. The mean transit time of the young water is approximately one month, while that of the old water is 25–26 years as revealed by the presence of tritium originating from the bomb-peak in NZ rainfall in late 1960s and early 1970s. Such a long transit time indicates slow release from groundwater reservoirs within the bedrock, which constitute by far the larger of the water stores. Comparison of the results from catchments GH1 (tussock) and GH2 (pine forest) suggests that about equal quantities of water (85 mm/a) are contributed from the deep aquifers in the two catchments, although runoff from the shallow aquifers has been strongly reduced by afforestation in GH2. This study has revealed the presence of a long transit time component of water in runoff in a catchment with crystalline metamorphic bedrock.
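
    The tritium reasoning can be illustrated with a toy calculation: radioactive decay over a piston-flow transit plus two-component mixing in baseflow. The study itself fitted full transit-time distribution models to the NZ rainfall tritium record; the TU values and the 50/50 mixing fraction below are illustrative assumptions only.

```python
import math

HALF_LIFE = 12.32                     # tritium half-life in years
LAM = math.log(2) / HALF_LIFE         # decay constant

def piston_flow(input_tu, transit_years):
    """Piston-flow approximation: rain with a given tritium content (TU)
    reaches the stream transit_years later, reduced only by decay."""
    return input_tu * math.exp(-LAM * transit_years)

def baseflow_mix(young_tu, old_tu, young_fraction):
    """Two-component baseflow: young shallow-aquifer water mixed with
    old bedrock water."""
    return young_fraction * young_tu + (1.0 - young_fraction) * old_tu

# Illustration: bomb-peak-era rain (assume 30 TU) arriving after a 26-year
# transit still stands out against modern rain (assume 3 TU)
old = piston_flow(30.0, 26.0)
mixed = baseflow_mix(3.0, old, young_fraction=0.5)
print(round(old, 2), round(mixed, 2))
```

    The elevated mixed concentration relative to modern rain is the kind of signal that reveals the old bomb-peak component in baseflow.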

  11. Runoff generating processes in adjacent tussock grassland and pine plantation catchments as indicated by mean transit time estimation using tritium

    Science.gov (United States)

    Stewart, M. K.; Fahey, B. D.

    2010-06-01

    The east Otago uplands of New Zealand's South Island have long been studied because of the environmental consequences of converting native tussock grasslands to other land covers, notably forestry and pasture for stock grazing. Early studies showed that afforestation substantially reduced annual water yield, stream peak flows, and 7-day low flows, mainly as a consequence of increased interception. Tritium measurements have indicated that surprisingly old water is present in catchments GH1 and GH2, and the small headwater wetland and catchment (GH5), and contributes strongly to baseflow. The data have been simulated assuming the presence of two types of water in the baseflow, young water from shallow aquifers connecting hillside regolith with the stream, and old water from deep bedrock aquifers, respectively. The mean transit time of the young water is approximately one month, while that of the old water is 25-26 years as revealed by the presence of tritium originating from the bomb-peak in NZ rainfall in the late 1960s and early 1970s. Such a long transit time indicates slow release from groundwater reservoirs within the bedrock, which constitute by far the larger of the water stores. Comparison of the results from catchments GH1 (tussock) and GH2 (pine forest) suggests that about equal quantities of water (85 mm/a) are contributed from the deep aquifers in the two catchments, although runoff from the shallow aquifers has been strongly reduced by afforestation in GH2. This study has revealed the presence of a long transit time component of water in runoff in a catchment with crystalline metamorphic bedrock.
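
    The two-component baseflow interpretation above can be sketched as a simple mixing calculation; the tritium concentrations below are hypothetical placeholders, not values from the study.

```python
# Two-component baseflow mixing sketch. Stream tritium is treated as a
# flow-weighted mixture of young (shallow-aquifer) and old (bedrock) water.
# All concentrations below are hypothetical, in tritium units (TU).

def old_water_fraction(c_stream, c_young, c_old):
    """Solve c_stream = f * c_old + (1 - f) * c_young for the old fraction f."""
    return (c_stream - c_young) / (c_old - c_young)

f_old = old_water_fraction(c_stream=2.4, c_young=2.8, c_old=1.8)
print(round(f_old, 2))  # fraction of old bedrock water in baseflow
```

    The same fraction, multiplied by total runoff, gives the deep-aquifer contribution (the ~85 mm/a figure in the study).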

  12. Evaluation of alternative age-based methods for estimating relative abundance from survey data in relation to assessment models

    DEFF Research Database (Denmark)

    Berg, Casper Willestofte; Nielsen, Anders; Kristensen, Kasper

    2014-01-01

    Indices of abundance from fishery-independent trawl surveys constitute an important source of information for many fish stock assessments. Indices are often calculated using area stratified sample means on age-disaggregated data, and finally treated in stock assessment models as independent...... observations. We evaluate a series of alternative methods for calculating indices of abundance from trawl survey data (delta-lognormal, delta-gamma, and Tweedie using Generalized Additive Models) as well as different error structures for these indices when used as input in an age-based stock assessment model...... the different indices produced. The stratified mean method is found much more imprecise than the alternatives based on GAMs, which are found to be similar. Having time-varying index variances is found to be of minor importance, whereas the independence assumption is not only violated but has significant impact...
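
    The area-stratified sample mean index that the paper compares against the GAM-based alternatives can be sketched as follows; the stratum areas and catches per haul below are hypothetical, not survey data.

```python
# Area-weighted stratified mean abundance index (sketch, hypothetical data).
# Each stratum's mean catch-per-haul is weighted by the stratum's area share.

def stratified_index(strata):
    """strata: list of (stratum_area, list of catches per haul)."""
    total_area = sum(area for area, _ in strata)
    return sum((area / total_area) * (sum(c) / len(c)) for area, c in strata)

strata = [
    (100.0, [4, 6, 5]),   # small stratum: three hauls
    (300.0, [10, 14]),    # large stratum: two hauls
]
print(stratified_index(strata))  # → 10.25
```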

  13. Health Insurance Coverage: Early Release of Estimates from the National Health Interview Survey, January -- June 2013

    Science.gov (United States)

    ... from 2010 to 2013 were also evaluated using logistic regression analysis. State-specific health insurance estimates are ... coverage options; compare health insurance plans based on cost, benefits, and other important features; choose a plan; ...

  14. The estimation of sea floor dynamics from bathymetric surveys of a sand wave area

    NARCIS (Netherlands)

    Dorst, Leendert; Roos, Pieter C.; Hulscher, Suzanne J.M.H.; Lindenbergh, R.C.

    2009-01-01

    The analysis of series of offshore bathymetric surveys provides insight into the morphodynamics of the sea floor. This knowledge helps to improve resurvey policies for the maintenance of port approaches and nautical charting, and to validate morphodynamic models. We propose a method for such an

  15. USDA Forest Service National Woodland Owner Survey, 2011-2013: design, implementation, and estimation methods

    Science.gov (United States)

    Brett J. Butler; Brenton J. Dickinson; Jaketon H. Hewes; Sarah M. Butler; Kyle Andrejczyk; Marla. Markowski-Lindsay

    2016-01-01

    The National Woodland Owner Survey (NWOS) is conducted by the U.S. Forest Service, Forest Inventory and Analysis program to increase the understanding of the attitudes, behaviors, and demographics of private forest and woodland ownerships across the United States. The information is intended to help policy makers, resource managers, educators, service providers, and...

  16. Biodiversity estimates from different camera trap surveys: a case study from Osogovo Mt., Bulgaria

    Directory of Open Access Journals (Sweden)

    Diana P. Zlatanova

    2018-06-01

    Full Text Available Inventorying mammal assemblages is vital for their conservation and management, especially when they include rare or endangered species. However, obtaining a correct estimation of the species diversity in a particular area can be challenging due to uncertainties regarding study design and duration. In this paper, we present the biodiversity estimates derived from three unrelated camera trap studies in Osogovo Mt., Bulgaria. They have different duration and positioning schemes of the camera trap locations: Study 1 – grid based, 34 days; Study 2 – random points based, 138 days; Study 3 – locations based on expert opinion, 1437 days. Utilising EstimateS, we compare a number of estimators (Shannon diversity index, Coleman rarefaction curve, ACE (Abundance-based Coverage Estimator), ICE (Incidence-based Coverage Estimator), Chao 1, Chao 2 and Jackknife estimators) to the number of present and confirmed and/or potentially present mammals (excluding bats) in the mountains. A total of 17 mammal species were registered in the three studies, which represents around 76% of the permanently present mammals in the mountain that inhabit its forested area and can be detected by a camera trap. The results point to some guidelines that can aid future camera trap research in temperate forested areas. A grid-based design works best for very short study periods (e.g. 10 days), while the opportunistic expert-based positioning scheme provides good results for longer studies (approx. a month). However, the grid-based design needs to be further tested for longer periods. Generally, the random points approach does not yield satisfactory results. In agreement with other studies, analysis based on the Jackknife procedure (Jack 2) appears to result in the best estimate of species richness. When performing camera trap studies, special care should be taken to minimise the number of unidentifiable photos and to take into account «trap-shy» individuals. The results from this
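
    The Chao 1 and first-order jackknife estimators named above follow standard closed forms (f1 = species detected exactly once, f2 = detected exactly twice); the detection counts below are hypothetical, not Osogovo data.

```python
# Standard nonparametric species-richness estimators (sketch).

def chao1(counts):
    """Chao 1: S_obs + f1^2 / (2 * f2), with the bias-corrected form when f2 = 0."""
    s_obs = len(counts)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2
    return s_obs + f1 ** 2 / (2 * f2)

def jackknife1(counts, n_samples):
    """First-order jackknife: S_obs + f1 * (n - 1) / n over n sampling units."""
    f1 = sum(1 for c in counts if c == 1)
    return len(counts) + f1 * (n_samples - 1) / n_samples

# Hypothetical detection counts for 10 species across camera-trap records:
counts = [12, 7, 5, 3, 2, 2, 1, 1, 1, 1]
print(chao1(counts))                      # → 14.0
print(jackknife1(counts, n_samples=34))   # 34 sampling occasions
```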

  17. Survey of State-Level Cost and Benefit Estimates of Renewable Portfolio Standards

    Energy Technology Data Exchange (ETDEWEB)

    Heeter, J.; Barbose, G.; Bird, L.; Weaver, S.; Flores-Espino, F.; Kuskova-Burns, K.; Wiser, R.

    2014-05-01

    Most renewable portfolio standards (RPS) have five or more years of implementation experience, enabling an assessment of their costs and benefits. Understanding RPS costs and benefits is essential for policymakers evaluating existing RPS policies, assessing the need for modifications, and considering new policies. This study provides an overview of methods used to estimate RPS compliance costs and benefits, based on available data and estimates issued by utilities and regulators. Over the 2010-2012 period, average incremental RPS compliance costs in the United States were equivalent to 0.8% of retail electricity rates, although substantial variation exists around this average, both from year-to-year and across states. The methods used by utilities and regulators to estimate incremental compliance costs vary considerably from state to state and a number of states are currently engaged in processes to refine and standardize their approaches to RPS cost calculation. The report finds that state assessments of RPS benefits have most commonly attempted to quantitatively assess avoided emissions and human health benefits, economic development impacts, and wholesale electricity price savings. Compared to the summary of RPS costs, the summary of RPS benefits is more limited, as relatively few states have undertaken detailed benefits estimates, and then only for a few types of potential policy impacts. In some cases, the same impacts may be captured in the assessment of incremental costs. For these reasons, and because methodologies and level of rigor vary widely, direct comparisons between the estimates of benefits and costs are challenging.

  18. Older adults' beliefs about physician-estimated life expectancy: a cross-sectional survey

    Directory of Open Access Journals (Sweden)

    Bynum Debra L

    2006-02-01

    Full Text Available Abstract Background Estimates of life expectancy assist physicians and patients in medical decision-making. The time-delayed benefits for many medical treatments make an older adult's life expectancy estimate particularly important for physicians. The purpose of this study is to assess older adults' beliefs about physician-estimated life expectancy. Methods We performed a mixed qualitative-quantitative cross-sectional study in which 116 healthy adults aged 70+ were recruited from two local retirement communities. We interviewed them regarding their beliefs about physician-estimated life expectancy in the context of a larger study on cancer screening beliefs. Semi-structured interviews of 80 minutes average duration were performed in private locations convenient to participants. Demographic characteristics as well as cancer screening beliefs and beliefs about life expectancy were measured. Two independent researchers reviewed the open-ended responses and recorded the most common themes. The research team resolved disagreements by consensus. Results This article reports the life-expectancy results portion of the larger study. The study group (n = 116) was comprised of healthy, well-educated older adults, with almost a third over 85 years old, and none meeting criteria for dementia. Sixty-four percent (n = 73) felt that their physicians could not correctly estimate their life expectancy. Sixty-six percent (n = 75) wanted their physicians to talk with them about their life expectancy. The themes that emerged from our study indicate that discussions of life expectancy could help older adults plan for the future, maintain open communication with their physicians, and provide them knowledge about their medical conditions.
Conclusion The majority of the healthy older adults in this study were open to discussions about life expectancy in the context of discussing cancer screening tests, despite awareness that their physicians' estimates could be inaccurate.

  19. Diagnosis, prevalence estimation and burden measurement in population surveys of headache: presenting the HARDSHIP questionnaire

    OpenAIRE

    Steiner, Timothy J; Gururaj, Gopalakrishna; Andrée, Colette; Katsarava, Zaza; Ayzenberg, Ilya; Yu, Sheng-Yuan; Al Jumah, Mohammed; Tekle-Haimanot, Redda; Birbeck, Gretchen L; Herekar, Arif; Linde, Mattias; Mbewe, Edouard; Manandhar, Kedar; Risal, Ajay; Jensen, Rigmor

    2014-01-01

    The global burden of headache is very large, but knowledge of it is far from complete and needs still to be gathered. Published population-based studies have used variable methodology, which has influenced findings and made comparisons difficult. The Global Campaign against Headache is undertaking initiatives to improve and standardize methods in use for cross-sectional studies. One requirement is for a survey instrument with proven cross-cultural validity. This report describes the developme...

  20. Estimating usual intakes mainly affects the micronutrient distribution among infants, toddlers and pre-schoolers from the 2012 Mexican National Health and Nutrition Survey.

    Science.gov (United States)

    Piernas, Carmen; Miles, Donna R; Deming, Denise M; Reidy, Kathleen C; Popkin, Barry M

    2016-04-01

    To compare estimates from one day with usual intake estimates to evaluate how the adjustment for within-person variability affected nutrient intake and adequacy in Mexican children. In order to obtain usual nutrient intakes, the National Cancer Institute's method was used to correct the first 24 h dietary recall collected in the entire sample (n 2045) with a second 24 h recall collected in a sub-sample (n 178). We computed estimates of one-day and usual intakes of total energy, fat, Fe, Zn and Na. 2012 Mexican National Health and Nutrition Survey. A total of 2045 children were included: 0-5·9 months old (n 182), 6-11·9 months old (n 228), 12-23·9 months old (n 537) and 24-47·9 months old (n 1098). From these, 178 provided an additional dietary recall. Although we found small or no differences in energy intake (kJ/d and kcal/d) between one-day v. usual intake means, the prevalence of inadequate and excessive energy intake decreased somewhat when using measures of usual intake relative to one day. Mean fat intake (g/d) was not different between one-day and usual intake among children >6 months old, but the prevalence of inadequate and excessive fat intake was overestimated among toddlers and pre-schoolers when using one-day intake (P < 0·05) among children >6 months. There was overall low variability in energy and fat intakes but higher for micronutrients. Because the usual intake distributions are narrower, the prevalence of inadequate/excessive intakes may be biased when estimating nutrient adequacy if one day of data is used.
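
    The NCI method itself is involved, but the reason usual-intake distributions are narrower can be illustrated with a much simpler classical shrinkage adjustment (not the method used in the paper); the recall values and within-person variance below are hypothetical.

```python
# Simplified within-person-variability adjustment (NOT the NCI method,
# which additionally models skewness and covariates). Each observed one-day
# intake is pulled toward the group mean by the ratio of between-person
# variance to total variance, narrowing the distribution while preserving
# the mean.
import statistics

def shrink_to_usual(one_day, within_var):
    mean = statistics.mean(one_day)
    total_var = statistics.variance(one_day)
    between_var = max(total_var - within_var, 0.0)
    scale = (between_var / total_var) ** 0.5
    return [mean + scale * (x - mean) for x in one_day]

one_day = [1200, 1500, 1800, 2100, 2400]          # hypothetical kcal/d recalls
usual = shrink_to_usual(one_day, within_var=90000)
print(statistics.variance(usual) < statistics.variance(one_day))  # True: narrower
```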

  1. The CSIRO Healthy Diet Score: An Online Survey to Estimate Compliance with the Australian Dietary Guidelines

    Directory of Open Access Journals (Sweden)

    Gilly A. Hendrie

    2017-01-01

    Full Text Available There are few dietary assessment tools that are scientifically developed and freely available online. The Commonwealth Scientific and Industrial Research Organisation (CSIRO) Healthy Diet Score survey asks questions about the quantity, quality, and variety of foods consumed. On completion, individuals receive a personalised Diet Score—reflecting their overall compliance with the Australian Dietary Guidelines. Over 145,000 Australians have completed the survey since it was launched in May 2015. The average Diet Score was 58.8 out of a possible 100 (SD = 12.9). Women scored higher than men; older adults higher than younger adults; and normal weight adults higher than obese adults. It was most common to receive feedback about discretionary foods (73.8% of the sample), followed by dairy foods (55.5%) and healthy fats (47.0%). Results suggest that Australians' diets are not consistent with the recommendations in the guidelines. The combination of using technology and providing the tool free of charge has attracted a lot of traffic to the website, providing valuable insights into what Australians report to be eating. The use of technology has also enhanced the user experience, with individuals receiving immediate and personalised feedback. This survey tool will be useful to monitor population diet quality and understand the degree to which Australians' diets comply with dietary guidelines.

  2. The CSIRO Healthy Diet Score: An Online Survey to Estimate Compliance with the Australian Dietary Guidelines.

    Science.gov (United States)

    Hendrie, Gilly A; Baird, Danielle; Golley, Rebecca K; Noakes, Manny

    2017-01-09

    There are few dietary assessment tools that are scientifically developed and freely available online. The Commonwealth Scientific and Industrial Research Organisation (CSIRO) Healthy Diet Score survey asks questions about the quantity, quality, and variety of foods consumed. On completion, individuals receive a personalised Diet Score-reflecting their overall compliance with the Australian Dietary Guidelines. Over 145,000 Australians have completed the survey since it was launched in May 2015. The average Diet Score was 58.8 out of a possible 100 (SD = 12.9). Women scored higher than men; older adults higher than younger adults; and normal weight adults higher than obese adults. It was most common to receive feedback about discretionary foods (73.8% of the sample), followed by dairy foods (55.5%) and healthy fats (47.0%). Results suggest that Australians' diets are not consistent with the recommendations in the guidelines. The combination of using technology and providing the tool free of charge has attracted a lot of traffic to the website, providing valuable insights into what Australians report to be eating. The use of technology has also enhanced the user experience, with individuals receiving immediate and personalised feedback. This survey tool will be useful to monitor population diet quality and understand the degree to which Australians' diets comply with dietary guidelines.

  3. Site-specific estimates of water yield applied in regional acid sensitivity surveys across western Canada

    Directory of Open Access Journals (Sweden)

    Patrick D. SHAW

    2010-08-01

    Full Text Available Runoff or water yield is an important input to the Steady-State Water Chemistry (SSWC) model for estimating critical loads of acidity. Herein, we present site-specific water yield estimates for a large number of lakes (779) across three provinces of western Canada (Manitoba, Saskatchewan, and British Columbia) using an isotope mass balance (IMB) approach. We explore the impact of applying site-specific hydrology as compared to use of regional runoff estimates derived from gridded datasets in assessing critical loads of acidity to these lakes. In general, the average water yield derived from IMB is similar to the long-term average runoff; however, IMB results suggest a much larger range in hydrological settings of the lakes, attributed to spatial heterogeneity in watershed characteristics and landcover. The comparison of critical loads estimates from the two methods suggests that use of average regional runoff data in the SSWC model may overestimate critical loads for the majority of lakes due to systematic skewness in the actual runoff distributions. Implications for use of site-specific hydrology in regional critical loads assessments across western Canada are discussed.
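
    A sketch of why the choice of runoff matters: in the SSWC model the critical load of acidity scales with water yield Q, commonly written CL(A) = ([BC*]0 - [ANC]limit) * Q. The chemistry values and runoffs below are hypothetical illustrative numbers, not data from the study.

```python
# SSWC critical-load sketch: the critical load is proportional to runoff Q,
# so an overestimated regional runoff inflates the critical load in direct
# proportion. Hypothetical values throughout.

def critical_load(bc0_ueq_per_l, anc_limit_ueq_per_l, runoff_m_per_yr):
    # ueq/L times m/yr gives meq/m2/yr
    return (bc0_ueq_per_l - anc_limit_ueq_per_l) * runoff_m_per_yr

site_specific = critical_load(60.0, 20.0, runoff_m_per_yr=0.15)  # IMB-derived Q
regional      = critical_load(60.0, 20.0, runoff_m_per_yr=0.30)  # gridded Q
print(site_specific, regional)  # doubling Q doubles the apparent critical load
```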

  4. Estimating adolescent risk for hearing loss based on data from a large school-based survey

    NARCIS (Netherlands)

    Vogel, L.; Verschuure, H.; Ploeg, C.P.B. van der; Brug, J.; Raat, H.

    2010-01-01

    Objectives. We estimated whether and to what extent a group of adolescents were at risk of developing permanent hearing loss as a result of voluntary exposure to high-volume music, and we assessed whether such exposure was associated with hearing-related symptoms. Methods. In 2007, 1512 adolescents

  5. Estimating adolescent risk for hearing loss based on data from a large school-based survey

    NARCIS (Netherlands)

    I. Vogel (Ineke); H. Verschuure (Hans); C.P.B. van der Ploeg (Catharina); J. Brug (Hans); H. Raat (Hein)

    2010-01-01

    textabstractObjectives. We estimated whether and to what extent a group of adolescents were at risk of developing permanent hearing loss as a result of voluntary exposure to high-volume music, and we assessed whether such exposure was associated with hearing-related symptoms. Methods. In 2007, 1512

  6. Education and Synthetic Work-Life Earnings Estimates. American Community Survey Reports. ACS-14

    Science.gov (United States)

    Julian, Tiffany; Kominski, Robert

    2011-01-01

    The relationship between education and earnings is a long-analyzed topic of study. Generally, there is a strong belief that achievement of higher levels of education is a well established path to better jobs and better earnings. This report provides one view of the economic value of educational attainment by producing an estimate of the amount of…

  7. MEAN OF MEDIAN ABSOLUTE DERIVATION TECHNIQUE MEAN ...

    African Journals Online (AJOL)

    eobe

    development of mean of median absolute derivation technique based on .... of noise mean to estimate the speckle noise variance. Noise mean property ..... Foraging Optimization,” International Journal of Advanced ...
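
    A median-absolute-deviation (MAD) scale estimate, the standard building block such noise-variance techniques rest on, can be sketched as follows; the sample values below are hypothetical.

```python
# Robust noise standard-deviation estimate via the median absolute
# deviation (MAD); the factor 1.4826 makes it consistent for Gaussian noise.
import statistics

def mad_sigma(samples):
    med = statistics.median(samples)
    mad = statistics.median([abs(x - med) for x in samples])
    return 1.4826 * mad

# One large outlier barely moves the MAD-based estimate:
print(round(mad_sigma([1.0, 1.2, 0.8, 1.1, 0.9, 5.0]), 4))
```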

  8. Estimating the Incidence of Acute Infectious Intestinal Disease in the Community in the UK: A Retrospective Telephone Survey.

    Directory of Open Access Journals (Sweden)

    Laura Viviani

    Full Text Available To estimate the burden of intestinal infectious disease (IID) in the UK and determine whether disease burden estimations using a retrospective study design differ from those using a prospective study design. A retrospective telephone survey undertaken in each of the four countries comprising the United Kingdom. Participants were randomly asked about illness either in the past 7 or 28 days. 14,813 individuals for all of whom we had a legible recording of their agreement to participate. Self-reported IID, defined as loose stools or clinically significant vomiting lasting less than two weeks, in the absence of a known non-infectious cause. The rate of self-reported IID varied substantially depending on whether asked for illness in the previous 7 or 28 days. After standardising for age and sex, and adjusting for the number of interviews completed each month and the relative size of each UK country, the estimated rate of IID in the 7-day recall group was 1,530 cases per 1,000 person-years (95% CI: 1135-2113), while in the 28-day recall group it was 533 cases per 1,000 person-years (95% CI: 377-778). There was no significant variation in rates between the four countries. Rates in this study were also higher than in a related prospective study undertaken at the same time. The estimated burden of disease from IID varied dramatically depending on study design. Retrospective studies of IID give higher estimates of disease burden than prospective studies. Among retrospective studies, longer recall periods give lower estimated rates than studies with short recall periods. Caution needs to be exercised when comparing studies of self-reported IID as small changes in study design or case definition can markedly affect estimated rates.
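
    Converting cases self-reported over a recall window into a rate per 1,000 person-years works as follows; the counts below are illustrative, not the study's data.

```python
# Recall-window cases -> rate per 1,000 person-years (sketch).
# Each respondent contributes recall_days / 365.25 person-years of observation.

def rate_per_1000_py(cases, respondents, recall_days):
    person_years = respondents * recall_days / 365.25
    return 1000 * cases / person_years

# e.g. 29 cases among 1,000 respondents asked about the past 7 days:
print(round(rate_per_1000_py(29, 1000, 7), 1))
```

    The short window means few person-years per respondent, which is why modest case counts translate into rates above 1,000 per 1,000 person-years.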

  9. Chapter 12: Survey Design and Implementation for Estimating Gross Savings Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Baumgartner, Robert [Tetra Tech, Madison, WI (United States)

    2017-10-05

    This chapter presents an overview of best practices for designing and executing survey research to estimate gross energy savings in energy efficiency evaluations. A detailed description of the specific techniques and strategies for designing questions, implementing a survey, and analyzing and reporting the survey procedures and results is beyond the scope of this chapter. So for each topic covered below, readers are encouraged to consult articles and books cited in References, as well as other sources that cover the specific topics in greater depth. This chapter focuses on the use of survey methods to collect data for estimating gross savings from energy efficiency programs.

  10. State-of-charge inconsistency estimation of lithium-ion battery pack using mean-difference model and extended Kalman filter

    Science.gov (United States)

    Zheng, Yuejiu; Gao, Wenkai; Ouyang, Minggao; Lu, Languang; Zhou, Long; Han, Xuebing

    2018-04-01

    State-of-charge (SOC) inconsistency impacts the power, durability and safety of the battery pack. Therefore, it is necessary to measure the SOC inconsistency of the battery pack with good accuracy. We explore a novel method for modeling and estimating the SOC inconsistency of lithium-ion (Li-ion) battery pack with low computation effort. In this method, a second-order RC model is selected as the cell mean model (CMM) to represent the overall performance of the battery pack. A hypothetical Rint model is employed as the cell difference model (CDM) to evaluate the SOC difference. The parameters of mean-difference model (MDM) are identified with particle swarm optimization (PSO). Subsequently, the mean SOC and the cell SOC differences are estimated by using extended Kalman filter (EKF). Finally, we conduct an experiment on a small Li-ion battery pack with twelve cells connected in series. The results show that the evaluated SOC difference is capable of tracking the changing of actual value after a quick convergence.
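
    A heavily simplified, scalar stand-in for the mean-SOC estimator described above: a coulomb-counting prediction followed by a linearized open-circuit-voltage measurement update. The cell capacity, OCV slope/offset, and noise covariances below are hypothetical; the paper's actual method uses a second-order RC cell mean model with an EKF and PSO-identified parameters.

```python
# Minimal scalar Kalman-filter sketch for mean-SOC tracking.
# All model parameters are hypothetical; the OCV curve is linearized as
# v = ocv_offset + ocv_slope * soc.

def kf_step(soc, P, current_a, dt_s, v_meas, cap_ah=2.0,
            ocv_slope=0.7, ocv_offset=3.3, q=1e-7, r=1e-3):
    # Predict: coulomb counting (discharge current positive), SOC in [0, 1]
    soc_pred = soc - current_a * dt_s / (cap_ah * 3600.0)
    P_pred = P + q
    # Update: scalar Kalman gain for the linearized voltage measurement
    h = ocv_slope
    K = P_pred * h / (h * P_pred * h + r)
    soc_new = soc_pred + K * (v_meas - (ocv_offset + h * soc_pred))
    return soc_new, (1 - K * h) * P_pred

soc, P = 0.5, 1e-2
soc, P = kf_step(soc, P, current_a=1.0, dt_s=1.0, v_meas=3.65)
print(soc, P)  # estimate stays near 0.5; covariance shrinks after the update
```

    Running one such filter per cell difference, on top of a shared mean model, is the essence of the mean-difference decomposition: the expensive model runs once, and only cheap difference states are filtered per cell.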

  11. Estimating leptospirosis incidence using hospital-based surveillance and a population-based health care utilization survey in Tanzania.

    Directory of Open Access Journals (Sweden)

    Holly M Biggs

    Full Text Available The incidence of leptospirosis, a neglected zoonotic disease, is uncertain in Tanzania and much of sub-Saharan Africa, resulting in scarce data on which to prioritize resources for public health interventions and disease control. In this study, we estimate the incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania. We conducted a population-based household health care utilization survey in two districts in the Kilimanjaro Region of Tanzania and identified leptospirosis cases at two hospital-based fever sentinel surveillance sites in the Kilimanjaro Region. We used multipliers derived from the health care utilization survey and case numbers from hospital-based surveillance to calculate the incidence of leptospirosis. A total of 810 households were enrolled in the health care utilization survey and multipliers were derived based on responses to questions about health care seeking in the event of febrile illness. Of patients enrolled in fever surveillance over a 1 year period and residing in the 2 districts, 42 (7.14%) of 588 met the case definition for confirmed or probable leptospirosis. After applying multipliers to account for hospital selection, test sensitivity, and study enrollment, we estimated the overall incidence of leptospirosis ranges from 75-102 cases per 100,000 persons annually. We calculated a high incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania, where leptospirosis incidence was previously unknown. Multiplier methods, such as used in this study, may be a feasible method of improving availability of incidence estimates for neglected diseases, such as leptospirosis, in resource constrained settings.

  12. Estimating Leptospirosis Incidence Using Hospital-Based Surveillance and a Population-Based Health Care Utilization Survey in Tanzania

    Science.gov (United States)

    Biggs, Holly M.; Hertz, Julian T.; Munishi, O. Michael; Galloway, Renee L.; Marks, Florian; Saganda, Wilbrod; Maro, Venance P.; Crump, John A.

    2013-01-01

    Background The incidence of leptospirosis, a neglected zoonotic disease, is uncertain in Tanzania and much of sub-Saharan Africa, resulting in scarce data on which to prioritize resources for public health interventions and disease control. In this study, we estimate the incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania. Methodology/Principal Findings We conducted a population-based household health care utilization survey in two districts in the Kilimanjaro Region of Tanzania and identified leptospirosis cases at two hospital-based fever sentinel surveillance sites in the Kilimanjaro Region. We used multipliers derived from the health care utilization survey and case numbers from hospital-based surveillance to calculate the incidence of leptospirosis. A total of 810 households were enrolled in the health care utilization survey and multipliers were derived based on responses to questions about health care seeking in the event of febrile illness. Of patients enrolled in fever surveillance over a 1 year period and residing in the 2 districts, 42 (7.14%) of 588 met the case definition for confirmed or probable leptospirosis. After applying multipliers to account for hospital selection, test sensitivity, and study enrollment, we estimated the overall incidence of leptospirosis ranges from 75–102 cases per 100,000 persons annually. Conclusions/Significance We calculated a high incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania, where leptospirosis incidence was previously unknown. Multiplier methods, such as used in this study, may be a feasible method of improving availability of incidence estimates for neglected diseases, such as leptospirosis, in resource constrained settings. PMID:24340122
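
    The multiplier method described above amounts to dividing surveillance case counts by the product of the ascertainment probabilities at each step; the probabilities and population size below are hypothetical, not the study's multipliers.

```python
# Multiplier-method incidence sketch: scale surveillance case counts by the
# inverse of each under-ascertainment probability. All values hypothetical.

def multiplier_incidence(cases, p_seek_hospital, sensitivity, p_enrolled,
                         population, per=100_000):
    adjusted_cases = cases / (p_seek_hospital * sensitivity * p_enrolled)
    return per * adjusted_cases / population

# 42 confirmed/probable cases, with each ascertainment step < 1:
print(round(multiplier_incidence(42, 0.3, 0.8, 0.7, population=500_000), 1))
```

    Uncertainty in each multiplier (here single point values) is what produces the reported range of estimates rather than a single figure.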

  13. The Meth Project and Teen Meth Use: New Estimates from the National and State Youth Risk Behavior Surveys.

    Science.gov (United States)

    Anderson, D Mark; Elsea, David

    2015-12-01

    In this note, we use data from the national and state Youth Risk Behavior Surveys for the period 1999 through 2011 to estimate the relationship between the Meth Project, an anti-methamphetamine advertising campaign, and meth use among high school students. During this period, a total of eight states adopted anti-meth advertising campaigns. After accounting for pre-existing downward trends in meth use, we find little evidence that the campaign curbed meth use in the full sample. We do find, however, some evidence that the Meth Project may have decreased meth use among White high school students. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Wind estimation around the shipwreck of Oriental Star based on field damage surveys and radar observations

    OpenAIRE

    Meng, Zhiyong; Yao, Dan; Bai, Lanqiang; Zheng, Yongguang; Xue, Ming; Zhang, Xiaoling; Zhao, Kun; Tian, Fuyou; Wang, Mingjun

    2016-01-01

    Based on observational analyses and on-site ground and aerial damage surveys, this work aims to reveal the weather phenomena, especially the wind situation, when Oriental Star capsized in the Yangtze River on June 1, 2015. Results demonstrate that the cruise ship capsized when it encountered strong winds at speeds of at least 31 m s-1 near the apex of a bow echo embedded in a squall line. As suggested by the fallen trees within a 2-km radius around the wreck location, such strong winds were lik...

  15. Wind estimation around the shipwreck of Oriental Star based on field damage surveys and radar observations.

    Science.gov (United States)

    Meng, Zhiyong; Yao, Dan; Bai, Lanqiang; Zheng, Yongguang; Xue, Ming; Zhang, Xiaoling; Zhao, Kun; Tian, Fuyou; Wang, Mingjun

    Based on observational analyses and on-site ground and aerial damage surveys, this work aims to reveal the weather phenomena, especially the wind situation, when Oriental Star capsized in the Yangtze River on June 1, 2015. Results demonstrate that the cruise ship capsized when it encountered strong winds at speeds of at least 31 m s-1 near the apex of a bow echo embedded in a squall line. As suggested by the fallen trees within a 2-km radius around the wreck location, such strong winds were likely caused by microburst straight-line wind and/or embedded small vortices, rather than tornadoes.

  16. Estimation of miniature forest parameters, species, tree shape, and distance between canopies by means of Monte-Carlo based radiative transfer model with forestry surface model

    International Nuclear Information System (INIS)

    Ding, Y.; Arai, K.

    2007-01-01

    A method for estimation of forest parameters, species, tree shape, and distance between canopies by means of a Monte-Carlo based radiative transfer model with a forestry surface model is proposed. The model is verified through experiments with a miniature model of forest, a tree array of relatively small trees. Two types of miniature trees, with ellipse-looking and cone-looking canopies, are examined in the experiments. It is found that the proposed model and the experimental results show good agreement, validating the proposed method. It is also found that estimation of tree shape and trunk-to-tree distance, as well as distinction between deciduous and coniferous trees, can be done with the proposed model. Furthermore, influences due to multiple reflections between trees and interaction between trees and underlying grass are clarified with the proposed method.

  17. Erratum: Hansen, Lund, Sangill, and Jespersen. Experimentally and Computationally Fast Method for Estimation of a Mean Kurtosis. Magnetic Resonance in Medicine 69:1754–1760 (2013)

    DEFF Research Database (Denmark)

    Hansen, Brian; Lund, Torben Ellegaard; Sangill, Ryan

    2014-01-01

    PURPOSE: Results from several recent studies suggest the magnetic resonance diffusion-derived metric mean kurtosis (MK) to be a sensitive marker for tissue pathology; however, lengthy acquisition and postprocessing time hamper further exploration. The purpose of this study is to introduce...... and evaluate a new MK metric and a rapid protocol for its estimation. METHODS: The protocol requires acquisition of 13 standard diffusion-weighted images, followed by linear combination of log diffusion signals, thus avoiding nonlinear optimization. The method was evaluated on an ex vivo rat brain...... for full human brain coverage, with a postprocessing time of a few seconds. Scan-rescan reproducibility was comparable with MK. CONCLUSION: The framework offers a robust and rapid method for estimating MK, with a protocol easily adapted on commercial scanners, as it requires only minimal modification...

  18. National-scale crop type mapping and area estimation using multi-resolution remote sensing and field survey

    Science.gov (United States)

    Song, X. P.; Potapov, P.; Adusei, B.; King, L.; Khan, A.; Krylov, A.; Di Bella, C. M.; Pickens, A. H.; Stehman, S. V.; Hansen, M.

    2016-12-01

    Reliable and timely information on agricultural production is essential for ensuring world food security. Freely available medium-resolution satellite data (e.g. Landsat, Sentinel) offer the possibility of improved global agriculture monitoring. Here we develop and test a method for estimating in-season crop acreage using a probability sample of field visits and producing wall-to-wall crop type maps at national scales. The method is first illustrated for soybean cultivated area in the US for 2015. A stratified, two-stage cluster sampling design was used to collect field data to estimate national soybean area. The field-based estimate employed historical soybean extent maps from the U.S. Department of Agriculture (USDA) Cropland Data Layer to delineate and stratify U.S. soybean growing regions. The estimated 2015 U.S. soybean cultivated area based on the field sample was 341,000 km² with a standard error of 23,000 km². This result is 1.0% lower than USDA's 2015 June survey estimate and 1.9% higher than USDA's 2016 January estimate. Our area estimate was derived in early September, about 2 months ahead of harvest. To map soybean cover, the Landsat image archive for the year 2015 growing season was processed using an active learning approach. Overall accuracy of the soybean map was 84%. The field-based sample estimated area was then used to calibrate the map such that the soybean acreage of the map derived through pixel counting matched the sample-based area estimate. The strength of the sample-based area estimation lies in the stratified design that takes advantage of the spatially explicit cropland layers to construct the strata. The success of the mapping was built upon an automated system which transforms Landsat images into standardized time-series metrics. The developed method produces reliable and timely information on soybean area in a cost-effective way and could be implemented in an operational mode. The approach has also been applied for other crops in
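The two-step logic described above, a stratified design-based area estimate followed by calibrating the pixel-counted map to it, can be sketched as follows. The stratum areas, sampled soybean fractions, and map total are illustrative numbers, not values from the study:

```python
import numpy as np

# Hypothetical strata: total land area and field-sampled soybean fraction per stratum
strata_area_km2 = np.array([500_000.0, 300_000.0, 200_000.0])
soy_fraction = np.array([0.40, 0.30, 0.05])   # estimated from field visits
fraction_se = np.array([0.03, 0.04, 0.01])    # standard error of each fraction

# Stratified (design-based) area estimate and its standard error
area_est = np.sum(strata_area_km2 * soy_fraction)
area_se = np.sqrt(np.sum((strata_area_km2 * fraction_se) ** 2))

# Calibrate the wall-to-wall map so that pixel counting matches the sample estimate
map_pixel_area_km2 = 320_000.0  # soybean area from pixel counting (illustrative)
calibration = area_est / map_pixel_area_km2
```

With these inputs the stratified estimate is 300,000 km² with a standard error of about 19,300 km², and the map's pixel-counted total would be scaled by roughly 0.94 to match it.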

  19. Estimating Classification Errors Under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    Directory of Open Access Journals (Sweden)

    Boeschoten Laura

    2017-12-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible combinations with scores on other variables. Furthermore, the latent class model, by multiply imputing a new variable, enhances the quality of statistics based on the composite data set. The performance of this method is investigated by a simulation study, which shows that whether or not the method can be applied depends on the entropy R² of the latent class model and the type of analysis a researcher is planning to do. Finally, the method is applied to public data from Statistics Netherlands.

  20. Obtaining numerically consistent estimates from a mix of administrative data and surveys

    OpenAIRE

    de Waal, A.G.

    2016-01-01

    National statistical institutes (NSIs) fulfil an important role as providers of objective and undisputed statistical information on many different aspects of society. To this end NSIs try to construct data sets that are rich in information content and that can be used to estimate a large variety of population figures. At the same time NSIs aim to construct these rich data sets as efficiently and cost effectively as possible. This can be achieved by utilizing already available administrative d...

  1. Estimation of forest resources from a country wide laser scanning survey and national forest inventory data

    DEFF Research Database (Denmark)

    Nord-Larsen, Thomas; Schumacher, Johannes

    2012-01-01

    Airborne laser scanning may provide a means for assessing local forest biomass resources. In this study, national forest inventory (NFI) data was used as reference data for modeling forest basal area, volume, aboveground biomass, and total biomass from laser scanning data obtained in a countrywid...

  2. Influence of Mean Rooftop-Level Estimation Method on Sensible Heat Flux Retrieved from a Large-Aperture Scintillometer Over a City Centre

    Science.gov (United States)

    Zieliński, Mariusz; Fortuniak, Krzysztof; Pawlak, Włodzimierz; Siedlecki, Mariusz

    2017-08-01

    The sensible heat flux (H) is determined using large-aperture scintillometer (LAS) measurements over a city centre for eight different computation scenarios. The scenarios are based on different approaches to the mean rooftop-level (zH) estimation for the LAS path. Here, zH is determined separately for wind directions perpendicular (two zones) and parallel (one zone) to the optical beam to reflect the variation in topography and building height on both sides of the LAS path. Two methods of zH estimation are analyzed: (1) average building profiles; (2) weighted-average building height within a 250 m radius from points located every 50 m along the optical beam, or the centre of a certain zone (in the case of a wind direction perpendicular to the path). The sensible heat flux is computed separately using the friction velocity determined with the eddy-covariance method and the iterative procedure. The sensitivity of the sensible heat flux and the extent of the scintillometer source area to different computation scenarios are analyzed. Differences reaching up to 7% between heat fluxes computed with different scenarios were found. The mean rooftop-level estimation method has a smaller influence on the sensible heat flux (-4 to 5%) than the area used for the zH computation (-5 to 7%). For the source-area extent, the discrepancies between respective scenarios reached a similar magnitude. The results demonstrate the value of the approach in which zH is estimated separately for wind directions parallel and perpendicular to the LAS optical beam.

  3. A flexible and coherent test/estimation procedure based on restricted mean survival times for censored time-to-event data in randomized clinical trials.

    Science.gov (United States)

    Horiguchi, Miki; Cronin, Angel M; Takeuchi, Masahiro; Uno, Hajime

    2018-04-22

    In randomized clinical trials where time-to-event is the primary outcome, almost routinely, the logrank test is prespecified as the primary test and the hazard ratio is used to quantify treatment effect. If the ratio of 2 hazard functions is not constant, the logrank test is not optimal and the interpretation of hazard ratio is not obvious. When such a nonproportional hazards case is expected at the design stage, the conventional practice is to prespecify another member of weighted logrank tests, e.g., the Peto-Prentice-Wilcoxon test. Alternatively, one may specify a robust test as the primary test, which can capture various patterns of difference between 2 event time distributions. However, most of those tests do not have companion procedures to quantify the treatment difference, and investigators have fallen back on reporting treatment effect estimates not associated with the primary test. Such incoherence in the "test/estimation" procedure may potentially mislead clinicians/patients who have to balance risk-benefit for treatment decision. To address this, we propose a flexible and coherent test/estimation procedure based on restricted mean survival time, where the truncation time τ is selected data dependently. The proposed procedure is composed of a prespecified test and an estimation of corresponding robust and interpretable quantitative treatment effect. The utility of the new procedure is demonstrated by numerical studies based on 2 randomized cancer clinical trials; the test is dramatically more powerful than the logrank and Wilcoxon tests, and the restricted mean survival time-based test with a fixed τ, for the patterns of difference seen in these cancer clinical trials. Copyright © 2018 John Wiley & Sons, Ltd.
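The restricted mean survival time underlying such a procedure is simply the area under the Kaplan-Meier curve up to the truncation time τ; a minimal sketch, assuming untied event times and using made-up data (the paper's data-dependent choice of τ is not reproduced here):

```python
import numpy as np

def km_rmst(times, events, tau):
    """Restricted mean survival time: area under the Kaplan-Meier curve up to tau.
    times: observation times; events: 1 = event, 0 = censored."""
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    d = np.asarray(events, dtype=int)[order]
    at_risk = len(t)
    surv, rmst, last_t = 1.0, 0.0, 0.0
    for ti, di in zip(t, d):
        if ti > tau:
            break
        rmst += surv * (ti - last_t)   # area of the current step of the KM curve
        if di:                          # event: update the survival estimate
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1                    # both events and censorings leave the risk set
        last_t = ti
    rmst += surv * (tau - last_t)       # remaining rectangle up to tau
    return rmst
```

A between-arm treatment effect would then be quantified as, e.g., the difference `km_rmst(times_trt, events_trt, tau) - km_rmst(times_ctl, events_ctl, tau)`.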

  4. Using open-ended data to enrich survey results on the meanings of self-rated health: a study among women in underprivileged communities in Beirut, Lebanon.

    Science.gov (United States)

    Salem, Mylene Tewtel; Abdulrahim, Sawsan; Zurayk, Huda

    2009-12-01

    This study extends the debate on self-rated health by using different sources of data in the same study to explore the meanings of self-rated health among women who live in socio-economically disadvantaged communities in Beirut, Lebanon. Using data from the Urban Health Study, a cross-sectional household survey of 1,869 women between 15 and 59 years of age, multiple logistic regression models were developed to assess factors associated with self-rated health. Also, open-ended data was used to analyze women's explanations of their self-rated health ratings. Self-rated health was found to be a complex concept, associated not only with physical health but also with a combination of social, psychological, and behavioral factors. This open-ended analysis revealed new meanings of self-rated health that are often not included in self-rated health epidemiologic research, such as women's experiences with pain and fatigue, as well as exposure to financial stressors and the legacy of wars. We argue that triangulating survey and open-ended data provides a better understanding of the context-specific social and cultural meanings of self-rated health.

  5. Velocity Segregation and Systematic Biases In Velocity Dispersion Estimates with the SPT-GMOS Spectroscopic Survey

    Science.gov (United States)

    Bayliss, Matthew B.; Zengo, Kyle; Ruel, Jonathan; Benson, Bradford A.; Bleem, Lindsey E.; Bocquet, Sebastian; Bulbul, Esra; Brodwin, Mark; Capasso, Raffaella; Chiu, I.-non; McDonald, Michael; Rapetti, David; Saro, Alex; Stalder, Brian; Stark, Antony A.; Strazzullo, Veronica; Stubbs, Christopher W.; Zenteno, Alfredo

    2017-03-01

    The velocity distribution of galaxies in clusters is not universal; rather, galaxies are segregated according to their spectral type and relative luminosity. We examine the velocity distributions of different populations of galaxies within 89 Sunyaev-Zel'dovich (SZ) selected galaxy clusters spanning 0.28 < z < 1.08, using spectra from the SPT-GMOS spectroscopic survey, supplemented by additional published spectroscopy, resulting in a final spectroscopic sample of 4148 galaxy spectra, of which 2868 are cluster members. The velocity dispersion of star-forming cluster galaxies is 17 ± 4% greater than that of passive cluster galaxies, and the velocity dispersion of bright (m < m* - 0.5) cluster galaxies is 11 ± 4% lower than the velocity dispersion of our total member population. We find good agreement with simulations regarding the shape of the relationship between the measured velocity dispersion and the fraction of passive versus star-forming galaxies used to measure it, but we find a small offset between this relationship as measured in data and simulations, which suggests that our dispersions are systematically low by as much as 3% relative to simulations. We argue that this offset could be interpreted as a measurement of the effective velocity bias that describes the ratio of our observed velocity dispersions and the intrinsic velocity dispersion of dark matter particles in a published simulation result. Measuring velocity bias in this way suggests that large spectroscopic surveys can improve dispersion-based mass-observable scaling relations for cosmology even in the face of velocity biases, by quantifying and ultimately calibrating them out.
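For context, a member-galaxy velocity dispersion of the kind compared above is computed from line-of-sight peculiar velocities relative to the cluster redshift. A minimal sketch with made-up redshifts; a plain sample standard deviation is used here for simplicity, whereas robust estimators (biweight, gapper) are standard for real cluster samples:

```python
import numpy as np

C_KMS = 299_792.458  # speed of light in km/s

def peculiar_velocities(z_gal, z_cluster):
    """Line-of-sight peculiar velocities of members relative to the cluster redshift."""
    return C_KMS * (np.asarray(z_gal) - z_cluster) / (1.0 + z_cluster)

# Illustrative member redshifts around a cluster at z = 0.30
z_members = np.array([0.295, 0.298, 0.300, 0.302, 0.305])
v = peculiar_velocities(z_members, 0.30)
sigma = np.std(v, ddof=1)  # sample velocity dispersion in km/s
```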

  6. Estimating costs of pressure area management based on a survey of ulcer care in one Irish hospital.

    Science.gov (United States)

    Gethin, G; Jordan-O'Brien, J; Moore, Z

    2005-04-01

    Pressure ulceration remains a significant cause of morbidity for patients and has a real economic impact on the health sector. Studies to date have estimated the cost of management but have not always given a breakdown of how these figures were calculated. There are no published studies that have estimated the cost of management of pressure ulcers in Ireland. A two-part study was therefore undertaken. Part one determined the prevalence of pressure ulcers in a 626-bed Irish acute hospital. Part two set out to derive a best estimate of the cost of managing pressure ulcers in Ireland. The European Pressure UlcerAdvisory Panel (EPUAP) minimum data set tool was used to complete the prevalence survey. Tissue viability nurses trained in the data-collection tool collected the data. A cost was obtained for all items of care for the management of one patient with three grade IV pressure ulcers over a five-month period. Of the patients, 2.5% had pressure ulcers. It cost Euros 119,000 to successfully treat one patient. We estimate that it costs Euros 250,000,000 per annum to manage pressure ulcers across all care settings in Ireland.

  7. Changes in mean serum lipids among adults in Germany: results from National Health Surveys 1997-99 and 2008-11

    Directory of Open Access Journals (Sweden)

    Julia Truthmann

    2016-03-01

    Full Text Available Abstract Background Monitoring of serum lipid concentrations at the population level is an important public health tool to describe progress in cardiovascular disease risk control and prevention. Using data from two nationally representative health surveys of adults 18–79 years, this study identified changes in mean serum total cholesterol (TC, high-density lipoprotein cholesterol (HDL-C, and triglycerides (TG in relation to changes in potential determinants of serum lipids between 1997–99 and 2008–11 in Germany. Methods Sex-specific multivariable linear regression analyses were performed with serum lipids as dependent variables and survey wave as independent variable and adjusted for the following covariables: age, fasting duration, educational status, lifestyle, and use of medication. Results Mean TC declined between the two survey periods by 13 % (5.97 mmol/l vs. 5.19 mmol/l among men and by 12 % (6.03 mmol/l vs. 5.30 mmol/l among women. Geometric mean TG decreased by 14 % (1.66 mmol/l vs. 1.42 mmol/l among men and by 8 % (1.20 mmol/l vs. 1.10 mmol/l among women. Mean HDL-C remained unchanged among men (1.29 mmol/l vs. 1.27 mmol/l, but decreased by 5 % among women (1.66 mmol/l vs. 1.58 mmol/l. Sports activity and coffee consumption increased, while smoking and high alcohol consumption decreased only in men. Processed food consumption increased and wholegrain bread consumption decreased in both sexes, and obesity increased among men. The use of lipid-lowering medication, in particular statins nearly doubled over time in both sexes. Among women, hormonal contraceptive use increased and postmenopausal hormone therapy halved over time. The changes in lipid levels between surveys remained significant after adjusting for covariables. Conclusion Serum TC and TG considerably declined over one decade in Germany, which can be partly explained by increased use of lipid-lowering medication and improved lifestyle among men. The

  8. Survey of CT practice in Japan and collective effective dose estimation

    International Nuclear Information System (INIS)

    Nishizawa, Kanae; Maruyama, Takashi; Matsumoto, Masaki; Iwai, Kazuo

    2004-01-01

    Computed tomography (CT) has been established as an important diagnostic tool in clinical medicine and has become a major source of medical exposure. A nationwide survey regarding CT examinations was carried out in Japan in 2000. CT units per million people in Japan numbered 87.8. The annual number of examinations was 0.1 million in those 0-14 years old, 3.54 million for those 15 years old and above, and 3.65 million in total. Eighty percent of examinations for those 0-14 years old were examinations of the head, as were 40% for those 15 years old and above. The number of examinations per 1000 population was 290. The collective effective dose was 295 × 10³ person·Sv, and the effective dose per caput was evaluated as 2.3 mSv. (author)
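The per-caput figure follows directly from the collective dose divided by the population; a quick back-of-envelope check, assuming a Japanese population of roughly 127 million in 2000 (that population figure is our assumption, not stated in the abstract):

```python
collective_dose_person_sv = 295e3   # collective effective dose from the survey
population = 127e6                  # assumed population of Japan in 2000
per_caput_msv = collective_dose_person_sv / population * 1000.0  # Sv -> mSv
# about 2.3 mSv, consistent with the value quoted in the survey
```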

  9. Hospital and clinic survey estimates of medical X-ray exposure in Hiroshima and Nagasaki, 2

    International Nuclear Information System (INIS)

    Antoku, Shigetoshi; Hoshi, Masaharu; Sawada, Shozo; Russell, W.J.

    1987-07-01

    The technical factors used during radiological examinations performed in Hiroshima and Nagasaki medical institutions were analyzed. The most frequently performed examination was chest radiography, followed by upper GI series. More than half the radiographic exposures were from upper GI series due to the many spot films made during fluoroscopy. Comparison of the present survey results with those of a previous one showed that relatively high kVp, low mAs and mA, and smaller field sizes are now more widely used. Though there have been decreases in fluoroscopy times and tube currents over the past 10 years, the numbers of spot films used have increased. Based on these technical factors, tables of organ doses from fluoroscopic examinations were compiled. (author)

  10. Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation

    OpenAIRE

    McCall, J C; Trivedi, Mohan Manubhai

    2006-01-01

    Driver-assistance systems that monitor driver intent, warn drivers of lane departures, or assist in vehicle guidance are all being actively considered. It is therefore important to take a critical look at key aspects of these systems, one of which is lane-position tracking. These driver-assistance objectives motivate the development of the novel "video-based lane estimation and tracking" (VioLET) system. The system is designed using steerable filters for robust and accurate lan...

  11. Age- and gender-specific estimates of partnership formation and dissolution rates in the Seattle sex survey.

    Science.gov (United States)

    Nelson, Sara J; Hughes, James P; Foxman, Betsy; Aral, Sevgi O; Holmes, King K; White, Peter J; Golden, Matthew R

    2010-04-01

    Partnership formation and dissolution rates are primary determinants of sexually transmitted infection (STI) transmission dynamics. The authors used data on persons' lifetime sexual experiences from a 2003-2004 random digit dialing survey of Seattle residents aged 18-39 years (N=1,194) to estimate age- and gender-specific partnership formation and dissolution rates. Partnership start and end dates were used to estimate participants' ages at the start of each partnership and partnership durations, and partnerships not enumerated in the survey were imputed. For women, partnership formation peaked at age 19 at 0.9 (95% confidence interval [CI]: 0.76-1.04) partnerships per year and decreased to 0.1 to 0.2 after age 30; for men, it peaked at age 20 at 1.4 (95% CI: 1.08-1.64) and declined to 0.5 after age 30. Nearly one fourth (23.7%) of partnerships ended within 1 week and more than one half (51.2%) ended within 12 weeks. Most (63.5%) individuals 30 to 39 years of age had not formed a new sexual partnership in the past 3 years. A large proportion of the heterosexual population is no longer at substantial STI risk by their early 30s, but similar analyses among high-risk populations may give insight into reasons for the profound disparities in STI rates across populations. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  12. Estimating cetacean density and abundance in the Central and Western Mediterranean Sea through aerial surveys: Implications for management

    Science.gov (United States)

    Panigada, Simone; Lauriano, Giancarlo; Donovan, Greg; Pierantonio, Nino; Cañadas, Ana; Vázquez, José Antonio; Burt, Louise

    2017-07-01

    Systematic, effective monitoring of animal population parameters underpins successful conservation strategy and wildlife management, but it is often neglected in many regions, including much of the Mediterranean Sea. Nonetheless, a series of systematic multispecies aerial surveys was carried out in the seas around Italy to gather important baseline information on cetacean occurrence, distribution and abundance. The monitored areas included the Pelagos Sanctuary, the Tyrrhenian Sea, portions of the Seas of Corsica and Sardinia, the Ionian Seas as well as the Gulf of Taranto. Overall, approximately 48,000 km were flown in spring, summer, and winter between 2009 and 2014, covering an area of 444,621 km². The most commonly observed species were the striped dolphin and the fin whale, with 975 and 83 recorded sightings, respectively. Other sighted cetacean species were the common bottlenose dolphin, the Risso's dolphin, the sperm whale, the pilot whale and the Cuvier's beaked whale. Uncorrected model- and design-based estimates of density and abundance for striped dolphins and fin whales were produced, resulting in a best estimate (model-based) of around 95,000 striped dolphins (CV=11.6%; 95% CI=92,900-120,300) occurring in the Pelagos Sanctuary, Central Tyrrhenian and Western Seas of Corsica and Sardinia combined area in summer 2010. Estimates were also obtained for each individual study region and year. An initial attempt to estimate perception bias for striped dolphins is also provided. The preferred summer 2010 uncorrected best estimate (design-based) for the same areas for fin whales was around 665 (CV=33.1%; 95% CI=350-1260). Estimates are also provided for the individual study regions and years.
The results represent baseline data to develop efficient, long-term, systematic monitoring programmes, essential to evaluate trends, as required by a number of national and international frameworks, and stress the need to ensure that surveys are undertaken regularly and

  13. Estimating adolescent risk for hearing loss based on data from a large school-based survey.

    Science.gov (United States)

    Vogel, Ineke; Verschuure, Hans; van der Ploeg, Catharina P B; Brug, Johannes; Raat, Hein

    2010-06-01

    We estimated whether and to what extent a group of adolescents were at risk of developing permanent hearing loss as a result of voluntary exposure to high-volume music, and we assessed whether such exposure was associated with hearing-related symptoms. In 2007, 1512 adolescents (aged 12-19 years) in Dutch secondary schools completed questionnaires about their music-listening behavior and whether they experienced hearing-related symptoms after listening to high-volume music. We used their self-reported data in conjunction with published average sound levels of music players, discotheques, and pop concerts to estimate their noise exposure, and we compared that exposure to our own "loosened" (i.e., less strict) version of current European safety standards for occupational noise exposure. About half of the adolescents exceeded safety standards for occupational noise exposure. About one third of the respondents exceeded safety standards solely as a result of listening to MP3 players. Hearing symptoms that occurred after using an MP3 player or going to a discotheque were associated with exposure to high-volume music. Adolescents often exceeded current occupational safety standards for noise exposure, highlighting the need for specific safety standards for leisure-time noise exposure.
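The exposure comparison described above rests on the equal-energy (3-dB exchange) rule: each 3 dB increase in level doubles the sound energy and halves the allowed listening time. A hedged sketch; the 80 dBA over 40 h reference and the weekly listening profile below are illustrative, not the study's exact "loosened" standard:

```python
def noise_dose(exposures, limit_db=80.0, limit_hours=40.0):
    """Weekly noise dose as a fraction of an occupational-style limit,
    using the equal-energy (3-dB exchange) rule.
    exposures: list of (level_dBA, hours_per_week) pairs."""
    energy = sum(hours * 10 ** (level_db / 10.0) for level_db, hours in exposures)
    allowed = limit_hours * 10 ** (limit_db / 10.0)
    return energy / allowed

# e.g. 10 h/week of an MP3 player at ~95 dBA plus 4 h at a ~100 dBA venue
dose = noise_dose([(95.0, 10.0), (100.0, 4.0)])
```

A dose above 1.0 means the reference limit is exceeded; the profile above exceeds it many times over, illustrating how easily leisure-time listening can surpass occupational standards.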

  14. Application of coating and base material living models to evaluate degradation and estimate the mean local operating temperature of two ex-service 1st stage blades

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, M. [Proing Italia, Torbole sul Garda, Trento (Italy); Rinaldi, C. [ERSE, Milan (Italy); Vacchieri, E. [Ansaldo Energia S.p.A., Genoa (Italy)

    2010-07-01

    In the frame of the collaborative program COST 538, a coating life prediction code was implemented by Proing and ERSE with an inverse problem solution routine able to calculate the local mean operating temperature from the operating conditions and the extension of the coating depleted regions. Moreover, base material degradation models were developed by Ansaldo Energia on both equiaxed and single crystal superalloys. This paper describes the application of such methodologies to two ex-service 1st stage gas turbine blades delivered to COST 538 by AEN after operation in two different plants with different operating conditions. The objective of the study was the application and validation of an innovative NDT and the estimate of the mean operating temperature at different positions of the components. The destructive metallographic analysis of the blades allowed validation of the non-destructive frequency scanning eddy current technique (F-SECT). Coating life modelling results are compared with those of the base material degradation models. An interesting correlation was found between the temperatures estimated with the two methods and also with the NDT findings at the most significant component positions. (orig.)

  15. Improved infrared precipitation estimation approaches based on k-means clustering: Application to north Algeria using MSG-SEVIRI satellite data

    Science.gov (United States)

    Mokdad, Fatiha; Haddad, Boualem

    2017-06-01

    In this paper, two new infrared precipitation estimation approaches based on the concept of k-means clustering are first proposed, named the NAW-Kmeans and the GPI-Kmeans methods. Then, they are adapted to the southern Mediterranean basin, where the subtropical climate prevails. The infrared data (10.8 μm channel) acquired by MSG-SEVIRI sensor in winter and spring 2012 are used. Tests are carried out in eight areas distributed over northern Algeria: Sebra, El Bordj, Chlef, Blida, Bordj Menael, Sidi Aich, Beni Ourthilane, and Beni Aziz. The validation is performed by a comparison of the estimated rainfalls to rain gauges observations collected by the National Office of Meteorology in Dar El Beida (Algeria). Despite the complexity of the subtropical climate, the obtained results indicate that the NAW-Kmeans and the GPI-Kmeans approaches gave satisfactory results for the considered rain rates. Also, the proposed schemes lead to improvement in precipitation estimation performance when compared to the original algorithms NAW (Negri, Adler, and Wetzel) and GPI (GOES Precipitation Index).
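The k-means idea here is to partition infrared brightness temperatures so that cold cloud-top clusters can be assigned higher rain rates. A toy 1-D version with deterministic initialization, using made-up data; the actual NAW-Kmeans and GPI-Kmeans schemes involve additional inputs and calibration:

```python
import numpy as np

def kmeans_1d(values, k, iters=50):
    """Tiny 1-D Lloyd's k-means with deterministic (linspace) initialization."""
    centers = np.linspace(values.min(), values.max(), k)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute centers
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return centers, labels

# IR brightness temperatures (K); cold cloud tops are candidate rain areas
tb = np.array([200.0, 205.0, 210.0, 250.0, 255.0, 260.0, 280.0, 285.0])
centers, labels = kmeans_1d(tb, k=2)
cold = np.argmin(centers)  # pixels with labels == cold would get a rain-rate estimate
```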

  16. Analytical estimation of emission zone mean position and width in organic light-emitting diodes from emission pattern image-source interference fringes

    International Nuclear Information System (INIS)

    Epstein, Ariel; Tessler, Nir; Einziger, Pinchas D.; Roberts, Matthew

    2014-01-01

    We present an analytical method for evaluating the first and second moments of the effective exciton spatial distribution in organic light-emitting diodes (OLED) from measured emission patterns. Specifically, the suggested algorithm estimates the emission zone mean position and width, respectively, from two distinct features of the pattern produced by interference between the emission sources and their images (induced by the reflective cathode): the angles in which interference extrema are observed, and the prominence of interference fringes. The relations between these parameters are derived rigorously for a general OLED structure, indicating that extrema angles are related to the mean position of the radiating excitons via Bragg's condition, and the spatial broadening is related to the attenuation of the image-source interference prominence due to an averaging effect. The method is applied successfully both on simulated emission patterns and on experimental data, exhibiting a very good agreement with the results obtained by numerical techniques. We investigate the method performance in detail, showing that it is capable of producing accurate estimations for a wide range of source-cathode separation distances, provided that the measured spectral interval is large enough; guidelines for achieving reliable evaluations are deduced from these results as well. As opposed to numerical fitting tools employed to perform similar tasks to date, our approximate method explicitly utilizes physical intuition and requires far less computational effort (no fitting is involved). Hence, applications that do not require highly resolved estimations, e.g., preliminary design and production-line verification, can benefit substantially from the analytical algorithm, when applicable. This introduces a novel set of efficient tools for OLED engineering, highly important in the view of the crucial role the exciton distribution plays in determining the device performance.
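The Bragg-type relation mentioned above can be inverted for the mean emission-zone position: the image-source path difference is 2·n·z·cos(θ), so two adjacent extrema of the same kind (one interference order apart) fix the mean emitter-cathode distance. An illustrative sketch with hypothetical angles, wavelength, and refractive index; the paper's full algorithm additionally uses fringe prominence to recover the width:

```python
import math

def emitter_cathode_distance(theta1_deg, theta2_deg, wavelength_nm, n_index):
    """Mean emitter-cathode distance from two adjacent same-type extrema
    (internal angles). Adjacent extrema are one interference order apart:
    2 * n * d * (cos(theta1) - cos(theta2)) = wavelength."""
    c1 = math.cos(math.radians(theta1_deg))
    c2 = math.cos(math.radians(theta2_deg))
    return wavelength_nm / (2.0 * n_index * abs(c1 - c2))

# Hypothetical maxima at 20 and 40 degrees (internal), lambda = 520 nm, n = 1.8
d_nm = emitter_cathode_distance(20.0, 40.0, 520.0, 1.8)
```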

  18. Optimizing occupational exposure measurement strategies when estimating the log-scale arithmetic mean value--an example from the reinforced plastics industry.

    Science.gov (United States)

    Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A

    2006-06-01

    When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource-intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different random-effects analysis of variance (ANOVA) models are proposed for the optimization of measurement strategies by minimizing the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random-effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three, regardless of the size of the variance components. The second model introduces the 'shared temporal variation', which accounts for those random temporal fluctuations of the exposure that the workers have in common. For that model it is shown that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component: if the between-worker component is larger than the shared temporal component, more workers should be included in the sample, and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If a shared temporal variation exists at a workplace, that variability needs to be accounted for in the sampling design, and the more complex model is recommended.
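    The first model's allocation result can be illustrated with the standard one-way random-effects variance formula, Var(mean) = σ²_B/k + σ²_W/(k·r) for k workers with r replicates each. The variance components and measurement budget below are hypothetical, chosen only to show why more workers beats more replicates.

    ```python
    def var_group_mean(sigma2_between, sigma2_within, k_workers, r_replicates):
        """Variance of the estimated group mean under a one-way random-effects
        ANOVA model: Var = sigma2_B / k + sigma2_W / (k * r)."""
        return sigma2_between / k_workers + sigma2_within / (k_workers * r_replicates)

    # Hypothetical fixed budget of 24 measurements, split different ways.
    budget = 24
    variances = {(k, budget // k): var_group_mean(1.0, 2.0, k, budget // k)
                 for k in (2, 4, 6, 12)}
    # Spreading the budget over more workers always wins under this model:
    # the between-worker term shrinks with k, while the within-worker term
    # depends only on the total k * r = budget.
    ```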

  19. Estimation of Groundwater Recharge in a Japanese Headwater Area by Intensive Collaboration of Field Survey and Modelling Work

    Science.gov (United States)

    Yano, S.; Kondo, H.; Tawara, Y.; Yamada, T.; Mori, K.; Yoshida, A.; Tada, K.; Tsujimura, M.; Tokunaga, T.

    2017-12-01

    It is important to understand groundwater systems, including their recharge, flow, storage, discharge, and withdrawal, so that we can use groundwater resources efficiently and sustainably. To examine groundwater recharge, several methods have been discussed based on water balance estimation, in situ experiments, and hydrological tracers. However, few studies have developed a concrete framework for quantifying groundwater recharge rates in an undefined area. In this study, we established a robust method to quantitatively determine water cycles and estimate the groundwater recharge rate by combining the advantages of field surveys and model simulations. We combined in situ hydrogeological observations with three-dimensional modeling in a mountainous basin area in Japan. We adopted a general-purpose terrestrial fluid-flow simulator (GETFLOWS) to develop a geological model and simulate the local water cycle. Local data relating to topography, geology, vegetation, land use, climate, and water use were collected from the existing literature and observations to assess the spatiotemporal variations of the water balance from 2011 to 2013. The characteristic structures of the geology and soils, as found through field surveys, were parameterized for incorporation into the model. The simulated results were validated against observed groundwater levels, yielding a Nash-Sutcliffe model efficiency coefficient of 0.92. The results suggested that local groundwater flows across the watershed boundary and that the groundwater recharge rate, defined as the flux of water reaching the local unconfined groundwater table, has values similar to those estimated for the lower soil layers on a long-term basis. This innovative method enables us to quantify the groundwater recharge rate and its spatiotemporal variability with high accuracy, which contributes to establishing a foundation for sustainable groundwater management.

  20. Interpreting surveys to estimate the size of the monarch butterfly population: Pitfalls and prospects.

    Directory of Open Access Journals (Sweden)

    John M Pleasants

    Full Text Available To assess the change in the size of the eastern North American monarch butterfly summer population, studies have used long-term data sets of counts of adult butterflies or eggs per milkweed stem. Despite the observed decline in the monarch population as measured at overwintering sites in Mexico, these studies found no decline in summer counts in the Midwest, the core of the summer breeding range, leading to a suggestion that the cause of the monarch population decline is not the loss of Midwest agricultural milkweeds but increased mortality during the fall migration. Using these counts to estimate population size, however, does not account for the shift of monarch activity from agricultural fields to non-agricultural sites over the past 20 years, as a result of the loss of agricultural milkweeds due to the near-ubiquitous use of glyphosate herbicides. We present the counter-hypotheses that the proportion of the monarch population present in non-agricultural habitats, where counts are made, has increased and that counts reflect both population size and the proportion of the population observed. We use data on the historical change in the proportion of milkweeds, and thus monarch activity, in agricultural fields and non-agricultural habitats to show why using counts can produce misleading conclusions about population size. We then separate out the shifting proportion effect from the counts to estimate the population size and show that these corrected summer monarch counts show a decline over time and are correlated with the size of the overwintering population. In addition, we present evidence against the hypothesis of increased mortality during migration. The milkweed limitation hypothesis for monarch decline remains supported and conservation efforts focusing on adding milkweeds to the landscape in the summer breeding region have a sound scientific basis.
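    The correction argued for above can be sketched in miniature: if a count reflects both population size N and the proportion p of the population present in the surveyed habitat, then N can be recovered as count/p. The numbers below are hypothetical, not the paper's data.

    ```python
    def population_from_count(observed_count, prop_in_surveyed_habitat):
        """If counts reflect both population size N and the proportion p of
        the population in the surveyed (non-agricultural) habitat, then
        observed = N * p, so N is recovered as observed / p."""
        return observed_count / prop_in_surveyed_habitat

    # Hypothetical: identical raw counts, but the share of monarchs in
    # surveyed habitat rises as agricultural milkweeds disappear, so a flat
    # count series can hide a real population decline.
    n_early = population_from_count(100, 0.30)  # ~333 when 30% are observable
    n_late = population_from_count(100, 0.90)   # ~111 when 90% are observable
    ```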

  1. SHORT GMC LIFETIMES: AN OBSERVATIONAL ESTIMATE WITH THE PdBI ARCSECOND WHIRLPOOL SURVEY (PAWS)

    Energy Technology Data Exchange (ETDEWEB)

    Meidt, Sharon E.; Hughes, Annie; Schinnerer, Eva; Colombo, Dario; Querejeta, Miguel [Max-Planck-Institut für Astronomie / Königstuhl 17 D-69117 Heidelberg (Germany); Dobbs, Clare L. [School of Physics and Astronomy, University of Exeter, Stocker Road, Exeter EX4 4QL (United Kingdom); Pety, Jérôme [Institut de Radioastronomie Millimétrique, 300 Rue de la Piscine, F-38406 Saint Martin d’Hères (France); Thompson, Todd A. [Department of Astronomy, The Ohio State University, 140 W. 18th Ave., Columbus, OH 43210 (United States); García-Burillo, Santiago [Observatorio Astronómico Nacional—OAN, Observatorio de Madrid Alfonso XII, 3, E-28014 Madrid (Spain); Leroy, Adam K. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Kramer, Carsten [Instituto Radioastronomía Milimétrica, Av. Divina Pastora 7, Nucleo Central, E-18012 Granada (Spain); Schuster, Karl F.; Dumas, Gaëlle [Observatoire de Paris, 61 Avenue de l’Observatoire, F-75014 Paris (France)

    2015-06-10

    We describe and execute a novel approach to observationally estimate the lifetimes of giant molecular clouds (GMCs). We focus on the cloud population between the two main spiral arms in M51 (the inter-arm region) where cloud destruction via shear and star formation feedback dominates over formation processes. By monitoring the change in GMC number densities and properties across the inter-arm, we estimate the lifetime as a fraction of the inter-arm travel time. We find that GMC lifetimes in M51's inter-arm are finite and short, 20–30 Myr. Over most of the region under investigation shear appears to regulate the lifetime. As the shear timescale increases with galactocentric radius, we expect cloud destruction to switch primarily to feedback at larger radii. We identify a transition from shear- to feedback-dominated disruption, finding that shear is more efficient at dispersing clouds, whereas feedback transforms the population, e.g., by fragmenting high-mass clouds into lower mass pieces. Compared to the characteristic timescale for molecular hydrogen in M51, our short lifetimes suggest that gas can remain molecular while clouds disperse and reassemble. We propose that galaxy dynamics regulates the cycling of molecular material from diffuse to bound (and ultimately star-forming) objects, contributing to long observed molecular depletion times in normal disk galaxies. We also speculate that, in extreme environments like elliptical galaxies and concentrated galaxy centers, star formation can be suppressed when the shear timescale is short enough that some clouds will not survive to form stars.

  2. Detailed rock failure susceptibility mapping in steep rocky coasts by means of non-contact geostructural surveys: the case study of the Tigullio Gulf (Eastern Liguria, Northern Italy)

    Directory of Open Access Journals (Sweden)

    P. De Vita

    2012-04-01

    Full Text Available In this study, an engineering geological analysis for the assessment of the rock failure susceptibility of a high, steep, rocky coast was developed by means of non-contact geostructural surveys. The methodology was applied to a 6-km coastal cliff located in the Gulf of Tigullio (Northern Tyrrhenian Sea) between Rapallo and Chiavari.

    The method is based on the geostructural characterisation of outcropping rock masses through meso- and macroscale stereoscopic analyses of digital photos that were taken continuously from a known distance from the coastline. The results of the method were verified through direct surveys of accessible sample areas. The rock failure susceptibility of the coastal sector was assessed by analysing the fundamental rock slope mechanisms of instability and the results were implemented into a Geographic Information System (GIS).

    The proposed method is useful for rock failure susceptibility assessments in high, steep, rocky coastal areas, where accessibility is limited due to cliffs or steep slopes. Moreover, the method can be applied to private properties or any other area where a complete and systematic analysis of rock mass structural features cannot be achieved.

    Compared to direct surveys and to other non-contact methods based on digital terrestrial photogrammetry, the proposed procedure provided good quality data of the structural features of the rock mass at a low cost. Therefore, the method could be applied to similar coastal areas with a high risk of rock failure occurrence.

  3. A Detailed Gamma-ray Survey for Estimating the Radiogenic Power of Sardinian Variscan Crust

    International Nuclear Information System (INIS)

    Xhixha, M.; Baldoncini, M.; Bezzon, G.P.; Buso, G.P.; Carmignani, L.; Casini, L.; Callegari, I.; Colonna, T.; Cuccuru, S.; Guastaldi, E.; Fiorentini, G.; Mantovani, F.; Massa, G.; Mou, L.; Oggiano, G.; Puccini, A.; Rossi Alvarez, C.; Strati, V.; Xhixha, G.; Zanon, A.

    2014-01-01

    The N-E Sardinia batholith is part of the European Variscan belt, which is generally considered an example of hot collisional orogens. After a period of crustal thickening characterized by lower geothermal gradients, higher geothermal gradients were diffusely established during Late Carboniferous and Early Permian times. The sources which contributed to the thermal budget of the late Variscan high-temperature events are still debated. One of the hypotheses(1) considers an extra contribution from radioactive heating of felsic crust tectonically emplaced at the bottom of a Palaeozoic orogenic root. It is apparent that a detailed characterization of the heat-producing elements (K, U and Th) of the Sardinian Variscan crust is needed by the Earth Science community. This study focuses on this goal, reporting the results of an extensive survey based on gamma-ray measurements performed in the laboratory and in situ. The K, U and Th abundances obtained for the main lithotypes of the Sardinia batholith will be used as input for modeling the geodynamic and thermal evolution of the South Variscan Belt.

  4. A PARAMETERIZED GALAXY CATALOG SIMULATOR FOR TESTING CLUSTER FINDING, MASS ESTIMATION, AND PHOTOMETRIC REDSHIFT ESTIMATION IN OPTICAL AND NEAR-INFRARED SURVEYS

    International Nuclear Information System (INIS)

    Song, Jeeseon; Mohr, Joseph J.; Barkhouse, Wayne A.; Rude, Cody; Warren, Michael S.; Dolag, Klaus

    2012-01-01

    We present a galaxy catalog simulator that converts N-body simulations with halo and subhalo catalogs into mock, multiband photometric catalogs. The simulator assigns galaxy properties to each subhalo in a way that reproduces the observed cluster galaxy halo occupation distribution, the radial and mass-dependent variation in fractions of blue galaxies, the luminosity functions in the cluster and the field, and the color-magnitude relation in clusters. Moreover, the evolution of these parameters is tuned to match existing observational constraints. Parameterizing an ensemble of cluster galaxy properties enables us to create mock catalogs with variations in those properties, which in turn allows us to quantify the sensitivity of cluster finding to current observational uncertainties in these properties. Field galaxies are sampled from existing multiband photometric surveys of similar depth. We present an application of the catalog simulator to characterize the selection function and contamination of a galaxy cluster finder that utilizes the cluster red sequence together with galaxy clustering on the sky. We estimate systematic uncertainties in the selection to be at the ≤15% level with current observational constraints on cluster galaxy populations and their evolution. We find the contamination in this cluster finder to be ∼35% to redshift z ∼ 0.6. In addition, we use the mock galaxy catalogs to test the optical mass indicator B_gc and a red-sequence redshift estimator. We measure the intrinsic scatter of the B_gc-mass relation to be approximately log-normal with σ_log10M ∼ 0.25, and we demonstrate photometric redshift accuracies for massive clusters at the ∼3% level out to z ∼ 0.7.

  5. A Solution to Modeling Multilevel Confirmatory Factor Analysis with Data Obtained from Complex Survey Sampling to Avoid Conflated Parameter Estimates

    Directory of Open Access Journals (Sweden)

    Jiun-Yu Wu

    2017-09-01

    Full Text Available The issue of equality of the between- and within-level structures in Multilevel Confirmatory Factor Analysis (MCFA) models has been influential for obtaining unbiased parameter estimates and statistical inferences. A commonly seen condition is inequality of factor loadings under equal level-varying structures. With mathematical investigation and Monte Carlo simulation, this study compared the robustness of five statistical models, including two model-based models (a true and a mis-specified model), one design-based model, and two maximum models (models in which the full rank of the variance-covariance matrix is estimated at the between level and within level, respectively), in analyzing complex survey measurement data with level-varying factor loadings. The empirical data of 120 3rd graders (from 40 classrooms) on the perceived Harter competence scale were modeled using MCFA, and the parameter estimates were used as true parameters in the Monte Carlo simulation study. Results showed the maximum models were robust to unequal factor loadings, while the design-based and the mis-specified model-based approaches produced conflated results and spurious statistical inferences. We recommend the use of maximum models if researchers have limited information about the pattern of factor loadings and measurement structures. Measurement models are key components of Structural Equation Modeling (SEM); therefore, the findings can be generalized to multilevel SEM and CFA models. Mplus codes are provided for maximum models and other analytical models.

  6. The Single Cigarette Economy in India--a Back of the Envelope Survey to Estimate its Magnitude.

    Science.gov (United States)

    Lal, Pranay; Kumar, Ravinder; Ray, Shreelekha; Sharma, Narinder; Bhattarcharya, Bhaktimay; Mishra, Deepak; Sinha, Mukesh K; Christian, Anant; Rathinam, Arul; Singh, Gurbinder

    2015-01-01

    Sale of single cigarettes is an important factor in early experimentation, initiation and persistence of tobacco use, and a vital factor in the smoking epidemic in India, as it is globally. Single cigarettes also promote the sale of illicit cigarettes and neutralise the effect of pack warnings and effective taxation, making tobacco more accessible and affordable to minors. This is the first study to our knowledge that estimates the size of the single-stick market in India. In February 2014, a 10-jurisdiction survey was conducted across India to estimate the sale of cigarettes in packs and sticks, by brand and price, over a full business day. We estimate that nearly 75% of all cigarettes are sold as single sticks annually, which translates to nearly half a billion US dollars, or 30 percent of India's excise revenues from all cigarettes. This is the price which consumers pay but which is not captured through tax, and it therefore flows into an informal economy. Tracking the retail price of single cigarettes is an efficient way to determine cigarette smokers' willingness to pay and is a possible method for determining tax rates in the absence of any other rationale.

  7. Regression models of discharge and mean velocity associated with near-median streamflow conditions in Texas: utility of the U.S. Geological Survey discharge measurement database

    Science.gov (United States)

    Asquith, William H.

    2014-01-01

    A database containing more than 16,300 discharge values and ancillary hydraulic attributes was assembled from summaries of discharge measurement records for 391 USGS streamflow-gauging stations (streamgauges) in Texas. Each discharge is between the 40th- and 60th-percentile daily mean streamflow as determined by period-of-record, streamgauge-specific, flow-duration curves. Each discharge therefore is assumed to represent a discharge measurement made for near-median streamflow conditions, and such conditions are conceptualized as representative of midrange to baseflow conditions in much of the state. The hydraulic attributes of each discharge measurement included concomitant cross-section flow area, water-surface top width, and reported mean velocity. Two regression equations are presented: (1) an expression for discharge and (2) an expression for mean velocity, both as functions of selected hydraulic attributes and watershed characteristics. Specifically, the discharge equation uses cross-sectional area, water-surface top width, contributing drainage area of the watershed, and mean annual precipitation of the location; the equation has an adjusted R-squared of approximately 0.95 and residual standard error of approximately 0.23 base-10 logarithm (cubic meters per second). The mean velocity equation uses discharge, water-surface top width, contributing drainage area, and mean annual precipitation; the equation has an adjusted R-squared of approximately 0.50 and residual standard error of approximately 0.087 third root (meters per second). Residual plots from both equations indicate that reliable estimates of discharge and mean velocity at ungauged stream sites are possible. Further, the relation between contributing drainage area and main-channel slope (a measure of whole-watershed slope) is depicted to aid analyst judgment of equation applicability for ungauged sites. Example applications and computations are provided and discussed within a real-world, discharge
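    As a sketch of the general approach (not the paper's fitted equation), one can regress log10(discharge) on log10-transformed hydraulic predictors with ordinary least squares. The predictors, coefficients, and noise level below are synthetic, with the noise standard deviation set near the reported 0.23 log10 residual standard error.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the gauging database: discharge follows a power
    # law in flow area and top width, observed with ~0.23 log10 scatter.
    n = 200
    area = rng.uniform(1.0, 100.0, n)    # cross-section flow area (m^2)
    width = rng.uniform(2.0, 50.0, n)    # water-surface top width (m)
    q_obs = 0.5 * area**1.2 * width**-0.3 * 10 ** rng.normal(0.0, 0.23, n)

    # Ordinary least squares on the log10 scale, as in the regression style
    # described above; the recovered exponents approximate the true ones.
    X = np.column_stack([np.ones(n), np.log10(area), np.log10(width)])
    y = np.log10(q_obs)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_se = np.sqrt(np.sum((y - X @ coef) ** 2) / (n - X.shape[1]))
    ```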

  8. Diagnosis, prevalence estimation and burden measurement in population surveys of headache: presenting the HARDSHIP questionnaire.

    Science.gov (United States)

    Steiner, Timothy J; Gururaj, Gopalakrishna; Andrée, Colette; Katsarava, Zaza; Ayzenberg, Ilya; Yu, Sheng-Yuan; Al Jumah, Mohammed; Tekle-Haimanot, Redda; Birbeck, Gretchen L; Herekar, Arif; Linde, Mattias; Mbewe, Edouard; Manandhar, Kedar; Risal, Ajay; Jensen, Rigmor; Queiroz, Luiz Paulo; Scher, Ann I; Wang, Shuu-Jiun; Stovner, Lars Jacob

    2014-01-08

    The global burden of headache is very large, but knowledge of it is far from complete and needs still to be gathered. Published population-based studies have used variable methodology, which has influenced findings and made comparisons difficult. The Global Campaign against Headache is undertaking initiatives to improve and standardize methods in use for cross-sectional studies. One requirement is for a survey instrument with proven cross-cultural validity. This report describes the development of such an instrument. Two of the authors developed the initial version, which was used with adaptations in population-based studies in China, Ethiopia, India, Nepal, Pakistan, Russia, Saudi Arabia, Zambia and 10 countries in the European Union. The resultant evolution of this instrument was reviewed by an expert consensus group drawn from all world regions. The final output was the Headache-Attributed Restriction, Disability, Social Handicap and Impaired Participation (HARDSHIP) questionnaire, designed for application by trained lay interviewers. HARDSHIP is a modular instrument incorporating demographic enquiry, diagnostic questions based on ICHD-3 beta criteria, and enquiries into each of the following as components of headache-attributed burden: symptom burden; health-care utilization; disability and productive time losses; impact on education, career and earnings; perception of control; interictal burden; overall individual burden; effects on relationships and family dynamics; effects on others, including household partner and children; quality of life; wellbeing; obesity as a comorbidity. HARDSHIP already has demonstrated validity and acceptability in multiple languages and cultures. Modules may be included or not, and others (e.g., on additional comorbidities) added, according to the purpose of the study and resources (especially time) available.

  9. A statistical model for estimation of fish density including correlation in size, space, time and between species from research survey data

    DEFF Research Database (Denmark)

    Nielsen, J. Rasmus; Kristensen, Kasper; Lewy, Peter

    2014-01-01

    Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes...

  10. Using recall surveys to estimate harvest of cod, eel and sea migrating brown trout in Danish angling and recreational passive gear fishing

    DEFF Research Database (Denmark)

    Sparrevohn, Claus Reedtz; Nielsen, Jan; Storr-Paulsen, Marie

    , as all recreational fishermen have to purchase a personal, non-transferable and time-limited national license before fishing. However, this list will not include those fishing illegally without a license. Therefore, two types of recall surveys, each with its own questionnaire and group of respondents, were...... carried out. The first survey, the license list survey, was carried out once in 2009 and twice in 2010. This survey had a sampling frame corresponding to the list of persons that had purchased a license within the last 12 months. Respondents were asked to provide detailed information on catch and effort...... per ICES area and quarter. In order to also estimate the fraction of fishermen that fished without a valid license, a second survey, called the Omnibus survey, was carried out four times. This survey targeted the entire Danish population between 16 and 74 years of age...

  11. Genetic algorithm-based optimization of testing and maintenance under uncertain unavailability and cost estimation: A survey of strategies for harmonizing evolution and accuracy

    International Nuclear Information System (INIS)

    Villanueva, J.F.; Sanchez, A.I.; Carlos, S.; Martorell, S.

    2008-01-01

    This paper presents the results of a survey to show the applicability of an approach based on a combination of distribution-free tolerance intervals and genetic algorithms for testing and maintenance optimization of safety-related systems, based on unavailability and cost estimation acting as uncertain decision criteria. Several strategies have been checked using a combination of Monte Carlo simulation and genetic-algorithm search-evolution. Tolerance intervals for the unavailability and cost estimates are obtained to be used by the genetic algorithms. Both single- and multiple-objective genetic algorithms are used. In general, it is shown that the approach is a robust, fast and powerful tool that performs very favorably in the face of noise in the output (i.e. uncertainty) and is able to find the optimum over a complicated, high-dimensional nonlinear space in a tiny fraction of the time required for enumeration of the decision space. This approach reduces the computational effort by providing an appropriate balance between the accuracy of simulation and evolution; however, negative effects are also shown when a poorly balanced accuracy-evolution pair is used, which can be avoided or mitigated with the use of a single-objective genetic algorithm or a multiple-objective genetic algorithm with additional statistical information.
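    A minimal sketch of the simulation-evolution coupling, assuming a toy noisy objective: the GA below averages repeated Monte Carlo evaluations (the accuracy side) while evolving the population (the evolution side). It is illustrative only and does not reproduce the paper's tolerance-interval machinery.

    ```python
    import random

    random.seed(1)

    def noisy_cost(x, n_eval):
        """Monte Carlo estimate of an uncertain cost: true value (x - 3)^2
        plus observation noise, averaged over n_eval replicates (the
        'accuracy' side of the accuracy-evolution balance)."""
        return sum((x - 3.0) ** 2 + random.gauss(0.0, 0.5)
                   for _ in range(n_eval)) / n_eval

    def genetic_minimize(pop_size=30, generations=60, n_eval=8):
        """Toy single-objective GA: keep the best half (judged on noisy
        cost), refill with blended, mutated children. Illustrative only."""
        pop = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
        for _ in range(generations):
            elite = sorted(pop, key=lambda x: noisy_cost(x, n_eval))[: pop_size // 2]
            children = [0.5 * (a + b) + random.gauss(0.0, 0.2)
                        for a, b in (random.sample(elite, 2)
                                     for _ in range(pop_size - len(elite)))]
            pop = elite + children
        # Final pick with a larger evaluation budget for a more accurate ranking.
        return min(pop, key=lambda x: noisy_cost(x, 50))

    best = genetic_minimize()  # converges near the true optimum x = 3
    ```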

  12. A comparison of prevalence estimates for selected health indicators and chronic diseases or conditions from the Behavioral Risk Factor Surveillance System, the National Health Interview Survey, and the National Health and Nutrition Examination Survey, 2007-2008.

    Science.gov (United States)

    Li, Chaoyang; Balluz, Lina S; Ford, Earl S; Okoro, Catherine A; Zhao, Guixiang; Pierannunzi, Carol

    2012-06-01

    To compare the prevalence estimates of selected health indicators and chronic diseases or conditions among three national health surveys in the United States. Data from adults aged 18 years or older who participated in the Behavioral Risk Factor Surveillance System (BRFSS) in 2007 and 2008 (n=807,524), the National Health Interview Survey (NHIS) in 2007 and 2008 (n=44,262), and the National Health and Nutrition Examination Survey (NHANES) during 2007 and 2008 (n=5871) were analyzed. The prevalence estimates of current smoking, obesity, hypertension, and no health insurance were similar across the three surveys, with absolute differences ranging from 0.7% to 3.9% (relative differences: 2.3% to 20.2%). The prevalence estimate of poor or fair health from BRFSS was similar to that from NHANES, but higher than that from NHIS. The prevalence estimates of diabetes, coronary heart disease, and stroke were similar across the three surveys, with absolute differences ranging from 0.0% to 0.8% (relative differences: 0.2% to 17.1%). While the BRFSS continues to provide invaluable health information at state and local level, it is reassuring to observe consistency in the prevalence estimates of key health indicators of similar caliber between BRFSS and other national surveys. Published by Elsevier Inc.

  13. Human health exposure factor estimates based upon a creel/angler survey of the lower Passaic River (part 3).

    Science.gov (United States)

    Ray, Rose; Craven, Valerie; Bingham, Matthew; Kinnell, Jason; Hastings, Elizabeth; Finley, Brent

    2007-03-15

    The results of an analysis of site-specific creel and angler information collected for the lower 6 miles of the Passaic River in Newark, NJ (Study Area), demonstrate that performing a site-specific creel/angler survey was essential to capture the distinctive characteristics of the anglers using the Study Area. The results presented were developed using a methodology for calculating site-specific human exposure estimates from data collected in this urban/industrial setting. The site-specific human exposure factors calculated and presented include (1) the size of the angler population and fish-consuming population, (2) the annual fish consumption rate, (3) the duration of anglers' fishing careers, (4) the cooking methods for the fish consumed, and (5) demographic information. Sensitivity and validation analyses were performed, and the results were found to be useful for performing a site-specific human health risk assessment. It was also concluded that site-specific exposure factor values are preferable to less representative "default values." The results of the analysis showed that the size of the angling population at the Study Area is estimated to range from 154 to 385 anglers, based on different methods of matching intercepts with anglers. Thirty-four anglers were estimated to have consumed fish; 37 people consumed fish from the river. The fish consumption rate for anglers using this area was best represented as 0.42 g/day for the central tendency and 1.8 g/day for the 95th percentile estimate. Anglers fishing at the river have relatively short fishing careers, with a median of 0.9 yr, an average of 1.5 yr, and a 95th percentile of 4.8 yr. Consuming anglers tended to fry the fish they caught. The demographics of anglers who consume fish do not appear to differ substantially from those who do not, with no indication of a subsistence angling population.

  14. Surveys of environmental DNA (eDNA): a new approach to estimate occurrence in Vulnerable manatee populations

    Science.gov (United States)

    Hunter, Margaret; Meigs-Friend, Gaia; Ferrante, Jason; Takoukam Kamla, Aristide; Dorazio, Robert; Keith Diagne, Lucy; Luna, Fabia; Lanyon, Janet M.; Reid, James P.

    2018-01-01

    Environmental DNA (eDNA) detection is a technique used to non-invasively detect cryptic, low density, or logistically difficult-to-study species, such as imperiled manatees. For eDNA measurement, genetic material shed into the environment is concentrated from water samples and analyzed for the presence of target species. Cytochrome b quantitative PCR and droplet digital PCR eDNA assays were developed for the 3 Vulnerable manatee species: African, Amazonian, and both subspecies of the West Indian (Florida and Antillean) manatee. Environmental DNA assays can help to delineate manatee habitat ranges, high use areas, and seasonal population changes. To validate the assay, water was analyzed from Florida’s east coast containing a high-density manatee population and produced 31,564 DNA molecules l⁻¹ on average and high occurrence (ψ) and detection (p) estimates (ψ = 0.84 [0.40-0.99]; p = 0.99 [0.95-1.00]; limit of detection 3 copies µl⁻¹). Similar occupancy estimates were produced in the Florida Panhandle (ψ = 0.79 [0.54-0.97]) and Cuba (ψ = 0.89 [0.54-1.00]), while occupancy estimates in Cameroon were lower (ψ = 0.49 [0.09-0.95]). The eDNA-derived detection estimates were higher than those generated using aerial survey data on the west coast of Florida and may be effective for population monitoring. Subsequent eDNA studies could be particularly useful in locations where manatees are (1) difficult to identify visually (e.g. the Amazon River and Africa), (2) are present in patchy distributions or are on the verge of extinction (e.g. Jamaica, Haiti), and (3) where repatriation efforts are proposed (e.g. Brazil, Guadeloupe). Extension of these eDNA techniques could be applied to other imperiled marine mammal populations such as African and Asian dugongs.
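    For context on the droplet digital PCR quantification mentioned above, the standard Poisson correction converts the fraction of positive droplets into copies per microliter. The droplet volume used below is a common nominal value, assumed here rather than taken from the paper, and the droplet counts are hypothetical.

    ```python
    import math

    def ddpcr_copies_per_microliter(n_positive, n_total, droplet_nl=0.85):
        """Standard Poisson correction for droplet digital PCR: the mean
        number of copies per droplet is lambda = -ln(fraction of negative
        droplets); dividing by droplet volume gives the concentration.
        The 0.85 nl droplet volume is a nominal value assumed here."""
        frac_negative = (n_total - n_positive) / n_total
        lam = -math.log(frac_negative)       # mean copies per droplet
        return lam / (droplet_nl * 1e-3)     # nl -> microliters

    # Hypothetical run: 4000 positive droplets out of 20000.
    conc = ddpcr_copies_per_microliter(4000, 20000)
    ```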

  15. Effect of payments for health care on poverty estimates in 11 countries in Asia: an analysis of household survey data.

    Science.gov (United States)

    van Doorslaer, Eddy; O'Donnell, Owen; Rannan-Eliya, Ravi P; Somanathan, Aparnaa; Adhikari, Shiva Raj; Garg, Charu C; Harbianto, Deni; Herrin, Alejandro N; Huq, Mohammed Nazmul; Ibragimova, Shamsia; Karan, Anup; Ng, Chiu Wan; Pande, Badri Raj; Racelis, Rachel; Tao, Sihai; Tin, Keith; Tisayaticom, Kanjana; Trisnantoro, Laksono; Vasavid, Chitpranee; Zhao, Yuxin

    2006-10-14

    Conventional estimates of poverty do not take account of out-of-pocket payments to finance health care. We aimed to reassess measures of poverty in 11 low-to-middle income countries in Asia by calculating total household resources both with and without out-of-pocket payments for health care. We obtained data on payments for health care from nationally representative surveys, and subtracted these payments from total household resources. We then calculated the number of individuals with less than the internationally accepted threshold of absolute poverty (US1 dollar per head per day) after making health payments. We also assessed the effect of health-care payments on the poverty gap--the amount by which household resources fell short of the 1 dollar poverty line in these countries. Our estimate of the overall prevalence of absolute poverty in these countries was 14% higher than conventional estimates that do not take account of out-of-pocket payments for health care. We calculated that an additional 2.7% of the population under study (78 million people) ended up with less than 1 dollar per day after they had paid for health care. In Bangladesh, China, India, Nepal, and Vietnam, where more than 60% of health-care costs are paid out-of-pocket by households, our estimates of poverty were much higher than conventional figures, ranging from an additional 1.2% of the population in Vietnam to 3.8% in Bangladesh. Out-of-pocket health payments exacerbate poverty. Policies to reduce the number of Asians living on less than 1 dollar per day need to include measures to reduce such payments.
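
    The adjustment described, subtracting out-of-pocket health payments from household resources and recomputing the $1/day headcount and poverty gap, can be sketched with made-up household data:

```python
POVERTY_LINE = 1.0  # US$ per head per day

# Hypothetical (resources, out-of-pocket health payment), per head per day.
households = [(1.40, 0.10), (1.05, 0.20), (0.90, 0.05), (2.50, 0.00)]

def headcount(resources):
    """Number of households below the poverty line."""
    return sum(1 for r in resources if r < POVERTY_LINE)

def poverty_gap(resources):
    """Total shortfall below the line (zero for the non-poor)."""
    return sum(max(0.0, POVERTY_LINE - r) for r in resources)

pre = [r for r, oop in households]          # gross of health payments
post = [r - oop for r, oop in households]   # net of health payments

extra_poor = headcount(post) - headcount(pre)  # pushed under the line by OOP
gap_increase = poverty_gap(post) - poverty_gap(pre)
```

    In this toy data one household crosses the line once health payments are netted out, and the aggregate poverty gap widens; the study performs the same accounting on nationally representative survey data.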

  16. Hypotensive anesthesia: Comparing the effects of different drug combinations on mean arterial pressure, estimated blood loss, and surgery time in orthognathic surgery.

    Science.gov (United States)

    Jeong, James; Portnof, Jason E; Kalayeh, Mona; Hardigan, Patrick

    2016-07-01

    Sevoflurane, an inhalational hypotensive anesthetic agent with a vasodilatory property, has been commonly used as a single agent to induce hypotension and effectively decrease blood loss in orthognathic surgery. However, it is common for patients to receive other hypotensive anesthetic agents in combination with sevoflurane. The purpose of our retrospective cohort study was to investigate whether administering an additional hypotensive agent has a greater effect in reducing mean arterial pressure (MAP), estimated blood loss (EBL), and surgery time during orthognathic surgery. Fifty-seven subjects of both genders, aged 0-89 years, who underwent orthognathic surgery were investigated in this study. Each patient's anesthesia records were reviewed to record the following variables of interest: EBL, duration of surgery, and percent reduction in MAP. Forty-one subjects were placed in Group I; they received sevoflurane alone. Sixteen subjects were placed in Group II; they received sevoflurane plus a "supportive" agent: esmolol, labetalol, metoprolol, nicardipine, or dexmedetomidine. Differences between the two groups were assessed using ANCOVA, with p < 0.05 considered significant. Subjects in Group II experienced a greater reduction in MAP during surgery than subjects in Group I, 27.30% and 20.44%, respectively (p = 0.027). There was no significant difference for sex (p = 0.417) or age group (p = 0.113) in estimated blood loss, however. The mean surgery time in Group I was 1.93, 2.77, and 4.54 h for LeFort, BSSO/IVRO, and double jaw surgery, respectively. Patients in Group II had a mean surgery time of 1.73, 2.07, and 5.64 h for LeFort, BSSO/IVRO, and double jaw surgery. No statistically significant difference was demonstrated in surgery time between Groups I and II (p > 0.05). Subjects in Group II experienced, on average, more blood loss than subjects in Group I, 355.50 ml and 238.90 ml, respectively. The use of multi-drug combination may offer

  17. Program for shaping neutron microconstants for calculations by means of the Monte-Carlo method on the basis of evaluated data files (NEDAM)

    International Nuclear Information System (INIS)

    Zakharov, L.N.; Markovskij, D.V.; Frank-Kamenetskij, A.D.; Shatalov, G.E.

    1978-01-01

    The program shapes neutron microconstants for calculations by means of the Monte-Carlo method, oriented towards detailed treatment of processes in the fast-neutron region. The initial information consists of files of evaluated data in the UKNDL format. The method combines a group approach to representing process probabilities and elastic-scattering anisotropy with an individual description of the secondary-neutron spectra of non-elastic processes. The NEDAM program is written in the FORTRAN language for the BESM-6 computer and has the following characteristics: the initial evaluated-data file length is 20000 words, the multigroup constant file length is 8000 words, and the MARK array length is 1000 words. The calculation time of a single variant is 1-2 min

  18. An approach to estimating the BOD and COD of wastewaters by means of total organic carbon measurement

    International Nuclear Information System (INIS)

    Munoz, Horacio; Mejia, Gloria; Chaverra, Marlene; Vasquez, Esmeralda

    2000-01-01

    The content of biodegradable organic matter present in both water and wastewater has normally been measured using parameters such as BOD and COD. Since the time required to obtain the first of these is too long for practical purposes, and the precision achieved in analysis results is not high for either, troubles are encountered when evaluating pollution loads for different objectives (applying water quality criteria to discharged effluents or designing wastewater treatment facilities). This paper presents the results of a method for estimating both parameters from the total organic carbon content of wastewaters. Its determination is quick and accurate, and it could provide a tool with many possibilities in water pollution assessment and wastewater treatment control
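
    An empirical TOC-to-BOD relationship of the kind the paper describes is typically a fitted calibration line. A sketch with hypothetical calibration pairs (the coefficients are illustrative, not the study's):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration pairs: TOC (mg/L) vs. measured BOD5 (mg/L).
toc = [10.0, 20.0, 40.0, 80.0, 120.0]
bod = [18.0, 35.0, 73.0, 150.0, 221.0]

slope, intercept = fit_line(toc, bod)
bod_estimate = slope * 60.0 + intercept  # quick BOD estimate at TOC = 60 mg/L
```

    Once calibrated for a given wastewater, the line turns a fast TOC reading into a BOD (or, analogously, COD) estimate without waiting out the 5-day incubation.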

  19. Mean precipitation estimation, rain gauge network evaluation and quantification of the hydrologic balance in the Quito River basin, Chocó, Colombia

    International Nuclear Information System (INIS)

    Cordoba, Samir; Zea, Jorge A; Murillo, W

    2006-01-01

    In this work the average precipitation in the Quito River basin, department of Chocó, Colombia, is calculated using diverse techniques, among them those suggested by Thiessen and those based on isohyet analysis, in order to select the one most appropriate for quantifying the rainwater available to the basin. Also included is an estimate of the measurement error affecting the average precipitation in the studied zone, obtained by means of the methodology proposed by Gandin (1970) and Kagan (WMO, 1966), which at the same time allows the representativeness of each of the stations making up the rain gauge network in the area to be evaluated. The study concludes with a calculation of the hydrologic balance for the Quito River basin based on the pilot procedure suggested in the UNESCO publication on the South America hydrologic balance, from which the great contribution of rainfall to a greatly enhanced run-off may be appreciated
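
    The Thiessen approach weights each gauge by the basin area closest to it. A minimal sketch with hypothetical gauges (not the study's data):

```python
def thiessen_mean(rain_mm, polygon_areas_km2):
    """Area-weighted basin mean precipitation."""
    total_area = sum(polygon_areas_km2)
    return sum(r * a for r, a in zip(rain_mm, polygon_areas_km2)) / total_area

# Hypothetical gauges: annual totals (mm) and their Thiessen polygon areas.
rain = [6500.0, 7200.0, 8100.0]
areas = [120.0, 80.0, 50.0]

basin_mean = thiessen_mean(rain, areas)  # mm/yr over the basin
```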

  20. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Keller, Brad M.; Nathan, Diane L.; Wang Yan; Zheng Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina [Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Applied Mathematics and Computational Science, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2012-08-15

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air, followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, an SVM classifier is trained to identify which clusters within the breast tissue are likely
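
    The clustering step can be illustrated with a bare-bones one-dimensional fuzzy c-means (fuzzifier m = 2) on synthetic gray levels; this is a didactic sketch, not the authors' adaptive, SVM-coupled implementation:

```python
def fcm_1d(values, c=2, iters=50, m=2.0):
    """Toy 1-D fuzzy c-means; returns sorted cluster centers."""
    # Spread initial centers across the observed range (deterministic).
    v_min, v_max = min(values), max(values)
    centers = [v_min + (v_max - v_min) * j / (c - 1) for j in range(c)]
    for _ in range(iters):
        # Membership u[i][j] of pixel i in cluster j (standard FCM update).
        u = []
        for v in values:
            d = [abs(v - ck) + 1e-12 for ck in centers]
            u.append([1.0 / sum((dj / dk) ** (2.0 / (m - 1.0)) for dk in d)
                      for dj in d])
        # Each center moves to the membership-weighted mean of the pixels.
        centers = [sum(ui[j] ** m * v for ui, v in zip(u, values)) /
                   sum(ui[j] ** m for ui in u)
                   for j in range(c)]
    return sorted(centers)

# Synthetic gray levels: a dark (fatty-like) and a bright (dense-like) group.
pixels = [10, 12, 11, 14, 9, 200, 205, 198, 210, 202]
dark_center, bright_center = fcm_1d(pixels)
```

    In the paper's pipeline the cluster count is chosen adaptively per image and the resulting clusters are then labeled dense or non-dense by the trained SVM.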

  1. THE DETECTION RATE OF EARLY UV EMISSION FROM SUPERNOVAE: A DEDICATED GALEX/PTF SURVEY AND CALIBRATED THEORETICAL ESTIMATES

    Energy Technology Data Exchange (ETDEWEB)

    Ganot, Noam; Gal-Yam, Avishay; Ofek, Eran O.; Sagiv, Ilan; Waxman, Eli; Lapid, Ofer [Department of Particle Physics and Astrophysics, Faculty of Physics, The Weizmann Institute of Science, Rehovot 76100 (Israel); Kulkarni, Shrinivas R.; Kasliwal, Mansi M. [Cahill Center for Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Ben-Ami, Sagi [Smithsonian Astrophysical Observatory, Harvard-Smithsonian Ctr. for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Chelouche, Doron; Rafter, Stephen [Physics Department, Faculty of Natural Sciences, University of Haifa, 31905 Haifa (Israel); Behar, Ehud; Laor, Ari [Physics Department, Technion Israel Institute of Technology, 32000 Haifa (Israel); Poznanski, Dovi; Nakar, Ehud; Maoz, Dan [School of Physics and Astronomy, Tel Aviv University, 69978 Tel Aviv (Israel); Trakhtenbrot, Benny [Institute for Astronomy, ETH Zurich, Wolfgang-Pauli-Strasse 27 Zurich 8093 (Switzerland); Neill, James D.; Barlow, Thomas A.; Martin, Christofer D., E-mail: noam.ganot@gmail.com [California Institute of Technology, 1200 East California Boulevard, MC 278-17, Pasadena, CA 91125 (United States); Collaboration: ULTRASAT Science Team; WTTH consortium; GALEX Science Team; Palomar Transient Factory; and others

    2016-03-20

    The radius and surface composition of an exploding massive star, as well as the explosion energy per unit mass, can be measured using early UV observations of core-collapse supernovae (SNe). We present the first results from a simultaneous GALEX/PTF search for early ultraviolet (UV) emission from SNe. Six SNe II and one Type II superluminous SN (SLSN-II) are clearly detected in the GALEX near-UV (NUV) data. We compare our detection rate with theoretical estimates based on early, shock-cooling UV light curves calculated from models that fit existing Swift and GALEX observations well, combined with volumetric SN rates. We find that our observations are in good agreement with calculated rates assuming that red supergiants (RSGs) explode with fiducial radii of 500 R⊙, explosion energies of 10⁵¹ erg, and ejecta masses of 10 M⊙. Exploding blue supergiants and Wolf–Rayet stars are poorly constrained. We describe how such observations can be used to derive the progenitor radius, surface composition, and explosion energy per unit mass of such SN events, and we demonstrate why UV observations are critical for such measurements. We use the fiducial RSG parameters to estimate the detection rate of SNe during the shock-cooling phase (<1 day after explosion) for several ground-based surveys (PTF, ZTF, and LSST). We show that the proposed wide-field UV explorer ULTRASAT mission is expected to find >85 SNe per year (∼0.5 SN per deg²), independent of host galaxy extinction, down to an NUV detection limit of 21.5 mag AB. Our pilot GALEX/PTF project thus convincingly demonstrates that a dedicated, systematic SN survey at the NUV band is a compelling method to study how massive stars end their life.

  2. Challenges in Estimating Vaccine Coverage in Refugee and Displaced Populations: Results From Household Surveys in Jordan and Lebanon

    Science.gov (United States)

    Roberton, Timothy; Weiss, William; Doocy, Shannon

    2017-01-01

    Ensuring the sustained immunization of displaced persons is a key objective in humanitarian emergencies. Typically, humanitarian actors measure coverage of single vaccines following an immunization campaign; few measure routine coverage of all vaccines. We undertook household surveys of Syrian refugees in Jordan and Lebanon, outside of camps, using a mix of random and respondent-driven sampling, to measure coverage of all vaccinations included in the host country’s vaccine schedule. We analyzed the results with a critical eye to data limitations and implications for similar studies. Among households with a child aged 12–23 months, 55.1% of respondents in Jordan and 46.6% in Lebanon were able to produce the child’s EPI card. Only 24.5% of Syrian refugee children in Jordan and 12.5% in Lebanon were fully immunized through routine vaccination services (having received from non-campaign sources: measles, polio 1–3, and DPT 1–3 in Jordan and Lebanon, and BCG in Jordan). Respondents in Jordan (33.5%) and Lebanon (40.1%) reported difficulties obtaining child vaccinations. Our estimated immunization rates were lower than expected and raise serious concerns about gaps in vaccine coverage among Syrian refugees. Although our estimates likely under-represent true coverage, given the additional benefit of campaigns (not captured in our surveys), there is a clear need to increase awareness, accessibility, and uptake of immunization services. Current methods to measure vaccine coverage in refugee and displaced populations have limitations. To better understand health needs in such groups, we need research on: validity of recall methods, links between campaigns and routine immunization programs, and improved sampling of hard-to-reach populations. PMID:28805672
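
    The "fully immunized through routine services" definition reduces to a set-containment check per child. A sketch with hypothetical child records, using the Jordan antigen list as described:

```python
# Antigen list for Jordan as described in the study; child records are made up.
REQUIRED = {"measles", "polio1", "polio2", "polio3",
            "dpt1", "dpt2", "dpt3", "bcg"}

# Doses each surveyed child received from non-campaign (routine) sources.
children = [
    {"measles", "polio1", "polio2", "polio3", "dpt1", "dpt2", "dpt3", "bcg"},
    {"measles", "polio1", "dpt1", "bcg"},                    # partial
    {"polio1", "polio2", "polio3", "dpt1", "dpt2", "dpt3"},  # no measles/BCG
]

fully = sum(1 for doses in children if REQUIRED <= doses)  # set containment
coverage = fully / len(children)
```

    Note the survey's caveat: doses received through campaigns are excluded, so a computation like this under-represents true protection.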

  3. Using Structured Additive Regression Models to Estimate Risk Factors of Malaria: Analysis of 2010 Malawi Malaria Indicator Survey Data

    Science.gov (United States)

    Chirombo, James; Lowe, Rachel; Kazembe, Lawrence

    2014-01-01

    Background After years of implementing Roll Back Malaria (RBM) interventions, the changing landscape of malaria in terms of risk factors and spatial pattern has not been fully investigated. This paper uses the 2010 malaria indicator survey data to investigate if known malaria risk factors remain relevant after many years of interventions. Methods We adopted a structured additive logistic regression model that allowed for spatial correlation, to more realistically estimate malaria risk factors. Our model included child and household level covariates, as well as climatic and environmental factors. Continuous variables were modelled by assuming second order random walk priors, while spatial correlation was specified as a Markov random field prior, with fixed effects assigned diffuse priors. Inference was fully Bayesian resulting in an under five malaria risk map for Malawi. Results Malaria risk increased with increasing age of the child. With respect to socio-economic factors, the greater the household wealth, the lower the malaria prevalence. A general decline in malaria risk was observed as altitude increased. Minimum temperatures and average total rainfall in the three months preceding the survey did not show a strong association with disease risk. Conclusions The structured additive regression model offered a flexible extension to standard regression models by enabling simultaneous modelling of possible nonlinear effects of continuous covariates, spatial correlation and heterogeneity, while estimating usual fixed effects of categorical and continuous observed variables. Our results confirmed that malaria epidemiology is a complex interaction of biotic and abiotic factors, both at the individual, household and community level and that risk factors are still relevant many years after extensive implementation of RBM activities. PMID:24991915

  4. A synthesis of convenience survey and other data to estimate undiagnosed HIV infection among men who have sex with men in England and Wales.

    Science.gov (United States)

    Walker, Kate; Seaman, Shaun R; De Angelis, Daniela; Presanis, Anne M; Dodds, Julie P; Johnson, Anne M; Mercey, Danielle; Gill, O Noel; Copas, Andrew J

    2011-10-01

    Hard-to-reach population subgroups are typically investigated using convenience sampling, which may give biased estimates. Combining information from such surveys, a probability survey and clinic surveillance, can potentially minimize the bias. We developed a methodology to estimate the prevalence of undiagnosed HIV infection among men who have sex with men (MSM) in England and Wales aged 16-44 years in 2003, making fuller use of the available data than earlier work. We performed a synthesis of three data sources: genitourinary medicine clinic surveillance (11 380 tests), a venue-based convenience survey including anonymous HIV testing (3702 MSM) and a general population sexual behaviour survey (134 MSM). A logistic regression model to predict undiagnosed infection was fitted to the convenience survey data and then applied to the MSMs in the population survey to estimate the prevalence of undiagnosed infection in the general MSM population. This estimate was corrected for selection biases in the convenience survey using clinic surveillance data. A sensitivity analysis addressed uncertainty in our assumptions. The estimated prevalence of undiagnosed HIV in MSM was 2.4% [95% confidence interval (95% CI 1.7-3.0%)], and between 1.6% (95% CI 1.1-2.0%) and 3.3% (95% CI 2.4-4.1%) depending on assumptions; corresponding to 5500 (3390-7180), 3610 (2180-4740) and 7570 (4790-9840) men, and undiagnosed fractions of 33, 24 and 40%, respectively. Our estimates are consistent with earlier work that did not make full use of data sources. Reconciling data from multiple sources, including probability-, clinic- and venue-based convenience samples can reduce bias in estimates. This methodology could be applied in other settings to take full advantage of multiple imperfect data sources.

  5. Estimation of the radiation strength, dose equivalent and mean gamma-ray energy from p+²³⁸U fission products

    CERN Document Server

    Kawakami, H

    2003-01-01

    For 100 isobars with mass numbers from 72 to 171, the radiation strength, dose equivalent and mean gamma-ray energy from p+²³⁸U fission products at the Tandem accelerator facility were estimated on the basis of proton-induced fission mass-yield data by T. Tsukada. In order to control radiation, the decay curves of radiation for each mass after irradiation were estimated and illustrated. These calculation results showed that 1) the peaks of the p+²³⁸U fission products are at mass numbers 101 and 133; 2) the gamma-ray strength of the target ion source immediately after irradiation is 3.12×10¹¹ (radiations/s) for 4 repeated cycles of a UC₂ (2.6 g/cm²) target irradiated by 30 MeV, 3 μA protons for 5 days and then cooled for 2 days; it decreased to 3.85×10¹⁰ and 6.7×10⁹ (radiations/s) after one day and two weeks of cooling, respectively; 3) the total dose equivalent is 3.8×10⁴ (μSv/h) at 1 m distance without shielding; 4) there are no problems in controlling the following isobars, beca...
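
    Decay curves of the kind tabulated in the report follow the generic law A(t) = A₀·e^(−λt) with λ = ln 2 / T½. A sketch with hypothetical numbers (the report's curves sum many nuclides, so a single half-life is an illustration only):

```python
import math

def activity(a0, half_life_h, t_h):
    """A(t) = A0 * exp(-lambda * t), with lambda = ln(2) / half-life."""
    lam = math.log(2.0) / half_life_h
    return a0 * math.exp(-lam * t_h)

# Hypothetical single-component curve; real fission-product mixtures
# superpose one such exponential per nuclide.
a0 = 3.0e11                                    # decays/s after irradiation
after_1_day = activity(a0, half_life_h=30.0, t_h=24.0)
after_2_weeks = activity(a0, half_life_h=30.0, t_h=14.0 * 24.0)
```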

  6. Assessment of Current Global and Regional Mean Sea Level Estimates Based on the TOPEX/Poseidon Jason-1 and 2 Climate Data Record

    Science.gov (United States)

    Beckley, B. D.; Lemoine, F. G.; Zelensky, N. P.; Yang, X.; Holmes, S.; Ray, R. D.; Mitchum, G. T.; Desai, S.; Brown, S.; Haines, B.

    2011-01-01

    Recent developments in Precise Orbit Determination (POD), due in particular to revisions to the terrestrial reference frame realization and to time variable gravity (TVG), continue to improve the accuracy and stability of the orbits, directly affecting mean sea level (MSL) estimates. Credible long-term MSL estimates require the development and continued maintenance of a stable reference frame, along with vigilant monitoring of the performance of the independent tracking systems used to calculate the orbits for altimeter spacecraft. The stringent MSL accuracy requirements of a few tenths of a mm/yr are particularly essential for mass-budget closure analysis over the relatively short time period of coincident Jason-1 & 2, GRACE, and Argo measurements. In an effort to adhere to cross-mission consistency, we have generated a full time series of experimental orbits (GSFC std1110) for TOPEX/Poseidon (TP), Jason-1, and OSTM based on an improved terrestrial reference frame (TRF) realization (ITRF2008), a revised static gravity field (GGM03s), and a time variable gravity field (Eigen6s). In this presentation we assess the impact of the revised precision orbits on inter-mission bias estimates and the resultant global and regional MSL trends. Tide gauge verification results are shown to assess the current stability of the Jason-2 sea surface height time series, which suggests a possible discontinuity initiated in early 2010. Although the Jason-2 time series is relatively short (approximately 3 years), a thorough review of the entire suite of geophysical and environmental range corrections is warranted and is underway to maintain the fidelity of the record.
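
    The MSL rate itself is a linear trend fitted through the altimeter height series, which is why sub-mm/yr orbit and reference-frame stability matters. A least-squares sketch on synthetic data (not mission products):

```python
import math

def trend_mm_per_yr(t_years, msl_mm):
    """Ordinary least-squares slope of height against time."""
    n = len(t_years)
    mt = sum(t_years) / n
    mh = sum(msl_mm) / n
    return (sum((t - mt) * (h - mh) for t, h in zip(t_years, msl_mm))
            / sum((t - mt) ** 2 for t in t_years))

# Synthetic 17-year monthly record: 3.1 mm/yr rise plus an annual cycle.
t = [1993.0 + k / 12.0 for k in range(12 * 17)]
h = [3.1 * (tt - 1993.0) + 5.0 * math.sin(2.0 * math.pi * tt) for tt in t]

rate = trend_mm_per_yr(t, h)  # close to, but perturbed from, 3.1 mm/yr
```

    Even this idealized series recovers the trend only to within a few hundredths of a mm/yr because of the seasonal signal; real records add instrument drifts, inter-mission biases and correction errors on top.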

  7. DEVELOPMENT OF MATHEMATICAL MEANS FOR ESTIMATION OF ECOLOGICAL AND ECONOMICAL LOSSES FROM POLLUTION OF ATMOSPHERIC AIR IN ZONES OF TECHNOGENIC OBJECTS IMPACT

    Directory of Open Access Journals (Sweden)

    O. POPOV

    2015-10-01

    Full Text Available The article addresses one of the most important problems of rational use of natural resources. Modern mathematical tools for estimating the damage caused by atmospheric pollution to natural objects, together with methods for calculating the cost of their restoration, were developed. The solution of the problem was divided into three stages. At the first stage, the basic anthropogenic sources of pollution were defined, and the conceptual behavior of pollutants emitted into the atmosphere by a stationary point technological source was illustrated. The choice of a mathematical model that allows the distribution of pollutant concentrations in the air to be determined in zones polluted by stationary point sources during short-term discharges was justified. At the second stage, mathematical tools were developed to determine the level of damage to objects located in the pollution zone, depending on the intensity and duration of exposure to the technogenic sources. At the third stage, mathematical models were developed to determine the recoverable amount of natural objects depending on their level of damage. A model example of the use of the developed means was described, and the advantages of the developed means over existing analogs were noted.

  8. Chronic disease prevalence from Italian administrative databases in the VALORE project: a validation through comparison of population estimates with general practice databases and national survey

    Science.gov (United States)

    2013-01-01

    Background Administrative databases are widely available and have been extensively used to provide estimates of chronic disease prevalence for the purpose of surveillance of both geographical and temporal trends. There are, however, other sources of data available, such as medical records from primary care and national surveys. In this paper we compare disease prevalence estimates obtained from these three different data sources. Methods Data from general practitioners (GP) and administrative transactions for health services were collected from five Italian regions (Veneto, Emilia Romagna, Tuscany, Marche and Sicily) belonging to all the three macroareas of the country (North, Center, South). Crude prevalence estimates were calculated by data source and region for diabetes, ischaemic heart disease, heart failure and chronic obstructive pulmonary disease (COPD). For diabetes and COPD, prevalence estimates were also obtained from a national health survey. When necessary, estimates were adjusted for completeness of data ascertainment. Results Crude prevalence estimates of diabetes in administrative databases (range: from 4.8% to 7.1%) were lower than corresponding GP (6.2%-8.5%) and survey-based estimates (5.1%-7.5%). Geographical trends were similar in the three sources and estimates based on treatment were the same, while estimates adjusted for completeness of ascertainment (6.1%-8.8%) were slightly higher. For ischaemic heart disease administrative and GP data sources were fairly consistent, with prevalence ranging from 3.7% to 4.7% and from 3.3% to 4.9%, respectively. In the case of heart failure administrative estimates were consistently higher than GPs’ estimates in all five regions, the highest difference being 1.4% vs 1.1%. For COPD the estimates from administrative data, ranging from 3.1% to 5.2%, fell into the confidence interval of the Survey estimates in four regions, but failed to detect the higher prevalence in the most Southern region (4.0% in

  9. Fishery-independent surface abundance and density estimates of swordfish (Xiphias gladius) from aerial surveys in the Central Mediterranean Sea

    Science.gov (United States)

    Lauriano, Giancarlo; Pierantonio, Nino; Kell, Laurence; Cañadas, Ana; Donovan, Gregory; Panigada, Simone

    2017-07-01

    Fishery-independent surface density and abundance estimates for the swordfish were obtained through aerial surveys carried out over a large portion of the Central Mediterranean, implementing distance sampling methodologies. Both design- and model-based abundance and density showed an uneven occurrence of the species throughout the study area, with clusters of higher density occurring near converging fronts, strong thermoclines and/or underwater features. The surface abundance was estimated for the Pelagos Sanctuary for Mediterranean Marine Mammals in the summer of 2009 (n=1152; 95%CI=669.0-1981.0; %CV=27.64), the Sea of Sardinia, the Pelagos Sanctuary and the Central Tyrrhenian Sea for the summer of 2010 (n=3401; 95%CI=2067.0-5596.0; %CV=25.51), and for the Southern Tyrrhenian Sea during the winter months of 2010-2011 (n=1228; 95%CI=578-2605; %CV=38.59). The Mediterranean swordfish stock deserves special attention in light of the heavy fishing pressures. Furthermore, the unreliability of fishery-related data has, to date, hampered our ability to effectively inform long-term conservation in the Mediterranean Region. Considering that the European countries have committed to protect the resources and all the marine-related economic and social dynamics upon which they depend, the information presented here constitutes useful data toward the international legal requirements under the Marine Strategy Framework Directive, the Common Fisheries Policy, the Habitats and Species Directive and the Directive on Maritime Spatial Planning, among others.
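
    Design-based distance-sampling estimates build on the conventional line-transect estimator D = n / (2wLP̂). A sketch with hypothetical survey numbers (effort, strip width and detection probability are made up, not the study's):

```python
def line_transect_density(n, strip_half_width_km, effort_km, det_prob):
    """Animals per km^2: sightings divided by the effective area surveyed."""
    effective_area = 2.0 * strip_half_width_km * effort_km * det_prob
    return n / effective_area

# Hypothetical survey: 46 surface sightings along 3000 km of trackline,
# 500 m half-strip width, estimated detection probability 0.8.
density = line_transect_density(n=46, strip_half_width_km=0.5,
                                effort_km=3000.0, det_prob=0.8)
abundance = density * 60000.0  # scaled to a hypothetical 60,000 km^2 area
```

    In practice the detection probability is itself estimated by fitting a detection function to the perpendicular sighting distances, which is where the reported %CV largely comes from.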

  10. Assessment of Current Estimates of Global and Regional Mean Sea Level from the TOPEX/Poseidon, Jason-1, and OSTM 17-Year Record

    Science.gov (United States)

    Beckley, Brian D.; Ray, Richard D.; Lemoine, Frank G.; Zelensky, N. P.; Holmes, S. A.; Desai, Shailen D.; Brown, Shannon; Mitchum, G. T.; Jacob, Samuel; Luthcke, Scott B.

    2010-01-01

    The science value of satellite altimeter observations has grown dramatically over time as enabling models and technologies have increased the value of data acquired on both past and present missions. With the prospect of an observational time series extending into several decades from TOPEX/Poseidon through Jason-1 and the Ocean Surface Topography Mission (OSTM), and further in time with a future set of operational altimeters, researchers are pushing the bounds of current technology and modeling capability in order to monitor global sea level rate at an accuracy of a few tenths of a mm/yr. The measurement of mean sea-level change from satellite altimetry requires an extreme stability of the altimeter measurement system since the signal being measured is at the level of a few mm/yr. This means that the orbit and reference frame within which the altimeter measurements are situated, and the associated altimeter corrections, must be stable and accurate enough to permit a robust MSL estimate. Foremost, orbit quality and consistency are critical to satellite altimeter measurement accuracy. The orbit defines the altimeter reference frame, and orbit error directly affects the altimeter measurement. Orbit error remains a major component in the error budget of all past and present altimeter missions. For example, inconsistencies in the International Terrestrial Reference Frame (ITRF) used to produce the precision orbits at different times cause systematic inconsistencies to appear in the multimission time-frame between TOPEX and Jason-1, and can affect the intermission calibration of these data. In an effort to adhere to cross mission consistency, we have generated the full time series of orbits for TOPEX/Poseidon (TP), Jason-1, and OSTM based on recent improvements in the satellite force models, reference systems, and modeling strategies. 
The recent release of the entire revised Jason-1 Geophysical Data Records, and recalibration of the microwave radiometer correction also

  11. A Survey On Mean Glandular Dose From Full-Field Digital Mammography Systems Operated Using Mo/Mo And W/Rh Target/Filter Combinations

    International Nuclear Information System (INIS)

    Noriah Jamal; Siti Selina Abdul Hamid; Humairah Samad Cheung; Siti Kamariah Che Mohamed; Ellyda Muhammed Nordin; Radhiana Hassan; Rehir Dahalan

    2013-01-01

    We conducted a survey on Mean Glandular Dose (MGD) from Full-Field Digital Mammography (FFDM) systems operated using Molybdenum/Molybdenum (Mo/Mo) and Tungsten/Rhodium (W/Rh) target/filter combinations. The survey was carried out at two randomly selected mammography centres in Malaysia, namely the National Cancer Society and the International Islamic University of Malaysia. The first centre operates with a W/Rh, and the second with an Mo/Mo, target/filter combination. On the basis of recorded information, data on mammographic views, MGD, age and Compressed Breast Thickness (CBT) were recorded for 100 patients at each mammographic centre. The MGD data were analyzed for variation with age group, in 5-year increments, and for variation with CBT, in 5 mm increments. We found that for both CC and MLO views, FFDM systems operated using Mo/Mo and W/Rh target/filter combinations show the same trends in MGD: the average MGD decreases as age increases, and increases with increasing CBT. However, the FFDM system operated using Mo/Mo gives a higher MGD than the FFDM system operated using W/Rh. (author)

  12. Meaning in life and perceived quality of life in Switzerland: results of a representative survey in the German, French and Italian regions.

    Science.gov (United States)

    Bernard, Mathieu; Braunschweig, Giliane; Fegg, Martin Johannes; Borasio, Gian Domenico

    2015-09-29

    The concept of meaning in life (MIL) has become a central one in recent years, particularly in psycho-oncology and palliative care. The Schedule for Meaning in Life Evaluation (SMILE) was developed to allow individuals to choose the life areas that they consider important for their own MIL. This approach relates to the World Health Organisation definition of quality of life (QOL) as an individual's perception of his or her own position in life. The aims of this study were (i) to assess MIL in a representative sample of the Swiss population across the three linguistic regions and (ii) to evaluate whether MIL constitutes a significant determinant of perceived QOL. A telephone survey of the Swiss population, performed by a professional survey company, was conducted between November and December 2013. The interview included the SMILE, perceived QOL (0-10) and health status (1-5), and various sociodemographic variables. In the SMILE, an index of weighting (IOW, 20-100), an index of satisfaction (IOS, 0-100), and a total SMILE index (IOWS, 0-100) are calculated from the areas mentioned by the participants as providing MIL. Among the 6671 telephone contacts made, 1015 (15%) participants completed the survey: 405 French, 400 German and 210 Italian participants. "Family" (80.2%), "occupation/work" (51%), and "social relations" (43.3%) were the most frequently cited MIL-relevant categories. Italian participants listed "health" more often than German and French participants (50.4% vs 31.5% and 24.8% respectively, χ²(2) = 12.229, p = .002). Age, gender, education, employment, and marital status significantly influenced either the MIL scores or the MIL-relevant categories. Linear regression analyses indicate that 24.3% of the QOL variance (p < .001) is explained by health status (B = .609, 95% CI = .490-.728, p < .001), MIL (B = .034, 95% CI = .028-.041, p < .001) and socioeconomic status (F = 11.01, p < .001). The major finding of our

  13. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    International Nuclear Information System (INIS)

    Keller, Brad M.; Nathan, Diane L.; Wang Yan; Zheng Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina

    2012-01-01

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., “FOR PROCESSING”) and vendor postprocessed (i.e., “FOR PRESENTATION”), of which postprocessed images are commonly used in clinical practice. Development of an algorithm that effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air, followed by a straight-line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, an SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which
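The clustering stage described above can be sketched in a few lines. This is a minimal, hypothetical illustration of standard fuzzy c-means on 1-D gray levels only; it is not the authors' implementation, which additionally derives the number of clusters adaptively per image and adds the SVM classification step:

```python
import numpy as np

def fuzzy_cmeans_1d(x, k, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D gray levels: alternate membership and
    centroid updates (a sketch of the clustering stage only)."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))     # spread-out init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12  # (n, k) distances
        u = 1.0 / d ** (2.0 / (m - 1.0))                   # inverse-distance weights
        u /= u.sum(axis=1, keepdims=True)                  # memberships sum to 1
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return centers, u

# Toy gray-level data: two intensity populations (e.g. fat vs. dense tissue)
x = np.concatenate([np.full(50, 30.0), np.full(50, 200.0)])
centers, u = fuzzy_cmeans_1d(x, k=2)
```

On this toy input the two centroids settle onto the two intensity populations; pixels would then be assigned (softly) to clusters via the membership matrix `u`.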

  14. Bias and precision of methods for estimating the difference in restricted mean survival time from an individual patient data meta-analysis

    Directory of Open Access Journals (Sweden)

    Béranger Lueza

    2016-03-01

    Abstract. Background: The difference in restricted mean survival time, rmstD(t*), i.e. the area between two survival curves up to a time horizon t*, is often used in cost-effectiveness analyses to estimate the treatment effect in randomized controlled trials. A challenge in individual patient data (IPD) meta-analyses is to account for the trial effect. We aimed at comparing different methods to estimate rmstD(t*) from an IPD meta-analysis. Methods: We compared four methods: the area between Kaplan-Meier curves (experimental vs. control arm), ignoring the trial effect (Naïve Kaplan-Meier); the area between Peto curves computed at quintiles of event times (Peto-quintile); and the weighted average of the areas between either trial-specific Kaplan-Meier curves (Pooled Kaplan-Meier) or trial-specific exponential curves (Pooled Exponential). In a simulation study, we varied the between-trial heterogeneity for the baseline hazard and for the treatment effect (possibly correlated), the overall treatment effect, the time horizon t*, the number of trials and of patients, the use of a fixed effects or DerSimonian-Laird random effects model, and the proportionality of hazards. We compared the methods in terms of bias, empirical and average standard errors. For illustration, we used IPD from the Meta-Analysis of Chemotherapy in Nasopharynx Carcinoma (MAC-NPC) and its updated version MAC-NPC2, which included respectively 1,975 and 5,028 patients in 11 and 23 comparisons. Results: The Naïve Kaplan-Meier method was unbiased, whereas the Pooled Exponential and, to a much lesser extent, the Pooled Kaplan-Meier methods showed a bias with non-proportional hazards. The Peto-quintile method underestimated rmstD(t*), except with non-proportional hazards at t* = 5 years. In the presence of treatment effect
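The Naïve Kaplan-Meier estimate described in the abstract, the area between two step survival curves up to the horizon t*, reduces to rectangle sums. A minimal sketch with hypothetical toy curves (not MAC-NPC data):

```python
import numpy as np

def rmst(times, surv, t_star):
    """Restricted mean survival time: area under a right-continuous step
    survival curve S(t) from 0 to t_star (rectangle sums between steps)."""
    keep = times < t_star
    t = np.concatenate(([0.0], times[keep], [t_star]))
    s = np.concatenate(([1.0], surv[keep]))  # S(t) = 1 before the first event
    return float(np.sum(np.diff(t) * s))

def rmst_difference(km_exp, km_ctrl, t_star):
    """Naive Kaplan-Meier rmstD(t*): difference of pooled-arm areas,
    ignoring the trial effect."""
    return rmst(*km_exp, t_star) - rmst(*km_ctrl, t_star)

# Hypothetical step curves: (event times, survival just after each event)
exp_arm = (np.array([1.0, 3.0, 4.0]), np.array([0.9, 0.8, 0.6]))
ctrl_arm = (np.array([0.5, 2.0, 3.5]), np.array([0.8, 0.6, 0.4]))
d = rmst_difference(exp_arm, ctrl_arm, t_star=5.0)  # area between the curves
```

The pooled methods in the paper replace the single pooled curve pair with a trial-weighted average of per-trial areas computed the same way.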

  15. Geodetic Control Points, Benchmarks; Vertical elevation bench marks for monumented geodetic survey control points for which mean sea level elevations have been determined. Published in 1995, 1:24000 (1in=2000ft) scale, Rhode Island and Providence Plantations.

    Data.gov (United States)

    NSGIC State | GIS Inventory — Geodetic Control Points dataset current as of 1995. Benchmarks; Vertical elevation bench marks for monumented geodetic survey control points for which mean sea level...

  16. A national survey (NAP5-Ireland baseline) to estimate an annual incidence of accidental awareness during general anaesthesia in Ireland.

    LENUS (Irish Health Repository)

    Jonker, W R

    2014-06-29

    As part of the 5th National Audit Project of the Royal College of Anaesthetists and the Association of Anaesthetists of Great Britain and Ireland concerning accidental awareness during general anaesthesia, we issued a questionnaire to every consultant anaesthetist in each of 46 public hospitals in Ireland, represented by 41 local co-ordinators. The survey ascertained the number of new cases of accidental awareness becoming known to them for patients under their care or supervision for a calendar year, as well as their career experience. Consultants from all hospitals responded, with an individual response rate of 87% (299 anaesthetists). There were eight new cases of accidental awareness that became known to consultants in 2011; an estimated incidence of 1:23 366. Two out of the eight cases (25%) occurred at or after induction of anaesthesia, but before surgery; four cases (50%) occurred during surgery; and two cases (25%) occurred after surgery was complete, but before full emergence. Four cases were associated with pain or distress (50%), one after an experience at induction and three after experiences during surgery. There were no formal complaints or legal actions that arose in 2011 related to awareness. Depth of anaesthesia monitoring was reported to be available in 33 (80%) departments, and was used by 184 consultants (62%), 18 (6%) routinely. None of the 46 hospitals had a policy to prevent or manage awareness. Similar to the results of a larger survey in the UK, the disparity between the incidence of awareness as known to anaesthetists and that reported in trials warrants explanation. Compared with UK practice, there appears to be greater use of depth of anaesthesia monitoring in Ireland, although this is still infrequent.

  17. Estimation of Pap-test coverage in an area with an organised screening program: challenges for survey methods

    Directory of Open Access Journals (Sweden)

    Raggi Patrizio

    2006-03-01

    Abstract. Background: The cytological screening programme of Viterbo has completed the second round of invitations to the entire target population (age 25–64). From a public health perspective, it is important to know the Pap-test coverage rate and the use of opportunistic screening. The most commonly used study design is the survey, but the validity of self-reports and the assumptions made about non-respondents are often questioned. Methods: From the target population, 940 women were sampled and responded to a telephone interview about Pap-test utilisation. The answers were compared with the screening programme registry by matching the dates of Pap-tests reported by both sources. Sensitivity analyses were performed for coverage over a 36-month period, according to various assumptions regarding non-respondents. Results: The response rate was 68%. The coverage over 36 months was 86.4% if we assume that non-respondents had the same coverage as respondents, 66% if we assume they were not covered at all, and 74.6% if we adjust for screening compliance in the non-respondents. The sensitivity and specificity of the question "have you ever had a Pap test with the screening programme?" were 84.5% and 82.2%, respectively. The test dates reported in the interview tended to be more recent than those reported in the registry, but 68% were within 12 months of each other. Conclusion: Surveys are useful tools for understanding the effectiveness of a screening programme, and women's self-report was sufficiently reliable in our setting, but the coverage estimates were strongly influenced by the assumptions made regarding non-respondents.
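The sensitivity analysis for non-respondents amounts to a mixture calculation. The sketch below shows only the general bounding arithmetic under stated assumptions; the paper's published figures (66% and the 74.6% compliance-adjusted estimate) also reflect registry information and weighting that are omitted here:

```python
def blended_coverage(response_rate, cov_respondents, cov_nonrespondents):
    """Overall Pap-test coverage as a respondent/non-respondent mixture."""
    return (response_rate * cov_respondents
            + (1 - response_rate) * cov_nonrespondents)

# Bounds under the two extreme assumptions about non-respondents
same = blended_coverage(0.68, 0.864, 0.864)  # non-respondents identical
none = blended_coverage(0.68, 0.864, 0.0)    # non-respondents never screened
```

Intermediate assumptions about non-respondent coverage (for example, registry-known compliance) slot into the third argument.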

  18. Multilevel model to estimate county-level untreated dental caries among US children aged 6–9 years using the National Health and Nutrition Examination Survey.

    Science.gov (United States)

    Lin, Mei; Zhang, Xingyou; Holt, James B; Robison, Valerie; Li, Chien-Hsun; Griffin, Susan O

    2018-06-01

    Because conducting population-based oral health screening is resource intensive, oral health data at small-area levels (e.g., county level) are not commonly available. We applied the multilevel logistic regression and poststratification method to estimate county-level prevalence of untreated dental caries among children aged 6–9 years in the United States, using data from the National Health and Nutrition Examination Survey (NHANES) 2005-2010 linked with various area-level data at the census tract, county and state levels. We validated model-based national estimates against direct estimates from NHANES. We also compared model-based estimates with direct estimates from select State Oral Health Surveys (SOHS) at state and county levels. The model with individual-level covariates only and the model with individual-, census tract- and county-level covariates explained 7.2% and 96.3%, respectively, of the overall county-level variation in untreated caries. Model-based county-level prevalence estimates ranged from 4.9% to 65.2% with a median of 22.1%. The model-based national estimate (19.9%) matched the NHANES direct estimate (19.8%). We found significantly positive correlations between model-based estimates for 8-year-olds and direct estimates from the third-grade SOHS at state level for 34 states (Pearson coefficient: 0.54, P=0.001) and SOHS estimates at county level for 53 New York counties (Pearson coefficient: 0.38, P=0.006). This methodology could be a useful tool to characterize county-level disparities in untreated dental caries among children aged 6–9 years and to complement oral health surveillance to inform public health programs, especially when local-level data are not available, although the lack of external validation due to data unavailability should be acknowledged. Published by Elsevier Inc.
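Once the multilevel model has produced a predicted prevalence for each demographic stratum, the poststratification step is simply a population-weighted average over the strata in each county. A minimal sketch with hypothetical strata, prevalences and census counts (not values from the study):

```python
def poststratify(pred_prev, stratum_counts):
    """County-level prevalence as the population-weighted average of
    model-predicted prevalences across demographic strata."""
    total = sum(stratum_counts.values())
    return sum(pred_prev[s] * n for s, n in stratum_counts.items()) / total

# Hypothetical strata keyed by (age band, sex)
pred = {("6-7", "M"): 0.22, ("6-7", "F"): 0.18,
        ("8-9", "M"): 0.25, ("8-9", "F"): 0.20}
counts = {("6-7", "M"): 1200, ("6-7", "F"): 1100,
          ("8-9", "M"): 1300, ("8-9", "F"): 1250}
county_prev = poststratify(pred, counts)
```

Repeating this with each county's own census counts yields the county-level map, even for counties with no NHANES respondents.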

  19. Estimated Trans-Lamina Cribrosa Pressure Differences in Low-Teen and High-Teen Intraocular Pressure Normal Tension Glaucoma: The Korean National Health and Nutrition Examination Survey.

    Directory of Open Access Journals (Sweden)

    Si Hyung Lee

    Abstract. To investigate the association between estimated trans-lamina cribrosa pressure difference (TLCPD) and the prevalence of normal tension glaucoma (NTG) with low-teen and high-teen intraocular pressure (IOP) using a population-based study design. A total of 12,743 adults (≥ 40 years of age) who participated in the Korean National Health and Nutrition Examination Survey (KNHANES) from 2009 to 2012 were included. Using a previously developed formula, cerebrospinal fluid pressure (CSFP, mmHg) was estimated as 0.55 × body mass index (kg/m²) + 0.16 × diastolic blood pressure (mmHg) − 0.18 × age (years) − 1.91. TLCPD was calculated as IOP − CSFP. The NTG subjects were divided into two groups according to IOP level: low-teen NTG (IOP ≤ 15 mmHg) and high-teen NTG (15 mmHg < IOP ≤ 21 mmHg). The association between TLCPD and the prevalence of NTG was assessed in the low- and high-teen IOP groups. In the normal population (n = 12,069), the weighted mean estimated CSFP was 11.69 ± 0.04 mmHg and the weighted mean TLCPD 2.31 ± 0.06 mmHg. Significantly higher TLCPD (6.48 ± 0.27 mmHg; p < 0.001) was found in the high-teen NTG group compared with the normal group. On the other hand, there was no significant difference in TLCPD between normal and low-teen NTG subjects (2.31 ± 0.06 vs. 2.11 ± 0.24 mmHg; p = 0.395). Multivariate logistic regression analysis revealed that TLCPD was significantly associated with the prevalence of NTG in the high-teen IOP group (p = 0.006; OR: 1.09; 95% CI: 1.02, 1.15), but not in the low-teen IOP group (p = 0.636). Instead, the presence of hypertension was significantly associated with the prevalence of NTG in the low-teen IOP group (p < 0.001; OR: 1.65; 95% CI: 1.26, 2.16). TLCPD was significantly associated with the prevalence of NTG in high-teen IOP subjects, but not in low-teen IOP subjects, in whom hypertension may be more closely associated. This study suggests that the underlying mechanisms may differ between low-teen and high-teen NTG patients.
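The CSFP formula quoted in the abstract makes the TLCPD computation straightforward to reproduce. A small sketch; the example inputs are hypothetical, not study data:

```python
def estimated_csfp(bmi, dbp, age):
    """Estimated cerebrospinal fluid pressure (mmHg) using the formula
    quoted in the abstract: 0.55*BMI + 0.16*DBP - 0.18*age - 1.91."""
    return 0.55 * bmi + 0.16 * dbp - 0.18 * age - 1.91

def tlcpd(iop, bmi, dbp, age):
    """Trans-lamina cribrosa pressure difference: IOP - CSFP."""
    return iop - estimated_csfp(bmi, dbp, age)

# Hypothetical subject: age 60 y, BMI 24 kg/m2, DBP 80 mmHg, IOP 14 mmHg
csfp = estimated_csfp(24, 80, 60)
diff = tlcpd(14, 24, 80, 60)
```

For this hypothetical subject the estimated CSFP is 13.29 mmHg, giving a TLCPD of 0.71 mmHg, in the range the abstract reports for normal and low-teen NTG eyes.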

  20. Non-inductive components of electromagnetic signals associated with L'Aquila earthquake sequences estimated by means of inter-station impulse response functions

    Directory of Open Access Journals (Sweden)

    C. Di Lorenzo

    2011-04-01

    Abstract. On 6 April 2009 at 01:32:39 UT a strong earthquake occurred west of L'Aquila at the very shallow depth of 9 km. The main shock local magnitude was Ml = 5.8 (Mw = 6.3). Several powerful aftershocks occurred in the following days. The epicentre of the main shock was located 6 km from the Geomagnetic Observatory of L'Aquila, on a fault 15 km long with a NW-SE strike of about 140° and a SW dip of about 42°. For this reason, the L'Aquila seismic events offered very favourable conditions for detecting possible electromagnetic emissions related to the earthquake. The data used in this work come from the permanent geomagnetic observatories of L'Aquila and Duronia. We show results of the analysis of the residual magnetic field, estimated by means of inter-station impulse response functions in the frequency band from 0.3 Hz to 3 Hz.
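The inter-station idea can be sketched as a remote-reference frequency-response estimate: predict the local (L'Aquila) field from the remote (Duronia) one through an averaged transfer function, then examine the residual. This is a simplified, assumption-laden sketch (one real-valued channel, no windowing, tapering or band-passing to 0.3–3 Hz), not the authors' processing chain:

```python
import numpy as np

def estimate_response(remote, local, nseg=8):
    """Average cross- and auto-spectra over segments to obtain a stable
    inter-station frequency response H(f) = <Sxy>/<Sxx>."""
    n = len(remote) // nseg
    Sxy = np.zeros(n // 2 + 1, dtype=complex)
    Sxx = np.zeros(n // 2 + 1)
    for i in range(nseg):
        X = np.fft.rfft(remote[i * n:(i + 1) * n])
        Y = np.fft.rfft(local[i * n:(i + 1) * n])
        Sxy += Y * np.conj(X)
        Sxx += np.abs(X) ** 2
    return Sxy / (Sxx + 1e-20)

def residual_field(remote, local, nseg=8):
    """Residual = observed local field minus the part predictable from
    the remote station through H(f); local anomalies remain in it."""
    n = len(remote) // nseg
    H = estimate_response(remote, local, nseg)
    res = np.empty(nseg * n)
    for i in range(nseg):
        X = np.fft.rfft(remote[i * n:(i + 1) * n])
        res[i * n:(i + 1) * n] = (local[i * n:(i + 1) * n]
                                  - np.fft.irfft(H * X, n=n))
    return res

# Toy check: if the local field is just a scaled copy of the remote one,
# the estimated response absorbs it and the residual vanishes.
rng = np.random.default_rng(1)
remote = rng.standard_normal(1024)
local = 0.8 * remote
residual = residual_field(remote, local)
```

A genuinely local (e.g. seismo-electromagnetic) signal would be uncorrelated with the remote station and so survive in the residual.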

  1. Suicidal behaviours: Prevalence estimates from the second Australian Child and Adolescent Survey of Mental Health and Wellbeing.

    Science.gov (United States)

    Zubrick, Stephen R; Hafekost, Jennifer; Johnson, Sarah E; Lawrence, David; Saw, Suzy; Sawyer, Michael; Ainley, John; Buckingham, William J

    2016-09-01

    To (1) estimate the lifetime and 12-month prevalence of suicidal behaviours in Australian young people aged 12-17 years, (2) describe their co-morbidity with mental illness and (3) describe the co-variation of these estimates with social and demographic variables. A national random sample of children aged 4-17 years was recruited in 2013-2014. The response rate to the survey was 55%, with 6310 parents and carers of eligible households participating. In addition, of the 2967 young people aged 11-17 years in these households, 89% (2653) of the 12- to 17-year-olds completed a self-report questionnaire that included questions about suicidal behaviour. In any 12-month period, about 2.4%, or 41,400 young people, would have made a suicide attempt. About 7.5% of 12- to 17-year-olds reported suicidal ideation, 5.2% reported making a plan, and less than 1% (0.6%) reported receiving medical treatment for an attempt. The presence of a mental disorder shows the largest significant association with lifetime and 12-month suicidal behaviour, along with age, gender, sole-parent family status and poor family functioning. Of young people with a major depressive disorder, 19.7% reported making a suicide attempt within the previous 12 months. There are also significant elevations in the proportions of young people repor