H.E. Anderson; J. Breidenbach
2007-01-01
Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...
Institute of Scientific and Technical Information of China (English)
孙道德
2003-01-01
Based on the multinomial distribution and its properties, this paper analyzes the method of unequal-probability classified cluster sample surveys and the unbiased estimation of the mean, and further derives the variance of that estimator together with an unbiased estimator of the variance.
Annoyance survey by means of social media.
Silva, Bruno; Santos, Gustavo; Eller, Rogeria; Gjestland, Truls
2017-02-01
Social surveys have been the conventional means of evaluating the annoyance caused by transportation noise. Sampling and interviewing by telephone, mail, or in person are often costly and time consuming, however. Data collection by web-based survey methods is less costly and may be completed more quickly, and hence could be conducted in countries with fewer resources. Such methods, however, raise issues about the generalizability and comparability of findings. These issues were investigated in a study of the annoyance of aircraft noise exposure around Brazil's Guarulhos Airport. The findings of 547 interviews obtained with the aid of Facebook advertisements and web-based forms were analysed with respect to estimated aircraft noise exposure levels at respondents' residences. The results were analysed to assess whether and how web-based surveys might yield generalizable noise dose-response relationships.
Estimating stellar mean density through seismic inversions
Reese, D R; Goupil, M J; Thompson, M J; Deheuvels, S
2012-01-01
Determining the mass of stars is crucial both to improving stellar evolution theory and to characterising exoplanetary systems. Asteroseismology offers a promising way to estimate stellar mean density. When combined with accurate radii determinations, such as is expected from GAIA, this yields accurate stellar masses. The main difficulty is finding the best way to extract the mean density from a set of observed frequencies. We seek to establish a new method for estimating stellar mean density, which combines the simplicity of a scaling law while providing the accuracy of an inversion technique. We provide a framework in which to construct and evaluate kernel-based linear inversions which yield directly the mean density of a star. We then describe three different inversion techniques (SOLA and two scaling laws) and apply them to the sun, several test cases and three stars. The SOLA approach and the scaling law based on the surface correcting technique described by Kjeldsen et al. (2008) yield comparable result...
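The scaling-law baseline that the abstract contrasts with the SOLA inversion can be sketched as follows. This is the classic asteroseismic relation in which mean density scales with the square of the large frequency separation, normalized to solar values; the solar reference numbers below are approximate, and the inversion technique itself is not reproduced here.

```python
# Hedged sketch: the classic seismic scaling relation rho_mean ~ (Delta_nu)^2,
# normalized to solar values. This is the simple baseline the paper compares
# against, not the SOLA kernel-based inversion.

DNU_SUN_UHZ = 135.1   # solar large frequency separation (microHz), approximate
RHO_SUN_CGS = 1.408   # solar mean density (g/cm^3), approximate

def mean_density_from_scaling(delta_nu_uhz):
    """Estimate stellar mean density (g/cm^3) from the large separation."""
    return RHO_SUN_CGS * (delta_nu_uhz / DNU_SUN_UHZ) ** 2

# Example: a star with a somewhat smaller large separation than the Sun
rho = mean_density_from_scaling(104.5)
```

By construction the relation returns the solar mean density when fed the solar large separation, which is a convenient sanity check.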
Mean Velocity Estimation of Viscous Debris Flows
Institute of Scientific and Technical Information of China (English)
Hongjuan Yang; Fangqiang Wei; Kaiheng Hu
2014-01-01
The mean velocity estimation of debris flows, especially viscous debris flows, is an important part of debris flow dynamics research and of the design of control structures. In this study, theoretical equations for computing debris flow velocity with the one-phase flow assumption were reviewed and used to analyze field data of viscous debris flows. Results show that the viscous debris flow cannot strictly be classified as a Newtonian laminar flow, a Newtonian turbulent flow, a Bingham fluid, or a dilatant fluid. However, empirical formulas to compute its mean velocity can be established following equations for Newtonian turbulent flows, because most viscous debris flows are turbulent. Factors that potentially influence debris flow velocity were chosen according to two-phase flow theories. Through correlation analysis and data fitting, two empirical formulas were proposed. In the first one, velocity is expressed as a function of clay content, flow depth and channel slope. In the second one, a coefficient representing the grain size nonuniformity is used instead of clay content. Both formulas give reasonable estimates of the mean velocity of the viscous debris flow.
Recent advance in Mean Sea Surface estimates
Pujol, M. I.; Gerald, D.; Claire, D.; Raynal, M.; Faugere, Y.; Picot, N.; Guillot, A.
2016-12-01
Gridded Mean Sea Surface (MSS) estimation is an important issue for precise SLA computation along geodetic orbits. Previous studies emphasized that the error from MSS models older than Jason-1 GM was substantial: on average more than 10 to 15% of the SLA variance for wavelengths ranging from 30 to 150 km. Other MSS models have been released in the last two years; they use geodetic missions such as CryoSat-2 and Jason-1 GM, which strongly contribute to improving their resolution and accuracy. We evaluate in this paper the improvements of the recent MSS models. This study, mainly based on a spectral approach, allows us to quantify the errors at various wavelengths. The use of new missions (e.g. SARAL-DP/AltiKa; Sentinel-3A) with low instrumental noise levels (Ka-band, SAR) opens new perspectives to understand MSS errors and improve MSS estimates for wavelengths below 100 km.
U.S. Geological Survey, Department of the Interior — As part of the U.S. Geological Survey Groundwater Resources Program study of Appalachian Plateaus aquifers, mean-annual and mean-seasonal water-budget estimates for...
A FAMILY OF ESTIMATORS FOR ESTIMATING POPULATION MEAN IN STRATIFIED SAMPLING UNDER NON-RESPONSE
Directory of Open Access Journals (Sweden)
Manoj K. Chaudhary
2009-01-01
Khoshnevisan et al. (2007) proposed a general family of estimators for the population mean using known values of some population parameters in simple random sampling. The objective of this paper is to propose a family of combined-type estimators in stratified random sampling, adapting the family of estimators proposed by Khoshnevisan et al. (2007) under non-response. The properties of the proposed family are discussed. We also obtain expressions for the optimum sample sizes of the strata with respect to the cost of the survey. The results are supported by numerical analysis.
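A minimal sketch of the kind of combined-type estimator discussed above: a combined ratio estimator in stratified sampling, which adjusts the usual stratified mean by a known population mean of an auxiliary variable. The actual family of Khoshnevisan et al. (2007) is more general, and the non-response subsampling machinery is not reproduced here; all numbers below are made-up illustrations.

```python
import numpy as np

# Hedged sketch: a combined ratio-type estimator in stratified random sampling.
# W_h = N_h / N are stratum weights; ybar/xbar are stratum sample means of the
# study and auxiliary variables; X_mean is the known population mean of x.

def combined_ratio_estimate(W, ybar, xbar, X_mean):
    W, ybar, xbar = map(np.asarray, (W, ybar, xbar))
    y_st = np.sum(W * ybar)        # usual stratified mean of y
    x_st = np.sum(W * xbar)        # stratified mean of x
    return y_st * (X_mean / x_st)  # combined ratio estimator

W = [0.4, 0.35, 0.25]
ybar = [12.0, 15.5, 9.8]
xbar = [30.0, 41.0, 22.0]
est = combined_ratio_estimate(W, ybar, xbar, X_mean=33.0)
```

When the stratified mean of x happens to equal its known population mean, the ratio adjustment is neutral and the estimator reduces to the plain stratified mean.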
Estimates from two survey designs: national hospital discharge survey.
Haupt, B J; Kozak, L J
1992-05-01
The methodology for the National Hospital Discharge Survey (NHDS) has been revised in several ways. These revisions, which were implemented for the 1988 NHDS, included adoption of a different hospital sampling frame, changes in the sampling design (in particular the implementation of a three-stage design), increased use of data purchased from abstracting service organizations, and adjustments to the estimation procedures used to derive the national estimates. To investigate the effects of these revisions on the estimates of hospital use from the NHDS, data were collected from January through March of 1988 using both the old and the new survey methods. This study compared estimates based on the old and the new survey methods for a variety of hospital and patient characteristics. Although few estimates were identical across survey methodologies, most of the variations could be attributed to sampling error. Estimates from two different samples of the same population would be expected to vary by chance even if precisely the same methods were used to collect and process the data. Because probability samples were used for the old and new survey methodologies, sampling error could be measured. Approximate relative standard errors were calculated for the estimates using the old and new survey methods. Taking these errors into account, less than 10 percent of the estimates were found to differ across survey methodologies at the 0.05 level of significance. Because a large number of comparisons were made, 5 percent of the estimates could have been found to be significantly different by chance alone. When there were statistically significant differences in nonmedical data, the new methods appeared to produce more accurate estimates than the old methods did. Race was more likely to be reported using the new methods. "New" estimates for hospitals in the West Region and government-owned hospitals were more similar than the corresponding "old" estimates to data from the census of
Mean estimation in highly skewed samples
Energy Technology Data Exchange (ETDEWEB)
Pederson, S P
1991-09-01
The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution. 4 refs., 2 figs., 1 tab.
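The undercoverage problem described above is easy to demonstrate by simulation. Note that the report's absolute-Cauchy example has no finite mean, so coverage is not defined for it; the sketch below uses a lognormal distribution as a stand-in skewed population, which is an assumption of this illustration, not the report's setup.

```python
import numpy as np

# Hedged illustration: naive normal-theory confidence intervals undercover the
# mean of a highly skewed distribution because the right tail is under-sampled.
# Lognormal stand-in (assumption); the report itself uses an absolute-Cauchy
# example, whose mean does not exist.

rng = np.random.default_rng(0)
n, reps, sigma = 30, 2000, 1.5
true_mean = np.exp(sigma**2 / 2)   # mean of lognormal(0, sigma)

hits = 0
for _ in range(reps):
    x = rng.lognormal(0.0, sigma, size=n)
    half = 1.96 * x.std(ddof=1) / np.sqrt(n)  # naive 95% normal-theory interval
    hits += (x.mean() - half) <= true_mean <= (x.mean() + half)

coverage = hits / reps   # falls well short of the nominal 0.95
```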
Estimating spatial attribute means in a GIS environment
Institute of Scientific and Technical Information of China (English)
Christakos, George
2010-01-01
The estimation of geographical attributes is a crucial matter for many real-world problems, and the issue of accuracy stands out when the estimation is used for between-regions comparison. In this work, our concern is area attribute estimation in a GIS environment. We estimate the area attribute value with a mean Kriging technique, and the probability distribution of the estimate is derived. This is the best linear unbiased observed spatial population mean estimate and can be used in more relaxed situations than the block Kriging technique. Both theoretical analysis and empirical study show that the mean Kriging technique outperforms the ordinary Kriging, spatial random sampling, and simple random sampling techniques in estimating the observable spatial population mean across space.
Stereological estimation of nuclear mean volume in invasive meningiomas
DEFF Research Database (Denmark)
Madsen, C; Schrøder, H D
1996-01-01
A stereological estimation of nuclear mean volume in bone and brain invasive meningiomas was made. For comparison, the nuclear mean volume of benign meningiomas was estimated. The aim was to investigate whether this method could discriminate between these groups. We found that the nuclear mean volume in the bone and brain invasive meningiomas was larger than in the benign tumors. The difference was significant and moreover there was no overlap between the two groups. In the bone invasive meningiomas the nuclear mean volume appeared to be larger inside than outside the bone...
Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.
Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit
2013-08-01
In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
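The mean value coordinates whose gradients the paper bounds can be computed with Floater's tangent half-angle formula. The sketch below only verifies their defining properties (partition of unity and linear precision) for a point strictly inside a convex polygon; it does not reproduce the paper's interpolation error analysis.

```python
import numpy as np

# Hedged sketch: Floater's mean value coordinates for a point x strictly
# inside a convex polygon with vertices v_0..v_{n-1} (counterclockwise).
# w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - x|, where a_i is the angle
# at x spanned by edge (v_i, v_{i+1}); coordinates are the normalized w_i.

def mean_value_coords(verts, x):
    verts = np.asarray(verts, float)
    x = np.asarray(x, float)
    d = verts - x
    r = np.linalg.norm(d, axis=1)
    n = len(verts)
    ang = np.empty(n)
    for i in range(n):
        j = (i + 1) % n
        cosa = np.dot(d[i], d[j]) / (r[i] * r[j])
        ang[i] = np.arccos(np.clip(cosa, -1.0, 1.0))
    w = np.array([(np.tan(ang[i - 1] / 2) + np.tan(ang[i] / 2)) / r[i]
                  for i in range(n)])
    return w / w.sum()

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
lam = mean_value_coords(square, (0.3, 0.6))
```

The coordinates sum to one and reproduce the query point as a convex combination of the vertices, which is the linear-precision property used throughout the finite element analysis.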
A mean curvature estimate for cylindrically bounded submanifolds
Alias, Luis J
2010-01-01
We extend the estimate obtained in [1] for the mean curvature of a cylindrically bounded proper submanifold in a product manifold with an Euclidean space as one factor to a general product ambient space endowed with a warped product structure.
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Directory of Open Access Journals (Sweden)
Wenjuan Gong
2016-11-01
Human pose estimation refers to estimating the locations of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a particular category, for example, model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Modeling methods are categorized in two ways: top-down versus bottom-up methods, and generative versus discriminative methods. Considering that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and describes the error measures that are frequently used.
ASYMPTOTICS OF MEAN TRANSFORMATION ESTIMATORS WITH ERRORS IN VARIABLES MODEL
Institute of Scientific and Technical Information of China (English)
CUI Hengjian
2005-01-01
This paper addresses the estimation, and its asymptotics, of the mean transformation θ = E[h(X)] of a random variable X based on n i.i.d. observations from the errors-in-variables model Y = X + v, where v is a measurement error with a known distribution and h(·) is a known smooth function. The asymptotics are given for the deconvolution kernel estimator in the case of an ordinary smooth error distribution, and for the expectation extrapolation estimator in the case of a normal error distribution. Under some mild regularity conditions, consistency and asymptotic normality are obtained for both types of estimators. Simulations show they have good performance.
Sensitivity to Estimation Errors in Mean-variance Models
Institute of Scientific and Technical Information of China (English)
Zhi-ping Chen; Cai-e Zhao
2003-01-01
In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The change rate of the efficient portfolio's weights with respect to variations in risk-return estimates is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not very sensitive to estimation errors in means and variances. Moreover, we point out the extreme cases that might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
Minimum Mean Square Error Estimation Under Gaussian Mixture Statistics
Flam, John T; Kansanen, Kimmo; Ekman, Torbjorn
2011-01-01
This paper investigates the minimum mean square error (MMSE) estimation of x, given the observation y = Hx + n, when x and n are independent and Gaussian mixture (GM) distributed. The introduction of GM distributions represents a generalization of the more familiar and simpler Gaussian-signal, Gaussian-noise instance. We present the necessary theoretical foundation and derive the MMSE estimator for x in closed form. Furthermore, we provide upper and lower bounds for its mean square error (MSE). These bounds are validated through Monte Carlo simulations.
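The closed-form structure of the GM-prior MMSE estimator can be sketched in the scalar case: the posterior is again a mixture, and the estimate is a posterior-weighted combination of per-component linear estimates. For simplicity the noise below is a single Gaussian (an assumption of this sketch; the paper's GM noise expands into a larger mixture in the same way).

```python
import numpy as np

# Hedged sketch of the scalar closed-form MMSE estimator for y = x + n with a
# Gaussian-mixture prior on x and (here) a single Gaussian noise component.

def gm_mmse(y, pis, mus, sig2s, noise_var):
    pis, mus, sig2s = map(np.asarray, (pis, mus, sig2s))
    s2 = sig2s + noise_var                                   # evidence variances
    lik = pis * np.exp(-0.5 * (y - mus) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
    w = lik / lik.sum()                                      # posterior weights
    post_means = mus + sig2s / s2 * (y - mus)                # per-component MMSE
    return float(np.dot(w, post_means))

# With one component this reduces to the familiar linear (Wiener-type) estimate.
xhat = gm_mmse(1.2, [0.6, 0.4], [-1.0, 2.0], [0.5, 0.5], noise_var=0.3)
```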
Modified pendulum model for mean step length estimation.
González, Rafael C; Alvarez, Diego; López, Antonio M; Alvarez, Juan C
2007-01-01
Step length estimation is an important issue in areas such as gait analysis, sport training or pedestrian localization. It has been shown that the mean step length can be computed by means of a triaxial accelerometer placed near the center of gravity of the human body. Estimations based on the inverted pendulum model are prone to underestimate the step length, and must be corrected by calibration. In this paper we present a modified pendulum model in which all the parameters correspond to anthropometric data of the individual. The method has been tested with a set of volunteers, both males and females. Experimental results show that this method provides an unbiased estimation of the actual displacement with a standard deviation lower than 2.1%.
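The classic inverted-pendulum relation that the paper modifies can be sketched directly: step length follows from the pendulum (leg) length and the vertical excursion of the center of mass, typically obtained by double-integrating the vertical acceleration. The specific anthropometric corrections of the modified model are not detailed in the abstract, so only the baseline model is shown.

```python
import math

# Hedged sketch: baseline inverted-pendulum step length. l is the pendulum
# (leg) length in meters, h the vertical excursion of the center of mass in
# meters. The paper's modified model adds anthropometric parameters that are
# not reproduced here.

def pendulum_step_length(l, h):
    """Step length (m) from pendulum geometry: 2 * sqrt(2*l*h - h^2)."""
    return 2.0 * math.sqrt(2.0 * l * h - h * h)

step = pendulum_step_length(l=0.9, h=0.03)   # on the order of half a meter
```

The abstract notes that this baseline tends to underestimate the true step length, which is exactly the bias the calibration or the modified model corrects.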
Application of linear mean-square estimation in ocean engineering
Wang, Li-ping; Chen, Bai-yu; Chen, Chao; Chen, Zheng-shou; Liu, Gui-lin
2016-03-01
Long-term observational records for the sea areas of interest are often difficult or impossible to obtain in practical offshore and ocean engineering. In this paper, a method based on linear mean-square estimation is developed for extending short-term data records to long-term ones. Long-term data for the area of interest are constructed from long-term records at neighboring oceanographic stations, through correlation analysis of the different data series. This compensates both for time-series prediction methods' overdependence on the length of the data series and for the limit on the number of variables that can be adopted in a multiple linear regression model. Storm surge data collected from three oceanographic stations located on the Shandong Peninsula are taken as examples to analyze the effect of the number of reference stations (adjacent to the area of interest) and of the correlation coefficients between the sites selected for reference and for project construction. By comparing N-year return-period values calculated from observed raw data with those from data series extended by the linear mean-square estimation method, one can conclude that the method gives considerably good estimates in practical ocean engineering, in spite of different extreme value distributions for the raw and processed data.
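The linear mean-square estimation step can be sketched as the standard linear predictor fitted on the short overlapping record: the target series is reconstructed from the neighbors via x̂ = x̄ + C_xy C_yy⁻¹ (y − ȳ). The data below are synthetic and purely illustrative; the paper's station selection and return-period analysis are not reproduced.

```python
import numpy as np

# Hedged sketch: linear mean-square estimation of a target station's series
# from neighboring stations, with moments fitted on the short overlap period.

def fit_lmse(Y, x):
    """Y: (T, p) neighbor records over the overlap; x: (T,) target record."""
    Ym, xm = Y.mean(axis=0), x.mean()
    Cyy = np.atleast_2d(np.cov(Y, rowvar=False))             # (p, p)
    Cxy = (Y - Ym).T @ (x - xm) / (len(x) - 1)               # (p,)
    beta = np.linalg.solve(Cyy, Cxy)
    return lambda Ynew: xm + (np.atleast_2d(Ynew) - Ym) @ beta

rng = np.random.default_rng(1)
neighbors = rng.normal(size=(40, 2))                         # short overlap
target = 0.7 * neighbors[:, 0] - 0.2 * neighbors[:, 1] + 3.0 # exact linear link
predict = fit_lmse(neighbors, target)
extended = predict(rng.normal(size=(100, 2)))                # "long-term" series
```

With an exactly linear relationship the fitted predictor recovers the target without error; in practice the residual variance measures how much information the neighboring stations actually carry.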
A FAMILY OF ESTIMATORS FOR ESTIMATING POPULATION MEAN IN STRATIFIED SAMPLING UNDER NON-RESPONSE
Chaudhary, Manoj K.; Rajesh Singh; Rakesh K. Shukla; Mukesh Kumar; Florentin Smarandache
2015-01-01
Khoshnevisan et al. (2007) proposed a general family of estimators for population mean using known value of some population parameters in simple random sampling. The objective of this paper is to propose a family of combined-type estimators in stratified random sampling adapting the family of estimators proposed by Khoshnevisan et al. (2007) under non-response. The properties of proposed family have been discussed. We have also obtained the expressions for optimum sampl...
Minimum Mean-Square Error Single-Channel Signal Estimation
DEFF Research Database (Denmark)
Beierholm, Thomas
2008-01-01
...are expressed and in the way the estimator is approximated. The starting point of the first method is prior probability density functions for both signal and noise, and it is assumed that their Laplace transforms (moment generating functions) are available. The corresponding posterior mean integral that defines ... inference is performed by particle filtering. The speech model is a time-varying auto-regressive model reparameterized by formant frequencies and bandwidths. The noise is assumed non-stationary and white. Compared to using the AR coefficients directly, it is found very beneficial to perform particle filtering with the reparameterized speech model, because it is relatively straightforward to exploit prior information about formant features. A modified MMSE estimator is introduced, and the performance of the particle filtering algorithm is compared to a state-of-the-art hearing aid noise reduction...
Mean square convergence rates for maximum quasi-likelihood estimator
Directory of Open Access Journals (Sweden)
Arnoud V. den Boer
2015-03-01
In this note we study the behavior of maximum quasi-likelihood estimators (MQLEs) for a class of statistical models in which only knowledge about the first two moments of the response variable is assumed. This class includes, but is not restricted to, generalized linear models with general link function. Our main results concern guarantees on the existence, strong consistency and mean square convergence rates of MQLEs. The rates are obtained from first principles and are stronger than known a.s. rates. Our results find important application in sequential decision problems with parametric uncertainty arising in dynamic pricing.
Mean Field Games Models-A Brief Survey
Gomes, Diogo A.
2013-11-20
The mean-field framework was developed to study systems with an infinite number of rational agents in competition, which arise naturally in many applications. The systematic study of these problems was started in the mathematical community by Lasry and Lions, and independently around the same time in the engineering community by P. Caines, Minyi Huang, and Roland Malhamé. Since these seminal contributions, the research in mean-field games has grown exponentially, and in this paper we present a brief survey of mean-field models as well as recent results and techniques. In the first part of this paper, we study reduced mean-field games, that is, mean-field games which are written as a system of a Hamilton-Jacobi equation and a transport or Fokker-Planck equation. We start with the derivation of the models and describe some of the existence results available in the literature. Then we discuss the uniqueness of a solution and propose a definition of relaxed solution for mean-field games that allows one to establish uniqueness under minimal regularity hypotheses. A special class of mean-field games that we discuss in some detail is equivalent to the Euler-Lagrange equation of suitable functionals. We present in detail various additional examples, including extensions to population dynamics models. This section ends with a brief overview of the random variables point of view as well as some applications to extended mean-field games models. These extended models arise in problems where the costs incurred by the agents depend not only on the distribution of the other agents, but also on their actions. The second part of the paper concerns mean-field games in master form. These mean-field games can be modeled as a partial differential equation in an infinite dimensional space. We discuss both deterministic models as well as problems where the agents are correlated. We end the paper with a mean-field model for price impact. © 2013 Springer Science+Business Media New York.
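For concreteness, one common form of the reduced system mentioned above couples a backward Hamilton-Jacobi equation for the value function with a forward transport equation for the agent density. Sign and convention choices vary across the literature; this first-order sketch is not the paper's exact formulation.

```latex
% A reduced (first-order) mean-field game system: value function u, agent
% density m, Hamiltonian H, coupling F. Conventions vary across references.
\[
\begin{cases}
-\partial_t u + H(x, D_x u) = F(m), \\[4pt]
\partial_t m - \operatorname{div}\!\big(m \, D_p H(x, D_x u)\big) = 0,
\end{cases}
\qquad
u(x,T) = u_T(x), \quad m(x,0) = m_0(x).
\]
```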
Variable selection and estimation for longitudinal survey data
Wang, Li
2014-09-01
There is wide interest in studying longitudinal surveys where sample subjects are observed successively over time. Longitudinal surveys have been used in many areas today, for example, in the health and social sciences, to explore relationships or to identify significant variables in regression settings. This paper develops a general strategy for the model selection problem in longitudinal sample surveys. A survey weighted penalized estimating equation approach is proposed to select significant variables and estimate the coefficients simultaneously. The proposed estimators are design consistent and perform as well as the oracle procedure when the correct submodel is known. The estimating function bootstrap is applied to obtain the standard errors of the estimated parameters with good accuracy. A fast and efficient variable selection algorithm is developed to identify significant variables for complex longitudinal survey data. Simulated examples illustrate the usefulness of the proposed methodology under various model settings and sampling designs. © 2014 Elsevier Inc.
Estimating trends in the global mean temperature record
Poppick, Andrew; Moyer, Elisabeth J.; Stein, Michael L.
2017-06-01
Given uncertainties in physical theory and numerical climate simulations, the historical temperature record is often used as a source of empirical information about climate change. Many historical trend analyses appear to de-emphasize physical and statistical assumptions: examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for internal variability in nonparametric rather than parametric ways. However, given a limited data record and the presence of internal variability, estimating radiatively forced temperature trends in the historical record necessarily requires some assumptions. Ostensibly empirical methods can also involve an inherent conflict in assumptions: they require data records that are short enough for naive trend models to be applicable, but long enough for long-timescale internal variability to be accounted for. In the context of global mean temperatures, empirical methods that appear to de-emphasize assumptions can therefore produce misleading inferences, because the trend over the twentieth century is complex and the scale of temporal correlation is long relative to the length of the data record. We illustrate here how a simple but physically motivated trend model can provide better-fitting and more broadly applicable trend estimates and can allow for a wider array of questions to be addressed. In particular, the model allows one to distinguish, within a single statistical framework, between uncertainties in the shorter-term vs. longer-term response to radiative forcing, with implications not only on historical trends but also on uncertainties in future projections. We also investigate the consequence on inferred uncertainties of the choice of a statistical description of internal variability. While nonparametric methods may seem to avoid making explicit assumptions, we demonstrate how even misspecified parametric statistical methods, if attuned to the
Perry, Charles A.; Wolock, David M.; Artman, Joshua C.
2004-01-01
Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled flow stream locations. The contributing-drainage areas of 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States that had flow uncontrolled by Federal reservoirs and used in the regression analyses ranged from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction range of 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled flow streams on the 1999 Kansas Surface Water Register. Flow durations, mean
A Bayesian model for estimating population means using a link-tracing sampling design.
St Clair, Katherine; O'Connell, Daniel
2012-03-01
Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied.
Clinical aspects of hemimegalencephaly by means of a nationwide survey.
Sasaki, Masayuki; Hashimoto, Toshiaki; Furushima, Wakana; Okada, Mari; Kinoshita, Satoru; Fujikawa, Yoshinao; Sugai, Kenji
2005-04-01
We surveyed Japanese patients with hemimegalencephaly by means of a questionnaire. Clinical findings, including intellectual and motor function levels and epileptic symptoms, were investigated. All 44 patients (28 males and 16 females) with hemimegalencephaly were sporadic. Sixteen patients had underlying neurocutaneous syndromes. The number of patients with right-sided hemimegalencephaly (n = 29) was almost twice that of patients with left-sided hemimegalencephaly (n = 15). Forty-one patients had mental retardation and hemiparesis and 14 patients were bedridden. All patients had epileptic seizures, which first appeared within a month in 18 cases and within 6 months in 11 cases. In 42 patients, magnetic resonance imaging revealed both cortical and white-matter abnormalities in the affected hemisphere. Antiepileptic drugs were not very effective. Fifteen patients were surgically treated. Eleven patients underwent functional hemispherectomy, which resulted in fairly good seizure control and improved development. There is a correlation between the onset of epilepsy and the degree of clinical severity of motor deficit and intellectual level. Neither underlying disorders nor laterality of the affected side was related to the degree of clinical severity.
Improved Estimators of the Mean of a Normal Distribution with a Known Coefficient of Variation
Directory of Open Access Journals (Sweden)
Wuttichai Srisodaphol
2012-01-01
Full Text Available This paper finds estimators of the mean θ for a normal distribution with mean θ and variance aθ², a>0, θ>0. These estimators are proposed when the coefficient of variation is known. Mean square error (MSE) is the criterion used to evaluate the estimators. The results show that the proposed estimators are preferable in asymptotic comparisons. Moreover, the estimator based on the jackknife technique is preferable to the other proposed estimators in some simulation studies.
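A minimal Monte Carlo sketch of the setting above (not the paper's exact estimators): when the coefficient of variation is known, so that Var(X) = aθ², the shrunken linear estimator c·x̄ with c = n/(n+a) has smaller MSE than the plain sample mean. All parameter values are illustrative assumptions.

```python
import random
import statistics

def simulate(theta=5.0, a=1.0, n=10, reps=50_000, seed=42):
    """Compare MSE of the sample mean vs. the known-CV shrinkage estimator."""
    rng = random.Random(seed)
    sd = (a * theta**2) ** 0.5          # known-CV variance: a * theta^2
    c = n / (n + a)                     # MSE-optimal shrinkage factor
    se_mean = se_shrunk = 0.0
    for _ in range(reps):
        xbar = statistics.fmean(rng.gauss(theta, sd) for _ in range(n))
        se_mean += (xbar - theta) ** 2
        se_shrunk += (c * xbar - theta) ** 2
    return se_mean / reps, se_shrunk / reps

mse_mean, mse_shrunk = simulate()
print(mse_mean, mse_shrunk)   # the shrunken estimator's MSE is lower
```

The shrinkage factor follows from minimizing c²·aθ²/n + (c−1)²θ² over c, which is only possible because a (hence the CV) is assumed known.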
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. © 2011, The International Biometric Society.
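A toy illustration of the problem described above (an assumed setup, not Fewster's striplet estimator): for a strongly trended population, the true variance of a systematic sample mean is far smaller than what the usual simple-random-sampling formula reports, so approximating the design by a random one over-reports the variance.

```python
import statistics

population = [float(i) for i in range(1000)]   # strongly trended "density"
k = 10                                         # k possible systematic samples
n = len(population) // k

sys_means, srs_formula = [], []
for start in range(k):                         # enumerate all k systematic samples
    sample = population[start::k]
    sys_means.append(statistics.fmean(sample))
    # SRS-style variance estimate of the mean: s^2 / n
    # (finite-population correction ignored for simplicity)
    srs_formula.append(statistics.variance(sample) / n)

true_sys_var = statistics.pvariance(sys_means)  # exact design-based variance
avg_srs_estimate = statistics.fmean(srs_formula)
print(true_sys_var, avg_srs_estimate)           # true variance is far smaller
```

Because every systematic sample spans the whole trend, the k sample means barely differ, while the within-sample spread that drives the SRS formula stays large.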
Estimation of the mean energy of muons in multilayer detectors
Barnaveli, A T; Khaldeeva, I V; Eristavi, N A
1995-01-01
The technique of muon mean energy determination in multilayer detectors is developed. The mean energy is estimated by means of the average size of small bursts, m̄, i.e., the mean number of electrons and positrons generated by muons in the detecting layers of the device via three basic processes: creation of e^+e^- pairs, δ-electrons and bremsstrahlung. The accuracy of the method is considered. Key words: muon energy, multilayer detectors.
LEAF AREA ESTIMATION IN LITCHI BY MEANS OF ALLOMETRIC RELATIONSHIPS
Directory of Open Access Journals (Sweden)
PABLO SOUTO OLIVEIRA
Full Text Available ABSTRACT Obtaining leaf area is critical in several agronomic studies, being one of the important instruments to assess plant growth. The aim of this study was to estimate equations and select the most appropriate for determining leaf area in litchi (Litchi chinensis Sonn.). From the linear dimensions of length (L) and maximum width (W) of the leaf blade, equations were estimated using linear, quadratic, potential and exponential models. The linear regression equation using the product of the length by maximum width, given by Y = 0.2885 + 0.662(L·W), is the one that best expresses the leaf area estimation of the litchi tree.
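The fitted equation reported above, wrapped as a small helper. Only the coefficients come from the abstract; the function name and the units (cm for inputs, cm² for output) are assumptions.

```python
def litchi_leaf_area(length_cm: float, width_cm: float) -> float:
    """Estimate litchi leaf area via the reported fit Y = 0.2885 + 0.662*(L*W)."""
    return 0.2885 + 0.662 * (length_cm * width_cm)

# Hypothetical leaf measuring 10 cm long and 3 cm at its widest point
print(litchi_leaf_area(10.0, 3.0))
```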
Directory of Open Access Journals (Sweden)
Jambulingam Subramani
2013-10-01
Full Text Available The present paper deals with a modified ratio estimator for estimation of population mean of the study variable when the population median of the auxiliary variable is known. The bias and mean squared error of the proposed estimator are derived and are compared with that of existing modified ratio estimators for certain known populations. Further we have also derived the conditions for which the proposed estimator performs better than the existing modified ratio estimators. From the numerical study it is also observed that the proposed modified ratio estimator performs better than the existing modified ratio estimators for certain known populations.
Estimation of a multivariate mean under model selection uncertainty
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2014-05-01
Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same data set, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James–Stein theory of estimating three or more parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
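A hedged sketch of the classical James–Stein estimator that motivates the model-averaging proposal above (this is the textbook estimator, not the authors' scheme). For p ≥ 3 independent N(θᵢ, 1) observations, shrinking toward zero lowers total squared error on average; the parameter values below are illustrative assumptions.

```python
import random

def james_stein(x):
    """Positive-part James-Stein shrinkage toward zero for unit-variance data."""
    p = len(x)
    s2 = sum(v * v for v in x)               # ||x||^2
    factor = max(0.0, 1.0 - (p - 2) / s2)    # positive-part shrinkage factor
    return [factor * v for v in x]

rng = random.Random(1)
theta = [0.5] * 10                           # true means, p = 10
mle_err = js_err = 0.0
for _ in range(20_000):
    x = [rng.gauss(t, 1.0) for t in theta]
    js = james_stein(x)
    mle_err += sum((xi - t) ** 2 for xi, t in zip(x, theta))
    js_err += sum((ji - t) ** 2 for ji, t in zip(js, theta))
print(mle_err, js_err)                       # total JS error is smaller
```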
On the Performance of Principal Component Liu-Type Estimator under the Mean Square Error Criterion
Directory of Open Access Journals (Sweden)
Jibo Wu
2013-01-01
Full Text Available Wu (2013) proposed an estimator, the principal component Liu-type estimator, to overcome multicollinearity. This estimator is a general estimator which includes the ordinary least squares estimator, principal component regression estimator, ridge estimator, Liu estimator, Liu-type estimator, r-k class estimator, and r-d class estimator. In this paper, we first use a new method to propose the principal component Liu-type estimator; we then study the superiority of the new estimator under the scalar mean squared error criterion. Finally, we give a numerical example to illustrate the theoretical results.
A Novel and Simple Means to Estimate Asteroid Thermal Inertia
Drube, Line; Harris, Alan
2016-10-01
Calculating accurate values of thermal inertia for asteroids is a difficult process requiring a shape model, thermal-infrared observations of the object obtained over broad ranges of rotation period and aspect angle, and detailed thermophysical modeling. Consequently, reliable thermal inertia values are currently available for relatively few asteroids. On the basis of simple asteroid thermal modeling we have developed an empirical relationship enabling the thermal inertia of an asteroid to be estimated given adequate measurements of its thermal-infrared continuum and knowledge of its spin vector. In particular, our thermal-inertia estimator can be applied to hundreds of objects in the WISE cryogenic archive (limited by the availability of spin vectors). To test the accuracy of our thermal-inertia estimator we have used it to estimate thermal inertia for near-Earth asteroids, main-belt asteroids, Centaurs, and trans-Neptunian objects with known thermal inertia values derived from detailed thermophysical modeling. In nearly all cases the estimates agree within the error bars with the values derived from thermophysical modeling.
Being surveyed can change later behavior and related parameter estimates
Zwane, Alix Peterson; Zinman, Jonathan; Van Dusen, Eric; Pariente, William; Null, Clair; Miguel, Edward; Kremer, Michael; Hornbeck, Richard; Giné, Xavier; Duflo, Esther; Devoto, Florencia; Crepon, Bruno; Banerjee, Abhijit
2011-01-01
Does completing a household survey change the later behavior of those surveyed? In three field studies of health and two of microlending, we randomly assigned subjects to be surveyed about health and/or household finances and then measured subsequent use of a related product with data that does not rely on subjects' self-reports. In the three health experiments, we find that being surveyed increases use of water treatment products and take-up of medical insurance. Frequent surveys on reported diarrhea also led to biased estimates of the impact of improved source water quality. In two microlending studies, we do not find an effect of being surveyed on borrowing behavior. The results suggest that limited attention could play an important but context-dependent role in consumer choice, with the implication that researchers should reconsider whether, how, and how much to survey their subjects. PMID:21245314
ESTIMATION OF INSULATOR CONTAMINATIONS BY MEANS OF REMOTE SENSING TECHNIQUE
Directory of Open Access Journals (Sweden)
G. Han
2016-06-01
Full Text Available The accurate estimation of deposits adhering on insulators is critical to prevent pollution flashovers, which cause huge costs worldwide. The traditional evaluation method of insulator contaminations (IC) is based on sparse manual in-situ measurements, resulting in insufficient spatial representativeness and poor timeliness. Filling that gap, we proposed a novel evaluation framework of IC based on remote sensing and data mining. A variety of products derived from satellite data, such as aerosol optical depth (AOD), digital elevation model (DEM), land use and land cover, and normalized difference vegetation index, were obtained to estimate the severity of IC along with the necessary field investigation inventory (pollution sources, ambient atmosphere and meteorological data). Rough set theory was utilized to minimize input sets under the prerequisite that the resultant set is equivalent to the full sets in terms of the decision ability to distinguish severity levels of IC. We found that AOD, the strength of pollution sources and precipitation are the top 3 decisive factors in estimating insulator contaminations. On that basis, different classification algorithms such as Mahalanobis minimum distance, support vector machine (SVM) and maximum likelihood were utilized to estimate severity levels of IC. 10-fold cross-validation was carried out to evaluate the performances of the different methods. SVM yielded the best overall accuracy among the three algorithms. An overall accuracy of more than 70% was witnessed, suggesting a promising application of remote sensing in power maintenance. To our knowledge, this is the first trial to introduce remote sensing and relevant data analysis techniques into the estimation of electrical insulator contaminations.
Estimation of Insulator Contaminations by Means of Remote Sensing Technique
Han, Ge; Gong, Wei; Cui, Xiaohui; Zhang, Miao; Chen, Jun
2016-06-01
The accurate estimation of deposits adhering on insulators is critical to prevent pollution flashovers, which cause huge costs worldwide. The traditional evaluation method of insulator contaminations (IC) is based on sparse manual in-situ measurements, resulting in insufficient spatial representativeness and poor timeliness. Filling that gap, we proposed a novel evaluation framework of IC based on remote sensing and data mining. A variety of products derived from satellite data, such as aerosol optical depth (AOD), digital elevation model (DEM), land use and land cover, and normalized difference vegetation index, were obtained to estimate the severity of IC along with the necessary field investigation inventory (pollution sources, ambient atmosphere and meteorological data). Rough set theory was utilized to minimize input sets under the prerequisite that the resultant set is equivalent to the full sets in terms of the decision ability to distinguish severity levels of IC. We found that AOD, the strength of pollution sources and precipitation are the top 3 decisive factors in estimating insulator contaminations. On that basis, different classification algorithms such as Mahalanobis minimum distance, support vector machine (SVM) and maximum likelihood were utilized to estimate severity levels of IC. 10-fold cross-validation was carried out to evaluate the performances of the different methods. SVM yielded the best overall accuracy among the three algorithms. An overall accuracy of more than 70% was witnessed, suggesting a promising application of remote sensing in power maintenance. To our knowledge, this is the first trial to introduce remote sensing and relevant data analysis techniques into the estimation of electrical insulator contaminations.
Improved Estimation of Subsurface Magnetic Properties using Minimum Mean-Square Error Methods
Energy Technology Data Exchange (ETDEWEB)
Saether, Bjoern
1997-12-31
This thesis proposes an inversion method for the interpretation of complicated geological susceptibility models. The method is based on constrained Minimum Mean-Square Error (MMSE) estimation. The MMSE method allows the incorporation of available prior information, i.e., the geometries of the rock bodies and their susceptibilities. Uncertainties may be included into the estimation process. The computation exploits the subtle information inherent in magnetic data sets in an optimal way in order to tune the initial susceptibility model. The MMSE method includes a statistical framework that allows the computation not only of the estimated susceptibilities, given by the magnetic measurements, but also of the associated reliabilities of these estimations. This allows the evaluation of the reliabilities in the estimates before any measurements are made, an option, which can be useful for survey planning. The MMSE method has been tested on a synthetic data set in order to compare the effects of various prior information. When more information is given as input to the estimation, the estimated models come closer to the true model, and the reliabilities in their estimates are increased. In addition, the method was evaluated using a real geological model from a North Sea oil field, based on seismic data and well information, including susceptibilities. Given that the geometrical model is correct, the observed mismatch between the forward calculated magnetic anomalies and the measured anomalies causes changes in the susceptibility model, which may show features of interesting geological significance to the explorationists. Such magnetic anomalies may be due to small fractures and faults not detectable on seismic, or local geochemical changes due to the upward migration of water or hydrocarbons. 76 refs., 42 figs., 18 tabs.
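A minimal scalar sketch of the MMSE idea described above (an assumed simplification of the thesis's matrix formulation): a prior susceptibility s ~ N(μ₀, τ²) observed through a measurement y = h·s + noise, noise ~ N(0, σ²). The posterior variance does not depend on the measured value y, which is why reliabilities can be computed before any measurements are made, as the abstract notes. All numerical values are invented for illustration.

```python
def mmse_update(mu0, tau2, h, sigma2, y):
    """Gaussian MMSE (posterior mean/variance) for scalar y = h*s + noise."""
    post_var = 1.0 / (h * h / sigma2 + 1.0 / tau2)    # independent of y
    post_mean = post_var * (h * y / sigma2 + mu0 / tau2)
    return post_mean, post_var

# Prior guess 0.01 SI with sd 0.005; one hypothetical magnetic reading y = 1.2
mean, var = mmse_update(mu0=0.01, tau2=0.005**2, h=100.0, sigma2=0.25, y=1.2)
print(mean, var)   # posterior mean moves toward y/h; variance shrinks below tau^2
```

The prior here plays the role of the geometric model and susceptibilities from seismic and well data; the measurement tunes that prior, and the posterior variance quantifies the reliability of the tuned estimate.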
Directory of Open Access Journals (Sweden)
Nubé Maarten
2009-06-01
Full Text Available Abstract Background As poverty and hunger are basic yardsticks of underdevelopment and destitution, the need for reliable statistics in this domain is self-evident. While the measurement of poverty through surveys is relatively well documented in the literature, for hunger, information is much scarcer, particularly for adults, and very different methodologies are applied for children and adults. Our paper seeks to improve on this practice in two ways. First, we estimate the prevalence of undernutrition in sub-Saharan Africa (SSA) for both children and adults based on anthropometric data available at province or district level; second, we estimate the mean calorie intake and implied calorie gap for SSA, also using anthropometric data at the same geographical aggregation level. Results Our main results are, first, that we find a much lower prevalence of hunger than presented in the Millennium Development reports (17.3% against 27.8% for the continent as a whole). Secondly, we find that there is much less spread in mean calorie intake across the continent than reported by the Food and Agriculture Organization (FAO) in the State of Food and Agriculture, 2007, the only estimate that covers the whole of Africa. While FAO estimates for calorie availability vary from a low of 1760 Kcal/capita/day for Central Africa to a high of 2825 Kcal/capita/day for Southern Africa, our estimates lie in a range of 2245 Kcal/capita/day (Eastern Africa) to 2618 Kcal/capita/day (Southern Africa). Thirdly, we validate the main data sources used (the Demographic and Health Surveys) by comparing them over time and with other available data sources for various countries. Conclusion We conclude that the picture of Africa that emerges from anthropometric data is much less negative than that usually presented. Especially for Eastern and Central Africa, the nutritional status is less critical than commonly assumed and also mean calorie intake is higher, which implies
Mean Value Estimates of the Error Terms of Lehmer Problem
Indian Academy of Sciences (India)
Dongmei Ren; Yaming Lu
2010-09-01
Let $p$ be an odd prime and $a$ be an integer coprime with $p$. Denote by $N(a,p)$ the number of pairs of integers $b,c$ with $bc\equiv a\ (\mathrm{mod}\ p)$, $1\le b,c < p$, and with $b,c$ having different parity. The main purpose of this paper is to study the mean square value problem of $N(a,p)-\frac{1}{2}(p-1)$ over the interval $(N, N+M]$ with $N, M$ positive integers by using analytic methods, and finally to obtain a sharp asymptotic formula.
Eigenvalue estimates for submanifolds with bounded $f$-mean curvature
Indian Academy of Sciences (India)
GUANGYUE HUANG; BINGQING MA
2017-04-01
In this paper, we obtain an extrinsic lower bound for the first non-zero eigenvalue of the $f$-Laplacian on complete noncompact submanifolds of the weighted Riemannian manifold ($H^{m}(-1), e^{-f} dv$) with respect to the $f$-mean curvature. In particular, our results generalize those of Cheung and Leung in Math. Z. 236 (2001) 525–530.
Directory of Open Access Journals (Sweden)
Akhtar R. Siddique
2000-03-01
Full Text Available This paper develops a filtering-based framework of non-parametric estimation of parameters of a diffusion process from the conditional moments of discrete observations of the process. This method is implemented for interest rate data in the Eurodollar and long term bond markets. The resulting estimates are then used to form non-parametric univariate and bivariate interest rate models and compute prices for the short term Eurodollar interest rate futures options and long term discount bonds. The bivariate model produces prices substantially closer to the market prices.
An alternative procedure for estimating the population mean in simple random sampling
Directory of Open Access Journals (Sweden)
Housila P. Singh
2012-03-01
Full Text Available This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. First we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically we show that the proposed class of estimators is more efficient than various known estimators, including the Gupta and Shabbir (2008) estimator.
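A sketch of the classical ratio estimator ȳ_R = ȳ·(X̄/x̄) that this literature refines (the correction and the proposed class above are not reproduced). When the study and auxiliary variables are strongly correlated and the population mean X̄ is known, the ratio estimator beats the plain sample mean. The synthetic population is an assumption for illustration.

```python
import random
import statistics

rng = random.Random(7)
N, n = 5000, 50
x = [rng.uniform(10, 50) for _ in range(N)]            # auxiliary variable
y = [2.0 * xi + rng.gauss(0, 5) for xi in x]           # correlated study variable
Xbar, Ybar = statistics.fmean(x), statistics.fmean(y)  # Xbar assumed known

se_plain = se_ratio = 0.0
reps = 5000
for _ in range(reps):
    idx = rng.sample(range(N), n)                      # simple random sample
    xbar = statistics.fmean(x[i] for i in idx)
    ybar = statistics.fmean(y[i] for i in idx)
    se_plain += (ybar - Ybar) ** 2                     # plain sample mean error
    se_ratio += (ybar * Xbar / xbar - Ybar) ** 2       # ratio estimator error
print(se_plain / reps, se_ratio / reps)  # ratio estimator has the smaller MSE
```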
Revising probability estimates: Why increasing likelihood means increasing impact.
Maglio, Sam J; Polman, Evan
2016-08-01
Forecasted probabilities rarely stay the same for long. Instead, they are subject to constant revision; as they move upward or downward, uncertain events become more or less likely. Yet little is known about how people interpret probability estimates beyond static snapshots, like a 30% chance of rain. Here, we consider the cognitive, affective, and behavioral consequences of revisions to probability forecasts. Stemming from a lay belief that revisions signal the emergence of a trend, we find in 10 studies (comprising uncertain events such as weather, climate change, sex, sports, and wine) that upward changes to event-probability (e.g., increasing from 20% to 30%) cause events to feel less remote than downward changes (e.g., decreasing from 40% to 30%), and subsequently change people's behavior regarding those events despite the revised event-probabilities being the same. Our research sheds light on how revising the probabilities for future events changes how people manage those uncertain events. (PsycINFO Database Record)
Distance estimation experiment for aerial minke whale surveys
Directory of Open Access Journals (Sweden)
Lars Witting
2009-09-01
Full Text Available A comparative study between aerial cue-counting and digital photography surveys for minke whales conducted in Faxaflói Bay in September 2003 is used to check the perpendicular distances estimated by the cue-counting observers. The study involved 2 aircraft, with the photo plane at 1,700 feet flying above the cue-counting plane at 750 feet. The observer-based distance estimates were calculated from head angles estimated by angle-boards and declination angles estimated by declinometers. These distances were checked against image-based estimates of the perpendicular distance to the same whale. The 2 independent distance estimates were obtained for 21 sightings of minke whale, and there was good agreement between the 2 types of estimates. The relative absolute deviations between the 2 estimates were on average 23% (se: 6%), with the errors in the observer-based distance estimates resembling a log-normal distribution. The linear regression of the observer-based estimates (Obs) on the image-based estimates (Img) was Obs = 1.1·Img (R² = 0.85), with the intercept fixed at zero. There was no evidence of a distance estimation bias that could generate a positive bias in the absolute abundance estimated by cue-counting.
Directory of Open Access Journals (Sweden)
Patrick Habecker
Full Text Available Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys--by asking a representative sample to estimate the number of people they know who are members of such a "hidden" subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult to predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation "trimming" to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights.
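For concreteness, the basic scale-up estimator that NSUM builds on (the classical formulation, not the new estimator or trimming procedure proposed above): scale the fraction of hidden-group members in respondents' networks up to the whole population. The survey numbers below are invented for illustration.

```python
def nsum_estimate(hidden_known, network_sizes, population_size):
    """Basic network scale-up estimate of hidden-population size:
    N * sum(m_i) / sum(c_i), where m_i is the number of hidden-group
    members respondent i knows and c_i is respondent i's network size."""
    return population_size * sum(hidden_known) / sum(network_sizes)

m = [2, 0, 1, 3, 0]            # hidden-group alters reported by 5 respondents
c = [100, 150, 250, 300, 200]  # respondents' estimated personal network sizes
print(nsum_estimate(m, c, population_size=10_000))  # → 60.0
```

The abstract's point is that estimating the c_i (the "scaling variable") is itself error-prone, so how that variable enters the estimator, and which predictors are trimmed, drives the bias.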
Comparative Study of Complex Survey Estimation Software in ONS
Directory of Open Access Journals (Sweden)
Andy Fallows
2015-09-01
Full Text Available Many official statistics across the UK Government Statistical Service (GSS) are produced using data collected from sample surveys. These survey data are used to estimate population statistics through weighting and calibration techniques. For surveys with complex or unusual sample designs, the weighting can be fairly complicated. Even in more simple cases, appropriate software is required to implement survey weighting and estimation. As with other stages of the survey process, it is preferable to use a standard, generic calibration tool wherever possible. Standard tools allow for efficient use of resources and assist with the harmonisation of methods. In the case of calibration, the Office for National Statistics (ONS) has experience of using the Statistics Canada Generalized Estimation System (GES) across a range of business and social surveys. GES is a SAS-based system and so is only available in conjunction with an appropriate SAS licence. Given recent initiatives and encouragement to investigate open source solutions across government, it is appropriate to determine whether there are any open source calibration tools available that can provide the same service as GES. This study compares the use of GES with the calibration tool 'R evolved Generalized software for sampling estimates and errors in surveys' (ReGenesees) available in R, an open source statistical programming language which is beginning to be used in many statistical offices. ReGenesees is a free R package which has been developed by the Italian statistics office (Istat) and includes functionality to calibrate survey estimates using similar techniques to GES. This report describes analysis of the performance of ReGenesees in comparison to GES to calibrate a representative selection of ONS surveys. Section 1.1 provides a brief introduction to the current use of SAS and R in ONS. Section 2 describes GES and ReGenesees in more detail. Sections 3.1 and 3.2 consider methods for
Estimating mean plant cover from different types of cover data: a coherent statistical framework
National Research Council Canada - National Science Library
Damgaard, C
2014-01-01
Plant cover is measured by different methods and it is important to be able to estimate mean cover and to compare estimates of plant cover across different sampling methods in a coherent statistical framework...
WAVELET-BASED ESTIMATORS OF MEAN REGRESSION FUNCTION WITH LONG MEMORY DATA
Institute of Scientific and Technical Information of China (English)
LI Lin-yuan; XIAO Yi-min
2006-01-01
This paper provides an asymptotic expansion for the mean integrated squared error (MISE) of nonlinear wavelet-based mean regression function estimators with long memory data. This MISE expansion, when the underlying mean regression function is only piecewise smooth, is the same as the analogous expansion for the kernel estimators. However, for the kernel estimators, this MISE expansion generally fails if the additional smoothness assumption is absent.
Jackson, C; Jatulis, D E; Fortmann, S P
1992-01-01
BACKGROUND. Nearly all state health departments collect Behavioral Risk Factor Survey (BRFS) data, and many report using these data in public health planning. Although the BRFS is widely used, little is known about its measurement properties. This study compares the cardiovascular risk behavior estimates of the BRFS with estimates derived from the physiological and interview data of the Stanford Five-City Project Survey (FCPS). METHOD. The BRFS is a random telephone sample of 1588 adults aged 25 to 64; the FCPS is a random household sample of 1512 adults aged 25 to 64. Both samples were drawn from the same four California communities. RESULTS. The surveys produced comparable estimates for measures of current smoking, number of cigarettes smoked per day, rate of ever being told one has high blood pressure, rate of prescription of blood pressure medications, compliance in taking medications, and mean total cholesterol. Significant differences were found for mean body mass index, rates of obesity, and, in particular, rate of controlled hypertension. CONCLUSIONS. These differences indicate that, for some risk variables, the BRFS has limited utility in assessing public health needs and setting public health objectives. A formal validation study is needed to test all the risk behavior estimates measured by this widely used instrument. PMID:1536358
Habecker, Patrick; Dombrowski, Kirk; Khan, Bilal
2015-01-01
Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys—by asking a representative sample to estimate the number of people they know who are members of such a “hidden” subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult to predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation “trimming” to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights. PMID:26630261
Estimation of finite population mean with known coefficient of variation of an auxiliary character
Directory of Open Access Journals (Sweden)
Housila P. Singh
2007-10-01
Full Text Available This paper deals with the problem of estimating the population mean Y of the study variate y using information on the population mean X and coefficient of variation Cx of an auxiliary character x. We have suggested an estimator for Y and its properties are studied in the context of single sampling. It is shown that the proposed estimator is more efficient than the Sisodia and Dwivedi (1981) estimator and the Pandey and Dubey (1988) estimator under some realistic conditions. An empirical study is carried out to examine the merits of the constructed estimator over others.
Ishikawa, Tetsuo; Yasumura, Seiji; Ozasa, Kotaro; Kobashi, Gen; Yasuda, Hiroshi; Miyazaki, Makoto; Akahane, Keiichi; Yonai, Shunsuke; Ohtsuru, Akira; Sakai, Akira; Sakata, Ritsu; Kamiya, Kenji; Abe, Masafumi
2015-01-01
The Fukushima Health Management Survey (including the Basic Survey for external dose estimation and four detailed surveys) was launched after the Fukushima Dai-ichi Nuclear Power Plant accident. The Basic Survey consists of a questionnaire that asks Fukushima Prefecture residents about their behavior in the first four months after the accident; and responses to the questionnaire have been returned from many residents. The individual external doses are estimated by using digitized behavior data and a computer program that included daily gamma ray dose rate maps drawn after the accident. The individual external doses of 421,394 residents for the first four months (excluding radiation workers) had a distribution as follows: 62.0%, <1 mSv; 94.0%, <2 mSv; 99.4%, <3 mSv. The arithmetic mean and maximum for the individual external doses were 0.8 and 25 mSv, respectively. While most dose estimation studies were based on typical scenarios of evacuation and time spent inside/outside, the Basic Survey estimated doses considering individually different personal behaviors. Thus, doses for some individuals who did not follow typical scenarios could be revealed. Even considering such extreme cases, the estimated external doses were generally low and no discernible increased incidence of radiation-related health effects is expected. PMID:26239643
The estimation of 550 km x 550 km mean gravity anomalies. [from free atmosphere gravimetry data]
Williamson, M. R.; Gaposchkin, E. M.
1975-01-01
The calculation of 550 km x 550 km mean gravity anomalies from 1 degree x 1 degree mean free-air gravimetry data is discussed. The block estimate procedure developed by Kaula was used, and estimates for 1,452 of the 1,654 blocks were obtained.
Estimated mean annual natural ground-water recharge in the conterminous United States
U.S. Geological Survey, Department of the Interior — This 1-kilometer resolution raster (grid) dataset is an index of mean annual natural ground-water recharge. The dataset was created by multiplying a grid of...
Directory of Open Access Journals (Sweden)
O. F. Shikhova
2012-01-01
The paper considers research findings aimed at developing a new quality-testing technique for student assessment in technical higher education. A model of multilevel estimation means (assessment tools) is provided for diagnosing the level of general cultural and professional competences of students taking a bachelor degree in technological fields. The model reflects the integrative character of specialist training, combining psycho-pedagogic (invariable) and engineering (variable) components, as well as a qualimetric approach that substantiates the system of student competence estimation and provides the most adequate assessment means. The principles of designing multilevel estimation means are defined, along with methodological approaches to their implementation. For a reasoned selection of estimation means, the authors propose a system of quality criteria based on group expert assessment. The research findings can be used for designing competence-oriented estimation means.
New aerial survey and hierarchical model to estimate manatee abundance
Langtimm, Catherine A.; Dorazio, Robert M.; Stith, Bradley M.; Doyle, Terry J.
2011-01-01
Monitoring the response of endangered and protected species to hydrological restoration is a major component of the adaptive management framework of the Comprehensive Everglades Restoration Plan. The endangered Florida manatee (Trichechus manatus latirostris) lives at the marine-freshwater interface in southwest Florida and is likely to be affected by hydrologic restoration. To provide managers with prerestoration information on distribution and abundance for postrestoration comparison, we developed and implemented a new aerial survey design and hierarchical statistical model to estimate and map abundance of manatees as a function of patch-specific habitat characteristics, indicative of manatee requirements for offshore forage (seagrass), inland fresh drinking water, and warm-water winter refuge. We estimated the number of groups of manatees from dual-observer counts and estimated the number of individuals within groups by removal sampling. Our model is unique in that we jointly analyzed group and individual counts using assumptions that allow probabilities of group detection to depend on group size. Ours is the first analysis of manatee aerial surveys to model spatial and temporal abundance of manatees in association with habitat type while accounting for imperfect detection. We conducted the study in the Ten Thousand Islands area of southwestern Florida, USA, which was expected to be affected by the Picayune Strand Restoration Project to restore hydrology altered for a failed real-estate development. We conducted 11 surveys in 2006, spanning the cold, dry season and warm, wet season. To examine short-term and seasonal changes in distribution we flew paired surveys 1–2 days apart within a given month during the year. Manatees were sparsely distributed across the landscape in small groups. Probability of detection of a group increased with group size; the magnitude of the relationship between group size and detection probability varied among surveys.
Global mean estimation using a self-organizing dual-zoning method for preferential sampling.
Pan, Yuchun; Ren, Xuhong; Gao, Bingbo; Liu, Yu; Gao, YunBing; Hao, Xingyao; Chen, Ziyue
2015-03-01
Giving an appropriate weight to each sampling point is essential to global mean estimation. The objective of this paper was to develop a global mean estimation method for preferential samples. The procedure for this estimation method was first to zone the study area using a self-organizing dual-zoning method and then to estimate the mean according to the stratified sampling method. In this method, the spread of points in both feature and geographical space is considered. The method is tested in a case study of metal Mn concentrations in Jilin Province, China. Six sample patterns are selected to estimate the global mean and compared with the global mean calculated by the direct arithmetic mean method, the polygon method, and the cell method. The results show that the proposed method produces more accurate and stable mean estimates under different feature deviation index (FDI) values and sample sizes. The relative errors of the global mean calculated by the proposed method range from 0.14 to 1.47 %, whereas they are largest (4.83-8.84 %) for the direct arithmetic mean method. At the same time, the mean results calculated by the other three methods are sensitive to the FDI values and sample sizes.
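The stratified-estimation step described above can be sketched as follows. The zoning itself (the self-organizing dual-zoning method) is assumed to be given; the function and data below are illustrative only.

```python
import numpy as np

def stratified_mean(values, zone_labels, zone_weights):
    """Global mean estimate from preferential samples after zoning:
    average the sample mean of each zone, weighted by that zone's share
    of the study area (zone_weights[z] = area share of zone z).
    Illustrative stratified-sampling step; the paper's zoning comes
    from a self-organizing dual-zoning method not shown here."""
    est = 0.0
    for z, w in zone_weights.items():
        est += w * values[zone_labels == z].mean()
    return est

values = np.array([1.0, 1.2, 0.8, 5.0, 5.5])   # sampled Mn-like values
labels = np.array([0, 0, 0, 1, 1])             # zone of each sample
# Zone 1 is oversampled relative to its 20% area share, so a plain
# arithmetic mean would be biased upward; the weighted mean corrects it.
mean_est = stratified_mean(values, labels, {0: 0.8, 1: 0.2})
```

Here the direct arithmetic mean is 2.7, while the area-weighted stratified estimate is 1.85, illustrating how zoning downweights the preferentially sampled zone.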
Directory of Open Access Journals (Sweden)
Subhash Kumar Yadav
2014-01-01
This manuscript deals with the estimation of the population mean of the variable under study using an improved ratio-type estimator that utilizes the known values of the median and coefficient of variation of the auxiliary variable. The expressions for the bias and mean square error (MSE) of the proposed estimator are obtained up to the first order of approximation. The optimum estimator is obtained for the optimum value of the constant of the estimator, and its optimum properties are studied. It is shown that the proposed estimator is better than the existing ratio estimators in the literature. To justify the improvement of the proposed estimator over others, an empirical study is also carried out.
Revised estimation of 550-km x 550-km mean gravity anomalies
Williamson, M. R.
1977-01-01
The calculation of 550-km x 550-km mean gravity anomalies from 1 degree x 1 degree mean free-air gravimetry data is discussed. The block estimate procedure developed by Kaula is used to obtain 1,504 of the 1,654 possible mean block anomalies. The estimated block anomalies calculated from 1 deg x 1 deg mean anomalies referred to the reference ellipsoid and from 1 degree x 1 degree mean anomalies referred to a 24th-degree-and-order field are compared.
Gazoorian, Christopher L.
2015-01-01
The lakes, rivers, and streams of New York State provide an essential water resource for the State. The information provided by time series hydrologic data is essential to understanding ways to promote healthy instream ecology and to strengthen the scientific basis for sound water management decision making in New York. The U.S. Geological Survey, in cooperation with The Nature Conservancy and the New York State Energy Research and Development Authority, has developed the New York Streamflow Estimation Tool to estimate a daily mean hydrograph for the period from October 1, 1960, to September 30, 2010, at ungaged locations across the State. The New York Streamflow Estimation Tool produces a complete estimated daily mean time series from which daily flow statistics can be estimated. In addition, the New York Streamflow Estimation Tool provides a means for quantitative flow assessments at ungaged locations that can be used to address the objectives of the Clean Water Act—to restore and maintain the chemical, physical, and biological integrity of the Nation’s waters.
Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea
2014-03-15
To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24 % in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
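The optimal-allocation idea behind this design can be sketched numerically. The following is a simplified, illustrative Neyman-type allocation in which the phase-two selection probability is proportional to the conditional standard deviation of Y given W divided by the square root of the measurement cost; the function and numbers are hypothetical, and the paper's optimality results are more general.

```python
import numpy as np

def optimal_phase2_probs(sd_y_given_w, cost, n_phase2, cap=1.0):
    """Neyman-type phase-two selection probabilities: measure Y with
    probability proportional to SD(Y|W)/sqrt(cost), scaled so the
    expected phase-two sample size equals n_phase2. Simplified sketch
    of the optimal-allocation idea, not the paper's exact design."""
    raw = sd_y_given_w / np.sqrt(cost)
    probs = n_phase2 * raw / raw.sum()
    return np.clip(probs, 0.0, cap)   # probabilities cannot exceed 1

# Four participants with model-based conditional SDs of Y given W
# (values invented) and equal measurement costs.
sd = np.array([1.0, 2.0, 4.0, 1.0])
cost = np.ones(4)
p = optimal_phase2_probs(sd, cost, n_phase2=2)
```

Participants whose endpoint is least predictable from the auxiliary variable are sampled with the highest probability, which is the source of the efficiency gain over simple random sampling.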
Suresh K. Shrestha; Robert C. Burns
2012-01-01
We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...
Estimating trends in alligator populations from nightlight survey data
Fujisaki, Ikuko; Mazzotti, Frank J.; Dorazio, Robert M.; Rice, Kenneth G.; Cherkiss, Michael; Jeffery, Brian
2011-01-01
Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals showed either an increasing or an unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.
Five Year Mean Surface Chlorophyll Estimates in the Northern Gulf of Mexico for 2005 through 2009
National Oceanic and Atmospheric Administration, Department of Commerce — These images were created by combining the mean surface chlorophyll estimates to produce seasonal representations for winter, spring, summer and fall. Winter...
DIDRO Project – New means for surveying dikes and similar flood defense structures
Directory of Open Access Journals (Sweden)
Miquel Thibaut
2016-01-01
The 36-month-long DIDRO project seeks to apply existing developments in remote sensing by drone to dike surveys, whether as routine inspection or in relation to a flood crisis. Drones offer a new, complementary means of surveying that can map broad areas efficiently while being more flexible and easier to operate than other airborne means. The system shall consist of a drone vector, dedicated sensors (such as LiDAR and visible, near-infrared, and thermal-infrared optics), data-processing models with analytics specific to dike surveying, and finally a GIS capitalizing all appropriate data for dike managers.
Mean likelihood estimation of target micro-motion parameters in laser detection
Guo, Liren; Hu, Yihua; Wang, Yunpeng
2016-10-01
Maximum likelihood estimation (MLE) is the optimal estimator for micro-Doppler feature extraction. However, the enormous computational burden of the grid search and the existence of many local maxima of the highly nonlinear cost function are harmful to accurate estimation. A new method combining mean likelihood estimation (MELE) and the Monte Carlo (MC) method is proposed to solve this problem. A closed-form expression to evaluate the parameters that maximize the cost function is derived. Then the compressed likelihood function is designed to obtain the global maximum. Finally, the parameters are estimated by calculating the circular mean of the samples obtained from the MC method. The high dependence on accurate initial values and the computational complexity of iterative algorithms are avoided in this method. Applied to simulated and experimental data, the proposed method achieves performance similar to MLE with less computation. Meanwhile, the method guarantees global convergence and joint parameter estimation.
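The final step, taking the circular mean of Monte Carlo samples of a periodic parameter, can be sketched generically. This is the standard circular-mean formula, not the paper's full estimator, and the sample values are invented.

```python
import numpy as np

def circular_mean(samples, period):
    """Circular mean of parameter samples defined modulo `period`
    (e.g. an initial phase in [0, 2*pi)). Averaging on the circle
    avoids the wrap-around bias of the arithmetic mean. Generic
    sketch of this one step of a MELE-style scheme."""
    ang = 2.0 * np.pi * np.asarray(samples) / period
    mean_ang = np.arctan2(np.sin(ang).mean(), np.cos(ang).mean())
    return (mean_ang % (2.0 * np.pi)) * period / (2.0 * np.pi)

# Samples straddling the wrap-around point 0 / 2*pi: the arithmetic
# mean is ~3.15 (meaningless), the circular mean is near zero.
vals = np.array([0.1, 6.2, 0.05, 6.25])
m = circular_mean(vals, period=2.0 * np.pi)
```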
On Optimal Multichannel Mean-Squared Error Estimators for Speech Enhancement
Hendriks, R.C.; Heusdens, R.; Kjems, U.; Jensen, J.
2009-01-01
In this letter we present discrete Fourier transform (DFT) domain minimum mean-squared error (MMSE) estimators for multichannel noise reduction. The estimators are derived assuming that the clean speech magnitude DFT coefficients are generalized-Gamma distributed.
Simulation-Extrapolation for Estimating Means and Causal Effects with Mismeasured Covariates
Lockwood, J. R.; McCaffrey, Daniel F.
2015-01-01
Regression, weighting and related approaches to estimating a population mean from a sample with nonrandom missing data often rely on the assumption that conditional on covariates, observed samples can be treated as random. Standard methods using this assumption generally will fail to yield consistent estimators when covariates are measured with…
Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries
McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.
2013-01-01
Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
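The Monte Carlo comparison of sampling designs can be sketched as follows, using a hypothetical day of hourly instantaneous angler counts. This illustrates why systematic sampling (SYS) tends to yield a lower MSE than simple random sampling (SRS) for effort estimation when effort follows a smooth daily trend; it is not the study's exact simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_design_mse(counts, n_samples, design, reps=2000):
    """MSE of total-effort estimates from instantaneous counts sampled
    under simple random (SRS) or systematic (SYS) designs. Illustrative
    of the abstract's simulation approach, not the study's setup."""
    hours = len(counts)
    true_effort = counts.sum()            # angler-hours for hourly counts
    errs = []
    for _ in range(reps):
        if design == "SRS":
            idx = rng.choice(hours, size=n_samples, replace=False)
        else:                             # SYS: random start, fixed interval
            step = hours // n_samples
            start = rng.integers(step)
            idx = np.arange(start, hours, step)[:n_samples]
        est = counts[idx].mean() * hours  # expand sample mean to the day
        errs.append(est - true_effort)
    return np.mean(np.square(errs))

# Hypothetical hourly angler counts over a 12-hour day with a midday peak.
counts = np.array([2, 4, 8, 12, 15, 16, 15, 12, 8, 5, 3, 2], dtype=float)
mse_srs = simulate_design_mse(counts, n_samples=4, design="SRS")
mse_sys = simulate_design_mse(counts, n_samples=4, design="SYS")
```

Because each systematic draw spreads its four counts across the morning rise, midday peak, and evening decline, its estimates vary far less than those from unrestricted random hours.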
Visibility Estimation for the CHARA/JouFLU Exozodi Survey
Nuñez, Paul D.; ten Brummelaar, Theo; Mennesson, Bertrand; Scott, Nicholas J.
2017-02-01
We discuss the estimation of the interferometric visibility (fringe contrast) for the Exozodi survey conducted at the CHARA array with the JouFLU beam combiner. We investigate the use of the statistical median to estimate the uncalibrated visibility from an ensemble of fringe exposures. Under a broad range of operating conditions, numerical simulations indicate that this estimator has a smaller bias compared with other estimators. We also propose an improved method for calibrating visibilities, which not only takes into account the time interval between observations of calibrators and science targets, but also the uncertainties of the calibrators' raw visibilities. We test our methods with data corresponding to stars that do not display the exozodi phenomenon. The results of our tests show that the proposed method yields smaller biases and errors. The relative reduction in bias and error is generally modest, but can be as high as ∼20%–40% for the brightest stars of the CHARA data and statistically significant at the 95% confidence level (CL).
Directory of Open Access Journals (Sweden)
A. K. Katiyar, Akhilesh Kumar, C. K. Pandey, V. K. Katiyar, S. H. Abdi
2010-09-01
The time-dependent monthly mean hourly diffuse solar radiation on a horizontal surface has been estimated for Lucknow (latitude 26.75 degrees, longitude 80.50 degrees) using least squares regression analysis. The monthly and annual regression constants are obtained. The present results are compared with the estimates of Orgill and Hollands (Sol. Energy 19(4), 357 (1977)), Erbs et al. (Sol. Energy 28(4), 293-304 (1982)), and Spencer (Sol. Energy 29(1), 19-32 (1982)), as well as with experimental values. The proposed constants provide better estimation for the entire year than the others. Spencer, who correlates the hourly diffuse fraction with the clearness index, estimates the lowest values except in summer, when insolation in this region is very high. The accuracy of the regression constants is also checked with statistical tests: root mean square error (RMSE), mean bias error (MBE) and t-statistic tests.
Age synthesis and estimation via faces: a survey.
Fu, Yun; Guo, Guodong; Huang, Thomas S
2010-11-01
Human age, as an important personal trait, can be directly inferred from distinct patterns emerging in the facial appearance. Driven by rapid advances in computer graphics and machine vision, computer-based age synthesis and estimation via faces have become particularly prevalent topics recently because of their explosively emerging real-world applications, such as forensic art, electronic customer relationship management, security control and surveillance monitoring, biometrics, entertainment, and cosmetology. Age synthesis is defined as rerendering a face image aesthetically with natural aging and rejuvenating effects on the individual face. Age estimation is defined as labeling a face image automatically with the exact age (year) or the age group (year range) of the individual face. Because of their particularity and complexity, both problems are attractive yet challenging to computer-based application system designers. Substantial effort from both academia and industry has been devoted over the past few decades. In this paper, we survey the complete state-of-the-art techniques in face image-based age synthesis and estimation. Existing models, popular algorithms, system performances, technical difficulties, popular face aging databases, evaluation protocols, and promising future directions are also provided with systematic discussions.
Eyles, Helen; Neal, Bruce; Jiang, Yannan; Ni Mhurchu, Cliona
2016-05-28
Population exposure to food and nutrients can be estimated from household food purchases, but store surveys of foods and their composition are more available, less costly and might provide similar information. Our aim was to compare estimates of nutrient exposure from a store survey of packaged food with those from household panel food purchases. A cross-sectional store survey of all packaged foods for sale in two major supermarkets was undertaken in Auckland, New Zealand, between February and May 2012. Longitudinal household food purchase data (November 2011 to October 2012) were obtained from the nationally representative, population-weighted New Zealand Nielsen HomeScan® panel. Data on 8440 packaged food and non-alcoholic beverage products were collected in the store survey. Food purchase data were available for 1229 households and 16 812 products. Store survey data alone produced higher estimates of exposure to Na and sugar compared with estimates from household panel food purchases; the estimated mean difference in exposure to Na was 94 (95 % CI 72, 115) mg/100 g (20 % relative difference). Compared with household panel food purchases, store survey data provided a reasonable estimate of average population exposure to key nutrients from packaged foods. However, caution should be exercised in using such data to estimate population exposure to Na and sugar and in generalising these findings to other countries, as well as over time.
Pose Estimation for Augmented Reality: A Hands-On Survey.
Marchand, Eric; Uchiyama, Hideaki; Spindler, Fabien
2016-12-01
Augmented reality (AR) allows virtual objects to be seamlessly inserted into an image sequence. In order to accomplish this goal, it is important that synthetic elements are rendered and aligned in the scene in an accurate and visually acceptable way. The solution of this problem can be related to a pose estimation or, equivalently, a camera localization process. This paper aims at presenting a brief but almost self-contained introduction to the most important approaches dedicated to vision-based camera localization, along with a survey of several extensions proposed in recent years. For most of the presented approaches, we also provide links to code of short examples. This should allow readers to easily bridge the gap between theoretical aspects and practical implementations.
The mean error estimation of TOPSIS method using a fuzzy reference models
Directory of Open Access Journals (Sweden)
Wojciech Sałabun
2013-04-01
The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a commonly used multi-criteria decision-making method. A number of authors have proposed improvements, known as extensions, of the TOPSIS method, but these extensions have not been examined with respect to accuracy. Accuracy estimation is very difficult because reference values for the obtained results are not known; therefore, the results of each extension are compared to one another. In this paper, the author proposes a new method to estimate the mean error of TOPSIS with the use of a fuzzy reference model (FRM), which provides reference values. In experiments involving 1,000 models, 28 million cases are simulated to estimate the mean error. The results of four commonly used normalization procedures were compared. Additionally, the author demonstrates the relationship between the value of the mean error, the nonlinearity of the models, and the number of alternatives.
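For reference, a baseline (non-fuzzy) TOPSIS computation looks like the following sketch. The decision matrix, weights, and criteria directions are invented for illustration; this is the classical method, not one of the extensions or the fuzzy reference model evaluated in the paper.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Classical TOPSIS: vector-normalize the decision matrix, apply
    criterion weights, find the ideal and anti-ideal solutions, and
    return each alternative's relative closeness to the ideal
    (higher = better). `benefit[j]` is True for benefit criteria and
    False for cost criteria."""
    m = np.asarray(matrix, dtype=float)
    v = (m / np.linalg.norm(m, axis=0)) * weights   # normalize and weight
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)       # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)        # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Three alternatives scored on two benefit criteria and one cost criterion.
scores = topsis(
    [[250, 16, 12], [200, 16, 8], [300, 32, 16]],
    weights=np.array([0.3, 0.4, 0.3]),
    benefit=np.array([True, True, False]),
)
best = int(np.argmax(scores))
```

Each extension studied in the paper replaces one of these steps (most often the normalization), which is why a common reference model is needed to compare their accuracy.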
Estimation of Areal Mean Rainfall in Remote Areas Using B-SHADE Model
Directory of Open Access Journals (Sweden)
Tao Zhang
2016-01-01
This study presents a method to estimate areal mean rainfall (AMR) using a Biased Sentinel Hospital Based Area Disease Estimation (B-SHADE) model, together with biased rain gauge observations and Tropical Rainfall Measuring Mission (TRMM) data, for remote areas with a sparse and uneven distribution of rain gauges. Based on the B-SHADE model, the best linear unbiased estimate of AMR can be obtained. A case study was conducted for the Three-River Headwaters region in the Tibetan Plateau of China, and its performance was compared with traditional methods. The results indicated that B-SHADE obtained the smallest estimation biases, with a mean error and root mean square error of −0.63 and 3.48 mm, respectively. For the traditional methods, including arithmetic average, Thiessen polygon, and ordinary kriging, the mean errors were 7.11, −1.43, and 2.89 mm, which were up to 1027.1%, 127.0%, and 358.3% greater, respectively, than for the B-SHADE model. The root mean square errors were 10.31, 4.02, and 6.27 mm, which were up to 196.1%, 15.5%, and 80.0% higher, respectively, than for the B-SHADE model. The proposed technique can be used to extend the AMR record to the presatellite observation period, when only gauge data are available.
ESTIMATION OF MEAN IN PRESENCE OF MISSING DATA UNDER TWO-PHASE SAMPLING SCHEME
Directory of Open Access Journals (Sweden)
Narendra Singh Thakur
2011-01-01
To estimate the population mean with imputation, i.e. the technique of substituting missing data, there are a number of techniques available in the literature, such as the ratio method of imputation, compromised method of imputation, mean method of imputation, Ahmed method of imputation, F-T method of imputation, and so on. If the population mean of the auxiliary information is unknown, then these methods are not useful, and two-phase sampling is used to obtain the population mean. This paper presents some imputation methods for missing values in two-phase sampling. Two different sampling designs in two-phase sampling are compared under imputed data. The bias and mean square error of the suggested estimators are derived in terms of population parameters using the concept of large-sample approximation. A numerical study is performed over two populations using the expressions for bias and mean square error, and efficiency is compared with the Ahmed estimators.
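Two of the textbook imputation methods named above can be sketched as follows. The data are invented, and these are the classical forms of mean and ratio imputation rather than the paper's suggested estimators.

```python
import numpy as np

def mean_imputation(y_obs):
    """Mean method of imputation: every missing y is replaced by the
    respondents' mean, so the resulting point estimate of the
    population mean is simply the respondent mean."""
    return np.mean(y_obs)

def ratio_imputation(y_obs, x_obs, x_all):
    """Ratio method of imputation: missing y values are imputed as
    R * x with R = sum(y_obs) / sum(x_obs), so the resulting mean
    estimator is the classical ratio estimator. In two-phase sampling,
    x_all comes from the larger first-phase sample because the
    population mean of x is unknown. Textbook sketch only."""
    R = np.sum(y_obs) / np.sum(x_obs)
    return R * np.mean(x_all)

x_all = np.array([10.0, 12.0, 14.0, 16.0, 18.0, 20.0])  # first-phase x
x_obs = x_all[:4]                                       # responding units
y_obs = 2.0 * x_obs                                     # their y values
est_mean = mean_imputation(y_obs)
est_ratio = ratio_imputation(y_obs, x_obs, x_all)
```

Because the nonrespondents here have larger x (and hence larger y), mean imputation underestimates the mean (26) while ratio imputation recovers it exactly (30), illustrating why auxiliary information matters.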
Basin, M.; Maldonado, J. J.; Zendejo, O.
2016-07-01
This paper proposes new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows effectiveness of the proposed mean-square filter and parameter estimator.
Parameter Estimation in Mean Reversion Processes with Deterministic Long-Term Trend
Directory of Open Access Journals (Sweden)
Freddy H. Marín Sánchez
2016-01-01
This paper describes a two-phase procedure based on the maximum likelihood technique for estimating the parameters of mean reversion processes when the long-term trend is defined by a continuous deterministic function. Closed formulas for the estimators, which depend on observations of discrete paths and an estimate of the expected value of the process, are obtained in the first phase. In the second phase, a re-estimation scheme is proposed for when a priori knowledge of the long-term trend exists. Some experimental results using simulated data sets are graphically illustrated.
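For the special case of a constant long-term level, the maximum likelihood step reduces to an AR(1) regression via the exact discretization of the Ornstein–Uhlenbeck dynamics dX = θ(μ − X)dt + σ dW. The sketch below illustrates that constant-μ case with simulated data; the paper's procedure generalizes μ to a deterministic function of time.

```python
import numpy as np

def estimate_ou(x, dt):
    """Estimate (theta, mu, sigma) of dX = theta*(mu - X)dt + sigma dW
    from an equally spaced path, via the exact AR(1) discretization
    X_{t+1} = a + b*X_t + eps with b = exp(-theta*dt) and
    a = mu*(1 - b). Constant-mu sketch of the likelihood step."""
    x0, x1 = x[:-1], x[1:]
    b, a = np.polyfit(x0, x1, 1)              # OLS slope and intercept
    theta = -np.log(b) / dt
    mu = a / (1.0 - b)
    resid = x1 - (a + b * x0)                 # innovation residuals
    sigma = np.std(resid) * np.sqrt(2.0 * theta / (1.0 - b**2))
    return theta, mu, sigma

# Simulate an OU path with known parameters and recover them.
rng = np.random.default_rng(1)
theta_true, mu_true, sigma_true, dt, n = 2.0, 5.0, 0.5, 0.01, 20000
b = np.exp(-theta_true * dt)
sd = sigma_true * np.sqrt((1 - b**2) / (2 * theta_true))
x = np.empty(n)
x[0] = mu_true
for t in range(n - 1):
    x[t + 1] = mu_true + b * (x[t] - mu_true) + sd * rng.standard_normal()
theta_hat, mu_hat, sigma_hat = estimate_ou(x, dt)
```

The exact discretization (rather than an Euler approximation) keeps the estimators consistent even for coarse sampling intervals, which is the standard motivation for the closed-form first-phase formulas.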
Estimation of average causal effect using the restricted mean residual lifetime as effect measure
DEFF Research Database (Denmark)
Mansourvar, Zahra; Martinussen, Torben
2016-01-01
with respect to their survival times. In observational studies where the factor of interest is not randomized, covariate adjustment is needed to take into account imbalances in confounding factors. In this article, we develop an estimator for the average causal treatment difference using the restricted mean residual lifetime as target parameter. We account for confounding factors using the Aalen additive hazards model. Large sample property of the proposed estimator is established and simulation studies are conducted in order to assess small sample performance of the resulting estimator. The method is also...
Estimation of Water Quality Parameters Using the Regression Model with Fuzzy K-Means Clustering
Directory of Open Access Journals (Sweden)
Muntadher A. SHAREEF
2014-07-01
The traditional methods in remote sensing used for monitoring and estimating pollutants generally rely on the spectral response or scattering reflected from water. In this work, a new method is proposed to find contaminants and determine the Water Quality Parameters (WQPs) based on theories of texture analysis. Empirical statistical models have been developed to estimate and classify contaminants in the water. The Gray Level Co-occurrence Matrix (GLCM) is used to estimate six texture parameters: contrast, correlation, energy, homogeneity, entropy, and variance. These parameters are used to estimate the regression model with three WQPs. Finally, fuzzy K-means clustering was used to generalize the water quality estimation over the whole segmented image. Using in situ measurements and IKONOS data, the obtained results show that texture parameters and high-resolution remote sensing are able to monitor and predict the distribution of WQPs in large rivers.
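The GLCM step can be sketched in pure NumPy. This computes four of the six texture parameters named above (contrast, energy, homogeneity, entropy) for a single horizontal pixel offset on a toy quantized image; it is an illustration of the technique, not the paper's full WQP regression model.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one (dx, dy)
    offset, and four common texture features derived from it.
    `img` must already be quantized to integers in [0, levels)."""
    g = np.asarray(img)
    P = np.zeros((levels, levels))
    h, w = g.shape
    for i in range(h - dy):                 # count co-occurring pairs
        for j in range(w - dx):
            P[g[i, j], g[i + dy, j + dx]] += 1
    P /= P.sum()                            # normalize to probabilities
    idx = np.arange(levels)
    ii, jj = np.meshgrid(idx, idx, indexing="ij")
    return {
        "contrast": np.sum(P * (ii - jj) ** 2),
        "energy": np.sum(P ** 2),
        "homogeneity": np.sum(P / (1.0 + np.abs(ii - jj))),
        "entropy": -np.sum(P[P > 0] * np.log2(P[P > 0])),
    }

# Toy 4-level image with two smooth blocks per gray level.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feats = glcm_features(img, levels=4)
```

In practice such features would be computed per image window and fed, alongside correlation and variance, into the empirical regression against the measured WQPs.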
Testing a statistical method of global mean paleotemperature estimation in a long climate simulation
Energy Technology Data Exchange (ETDEWEB)
Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik
2001-07-01
Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods, the global mean temperature of the last 600 years has recently been estimated. In this work, this method of reconstruction is tested using data from a very long simulation with a climate model. The testing allows estimation of the errors of the reconstructions as a function of the number of proxy records and of the time scales at which the estimations are probably reliable. (orig.)
Estimating mean change in population salt intake using spot urine samples.
Petersen, Kristina S; Wu, Jason H Y; Webster, Jacqui; Grimes, Carley; Woodward, Mark; Nowson, Caryl A; Neal, Bruce
2016-10-14
Background: Spot urine samples are easier to collect than 24-h urine samples and have been used with estimating equations to derive the mean daily salt intake of a population. Whether equations using data from spot urine samples can also be used to estimate change in mean daily population salt intake over time is unknown. We compared estimates of change in mean daily population salt intake based upon 24-h urine collections with estimates derived using equations based on spot urine samples. Methods: Paired and unpaired 24-h urine samples and spot urine samples were collected from individuals in two Australian populations, in 2011 and 2014. Estimates of change in daily mean population salt intake between 2011 and 2014 were obtained directly from the 24-h urine samples and by applying established estimating equations (Kawasaki, Tanaka, Mage, Toft, INTERSALT) to the data from spot urine samples. Differences between 2011 and 2014 were calculated using mixed models. Results: A total of 1000 participants provided a 24-h urine sample and a spot urine sample in 2011, and 1012 did so in 2014 (paired samples n = 870; unpaired samples n = 1142). The participants were community-dwelling individuals living in the State of Victoria or the town of Lithgow in the State of New South Wales, Australia, with a mean age of 55 years in 2011. The mean (95% confidence interval) difference in population salt intake between 2011 and 2014 determined from the 24-h urine samples was -0.48 g/day (-0.74 to -0.21; P 0.058). Separate analysis of the unpaired and paired data showed that detection of change by the estimating equations was observed only in the paired data. Conclusions: All the estimating equations based upon spot urine samples identified a similar change in daily salt intake to that detected by the 24-h urine samples. Methods based upon spot urine samples may provide an approach to measuring change in mean population salt intake, although further investigation in larger and more diverse population groups is warranted.
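The core comparison, a change in mean intake between two survey rounds with a confidence interval, reduces to a paired-difference calculation. A sketch with hypothetical intakes; the published estimating equations use population-specific coefficients that are not reproduced here:

```python
import math

def mean_change_ci(before, after, z=1.96):
    """Mean paired change with a normal-approximation 95% CI."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    se = math.sqrt(var / n)
    return mean, (mean - z * se, mean + z * se)

# Hypothetical salt intakes (g/day) from paired 24-h urine collections
salt_2011 = [9.1, 8.4, 10.2, 7.9, 9.6, 8.8, 9.9, 8.2]
salt_2014 = [8.6, 8.1, 9.8, 7.7, 9.1, 8.5, 9.4, 7.9]
change, (ci_lo, ci_hi) = mean_change_ci(salt_2011, salt_2014)
```

With real data the same calculation would be run twice, once on the 24-h collections and once on equation-derived estimates from spot samples, and the two change estimates compared.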
Assessing the impact of vertical land motion on twentieth century global mean sea level estimates
Hamlington, B. D.; Thompson, P.; Hammond, W. C.; Blewitt, G.; Ray, R. D.
2016-07-01
Near-global and continuous measurements from satellite altimetry have provided accurate estimates of global mean sea level in the past two decades. Extending these estimates further into the past is a challenge using the historical tide gauge records. Not only is sampling nonuniform in both space and time, but tide gauges are also affected by vertical land motion (VLM) that creates a relative sea level change not representative of ocean variability. To allow for comparisons to the satellite altimetry estimated global mean sea level (GMSL), typically the tide gauges are corrected using glacial isostatic adjustment (GIA) models. This approach, however, does not correct other sources of VLM that remain in the tide gauge record. Here we compare Global Positioning System (GPS) VLM estimates at the tide gauge locations to VLM estimates from GIA models, and assess the influence of non-GIA-related VLM on GMSL estimates. We find that the tide gauges, on average, are experiencing positive VLM (i.e., uplift) after removing the known effect of GIA, resulting in an increase of 0.24 ± 0.08 mm yr-1 in GMSL trend estimates from 1900 to present when using GPS-based corrections. While this result likely depends on the subset of tide gauges and the specific corrections used, it does suggest that non-GIA VLM plays a significant role in twentieth century estimates of GMSL. Given the relatively short GPS records used to obtain these VLM estimates, we also estimate the uncertainty in the GMSL trend that results from limited knowledge of non-GIA-related VLM.
Stuckey, Marla H.; Koerkle, Edward H.; Ulrich, James E.
2012-01-01
Water-resource managers use daily mean streamflows to generate streamflow statistics and analyze streamflow conditions. An in-depth evaluation of flow regimes to promote instream ecological health often requires streamflow information obtainable only from a time series hydrograph. Historically, it has been difficult to estimate daily mean streamflow for an ungaged location. The U.S. Geological Survey (USGS), in cooperation with the Pennsylvania Department of Environmental Protection, Susquehanna River Basin Commission, and The Nature Conservancy, has developed the Baseline Streamflow Estimator (BaSE) to estimate baseline streamflow at a daily time scale for ungaged streams in Pennsylvania using data collected during water years 1960–2008. Baseline streamflow is streamflow that is minimally altered by regulation, diversion, mining, or other anthropogenic activities. Daily mean streamflow is estimated in BaSE using a methodology that equates streamflow as a percentile from a flow duration curve for a particular day at an ungaged location with streamflow as a percentile from the flow duration curve for the same day at a reference streamgage that is considered to be hydrologically similar to the ungaged location. An appropriate reference streamgage is selected using map correlation, in which variogram models are developed that correlate streamflow at one streamgage with streamflows at all other streamgages. The percentiles from a flow duration curve for the ungaged location are converted to streamflow through the use of regression equations. Regression equations used to predict 17 flow-duration exceedance probabilities were developed for Pennsylvania using geographic information system-derived basin characteristics. The standard error of prediction for the regression equations ranged from 11 percent to 92 percent with a mean of 31 percent.
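The percentile-transfer idea behind BaSE (often called the QPPQ method) can be sketched in a few lines: convert each day's flow at the reference gage to an exceedance probability, then read the flow with the same probability off the ungaged site's flow-duration curve. The reference hydrograph and the ungaged FDC quantiles below are hypothetical stand-ins for the regression-derived values:

```python
import numpy as np

def qppq(ref_daily, fdc_probs, fdc_flows):
    """QPPQ transfer: daily flows at a reference gage -> daily flows at an
    ungaged site via matching flow-duration-curve exceedance probabilities."""
    ref_daily = np.asarray(ref_daily, float)
    n = len(ref_daily)
    # Empirical exceedance probability of each day's flow at the reference gage
    ranks = ref_daily.argsort().argsort()          # 0 = smallest flow
    exceed = 1.0 - (ranks + 0.5) / n               # high flow -> low exceedance
    # Interpolate the ungaged FDC (probabilities must be increasing for interp)
    order = np.argsort(fdc_probs)
    return np.interp(exceed, np.asarray(fdc_probs, float)[order],
                     np.asarray(fdc_flows, float)[order])

# Hypothetical reference-gage hydrograph (ft^3/s) and ungaged-site FDC quantiles
ref = np.array([12.0, 30.0, 8.0, 55.0, 20.0, 15.0, 9.0, 40.0])
probs = [0.05, 0.2, 0.5, 0.8, 0.95]                # exceedance probabilities
flows = [80.0, 35.0, 14.0, 6.0, 3.0]               # regression-estimated flows
est = qppq(ref, probs, flows)
```

In BaSE the FDC quantiles come from the 17 regression equations and the reference gage is chosen by map correlation; both are simplified away here.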
Directory of Open Access Journals (Sweden)
Manzoor Khan
2014-01-01
This paper presents new classes of estimators for the finite population mean under double sampling in the presence of nonresponse when using information on fractional raw moments. The expressions for the mean square error of the proposed classes of estimators are derived up to the first degree of approximation. It is shown that a proposed class of estimators performs better than the usual mean estimator, ratio-type estimators, and the Singh and Kumar (2009) estimator. An empirical study is carried out to demonstrate the performance of the proposed class of estimators.
Higher Order Mean Squared Error of Generalized Method of Moments Estimators for Nonlinear Models
Directory of Open Access Journals (Sweden)
Yi Hu
2014-01-01
Generalized method of moments (GMM) has been widely applied for estimation of nonlinear models in economics and finance. Although GMM has good asymptotic properties under fairly mild regularity conditions, its finite sample performance can be poor. In order to improve the finite sample performance of GMM estimators, this paper studies the higher-order mean squared error of two-step efficient GMM estimators for nonlinear models. Specifically, we consider a general nonlinear regression model with endogeneity and derive the higher-order asymptotic mean square error of the two-step efficient GMM estimator for this model using iterative techniques and higher-order asymptotic theory. Our theoretical results allow the number of moments to grow with sample size, and are suitable for general moment restriction models, which contain conditional moment restriction models as special cases. The higher-order mean square error can be used to compare different estimators and to construct selection criteria for improving an estimator's finite sample performance.
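The two-step efficient GMM estimator under study can be illustrated on the simplest case: a linear model with an endogenous regressor and two instruments. The model, coefficients, and sample size below are assumptions made for the sketch, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta_true = 5000, 1.5

# Endogenous regressor: x is correlated with the error through u
z = rng.normal(size=(n, 2))                        # two instruments
u = rng.normal(size=n)
x = z @ np.array([1.0, 0.5]) + u
y = beta_true * x + (0.8 * u + rng.normal(size=n))  # error correlated with x

def gmm_linear(y, x, z, W):
    """GMM for a scalar beta with moments E[z(y - x*beta)] = 0:
    closed form beta = (zx' W zy) / (zx' W zx)."""
    zx, zy = z.T @ x, z.T @ y
    return (zx @ W @ zy) / (zx @ W @ zx)

# Step 1: identity weighting matrix
b1 = gmm_linear(y, x, z, np.eye(2))
# Step 2: efficient weighting from first-step moment contributions
g = z * (y - x * b1)[:, None]                      # n x 2
S = g.T @ g / n
b2 = gmm_linear(y, x, z, np.linalg.inv(S))
```

The higher-order MSE analysis in the paper concerns exactly how the estimation noise in `S` (and a growing number of moments) feeds back into the second-step estimator `b2`.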
Local digital algorithms for estimating the mean integrated curvature of r-regular sets
DEFF Research Database (Denmark)
Svane, Anne Marie
Consider the design-based situation where an r-regular set is sampled on a random lattice. A fast algorithm for estimating the integrated mean curvature based on this observation is to use a weighted sum of 2×⋯×2 configuration counts. We show that for a randomly translated lattice, no asymptotically unbiased estimator … hit-or-miss transforms of r-regular sets.
Stereological estimation of the mean and variance of nuclear volume from vertical sections
DEFF Research Database (Denmark)
Sørensen, Flemming Brandt
1991-01-01
The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, v̄V, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform...
New a priori estimates for mean-field games with congestion
Evangelista, David
2016-01-06
We present recent developments in crowd dynamics models (e.g. pedestrian flow problems). Our formulation is given by a mean-field game (MFG) with congestion. We start by reviewing earlier models and results. Next, we develop our model. We establish new a priori estimates that give partial regularity of the solutions. Finally, we discuss numerical results.
Comparisons of Means for Estimating Sea States from an Advancing Large Container Ship
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam; Andersen, Ingrid Marie Vincent; Koning, Jos
2013-01-01
…to ship-wave interactions in a seaway. In the paper, sea state estimates are produced by three means: the wave buoy analogy, relying on shipboard response measurements; a wave radar system; and a system providing the instantaneous wave height. The presented results show that for the given data, recorded...
On the Singularity of the Least Squares Estimator for Mean-Reverting α-Stable Motions
Institute of Scientific and Technical Information of China (English)
Hu Yaozhong; Long Hongwei
2009-01-01
We study the problem of parameter estimation for the mean-reverting α-stable motion dXt = (a0 - θ0Xt)dt + dZt, observed at discrete time instants. A least squares estimator is obtained and its asymptotics are discussed in the singular case (a0, θ0) = (0, 0). If a0 = 0, the mean-reverting α-stable motion becomes an Ornstein-Uhlenbeck process, which is studied in [7] in the ergodic case θ0 > 0. For the Ornstein-Uhlenbeck process, the asymptotics of the least squares estimators for the singular case (θ0 = 0) and the ergodic case (θ0 > 0) are completely different.
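A discretized version of the least squares estimator is easy to sketch for the ergodic Ornstein-Uhlenbeck special case (α = 2, θ0 > 0); the parameter values, step size, and Euler scheme below are illustrative choices, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(2)
a0, theta0, h, n = 1.0, 0.8, 0.01, 200_000

# Euler scheme for dX = (a0 - theta0*X) dt + dZ, with Z Brownian here
# (the Ornstein-Uhlenbeck special case of the alpha-stable model)
x = np.empty(n + 1)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(h), n)
for t in range(n):
    x[t + 1] = x[t] + (a0 - theta0 * x[t]) * h + noise[t]

# Least squares: regress increments dX on (1, X_t), each scaled by the step h
dX = np.diff(x)
A = np.column_stack([np.full(n, h), -x[:-1] * h])
(a_hat, theta_hat), *_ = np.linalg.lstsq(A, dX, rcond=None)
```

The paper's interest is in what happens to this estimator's limiting behavior when (a0, θ0) = (0, 0), where the process is no longer ergodic and the usual rates fail.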
Change-point Estimation of a Mean Shift in Moving-average Processes Under Dependence Assumptions
Institute of Scientific and Technical Information of China (English)
Yun-xia Li
2006-01-01
In this paper we discuss the least-squares estimator of the unknown change point in a mean shift for moving-average processes of an ALNQD sequence. The consistency and the rate of convergence of the estimated change point are established, and the asymptotic distribution of the change-point estimator is obtained. The results also hold for ρ-mixing, ψ-mixing, and α-mixing sequences under suitable conditions. These results extend those of Bai [1], who studied the mean shift point of a linear process of i.i.d. variables; the condition ∑j≥0 j|aj| < ∞ in Bai is weakened to ∑j≥0 |aj| < ∞.
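The least-squares change-point estimator minimizes the within-segment sum of squares over all candidate split points. A sketch on a synthetic moving-average series with a mean shift; the shift location, shift size, and MA weights are invented for the demonstration:

```python
import numpy as np

def ls_change_point(x):
    """Least-squares change-point estimate for a mean shift: the split index
    minimizing the total within-segment sum of squared errors."""
    x = np.asarray(x, float)
    best_k, best_sse = None, np.inf
    for k in range(1, len(x)):                 # shift occurs after index k-1
        left, right = x[:k], x[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

rng = np.random.default_rng(3)
# Dependent (moving-average) errors with a mean shift after t = 120
e = rng.normal(size=301)
ma = 0.6 * e[1:] + 0.4 * e[:-1]                # simple MA(1)-type noise
x = np.concatenate([np.zeros(120), np.full(180, 2.0)]) + ma
k_hat = ls_change_point(x)
```

The paper's contribution concerns the consistency and limiting distribution of exactly this kind of estimator when the errors are dependent rather than i.i.d.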
Directory of Open Access Journals (Sweden)
Magruder SF
2004-03-01
Background: Surveillance of over-the-counter pharmaceutical (OTC) sales as a potential early indicator of developing public health conditions, in particular in cases of interest to bioterrorism, has been suggested in the literature. The data streams of interest are quite non-stationary and we address this problem from the viewpoint of linear adaptive filter theory: the clinical data is the primary channel which is to be estimated from the OTC data that form the reference channels. Method: The OTC data are grouped into a few categories and we estimate the clinical data using each individual category, as well as using a multichannel filter that encompasses all the OTC categories. The estimation (in the least mean square sense) is performed using an FIR (finite impulse response) filter and the normalized LMS algorithm. Results: We show all estimation results and present a table of the effectiveness of each OTC category, as well as the effectiveness of the combined filtering operation. Individual group results clearly show the effectiveness of each particular group in estimating the clinical hospital data and serve as a guide as to which groups have sustained correlations with the clinical data. Conclusion: Our results indicate that multichannel adaptive FIR least squares filtering is a viable means of estimating public health conditions from OTC sales, and provide quantitative measures of time-dependent correlations between the clinical data and the OTC data channels.
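The normalized LMS estimation the abstract describes can be sketched for a single reference channel; the filter order, step size, and the synthetic "OTC"/"clinical" signals below are assumptions for illustration, not the study's data:

```python
import numpy as np

def nlms(ref, primary, order=4, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt FIR weights so the filtered reference channel
    tracks the primary channel sample by sample."""
    n = len(primary)
    w = np.zeros(order)
    est = np.zeros(n)
    for t in range(order - 1, n):
        u = ref[t - order + 1:t + 1][::-1]      # [ref[t], ref[t-1], ...]
        est[t] = w @ u
        err = primary[t] - est[t]
        w += mu * err * u / (eps + u @ u)       # step normalized by input power
    return est, w

rng = np.random.default_rng(4)
ref = rng.normal(size=5000)                     # reference channel (e.g. OTC sales)
true_fir = np.array([0.5, 0.3, -0.2, 0.1])
# Primary channel (e.g. clinical counts) = unknown FIR of reference + noise
primary = np.convolve(ref, true_fir)[:5000] + 0.05 * rng.normal(size=5000)
est, w = nlms(ref, primary)
```

A multichannel version, as in the study, simply stacks the regressor vectors of all reference channels into one longer `u`.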
Eash, David A.; Barnes, Kimberlee K.
2012-01-01
A statewide study was conducted to develop regression equations for estimating six selected low-flow frequency statistics and harmonic mean flows for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include: the annual 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years, the annual 30-day mean low flow for a recurrence interval of 5 years, and the seasonal (October 1 through December 31) 1- and 7-day mean low flows for a recurrence interval of 10 years. Estimation equations also were developed for the harmonic-mean-flow statistic. Estimates of these seven selected statistics are provided for 208 U.S. Geological Survey continuous-record streamgages using data through September 30, 2006. The study area comprises streamgages located within Iowa and 50 miles beyond the State's borders. Because trend analyses indicated statistically significant positive trends when considering the entire period of record for the majority of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. The median number of years of record used to compute each of these seven selected statistics was 35. Geographic information system software was used to measure 54 selected basin characteristics for each streamgage. Following the removal of two streamgages from the initial data set, data collected for 206 streamgages were compiled to investigate three approaches for regionalization of the seven selected statistics. Regionalization, a process using statistical regression analysis, provides a relation for efficiently transferring information from a group of streamgages in a region to ungaged sites in the region. The three regionalization approaches tested included statewide, regional, and region-of-influence regressions. For the regional regression, the study area was divided into three low-flow regions on the basis of hydrologic…
Methods for estimating flow-duration and annual mean-flow statistics for ungaged streams in Oklahoma
Esralew, Rachel A.; Smith, S. Jerrod
2010-01-01
Flow statistics can be used to provide decision makers with surface-water information needed for activities such as water-supply permitting, flow regulation, and other water rights issues. Flow statistics could be needed at any location along a stream. Most often, streamflow statistics are needed at ungaged sites, where no flow data are available to compute the statistics. Methods are presented in this report for estimating flow-duration and annual mean-flow statistics for ungaged streams in Oklahoma. Flow statistics included the (1) annual (period of record), (2) seasonal (summer-autumn and winter-spring), and (3) 12 monthly duration statistics, including the 20th, 50th, 80th, 90th, and 95th percentile flow exceedances, and the annual mean-flow (mean of daily flows for the period of record). Flow statistics were calculated from daily streamflow information collected from 235 streamflow-gaging stations throughout Oklahoma and areas in adjacent states. A drainage-area ratio method is the preferred method for estimating flow statistics at an ungaged location that is on a stream near a gage. The method generally is reliable only if the drainage-area ratio of the two sites is between 0.5 and 1.5. Regression equations that relate flow statistics to drainage-basin characteristics were developed for the purpose of estimating selected flow-duration and annual mean-flow statistics for ungaged streams that are not near gaging stations on the same stream. Regression equations were developed from flow statistics and drainage-basin characteristics for 113 unregulated gaging stations. Separate regression equations were developed by using U.S. Geological Survey streamflow-gaging stations in regions with similar drainage-basin characteristics. These equations can increase the accuracy of regression equations used for estimating flow-duration and annual mean-flow statistics at ungaged stream locations in Oklahoma. Streamflow-gaging stations were grouped by selected drainage…
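The drainage-area ratio method reduces to scaling the gaged flow statistic by the ratio of drainage areas, applied only when the ratio falls in the 0.5-1.5 range noted in the report. A sketch with hypothetical areas and flows:

```python
def drainage_area_ratio(q_gaged, area_gaged, area_ungaged, exponent=1.0):
    """Drainage-area ratio estimate of a flow statistic at an ungaged site on
    the same stream; conventionally applied only for ratios in [0.5, 1.5]."""
    ratio = area_ungaged / area_gaged
    if not 0.5 <= ratio <= 1.5:
        raise ValueError(f"area ratio {ratio:.2f} outside the 0.5-1.5 range")
    return q_gaged * ratio ** exponent

# Hypothetical: gage drains 200 mi^2 with a median flow of 48 ft^3/s;
# the ungaged site just upstream drains 150 mi^2
q_est = drainage_area_ratio(48.0, 200.0, 150.0)  # -> 36.0 ft^3/s
```

Some applications use a statistic-specific exponent rather than 1.0; that refinement is omitted here.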
Zimmerman, Guthrie S.; Sauer, John; Boomer, G. Scott; Devers, Patrick K.; Garrettson, Pamela R.
2017-01-01
The U.S. Fish and Wildlife Service (USFWS) uses data from the North American Breeding Bird Survey (BBS) to assist in monitoring and management of some migratory birds. However, BBS analyses provide indices of population change rather than estimates of population size, precluding their use in developing abundance-based objectives and limiting applicability to harvest management. Wood Ducks (Aix sponsa) are important harvested birds in the Atlantic Flyway (AF) that are difficult to detect during aerial surveys because they prefer forested habitat. We integrated Wood Duck count data from a ground-plot survey in the northeastern U.S. with AF-wide BBS, banding, parts collection, and harvest data to derive estimates of population size for the AF. Overlapping results between the smaller-scale intensive ground-plot survey and the BBS in the northeastern U.S. provided a means for scaling BBS indices to the breeding population size estimates. We applied these scaling factors to BBS results for portions of the AF lacking intensive surveys. Banding data provided estimates of annual survival and harvest rates; the latter, when combined with parts-collection data, provided estimates of recruitment. We used the harvest data to estimate fall population size. Our estimates of breeding population size and variability from the integrated population model (N̄ = 0.99 million, SD = 0.04) were similar to estimates of breeding population size based solely on data from the AF ground-plot surveys and the BBS (N̄ = 1.01 million, SD = 0.04) from 1998 to 2015. Integrating BBS data with other data provided reliable population size estimates for Wood Ducks at a scale useful for harvest and habitat management in the AF, and allowed us to derive estimates of important demographic parameters (e.g., seasonal survival rates, sex ratio) that were not directly informed by data.
Random action of compact Lie groups and minimax estimation of a mean pattern
Bigot, Jérémie; Gadat, Sebastien
2011-01-01
This paper considers the problem of estimating a mean pattern in the setting of Grenander's pattern theory. Shape variability in a data set of curves or images is modeled by the random action of elements in a compact Lie group on an infinite dimensional space. In the case of observations contaminated by an additive Gaussian white noise, it is shown that estimating a reference template in the setting of Grenander's pattern theory falls into the category of deconvolution problems over Lie groups. To obtain this result, we build an estimator of a mean pattern by using Fourier deconvolution and harmonic analysis on compact Lie groups. In an asymptotic setting where the number of observed curves or images tends to infinity, we derive upper and lower bounds for the minimax quadratic risk over Sobolev balls. This rate depends on the smoothness of the density of the random Lie group elements representing shape variability in the data, which makes a connection between estimating a mean pattern and standard deconvolution...
National Research Council Canada - National Science Library
Williamson, Laura D; Brookes, Kate L; Scott, Beth E; Graham, Isla M; Bradbury, Gareth; Hammond, Philip S; Thompson, Paul M; McPherson, Jana
2016-01-01
...‐based visual surveys. Surveys of cetaceans using acoustic loggers or digital cameras provide alternative methods to estimate relative density that have the potential to reduce cost and provide a verifiable record of all detections...
Stuckey, Marla H.
2016-06-09
The ability to characterize baseline streamflow conditions, compare them with current conditions, and assess effects of human activities on streamflow is fundamental to water-management programs addressing water allocation, human-health issues, recreation needs, and establishment of ecological flow criteria. The U.S. Geological Survey, through the National Water Census, has developed the Delaware River Basin Streamflow Estimator Tool (DRB-SET) to estimate baseline (minimally altered) and altered (affected by regulation, diversion, mining, or other anthropogenic activities) streamflow at a daily time step for ungaged stream locations in the Delaware River Basin for water years 1960–2010. Daily mean baseline streamflow is estimated by using the QPPQ method to equate streamflow expressed as a percentile from the flow-duration curve (FDC) for a particular day at an ungaged stream location with the percentile from a FDC for the same day at a hydrologically similar gaged location where streamflow is measured. Parameter-based regression equations were developed for 22 exceedance probabilities from the FDC for ungaged stream locations in the Delaware River Basin. Water use data from 2010 are used to adjust the baseline daily mean streamflow generated from the QPPQ method at ungaged stream locations in the Delaware River Basin to reflect current, or altered, conditions. To evaluate the effectiveness of the overall QPPQ method contained within DRB-SET, a comparison of observed and estimated daily mean streamflows was performed for 109 reference streamgages in and near the Delaware River Basin. The Nash-Sutcliffe efficiency (NSE) values were computed as a measure of goodness of fit. The NSE values (using log10 streamflow values) ranged from 0.22 to 0.98 (median of 0.90) for 45 streamgages in the Upper Delaware River Basin and from -0.37 to 0.98 (median of 0.79) for 41 streamgages in the Lower Delaware River Basin.
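The Nash-Sutcliffe efficiency used to evaluate DRB-SET compares squared simulation errors against the variance of the observations: NSE = 1 gives a perfect fit, NSE = 0 means the simulation is no better than the observed mean. A minimal sketch; the example flows are invented:

```python
import numpy as np

def nse(obs, sim, log10=False):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squared deviations of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    if log10:                                   # log-space NSE, as in the report
        obs, sim = np.log10(obs), np.log10(sim)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

obs = [10.0, 14.0, 8.0, 30.0, 22.0, 12.0]       # hypothetical daily flows
assert nse(obs, obs) == 1.0                     # perfect simulation
assert abs(nse(obs, [np.mean(obs)] * 6)) < 1e-12  # mean predictor scores 0
```

Negative values, like the -0.37 reported for one Lower Basin streamgage, mean the simulation performs worse than simply predicting the observed mean flow.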
A physically-based hybrid framework to estimate daily-mean surface fluxes over complex terrain
Huang, Hsin-Yuan; Hall, Alex
2016-06-01
In this study we developed and examined a hybrid modeling approach integrating physically-based equations and statistical downscaling to estimate fine-scale daily-mean surface turbulent fluxes (i.e., sensible and latent heat fluxes) for a region of southern California that is extensively covered by varied vegetation types over complex terrain. The selection of model predictors is guided by physical parameterizations of surface flux used in land surface models and by analysis showing that net shortwave radiation is a major source of variability in the surface energy budget. Through a structure of multivariable regression processes with an application of near-surface wind estimates from a previous study, we successfully reproduce dynamically-downscaled 3 km resolution surface flux data. The overall error in our estimates is less than 20 % for both sensible and latent heat fluxes, while slightly larger errors are seen in high-altitude regions. The major sources of error in the estimates include the limited information provided in coarse reanalysis data, the accuracy of the near-surface wind estimates, and neglect of the nonlinear diurnal cycle of surface fluxes when using daily-mean data. However, with acceptable errors, this hybrid modeling approach provides promising, fine-scale products of surface fluxes that are much more accurate than reanalysis data, without performing intensive dynamical simulations.
American Community Survey (ACS) 5-Year Estimates for Coastal Geographies
National Oceanic and Atmospheric Administration, Department of Commerce — The American Community Survey (ACS) is an ongoing statistical survey that samples a small percentage of the population every year. These data have been apportioned...
A double-observer method to estimate detection rate during aerial waterfowl surveys
Koneff, M.D.; Royle, J. Andrew; Otto, M.C.; Wortham, J.S.; Bidwell, J.K.
2008-01-01
We evaluated double-observer methods for aerial surveys as a means to adjust counts of waterfowl for incomplete detection. We conducted our study in eastern Canada and the northeast United States utilizing 3 aerial-survey crews flying 3 different types of fixed-wing aircraft. We reconciled counts of front- and rear-seat observers immediately following an observation by the rear-seat observer (i.e., on-the-fly reconciliation). We evaluated 6 a priori models containing a combination of several factors thought to influence detection probability including observer, seat position, aircraft type, and group size. We analyzed data for American black ducks (Anas rubripes) and mallards (A. platyrhynchos), which are among the most abundant duck species in this region. The best-supported model for both black ducks and mallards included observer effects. Sample sizes of black ducks were sufficient to estimate observer-specific detection rates for each crew. Estimated detection rates for black ducks were 0.62 (SE = 0.10), 0.63 (SE = 0.06), and 0.74 (SE = 0.07) for pilot-observers, 0.61 (SE = 0.08), 0.62 (SE = 0.06), and 0.81 (SE = 0.07) for other front-seat observers, and 0.43 (SE = 0.05), 0.58 (SE = 0.06), and 0.73 (SE = 0.04) for rear-seat observers. For mallards, sample sizes were adequate to generate stable maximum-likelihood estimates of observer-specific detection rates for only one aerial crew. Estimated observer-specific detection rates for that crew were 0.84 (SE = 0.04) for the pilot-observer, 0.74 (SE = 0.05) for the other front-seat observer, and 0.47 (SE = 0.03) for the rear-seat observer. Estimated observer detection rates were confounded by the position of the seat occupied by an observer, because observers did not switch seats, and by land-cover because vegetation and landform varied among crew areas. Double-observer methods with on-the-fly reconciliation, although not without challenges, offer one viable option to account for detection bias in aerial waterfowl surveys.
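A simplified, independent-observer version of the double-observer estimator can be sketched with Petersen-style capture-recapture arithmetic; the study's on-the-fly reconciliation and covariate models are more involved, and the counts below are hypothetical:

```python
def double_observer(seen_front_only, seen_rear_only, seen_both):
    """Detection-rate and abundance estimates from two independent observers.
    Each observer's detection rate is the fraction of the other observer's
    detections that they also saw (Petersen/capture-recapture logic)."""
    p_front = seen_both / (seen_both + seen_rear_only)
    p_rear = seen_both / (seen_both + seen_front_only)
    detected = seen_front_only + seen_rear_only + seen_both
    p_any = 1.0 - (1.0 - p_front) * (1.0 - p_rear)   # P(seen by at least one)
    n_hat = detected / p_any                         # adjusted total count
    return p_front, p_rear, n_hat

# Hypothetical counts of duck groups from one aerial crew
p_f, p_r, n_hat = double_observer(seen_front_only=30, seen_rear_only=20, seen_both=60)
```

The independence assumption is exactly what on-the-fly reconciliation complicates, which is why the study fits explicit models with observer, seat, aircraft, and group-size effects instead.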
Directory of Open Access Journals (Sweden)
Thomas P Eisele
Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
Energy Technology Data Exchange (ETDEWEB)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
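The control-variates idea can be sketched with a scalar toy problem: a cheap surrogate with a known mean stands in for the coarse-model solution. The "fine model" and surrogate below are invented functions of a single uncertain parameter, not the report's transport problem:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

# Single uncertain parameter (e.g. a log-conductivity)
k = rng.normal(0.0, 1.0, n)
pqi = np.exp(0.3 * k) + 0.1 * np.sin(k)        # "fine model" PQI samples
ctrl = np.exp(0.3 * k)                         # cheap surrogate (control variate)
ctrl_mean = np.exp(0.3 ** 2 / 2)               # its mean is known analytically

# Optimal coefficient beta = cov(pqi, ctrl) / var(ctrl)
beta = np.cov(pqi, ctrl)[0, 1] / ctrl.var(ddof=1)
plain = pqi.mean()                             # plain Monte Carlo estimate
cv = pqi.mean() - beta * (ctrl.mean() - ctrl_mean)

# Estimator variances: the control-variate residual has much less spread
var_plain = pqi.var(ddof=1) / n
var_cv = (pqi - beta * ctrl).var(ddof=1) / n
```

The variance reduction translates directly into the report's claim: fewer fine-model simulations are needed to reach a given accuracy for E(PQI).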
Joint mean angle of arrival, angular and Doppler spreads estimation in macrocell environments
Rejeb, Nessrine Ben; Bousnina, Inès; Ben Salah, Mohamed Bassem; Samet, Abdelaziz
2014-12-01
In this paper, we propose a new low-complexity joint estimator of the mean angle of arrival (AoA), the angular spread (AS), and the maximum Doppler spread (DS) for single-input multiple-output (SIMO) wireless channel configurations in a macrocell environment. The non-line-of-sight (NLOS) case is considered. The space-time correlation matrix is used to jointly estimate the three parameters. Closed-form expressions are developed for the desired parameters using the modules and the phases of the cross-correlation coefficients. Simulation results show that our approach offers a better tradeoff between computational complexity and accuracy than the most recent estimators in the literature.
An Improved Weise’s Rule for Efficient Estimation of Stand Quadratic Mean Diameter
Directory of Open Access Journals (Sweden)
Róbert Sedmák
2015-07-01
The main objective of this study was to explore the accuracy of Weise's rule of thumb applied to the estimation of the quadratic mean diameter of a forest stand. Virtual stands of European beech (Fagus sylvatica L.) across a range of structure types were stochastically generated and random sampling was simulated. We compared the bias and accuracy of stand quadratic mean diameter estimates, employing different ranks of measured stems from a set of the 10 trees nearest to the sampling point. We proposed several modifications of the original Weise's rule based on the measurement and averaging of two different ranks centered on a target rank. In accordance with the original formulation of the empirical rule, we recommend the measurement of the 6th stem in rank, corresponding to the 55% sample percentile of the diameter distribution, irrespective of mean diameter size and degree of diameter dispersion. The study also revealed that appropriate two-measurement modifications of Weise's method, the 4th and 8th ranks or the 3rd and 9th ranks averaged to the 6th central rank, should be preferred over the classic one-measurement estimation. The modified versions are characterised by improved accuracy (about 25%) without statistically significant bias and with measurement costs comparable to the classic Weise method.
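Weise's rule and its two-measurement modifications are straightforward to simulate: sort the 10 nearest diameters, read off the chosen rank(s), and compare against the stand's quadratic mean diameter. The diameter distribution below is an assumption for the sketch, not the paper's virtual beech stands:

```python
import numpy as np

def quadratic_mean_diameter(diams):
    """Dq: the diameter of the tree of mean basal area."""
    d = np.asarray(diams, float)
    return np.sqrt((d ** 2).mean())

def weise_estimate(ten_nearest, ranks=(6,)):
    """Weise-style estimate from the 10 trees nearest a sampling point:
    measure the given rank(s) (1 = smallest) and average them."""
    d = np.sort(np.asarray(ten_nearest, float))
    return np.mean([d[r - 1] for r in ranks])

rng = np.random.default_rng(6)
stand = rng.gamma(shape=16.0, scale=2.0, size=10_000)   # synthetic diameters, cm
dq = quadratic_mean_diameter(stand)

# Average the classic rule (6th stem) and a two-measurement variant (4th + 8th)
# over many simulated sample points
samples = rng.choice(stand, size=(2000, 10), replace=True)
est_6 = np.mean([weise_estimate(s, ranks=(6,)) for s in samples])
est_48 = np.mean([weise_estimate(s, ranks=(4, 8)) for s in samples])
```

The paper's comparison is essentially this experiment repeated across stand structures, with the two-measurement variants trading one extra caliper reading for lower estimator variance.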
Global-mean marine δ13C and its uncertainty in a glacial state estimate
Gebbie, Geoffrey; Peterson, Carlye D.; Lisiecki, Lorraine E.; Spero, Howard J.
2015-10-01
A paleo-data compilation with 492 δ13C and δ18O observations provides the opportunity to better sample the Last Glacial Maximum (LGM) and infer its global properties, such as the mean δ13C of dissolved inorganic carbon. Here, the paleo-compilation is used to reconstruct a steady-state water-mass distribution for the LGM, which in turn is used to map the data onto a 3D global grid. A global-mean marine δ13C value and a self-consistent uncertainty estimate are derived using the framework of state estimation (i.e., combining a numerical model and observations). The LGM global-mean δ13C is estimated to be 0.14‰ ± 0.20‰ at the two-standard-error level, giving a glacial-to-modern change of 0.32‰ ± 0.20‰. The magnitude of the error bar is attributed to the uncertain glacial ocean circulation and the lack of observational constraints in the Pacific, Indian, and Southern Oceans. To halve the error bar, roughly four times more observations are needed, although strategic sampling may reduce this number. If dynamical constraints can be used to better characterize the LGM circulation, the error bar can also be reduced to 0.05-0.1‰, emphasizing that knowledge of the circulation is vital to accurately mapping δ13C in three dimensions.
Directory of Open Access Journals (Sweden)
Daniel G Pike
2009-09-01
North Atlantic Sightings Surveys for cetaceans were carried out in the Northeast and Central Atlantic in 1987, 1989, 1995 and 2001. Here we provide estimates of density and abundance for minke whales from the Faroese and Icelandic ship surveys. The estimates are not corrected for availability or perception biases. Double-platform data collected in 2001 indicate that perception bias is likely considerable for this species. However, comparison of corrected estimates of density from aerial surveys with a ship survey estimate from the same area suggests that ship surveys can be nearly unbiased under optimal survey conditions with high searching effort. There were some regional changes in density over the period but no overall changes in density and abundance. Given the recent catch history for minke whales in this area, we would not expect to see changes in abundance due to exploitation that would be detectable with these surveys.
Sample size for estimating the mean concentration of organisms in ballast water.
Costa, Eliardo G; Lopes, Rubens M; Singer, Julio M
2016-09-15
We consider the computation of sample sizes for estimating the mean concentration of organisms in ballast water. Given the possible heterogeneity of their distribution in the tank, we adopt a negative binomial model to obtain confidence intervals for the mean concentration. We show that the results obtained by Chen and Chen (2012) in a different set-up hold for the proposed model and use them to develop algorithms to compute sample sizes, both in cases where the mean concentration is known to lie in some bounded interval and in cases where there is no information about its range. We also construct simple diagrams that may be easily employed to decide on compliance with the D-2 regulation of the International Maritime Organization (IMO).
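The sample-size idea can be illustrated with a plain normal-approximation search. This is only a sketch, not the authors' algorithm (their procedure builds on Chen and Chen (2012)); the dispersion parameter k and the 20% relative half-width are illustrative assumptions:

```python
import math

def sample_size(mu, k, rel_halfwidth=0.2, z=1.96, n_max=10000):
    """Smallest number of aliquots n such that the normal-approximation CI
    for the mean concentration has half-width <= rel_halfwidth * mu.
    Counts per aliquot are negative binomial: Var = mu + mu**2 / k,
    where k is the dispersion (shape) parameter."""
    var = mu + mu * mu / k
    for n in range(2, n_max + 1):
        halfwidth = z * math.sqrt(var / n)
        if halfwidth <= rel_halfwidth * mu:
            return n
    return None

# More aggregation (smaller k) demands many more samples.
n_disp = sample_size(mu=10.0, k=0.5)
n_poisson_like = sample_size(mu=10.0, k=100.0)
print(n_disp, n_poisson_like)
```

With a mean of 10 organisms per aliquot, strong clumping (k = 0.5) requires 202 aliquots versus 11 for the near-Poisson case (k = 100), which is why heterogeneity drives the sample-size question in the paper.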
Simplified inverse filter tracking algorithm for estimating the mean trabecular bone spacing.
Huang, Kai; Ta, Dean; Wang, Weiqi; Le, L H
2008-07-01
Ultrasonic backscatter signals provide useful information relevant to bone tissue characterization. Trabecular bone microstructures have been considered as quasi-periodic tissues with a collection of regular and diffuse scatterers. This paper investigates the potential of a novel technique using a simplified inverse filter tracking (SIFT) algorithm to estimate mean trabecular bone spacing (MTBS) from ultrasonic backscatter signals. In contrast to other frequency-based methods, the SIFT algorithm is a time-based method and utilizes the amplitude and phase information of backscatter echoes, thus retaining the advantages of both the autocorrelation and cepstral analysis techniques. The SIFT algorithm was applied to backscatter signals from simulations, phantoms, and bovine trabeculae in vitro. The estimated MTBS results were compared with those of the autoregressive (AR) cepstrum and quadratic transformation (QT). The SIFT estimates are better than the AR cepstrum estimates and are comparable with the QT values. The study demonstrates that the SIFT algorithm has the potential to be a reliable and robust method for estimating MTBS in the presence of a small signal-to-noise ratio, a large spacing variation between regular scatterers, and a large scattering strength ratio of diffuse to regular scatterers.
Neural Network based Software Effort Estimation: A Survey
Muhammad Waseem Khan; Imran Qureshi
2014-01-01
Software effort estimation is used to estimate how many resources and how many hours are required to develop a software project. Accurate and reliable prediction is the key to the success of a project. There are numerous mechanisms for software effort estimation, but accurate prediction is still a challenge for researchers and software project managers. In this paper, the use of Neural Network techniques for Software Effort Estimation is discussed and evaluated on the basis of MMRE and Predi...
Directory of Open Access Journals (Sweden)
Manoj J. Gundalia
2013-11-01
The significance of the major meteorological factors that influence evaporation was evaluated at a daily time-scale for the monsoon season using data from Junagadh station, Gujarat (India). The computed values were compared. Solar radiation and mean air temperature were found to be the most significant factors influencing pan evaporation (Ep). A negative correlation was found between relative humidity and Ep, while wind speed, vapour pressure deficit and bright sunshine hours were least correlated and no longer remained controlling factors influencing Ep. The objective of the present study is to compare and evaluate the performance of six different temperature- and radiation-based methods in order to select the most appropriate equations for estimating Ep. Three standard quantitative statistical performance measures, the coefficient of determination (R2), the root mean square error-observations standard deviation ratio (RSR), and the Nash-Sutcliffe efficiency coefficient (E), are employed as performance criteria. The results show that the Jensen equation yielded the most reliable estimates of Ep and can be recommended for estimating Ep during the monsoon season in the study region.
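Two of the performance criteria are straightforward to compute. The sketch below, with hypothetical evaporation values, also makes visible the identity RSR = sqrt(1 − E) that links the two measures when both use the same record length:

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squares about the obs mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    mean_obs = sum(obs) / len(obs)
    rmse = (sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5
    sd_obs = (sum((o - mean_obs) ** 2 for o in obs) / len(obs)) ** 0.5
    return rmse / sd_obs

obs = [4.1, 5.0, 6.2, 5.5, 7.0, 6.8]   # hypothetical daily pan evaporation, mm
sim = [4.0, 5.2, 6.0, 5.9, 6.7, 6.9]   # hypothetical model estimates, mm
print(f"E={nse(obs, sim):.3f}  RSR={rsr(obs, sim):.3f}")
```

Lower RSR and E closer to 1 both indicate a better fit, so ranking the six equations by either criterion gives the same order.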
Ormeño, A.
2012-01-01
Do survey data on inflation expectations contain useful information for estimating macroeconomic models? I address this question by using survey data in the New Keynesian model by Smets and Wouters (2007) to estimate and compare its performance when solved under the assumptions of Rational
Martin, Gary R.; Fowler, Kathleen K.; Arihood, Leslie D.
2016-09-06
prediction of these regression equations ranges from 55.7 to 61.5 percent.

Regional weighted-least-squares regression equations were developed for estimating the harmonic-mean flows by dividing the State into three low-flow regions. The Northern region uses total drainage area and the average transmissivity of the entire thickness of unconsolidated deposits as explanatory variables. The Central region uses total drainage area, the average hydraulic conductivity of the entire thickness of unconsolidated deposits, and the index of permeability and thickness of the Quaternary surficial sediments. The Southern region uses total drainage area and the percent of the basin covered by forest. The average standard error of prediction for these equations ranges from 39.3 to 66.7 percent.

The regional regression equations are applicable only to stream sites with low flows unaffected by regulation and to stream sites with drainage basin characteristic values within specified limits. Caution is advised when applying the equations for basins with characteristics near the applicable limits, for basins with karst drainage features, and for urbanized basins. Extrapolations near and beyond the applicable basin characteristic limits will have unknown errors that may be large. Equations are presented for use in estimating the 90-percent prediction interval of the low-flow statistics estimated by use of the regression equations at a given stream site.

The regression equations are to be incorporated into the U.S. Geological Survey StreamStats Web-based application for Indiana. StreamStats allows users to select a stream site on a map and automatically measure the needed basin characteristics and compute the estimated low-flow statistics and associated prediction intervals.
Directory of Open Access Journals (Sweden)
S. T. Kessel
2013-01-01
Aerial survey provides an important tool to assess the abundance of both terrestrial and marine vertebrates. To date, limited work has tested the effectiveness of this technique for estimating the abundance of smaller shark species. In Bimini, Bahamas, the lemon shark (Negaprion brevirostris) shows high site fidelity to a shallow sandy lagoon, providing an ideal test species to determine the effectiveness of localised aerial survey techniques for a carcharhinid species in shallow subtropical waters. Between September 2007 and September 2008, visual surveys were conducted from light aircraft following defined transects ranging in length between 4.4 and 8.8 km. Count results were corrected for "availability", "perception", and "survey intensity" to provide unbiased abundance estimates. The abundance of lemon sharks was greatest in the central area of the lagoon during high tide, shifting towards the eastern and western regions of the lagoon at low tide. Mean abundance was estimated at 49 (±8.6) individuals, and monthly abundance was significantly positively correlated with mean water temperature. The successful implementation of the aerial survey technique highlights its potential for shark abundance assessments in shallow coastal marine environments.
Rana, Md Masud
2017-01-01
This paper proposes an innovative internet of things (IoT) based communication framework for monitoring a microgrid under the condition of packet dropouts in measurements. First, the microgrid incorporating renewable distributed energy resources is represented by a state-space model. An IoT-embedded wireless sensor network is adopted to sense the system states. Afterwards, the information is transmitted to the energy management system over the communication network. Finally, the least mean square fourth algorithm is explored for estimating the system states. The effectiveness of the developed approach is verified through numerical simulations.
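The least-mean-fourth (LMF) update is a single line: the estimate moves along the regressor in proportion to the cubed innovation, i.e. gradient descent on the fourth power of the error. The sketch below tracks a static two-dimensional state from noisy scalar measurements; it is a toy stand-in for the paper's microgrid model, and the step size, noise levels and state values are illustrative assumptions:

```python
import random

random.seed(1)

# True static state to recover (two hypothetical microgrid state variables).
x_true = [1.0, -0.5]

# LMF update: x_hat <- x_hat + mu * e**3 * h.
x_hat = [0.0, 0.0]
mu = 1e-3  # small step size; the cubed error makes LMF sensitive to spikes
for _ in range(50000):
    h = [random.gauss(0, 1), random.gauss(0, 1)]   # measurement (regressor) vector
    v = random.gauss(0, 0.05)                      # sensor noise
    y = h[0] * x_true[0] + h[1] * x_true[1] + v    # scalar measurement
    e = y - (h[0] * x_hat[0] + h[1] * x_hat[1])    # innovation
    x_hat = [x_hat[i] + mu * e ** 3 * h[i] for i in range(2)]

print(x_hat)  # approaches x_true
```

Compared with the ordinary LMS update (mu * e * h), the cubed error gives large corrections while the estimate is far off and very small ones near convergence, which is the usual motivation for LMF under light-tailed noise.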
Batzias, Dimitris F.
2012-12-01
In this work, we present an analytic estimation of the added value of recycled products in order to provide a means of determining the degree of recycling that maximizes profit, taking also into account the social interest by including the subsidy of the corresponding investment. A methodology has been developed based on the Life Cycle of the Product (LCP), with emphasis on the added values H, R as fractions of production and recycling cost, respectively (H, R > 1, since profit is included), which decrease at the corresponding rates h, r in the course of recycling, due to deterioration of quality. At the macrolevel, the claim that "an increase of exergy price, as a result of available cheap energy sources becoming more scarce, leads to less recovered quantity of any recyclable material" is proved by means of the tradeoff between the partial benefits due to material saving and resources degradation/consumption (assessed in monetary terms).
A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions
Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver L.
2016-01-01
Satellite constellations and Distributed Spacecraft Mission (DSM) architectures offer unique benefits to Earth observation scientists and unique challenges to cost estimators. The Cost and Risk (CR) module of the Tradespace Analysis Tool for Constellations (TAT-C) being developed by NASA Goddard seeks to address some of these challenges by providing a new approach to cost modeling, which aggregates existing Cost Estimating Relationships (CERs) from respected sources, cost estimating best practices, and data from existing and proposed satellite designs. Cost estimation in this tool is approached from two perspectives: parametric cost estimating relationships and analogous cost estimation techniques. The dual approach utilized within the TAT-C CR module is intended to address prevailing concerns regarding early design stage cost estimates, and to offer increased transparency and fidelity by providing two preliminary perspectives on mission cost. This work outlines the existing cost model, details the assumptions built into the model, and explains what measures have been taken to address the particular challenges of constellation cost estimating. The risk estimation portion of the TAT-C CR module is still in development and will be presented in future work. The cost estimate produced by the CR module is not intended to be an exact mission valuation, but rather a comparative tool to assist in the exploration of the constellation design tradespace. Previous work has noted that estimating the cost of satellite constellations is difficult given that no comprehensive model for constellation cost estimation has yet been developed, and as such, quantitative assessment of multiple-spacecraft missions has many remaining areas of uncertainty. By incorporating well-established CERs with preliminary approaches to addressing these uncertainties, the CR module offers a more complete approach to constellation costing than has previously been available to mission architects or Earth
Directory of Open Access Journals (Sweden)
Mikko Niilo-Rämä
2014-06-01
A novel estimator for the mean length of fibres is proposed for censored data observed in square-shaped windows. Instead of observing the fibre lengths, we observe the ratio between the intensity estimates of minus-sampling and plus-sampling. It is well known that both intensity estimators are biased. In the current work, we derive the ratio of these biases as a function of the mean length, assuming a Boolean line segment model with exponentially distributed lengths and uniformly distributed directions. Given the observed ratio of the intensity estimators, the inverse of the derived function is suggested as a new estimator for the mean length. For this estimator, an approximation of its variance is derived. The accuracies of the approximations are evaluated by means of simulation experiments. The novel method is compared to other methods and applied to real-world industrial data on crystalline nanocellulose.
Estimating Horizontal Displacement between DEMs by Means of Particle Image Velocimetry Techniques
Directory of Open Access Journals (Sweden)
Juan F. Reinoso
2015-12-01
To date, digital terrain model (DTM) accuracy has been studied almost exclusively through its height variable. However, the largely ignored horizontal component bears a great influence on the positional accuracy of certain linear features, e.g., hydrological features. In an effort to fill this gap, we propose a means of measurement different from the geomatic approach, drawing on fluid mechanics (water and air flows) and aerodynamics. The particle image velocimetry (PIV) algorithm is proposed as an estimator of horizontal differences between digital elevation models (DEMs) in grid format. After applying a scale factor to the displacement estimated by the PIV algorithm, the mean error predicted is around one-seventh of the cell size of the DEM with the greatest spatial resolution, and around one-nineteenth of the cell size of the DEM with the least spatial resolution. Our methodology allows all kinds of DTMs to be compared once they are transformed into DEM format, while also allowing comparison of data from diverse capture methods, i.e., LiDAR versus photogrammetric data sources.
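The core of a PIV-style estimator is locating the peak of the cross-correlation between the two grids. A minimal sketch, assuming a synthetic DEM and a pure integer-cell shift; real DEM pairs would also differ in noise and would be processed in interrogation windows with sub-cell peak interpolation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference DEM (elevations on a regular grid).
dem_a = rng.standard_normal((64, 64))
# Second DEM: the same terrain circularly shifted by (3, -2) cells.
dem_b = np.roll(dem_a, shift=(3, -2), axis=(0, 1))

# PIV-style estimate: peak of the FFT-based circular cross-correlation.
corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(dem_a)) * np.fft.fft2(dem_b)))
peak = np.unravel_index(np.argmax(corr), corr.shape)

# Wrap peak indices into signed displacements (in grid cells).
shift = tuple(int((p + n // 2) % n - n // 2) for p, n in zip(peak, corr.shape))
cell_size = 5.0  # metres per cell, hypothetical
print("displacement (cells):", shift, "->", [s * cell_size for s in shift], "m")
```

Multiplying the cell displacement by the cell size plays the role of the scale factor mentioned in the abstract.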
Directory of Open Access Journals (Sweden)
Atta Ullah
2014-01-01
In the practical use of a stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. The allocation of sample sizes becomes complicated when more than one characteristic is observed on each selected unit in a sample. In many real-life situations, a cost function that is linear in the sample sizes nh is not a good approximation to the actual cost of a sample survey when the travel cost between selected units within a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
Park, Boyoung; Lee, Yeon-Kyeng; Cho, Lisa Y.; Go, Un Yeong; Yang, Jae Jeong; Ma, Seung Hyun; Choi, Bo-Youl; Lee, Moo-Sik; Lee, Jin-Seok; Choi, Eun Hwa; Lee, Hoan Jong
2011-01-01
This study compared interview and telephone surveys to select the better method for regularly estimating nationwide vaccination coverage rates in Korea. Interview surveys using multi-stage cluster sampling and telephone surveys using stratified random sampling were conducted. Nationwide coverage rates were estimated for subjects with vaccination cards in the interview survey. The interview survey, relative to the telephone survey, showed a higher response rate, a lower missing rate, higher validity, and a smaller difference in vaccination coverage rates between card owners and non-owners. Primary vaccination coverage was greater than 90% except for the fourth dose of DTaP (diphtheria/tetanus/pertussis), the third dose of polio, and the third dose of Japanese B encephalitis (JBE). The DTaP4:Polio3:MMR1 full vaccination rate was 62.0%, and the BCG1:HepB3:DTaP4:Polio3:MMR1 rate was 59.5%. For age-appropriate vaccination, the coverage rate was 50%-80%. We concluded that the interview survey was better than the telephone survey. These results can be applied to countries with incomplete registries and decreasing landline telephone coverage due to increased cell phone usage. Among mandatory vaccines, efforts to increase the vaccination rates for the fourth dose of DTaP, the third dose of polio and JBE, as well as regular vaccination at the recommended ages, should be pursued in Korea. PMID:21655054
Methods for Estimating Mean Annual Rate of Earthquakes in Moderate and Low Seismicity Regions
Institute of Scientific and Technical Information of China (English)
Peng Yanju; Zhang Lifang; Lv Yuejun; Xie Zhuojuan
2012-01-01
Two kinds of methods for determining seismic parameters are presented: the potential seismic source zoning method and the grid-based spatial smoothing method. The Gaussian smoothing method and the modified Gaussian smoothing method are described in detail, and a comprehensive analysis of the advantages and disadvantages of these methods is made. We then take central China as the study region and use the Gaussian smoothing method and the potential seismic source zoning method to build seismicity models and calculate the mean annual seismic rate. Seismic hazard is calculated using the probabilistic seismic hazard analysis method to construct ground motion acceleration zoning maps. The differences between the maps and these models are discussed and their causes investigated. The results show that the spatial smoothing method is suitable for estimating the seismic hazard over moderate- and low-seismicity regions or the hazard caused by background seismicity, while the potential seismic source zoning method is suitable for estimating the seismic hazard in regions with well-defined seismotectonics. Combining the spatial smoothing method and the potential seismic source zoning method, with an integrated account of the seismicity and known seismotectonics, is a feasible approach to estimating the seismic hazard in moderate- and low-seismicity regions.
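The grid-based Gaussian smoothing step can be sketched directly. The kernel width c, the toy counts and the normalisation below follow the common Frankel-style formulation, which is an assumption rather than the exact scheme of the paper:

```python
import math

def gaussian_smooth_rates(counts, years, c, cell_km):
    """Smooth gridded epicentre counts n_j into mean annual rates:
    n~_i = sum_j n_j * exp(-d_ij**2 / c**2) / sum_j exp(-d_ij**2 / c**2),
    then divide by the catalogue duration in years."""
    rows, cols = len(counts), len(counts[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            num = den = 0.0
            for p in range(rows):
                for q in range(cols):
                    d2 = ((i - p) ** 2 + (j - q) ** 2) * cell_km ** 2
                    w = math.exp(-d2 / c ** 2)
                    num += counts[p][q] * w
                    den += w
            out[i][j] = num / den / years
    return out

# Toy 5x5 grid of epicentre counts over a hypothetical 50-year catalogue.
counts = [[0, 0, 0, 0, 0],
          [0, 0, 2, 0, 0],
          [0, 1, 8, 1, 0],
          [0, 0, 2, 0, 0],
          [0, 0, 0, 0, 0]]
rates = gaussian_smooth_rates(counts, years=50, c=25.0, cell_km=10.0)
print(f"peak annual rate after smoothing: {rates[2][2]:.4f}")
```

The smoothing spreads the observed cluster into neighbouring cells, so the peak rate is lower than the raw 8/50 per year at the centre cell while previously empty cells receive a small background rate.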
Cardiac motion estimation by using high-dimensional features and K-means clustering method
Oubel, Estanislao; Hero, Alfred O.; Frangi, Alejandro F.
2006-03-01
Tagged Magnetic Resonance Imaging (MRI) is currently the reference modality for myocardial motion and strain analysis. Mutual information (MI) based non-rigid registration has proven to be an accurate method to retrieve cardiac motion and overcomes many drawbacks of previous approaches. In previous work [1], we used Wavelet-based Attribute Vectors (WAVs) instead of pixel intensity to measure similarity between frames. Since the curse of dimensionality forbids the use of histograms to estimate the MI of high-dimensional features, k-Nearest Neighbour Graphs (kNNG) were applied to calculate α-MI. Results showed that cardiac motion estimation was feasible with that approach. In this paper, the K-means clustering method is applied to compute MI from the same set of WAVs. The proposed method was applied to four tagged MRI sequences, and the resulting displacements were compared with manual measurements made by two observers. Results show that more accurate motion estimation is obtained than with the use of pixel intensity.
Horvitz-Thompson survey sample methods for estimating large-scale animal abundance
Samuel, M.D.; Garton, E.O.
1994-01-01
Large-scale surveys to estimate animal abundance can be useful for monitoring population status and trends, for measuring responses to management or environmental alterations, and for testing ecological hypotheses about abundance. However, large-scale surveys may be expensive and logistically complex. To ensure resources are not wasted on unattainable targets, the goals and uses of each survey should be specified carefully and alternative methods for addressing these objectives should always be considered. During survey design, the importance of each survey error component (spatial design, proportion of detected animals, precision in detection) should be considered carefully to produce a complete, statistically based survey. Failure to address these three survey components may produce population estimates that are inaccurate (biased low), have unrealistic precision (too precise), and do not satisfactorily meet the survey objectives. Optimum survey design requires trade-offs in these sources of error relative to the costs of sampling plots and detecting animals on plots, considerations that are specific to the spatial logistics and survey methods. The Horvitz-Thompson estimators provide a comprehensive framework for considering all three survey components during the design and analysis of large-scale wildlife surveys. Problems of spatial and temporal (especially survey-to-survey) heterogeneity in detection probabilities have received little consideration, but failure to account for heterogeneity produces biased population estimates. The goal of producing unbiased population estimates is in conflict with the increased variation from heterogeneous detection in the population estimate. One solution to this conflict is to use an MSE-based approach to achieve a balance between bias reduction and increased variation. Further research is needed to develop methods that address spatial heterogeneity in detection, evaluate the effects of temporal heterogeneity on survey
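The basic Horvitz-Thompson idea, weighting each plot count by the inverse of its inclusion probability, and here also by a detection probability, fits in a few lines; the counts and probabilities below are hypothetical:

```python
# Horvitz-Thompson estimate of total abundance: each plot count y_i is
# divided by pi_i, the probability that plot i entered the sample, and by
# the probability p that an animal present on a surveyed plot is detected.
def horvitz_thompson_total(counts, inclusion_probs, detection_prob=1.0):
    return sum(y / (pi * detection_prob)
               for y, pi in zip(counts, inclusion_probs))

# Toy example: 4 surveyed plots drawn with unequal inclusion probabilities,
# with only 80% of animals present detected on a surveyed plot.
counts = [12, 5, 30, 9]
pis = [0.10, 0.05, 0.20, 0.10]
est = horvitz_thompson_total(counts, pis, detection_prob=0.8)
print(est)
```

If the assumed detection probability is too high (e.g. ignoring detection entirely with p = 1), the total is underestimated, which is the "biased low" failure mode the abstract warns about.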
Improvement Schemes for Indoor Mobile Location Estimation: A Survey
Directory of Open Access Journals (Sweden)
Jianga Shang
2015-01-01
Location estimation is significant in mobile and ubiquitous computing systems. The complexity and smaller scale of the indoor environment have a great impact on location estimation. The key to location estimation lies in the representation and fusion of uncertain information from multiple sources. The improvement of location estimation is a complicated and comprehensive issue, and much research has been done to address it; however, existing research typically focuses on certain aspects of the problem and specific methods. This paper reviews mainstream schemes for improving indoor location estimation from multiple levels and perspectives, combining existing works and our own working experience. Initially, we analyze the error sources of common indoor localization techniques and provide a multilayered conceptual framework of improvement schemes for location estimation. This is followed by a discussion of probabilistic methods for location estimation, including Bayes filters, Kalman filters, extended Kalman filters, sigma-point Kalman filters, particle filters, and hidden Markov models. Then, we investigate hybrid localization methods, including multimodal fingerprinting, triangulation fusing multiple measurements, combination of wireless positioning with pedestrian dead reckoning (PDR), and cooperative localization. Next, we focus on location determination approaches that fuse spatial contexts, namely map matching, landmark fusion, and spatial model-aided methods. Finally, we present directions for future research.
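Of the probabilistic methods listed, the Kalman filter is the easiest to show end to end. Below is a minimal 1-D constant-velocity sketch that fuses noisy position fixes (e.g. Wi-Fi fingerprint estimates); the motion model, noise levels and simple diagonal process noise are illustrative assumptions:

```python
import random

random.seed(7)

# State: position (m) and velocity (m/s) of a pedestrian in a corridor.
x, v = 0.0, 0.0
P = [[10.0, 0.0], [0.0, 10.0]]   # state covariance
dt, q, r = 1.0, 0.01, 4.0        # time step, process noise, measurement variance

true_pos, true_vel = 0.0, 1.2
for step in range(60):
    true_pos += true_vel * dt
    z = true_pos + random.gauss(0, r ** 0.5)   # noisy position fix
    # Predict with a constant-velocity model (F = [[1, dt], [0, 1]]).
    x, v = x + v * dt, v
    P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q]]
    # Update with the position measurement (H = [1, 0]).
    S = P[0][0] + r
    K0, K1 = P[0][0] / S, P[1][0] / S
    innov = z - x
    x, v = x + K0 * innov, v + K1 * innov
    P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
         [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]

print(f"estimated position {x:.1f} m, velocity {v:.2f} m/s")
```

Even though each fix has a 2 m standard deviation, the filter's position estimate settles to within roughly a metre of the true track, and the velocity is recovered without ever being measured directly.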
DEFF Research Database (Denmark)
Kirchner, C.H.; Beyer, Jan
1999-01-01
A statistical sampling method is described to estimate the annual catch of silver kob Argyrosomus inodorus by recreational shore-anglers in Namibia. The method is based on the theory of progressive counts and on-site roving interviews of anglers, with catch counts and measurements at interception..., using data taken during a survey from 1 October 1995 to 30 September 1996. Two different methods of estimating daily catch were tested by sampling the same population of anglers using a complete and an incomplete survey. The mean rate estimator, calculated by the ratio of the means with progressive...
A note on a difference-type estimator for population mean under two-phase sampling design.
Khan, Mursala; Al-Hossain, Abdullah Yahia
2016-01-01
In this manuscript, we propose a difference-type estimator for the population mean under a two-phase sampling scheme using two auxiliary variables. The properties and the mean square error of the proposed estimator are derived up to first order of approximation; we also derive some efficiency comparison conditions under which the proposed estimator performs better than other relevant existing estimators. We show that the proposed estimator is more efficient than other available estimators under the two-phase sampling scheme for this one example; however, further study is needed to establish its superiority for other populations.
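The general shape of a two-phase difference-type estimator, ignoring the paper's second auxiliary variable and its optimal choice of the constant k, can be verified by simulation; the population below and the fixed k = 2 are illustrative assumptions:

```python
import random

random.seed(3)

# Finite population in which y is linearly related to an auxiliary x.
N = 5000
x = [random.uniform(10, 50) for _ in range(N)]
y = [2.0 * xi + random.gauss(0, 5) for xi in x]

def difference_estimate(k=2.0, n1=800, n2=200):
    """Two-phase difference estimator: ybar_d = ybar2 + k * (xbar1 - xbar2).
    Phase 1 measures only the cheap auxiliary x on n1 units; phase 2, a
    subsample of n2 units, also measures the expensive y."""
    phase1 = random.sample(range(N), n1)
    phase2 = random.sample(phase1, n2)
    xbar1 = sum(x[i] for i in phase1) / n1
    xbar2 = sum(x[i] for i in phase2) / n2
    ybar2 = sum(y[i] for i in phase2) / n2
    return ybar2 + k * (xbar1 - xbar2)

true_mean = sum(y) / N
reps = [difference_estimate() for _ in range(500)]
bias = sum(reps) / len(reps) - true_mean
print(f"true mean {true_mean:.2f}, Monte Carlo bias {bias:.3f}")
```

The correction term k * (xbar1 - xbar2) exploits the cheaper first-phase information without introducing bias, which is the property the efficiency comparisons in the paper build on.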
U.S. Geological Survey, Department of the Interior — This data set represents the mean annual natural groundwater recharge, in millimeters, compiled for every catchment of NHDPlus for the conterminous United States....
Estimation of offshore humidity fluxes from sonic and mean temperature profile data
Foreman, R. J.; Emeis, S. M.
2009-09-01
A new simple method is employed to estimate the virtual potential temperature flux in marine conditions in the absence of any reliable hygrometry measurements. The estimate is made from a combination of sonic and cup anemometer measurements. Since the measurement of temperature by a sonic anemometer is humidity dependent, it overestimates the heat flux by a magnitude of 0.51·θ̄·w'q', where θ̄ is the mean potential temperature in Kelvin and w'q' is the humidity flux. However, the quantity of interest for many applications is the virtual potential temperature flux w'θv', which itself overestimates the heat flux by a magnitude of 0.61·θ̄·w'q'. The virtual potential temperature flux is thus estimated by

w'θv' = w'θs' + 0.1·θ̄·w'q',   (1)

where w'θs' is the measured sonic anemometer heat flux. To properly estimate w'q', fast-response hygrometers are required; in their absence, mean measurements can be used. While we have access to standard hygrometers, there are reasons to question the validity of their results. Therefore, we propose that w'θv' be estimated by equating the stability parameter z/L, where z is the height and L the Obukhov length (which contains w'θv' and hence eq. (1)), with the bulk Richardson number Rb and solving for w'q', giving

w'q' = -(10/θ̄)·( u*³·θ̄v·Rb/(k·z·g) + w'θs' ),   (2)

where u* is the friction velocity, k the von Karman constant and g the gravitational acceleration. Upon substituting eq. (2) into eq. (1) and comparing the terms on the right-hand side of eq. (1), it is found that the contribution of the moisture term is an order of magnitude greater than that of the sonic measurement. This result is broadly consistent with previously published measurements of humidity fluxes using fast-response hygrometers in marine environments, for example by Sempreviva and Gryning (1996) and Edson et al. (2004). We conclude that moisture effects are the chief determinant of instability in the marine surface layer. Consequently, the not-uncommon neglect of humidity effects in analytical and modelling efforts will result in a poor estimation of quantities such as the Obukhov length
Institute of Scientific and Technical Information of China (English)
HU Zhen-qi; HE Fen-qin; YIN Jian-zhong; LU Xia; TANG Shi-lu; WANG Lin-lin; LI Xiao-jing
2007-01-01
The objective of this paper is to improve the monitoring speed and precision of fractional vegetation cover (fc). It mainly focuses on fc estimation when fcmax and fcmin are not approximately equal to 100% and 0%, respectively, due to the use of remote sensing images with medium or low spatial resolution. Meanwhile, we present a new method of fc estimation based on a random set of fc maximum and minimum values from digital camera (DC) survey data and a dimidiate pixel model. The results show that this is a convenient, efficient and accurate method for fc monitoring, with a maximum error of -0.172 and a correlation coefficient of 0.974 between the DC survey data and the estimated values of the remote sensing model. The remaining DC survey data can be used as verification data for the precision of the fc estimation. In general, the estimation of fc based on DC survey data and a remote sensing model is a brand-new development trend and deserves further extensive utilization.
Directory of Open Access Journals (Sweden)
Javid Shabbir
2012-01-01
In this paper we propose a combined exponential ratio-type estimator of the finite population mean utilizing information on the auxiliary attribute(s) under non-response. Expressions for the bias and MSE of the proposed estimator are derived up to first order of approximation. An empirical study is carried out to observe the performance of the estimators.
DEFF Research Database (Denmark)
Knudsen, Per; Andersen, Ole Baltazar
2012-01-01
The Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite mission measures the Earth's gravity field with unprecedented accuracy, leading to substantial improvements in the modelling of ocean circulation and transport. In this study of the performance of GOCE, a newer gravity model has been... have been improved significantly compared to results obtained using pre-GOCE gravity field models. The results of this study show that geostrophic surface currents associated with the mean circulation have been further improved and that currents with speeds down to 5 cm/s have been recovered....
Directory of Open Access Journals (Sweden)
Angela Shirley
2014-01-01
Full Text Available To achieve a more efficient use of auxiliary information, we propose single-parameter ratio/product-cum-mean-per-unit estimators for a finite population mean in a simple random sample without replacement when the magnitude of the correlation coefficient is not very high (less than or equal to 0.7). The first-order large-sample approximations to the bias and the mean square error of our proposed estimators are obtained. We use simulation to compare our estimators with the well-known sample mean, ratio, and product estimators, as well as the classical linear regression estimator. The results confirm the motivation behind our proposal.
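The kind of simulation comparison described above can be reproduced in outline. The sketch below uses a hypothetical finite population and the textbook SRSWOR mean, ratio, and regression estimators (not the authors' proposed single-parameter estimators):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical finite population with an auxiliary variable x correlated with y
N, n, reps = 2000, 100, 4000
x = rng.uniform(10, 50, N)
y = 2.0 * x + rng.normal(0, 15, N)        # moderate positive correlation
X_bar, Y_bar = x.mean(), y.mean()         # X_bar known; Y_bar is the target

est = {"mean": [], "ratio": [], "regression": []}
for _ in range(reps):
    idx = rng.choice(N, n, replace=False)  # simple random sample w/o replacement
    xs, ys = x[idx], y[idx]
    est["mean"].append(ys.mean())
    est["ratio"].append(ys.mean() / xs.mean() * X_bar)
    b = np.cov(xs, ys)[0, 1] / xs.var(ddof=1)
    est["regression"].append(ys.mean() + b * (X_bar - xs.mean()))

# Empirical MSE of each estimator of the population mean Y_bar
mse = {k: float(((np.asarray(v) - Y_bar) ** 2).mean()) for k, v in est.items()}
```

With a positive correlation of this size, the ratio and regression estimators both beat the plain sample mean in MSE, which is the baseline behaviour such proposals aim to improve on.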
Dementia care and general physicians--a survey on prevalence, means, attitudes and recommendations.
Thyrian, Jochen René; Hoffmann, Wolfgang
2012-12-01
General physicians (GPs) play a key role in providing appropriate care for people with dementia. It is important to understand their workload and opinions regarding areas for improvement. A group of 1,109 GPs working in Mecklenburg-Western Pomerania, Germany (1.633 million inhabitants), were identified, contacted and asked to participate in a written survey. The survey addressed five main topics: (a) the GP, (b) the GP's practice, (c) the treatment of dementia, (d) personal views, attitudes and specific competences regarding dementia and (e) the GP's recommendations for improving dementia-related health care. The survey response rate was 31%. In total, the responding GPs estimated that they provided care to 12,587 patients with dementia every quarter. The GPs also reported their opinions about screening instruments, treatment and recommendations for better care of dementia patients. Only 10% of them do not use screening instruments, one third feel competent in their care for patients with dementia, and 54% opt to transfer patients to a specialist for further neuropsychological testing. Four conclusions from this study are the following: (a) dementia care is a relevant and prevalent topic for GPs, (b) systematic screening instruments are widely used, but treatment is guided mostly by clinical experience, (c) attitudes towards caring for people with dementia are positive, and (d) GPs recommend spending considerably more time with patients and caregivers and providing better support for social participation. A majority of GPs recommend abolishing "Budgetierung", a healthcare budgeting system in the statutory health insurance programmes.
Southeast Region Headboat Survey-Trip Estimates by Type
National Oceanic and Atmospheric Administration, Department of Commerce — This is a summary of the number of trips by vessel/area/month/trip type and is a means of gauging headboat fishery effort and compliance through time.
Macek, Mark D; Manski, Richard J; Vargas, Clemencia M; Moeller, John
2002-01-01
Objective To compare estimates of dental visits among adults using three national surveys. Data Sources/Study Design Cross-sectional data from the National Health Interview Survey (NHIS), National Health and Nutrition Examination Survey (NHANES), and National Health Expenditure surveys (NMCES, NMES, MEPS). Study Design This secondary data analysis assessed whether overall estimates and stratum-specific trends are different across surveys. Data Collection Dental visit data are age standardized via the direct method to the 1990 population of the United States. Point estimates, standard errors, and test statistics are generated using SUDAAN. Principal Findings Sociodemographic, stratum-specific trends are generally consistent across surveys; however, overall estimates differ (NHANES III [364-day estimate] versus 1993 NHIS: –17.5 percent difference, Z=7.27, p value < 0.001; NHANES III [365-day estimate] vs. 1993 NHIS: 5.4 percent difference, Z=–2.50, p value=0.006; MEPS vs. 1993 NHIS: –29.8 percent difference, Z=16.71, p value < 0.001). MEPS is the least susceptible to intrusion, telescoping, and social desirability. Conclusions Possible explanations for discrepancies include different reference periods, lead-in statements, question format, and social desirability of responses. Choice of survey should depend on the hypothesis. If trends are necessary, choice of survey should not matter; however, if health status or expenditure associations are necessary, then surveys that contain these variables should be used, and if accurate overall estimates are necessary, then MEPS should be used. A validation study should be conducted to establish “true” utilization estimates. PMID:12036005
Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco
2014-06-11
Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs, which is a priority for sustaining the investments needed to modernize local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for sufficiently large ratios of the correlation distance to the Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever this ratio falls below a certain threshold.
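A minimal sketch of the setup described above, assuming a first-order Gauss-Markov spatial correlation and hypothetical one-dimensional station geometry; the estimator here is the standard Wiener/LMMSE form, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(7)

sigma2, d_corr = 1.0, 200.0        # DC field variance and correlation distance (km)
noise_var = 0.2                    # measurement noise variance at each station
rs_pos = np.array([0.0, 150.0, 300.0, 450.0])   # reference-station positions (km)
user_pos = 220.0                   # user location (km)

def cov(a, b):
    # First-order Gauss-Markov spatial correlation: sigma^2 * exp(-d / d_corr)
    return sigma2 * np.exp(-np.abs(np.subtract.outer(a, b)) / d_corr)

# Draw a correlated "true DC" field at the stations and at the user
pts = np.append(rs_pos, user_pos)
field = rng.multivariate_normal(np.zeros(pts.size), cov(pts, pts))
true_rs, true_user = field[:-1], field[-1]
z = true_rs + rng.normal(0, np.sqrt(noise_var), rs_pos.size)  # noisy DC measurements

# LMMSE estimate of the DC at the user location
C = cov(rs_pos, rs_pos) + noise_var * np.eye(rs_pos.size)
c = cov(np.array([user_pos]), rs_pos).ravel()
dc_hat = np.linalg.solve(C, c) @ z

# A-priori LMMSE error variance: smaller than the prior variance sigma2
err_var = sigma2 - c @ np.linalg.solve(C, c)
```

The sensitivity the paper studies corresponds to evaluating `c` and `C` with a mismatched `d_corr`: when the assumed correlation distance is much larger than the station separation the weights average usefully, and when it is too small the estimate degrades toward the prior.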
Mixed Estimation for a Forest Survey Sample Design
Francis A. Roesch
1999-01-01
Three methods of estimating the current state of forest attributes over small areas for the USDA Forest Service Southern Research Station's annual forest sampling design are compared. The three methods were (I) simple moving average, (II) single imputation of plot data that had been updated by externally developed models, and (III) local application of a global...
Mean temperature of the catch (MTC) in the Greek Seas based on landings and survey data
Directory of Open Access Journals (Sweden)
Athanassios C. Tsikliras
2015-04-01
Full Text Available The mean temperature of the catch (MTC), which is the average inferred temperature preference of the exploited species weighted by their annual catch, is an index that has been used for evaluating the effect of sea warming on marine ecosystems. In the present work, we examined the effect of sea surface temperature on the catch composition of the Greek Seas using the MTC applied to the official catch statistics (landings) for the period 1970-2010 (Aegean and Ionian Seas) and to experimental bottom trawl survey data for 1997-2014 (southern Aegean Sea). The MTC of the landings for the study period increased from 11.8 °C to 16.2 °C in the Aegean Sea and from 10.0 °C to 14.7 °C in the Ionian Sea. Overall, the rate of MTC increase was 1.01 °C per decade for the Aegean and 1.17 °C per decade for the Ionian Sea, and was positively related to sea surface temperature anomalies in both areas. For the survey data, the increase of the MTC of the bottom trawl catch in the southern Aegean Sea was lower (0.51 °C per decade) but referred to a shorter time frame and included only demersal species. The change in MTC of official and survey catches indicates that the relative catch proportions of species preferring warmer waters and those preferring colder waters have changed in favour of the former, and that this change is linked to the increase in sea surface temperature, whether driven internally (through the Atlantic Multidecadal Oscillation) or externally (the warming trend).
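The MTC itself is just a catch-weighted average of species' inferred temperature preferences. A sketch with hypothetical species, landings, and preference values:

```python
def mean_temperature_of_catch(catches, temp_pref):
    """MTC: species' inferred temperature preferences weighted by annual catch."""
    total = sum(catches.values())
    return sum(catches[sp] * temp_pref[sp] for sp in catches) / total

# Hypothetical annual landings (tonnes) and temperature preferences (°C)
catches = {"sardine": 1200.0, "anchovy": 900.0, "hake": 400.0}
temp_pref = {"sardine": 16.0, "anchovy": 17.5, "hake": 13.0}
mtc = mean_temperature_of_catch(catches, temp_pref)
```

A shift of landings toward the warm-preferring species raises the MTC even if total catch is unchanged, which is exactly the signal the index is designed to capture.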
Directory of Open Access Journals (Sweden)
Zhuo Qi Lee
Full Text Available Biased random walks have been studied extensively over the past decade, especially in the transport and communication networks communities. The mean first passage time (MFPT) of a biased random walk is an important performance indicator in those domains. While the fundamental matrix approach gives a precise solution for the MFPT, the computation is expensive and the solution lacks interpretability. Other approaches based on mean field theory relate the MFPT to the node degree alone. However, nodes with the same degree may have very different local weight distributions, which may result in vastly different MFPTs. We derive an approximate bound on the MFPT of a biased random walk with short relaxation time on a complex network, where the biases are controlled by arbitrarily assigned node weights. We show that the MFPT of a node in this general case is closely related not only to its node degree, but also to its local weight distribution. The MFPTs obtained from computer simulations agree with the new theoretical analysis. Our result enables fast estimation of the MFPT, which is useful especially for differentiating between nodes that have very different local node weight distributions even though they share the same node degree.
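A biased random walk of this kind, with transition probabilities proportional to arbitrarily assigned node weights, can be simulated directly to estimate the MFPT by Monte Carlo. The graph and weights below are hypothetical:

```python
import random

# Hypothetical undirected graph with arbitrary node weights (the biases)
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
weight = {0: 1.0, 1: 4.0, 2: 1.0, 3: 2.0}

def step(u):
    # Biased walk: move to neighbour v with probability proportional to weight[v]
    nbrs = adj[u]
    r = random.uniform(0, sum(weight[v] for v in nbrs))
    for v in nbrs:
        r -= weight[v]
        if r <= 0:
            return v
    return nbrs[-1]

def mfpt(source, target, walks=20000):
    # Monte Carlo estimate of the mean first passage time
    total = 0
    for _ in range(walks):
        u, t = source, 0
        while u != target:
            u, t = step(u), t + 1
        total += t
    return total / walks

random.seed(1)
m03 = mfpt(0, 3)   # hitting time from node 0 to node 3
```

Nodes 1 and 2 here have symmetric positions in the topology but very different weights, so their MFPTs from node 0 differ sharply; this is the degree-alone blind spot the abstract points out.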
Estimation of BDS DCB Combining GIM and Different Zero-mean Constraints
Directory of Open Access Journals (Sweden)
YAO Yibin
2017-02-01
Full Text Available Owing to the limited number of BeiDou Navigation Satellite System (BDS) satellites and tracking stations currently available, it is difficult to attain daily DCB solutions with high accuracy based on the BeiDou single system. To overcome this weakness, two different zero-mean constraints for BDS satellites, called constraint one and constraint two, respectively, are used to estimate the DCBs of BDS based on BeiDou observations from the multi-GNSS experiment (MGEX) network and global ionosphere maps (GIM) from the Center for Orbit Determination in Europe (CODE). The results show that the systematic difference of the overall trend under the two constraints is consistent, and the systematic difference of DCB(C2I-C7I) and DCB(C2I-C6I) is -3.3 ns and 1.2 ns, respectively. The systematic difference between BDS satellite DCBs and receiver DCBs has the same absolute value but opposite sign. Compared to constraint one, the DCB estimates of IGSO/MEO satellites under constraint two are more stable (the improvements in the STD of satellite DCB(C2I-C7I) and DCB(C2I-C6I) are up to 21% and 13%, respectively), and the stability of IGSO and MEO satellites (STDs within 0.1 ns and 0.2 ns, respectively) is better than that of GEO satellites (STDs of 0.15~0.32 ns). The DCB estimates under constraint one are not only consistent with the CAS/DLR products (bias: -0.4~0.2 ns) but also take into account the stability of BDS satellite DCBs. Under the two constraints, there is no obvious change in BDS receiver DCBs, meaning that the selection of constraints has no obvious influence on the stability of BDS receiver DCBs. The overall stability of BDS receiver DCBs is better than 1 ns. Due to the accuracy discrepancy of GIM at different latitudes, the stability of BDS receiver DCBs at middle-high latitudes (STDs within 0.4 ns) is better than that in the low-latitude region (STDs of 0.8~1 ns).
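The role of a zero-mean constraint can be illustrated with a toy least-squares DCB adjustment: satellite and receiver biases only ever appear as sums, so the datum defect is removed by constraining the satellite DCBs to sum to zero. A sketch with simulated, hypothetical biases (not actual MGEX/GIM processing):

```python
import numpy as np

rng = np.random.default_rng(0)
n_rec, n_sat = 5, 8

# Hypothetical true biases (ns); the satellite/receiver split is only defined
# up to a common offset, which the zero-mean constraint fixes
true_sat = rng.normal(0, 3, n_sat)
true_rec = rng.normal(0, 5, n_rec)
obs = true_rec[:, None] + true_sat[None, :] + rng.normal(0, 0.1, (n_rec, n_sat))

# One design-matrix row per (receiver, satellite) observation
rows, y = [], []
for r in range(n_rec):
    for s in range(n_sat):
        a = np.zeros(n_rec + n_sat)
        a[r], a[n_rec + s] = 1.0, 1.0
        rows.append(a)
        y.append(obs[r, s])

# Zero-mean constraint over satellite DCBs removes the datum (rank) defect
constraint = np.zeros(n_rec + n_sat)
constraint[n_rec:] = 1.0
A = np.vstack(rows + [constraint])
y = np.append(y, 0.0)

est = np.linalg.lstsq(A, y, rcond=None)[0]
est_sat = est[n_rec:]        # satellite DCBs, referenced to a zero-mean datum
```

Changing which satellites enter the constraint shifts all satellite DCBs by a common offset and all receiver DCBs by its negative, which is why the paper sees equal-magnitude, opposite-sign systematic differences between the two constraints.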
Hing, E; Poe, G; Euller, R
1999-01-01
Two large surveys on employer-sponsored health insurance produced different estimates of the percentage of employers offering insurance to their employees in 1993. These differences occurred despite major similarities in the surveys' purpose and design. In this paper, five survey design factors are assessed. Estimates from the second survey were recomputed to eliminate cases not included in the first survey. Survey estimates were no longer significantly different when cases were removed because establishments had moved, were single-employee establishments on the sample frame, were classified as completed only in the second survey, or when poststratification adjustments in the weighting used only in the second survey were eliminated. Based on a comparison of 449 cases that responded in both surveys, changes in the wording of questions also probably contributed to the difference in survey estimates. These results indicate that estimates from these types of surveys are very sensitive to differing designs.
DEFF Research Database (Denmark)
Henriksen, Lars Christian; Hansen, Morten Hartvig; Poulsen, Niels Kjølstad
2013-01-01
Model‐based state space controllers require knowledge of states, both measurable and unmeasurable, and state estimation algorithms are typically employed to obtain estimates of the unmeasurable states. For the control of wind turbines, a good estimate of the free mean wind speed is important...... for the closed‐loop dynamics of the system, and an appropriate level of modelling detail is required to obtain good estimates of the free mean wind speed. In this work, three aerodynamic models based on blade element momentum theory are presented and compared with the aero‐servo‐elastic code HAWC2. The first...... in the aero‐servo‐elastic code HAWC2 compare the ability to estimate the free mean wind speed when either the first or third model is included in the estimation algorithm. Both a simplified example with a deterministic step in wind speed and full degrees‐of‐freedom simulations with turbulent wind fields...
Pose Estimation for Augmented Reality: A Hands-On Survey
Marchand, Éric; Uchiyama, Hideaki; Spindler, Fabien
2016-01-01
International audience; Augmented reality (AR) makes it possible to seamlessly insert virtual objects into an image sequence. In order to accomplish this goal, it is important that synthetic elements are rendered and aligned in the scene in an accurate and visually acceptable way. The solution of this problem can be related to a pose estimation or, equivalently, a camera localization process. This paper aims at presenting a brief but almost self-contained introduction to the most important approaches dedi...
What does it mean to manage sky survey data? A model to facilitate stakeholder conversations
Sands, Ashley E.; Darch, Peter T.
2016-06-01
Astronomy sky surveys, while of great scientific value independently, can be deployed even more effectively when multiple sources of data are combined. Integrating discrete datasets is a non-trivial exercise despite investments in standard data formats and tools. Creating and maintaining data and associated infrastructures requires investments in technology and expertise. Combining data from multiple sources necessitates a common understanding of data, structures, and goals amongst relevant stakeholders. We present a model of Astronomy Stakeholder Perspectives on Data. The model is based on 80 semi-structured interviews with astronomers, computational astronomers, computer scientists, and others involved in the building or use of the Sloan Digital Sky Survey (SDSS) and Large Synoptic Survey Telescope (LSST). Interviewees were selected to ensure a range of roles, institutional affiliations, career stages, and level of astronomy education. Interviewee explanations of data were analyzed to understand how perspectives on astronomy data varied by stakeholder. Interviewees described sky survey data either intrinsically or extrinsically. “Intrinsic” descriptions of data refer to data as an object in and of itself. Respondents with intrinsic perspectives view data management in one of three ways: (1) “Medium” - securing the zeros and ones from bit rot; (2) “Scale” - assuring that changes in state are documented; or (3) “Content” - ensuring the scientific validity of the images, spectra, and catalogs. “Extrinsic” definitions, in contrast, define data in relation to other forms of information. Respondents with extrinsic perspectives view data management in one of three ways: (1) “Source” - supporting the integrity of the instruments and documentation; (2) “Relationship” - retaining relationships between data and their analytical byproducts; or (3) “Use” - ensuring that data remain scientifically usable. This model shows how data management can
Oosterhaven, J.; Stelder, T.M.
2008-01-01
This paper evaluates a recently published semi-survey international input-output table for nine East-Asian countries and the USA with four non-survey estimation alternatives. A new generalized RAS procedure is used with stepwise increasing information from both import and export statistics as
Institute of Scientific and Technical Information of China (English)
无
2008-01-01
A class of estimators of the mean survival time with interval-censored data are studied by the unbiased transformation method. The estimators are constructed based on the observations to ensure unbiasedness, in the sense that the estimators in a certain class have the same expectation as the mean survival time. The estimators have good properties such as strong consistency (with a rate of O(n^(-1/2)(log log n)^(1/2))) and asymptotic normality. The application to linear regression is considered and simulation reports are given.
Carrieri, M; Angelini, P; Venturelli, C; Maccagnani, B; Bellini, R
2012-03-01
Our study compared different estimates of adult mosquito abundance (Pupal Demographic Survey [PDS], Human Landing Collection [HLC], Number of Bites declared by Citizens during interviews [NBC]) to the mean number of eggs laid in ovitraps. We then calculated a disease risk threshold in terms of number of eggs per ovitrap above which an arbovirus epidemic may occur. The study was conducted during the summers of 2007 and 2008 in the Emilia-Romagna region of Italy where a chikungunya epidemic occurred in 2007. Ovitrap monitoring lasted from May to September, while adult sampling by means of PDS, HLC, and NBC was repeated three times each summer. Based on calculated rate of increase of the disease (R(0)) and the number of bites per human per day measured during the outbreak, we estimated that only 10.1% of the females transmitted the chikungunya virus in the principal focus. Under our conditions, we demonstrated that a positive correlation can be found between the females' density estimated by means of PDS, HLC, and NBC and the mean number of eggs in the ovitraps. We tested our hypothesis during the 2007 secondary outbreak of CHIKV in Cervia, and found that R(0) calculated based on the number of biting females estimated from the egg density was comparable to the R(0) calculated based on the progression of the human cases. The identification of an epidemic threshold based on the mean egg density may define the high risk areas and focus control programs.
Sidle, John G.; Augustine, David J.; Johnson, Douglas H.; Miller, Sterling D.; Cully, Jack F.; Reading, Richard P.
2012-01-01
Aerial surveys using line-intercept methods are one approach to estimate the extent of prairie dog colonies in a large geographic area. Although black-tailed prairie dogs (Cynomys ludovicianus) construct conspicuous mounds at burrow openings, aerial observers have difficulty discriminating between areas with burrows occupied by prairie dogs (colonies) versus areas of uninhabited burrows (uninhabited colony sites). Consequently, aerial line-intercept surveys may overestimate prairie dog colony extent unless adjusted by an on-the-ground inspection of a sample of intercepts. We compared aerial line-intercept surveys conducted over 2 National Grasslands in Colorado, USA, with independent ground-mapping of known black-tailed prairie dog colonies. Aerial line-intercepts adjusted by ground surveys using a single activity category adjustment overestimated colonies by ≥94% on the Comanche National Grassland and ≥58% on the Pawnee National Grassland. We present a ground-survey technique that involves 1) visiting on the ground a subset of aerial intercepts classified as occupied colonies plus a subset of intercepts classified as uninhabited colony sites, and 2) based on these ground observations, recording the proportion of each aerial intercept that intersects a colony and the proportion that intersects an uninhabited colony site. Where line-intercept techniques are applied to aerial surveys or remotely sensed imagery, this method can provide more accurate estimates of black-tailed prairie dog abundance and trends.
Zhang, Wenqing; Qiu, Lu; Xiao, Qin; Yang, Huijie; Zhang, Qingjun; Wang, Jianyong
2012-11-01
By means of the concept of the balanced estimation of diffusion entropy, we evaluate the reliable scale invariance embedded in different sleep stages and stride records. Segments corresponding to waking, light sleep, rapid eye movement (REM) sleep, and deep sleep stages are extracted from long-term electroencephalogram signals. For each stage the scaling exponent value is distributed over a considerably wide range, which tells us that the scaling behavior is subject and sleep-cycle dependent. The average of the scaling exponent values for waking segments is almost the same as that for REM segments (~0.8). The waking and REM stages have a significantly higher average scaling exponent than the light sleep stages (~0.7). For the stride series, the original diffusion entropy (DE) and the balanced estimation of diffusion entropy (BEDE) give almost the same results for detrended series. The evolutions of local scaling invariance show that the physiological states change abruptly, although in the experiments great efforts were made to keep conditions unchanged. The global behavior of a single physiological signal may lose rich information on physiological states. Methodologically, the BEDE can evaluate with considerable precision the scale invariance in very short time series (~10^2 points), while the original DE method sometimes may underestimate scale-invariance exponents or even fail to detect scale-invariant behavior. The BEDE method is sensitive to trends in time series. The existence of trends may lead to an unreasonably high value of the scaling exponent and consequent mistaken conclusions.
Hostetter, Nathan J.; Gardner, Beth; Schweitzer, Sara H.; Boettcher, Ruth; Wilke, Alexandra L.; Addison, Lindsay; Swilling, William R.; Pollock, Kenneth H.; Simons, Theodore R.
2015-01-01
The extensive breeding range of many shorebird species can make integration of survey data problematic at regional spatial scales. We evaluated the effectiveness of standardized repeated count surveys coordinated across 8 agencies to estimate the abundance of American Oystercatcher (Haematopus palliatus) breeding pairs in the southeastern United States. Breeding season surveys were conducted across coastal North Carolina (90 plots) and the Eastern Shore of Virginia (3 plots). Plots were visited on 1–5 occasions during April–June 2013. N-mixture models were used to estimate abundance and detection probability in relation to survey date, tide stage, plot size, and plot location (coastal bay vs. barrier island). The estimated abundance of oystercatchers in the surveyed area was 1,048 individuals (95% credible interval: 851–1,408) and 470 pairs (384–637), substantially higher than estimates that did not account for detection probability (maximum counts of 674 individuals and 316 pairs). Detection probability was influenced by a quadratic function of survey date, and increased from mid-April (~0.60) to mid-May (~0.80), then remained relatively constant through June. Detection probability was also higher during high tide than during low, rising, or falling tides. Abundance estimates from N-mixture models were validated at 13 plots by exhaustive productivity studies (2–5 surveys per week). Intensive productivity studies identified 78 breeding pairs across 13 productivity plots while the N-mixture model abundance estimate was 74 pairs (62–119) using only 1–5 replicated surveys per season. Our results indicate that standardized replicated count surveys coordinated across multiple agencies and conducted during a relatively short time window (closure assumption) provide tremendous potential to meet both agency-level (e.g., state) and regional-level (e.g., flyway) objectives in large-scale shorebird monitoring programs.
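An N-mixture model of the kind used above marginalizes a latent plot abundance N (here given a Poisson prior) out of a binomial detection likelihood for the repeated counts. A self-contained sketch with hypothetical count data and a crude grid-search MLE (the study itself reported Bayesian credible intervals and covariate effects, which are omitted here):

```python
import numpy as np
from math import lgamma

# Hypothetical repeated point counts: rows = plots, columns = visits
counts = np.array([[3, 2, 4], [1, 0, 2], [5, 4, 5], [2, 2, 1]])
N_MAX = 60                                  # truncation of the latent abundance sum
n = np.arange(N_MAX + 1)
lf = np.array([lgamma(k + 1) for k in n])   # log(n!)

def binom_pmf(y, p):
    # Binomial(n, p) pmf evaluated over the whole latent-abundance grid
    pmf = np.zeros(n.size)
    ok = n >= y
    log_c = lf[ok] - lf[y] - lf[n[ok] - y]
    pmf[ok] = np.exp(log_c + y * np.log(p) + (n[ok] - y) * np.log1p(-p))
    return pmf

def log_lik(lam, p):
    prior = np.exp(n * np.log(lam) - lam - lf)   # Poisson(lam) prior on N
    ll = 0.0
    for site in counts:
        lik_n = prior.copy()
        for y in site:                           # repeated visits, closed population
            lik_n *= binom_pmf(y, p)
        ll += np.log(lik_n.sum())                # marginalize the latent N
    return ll

# Crude grid-search MLE over (lambda, p)
lams = np.linspace(1.0, 15.0, 57)
ps = np.linspace(0.05, 0.95, 46)
grid = np.array([[log_lik(l, q) for q in ps] for l in lams])
i, j = np.unravel_index(grid.argmax(), grid.shape)
lam_hat, p_hat = lams[i], ps[j]
```

Because detection p < 1, the fitted abundance `lam_hat` exceeds the mean raw count, which is why the study's model-based estimates sit above the naive maximum counts.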
Levin, Bruce; Leu, Cheng-Shiun
2013-01-01
We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.
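The flavour of such an equivalence can be illustrated numerically: under a balanced one-way random effects model, the sample variance of the subject means divided by the number of subjects coincides algebraically with the ANOVA variance-components plug-in estimator of the grand mean's variance. The specific estimator pair below is illustrative, not necessarily the pair in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
k, m = 12, 5                      # subjects, repeated observations per subject

# Homoscedastic one-way random effects model: y_ij = mu + b_i + e_ij
y = 10 + rng.normal(0, 2, (k, 1)) + rng.normal(0, 1, (k, m))

subj_means = y.mean(axis=1)

# Estimator 1: sample variance of the subject means, divided by k
v1 = subj_means.var(ddof=1) / k

# Estimator 2: ANOVA variance components plugged into Var(grand mean)
msb = m * subj_means.var(ddof=1)                               # between-subject MS
msw = ((y - subj_means[:, None]) ** 2).sum() / (k * (m - 1))   # within-subject MS
sigma_b2 = (msb - msw) / m
sigma_e2 = msw
v2 = (sigma_b2 + sigma_e2 / m) / k
```

Expanding v2 gives ((msb − msw)/m + msw/m)/k = msb/(mk) = v1, so the two forms agree for every dataset, not just in expectation.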
Smoking in the workplace 1986: Labour Force Survey estimates.
Millar, W J; Bisch, L M
1989-01-01
A smoking supplement on the December 1986 Canadian Labour Force Survey (LFS) obtained data on smoking rates within occupational groups, the percentage of workers in occupations which permit smoking at the worksite, the proportion of workers with designated smoking areas at their place of employment, and worker attitudes towards restriction of smoking. Smoking prevalence ranges from 18% among professional workers to 42% among transportation workers. Smoking rates are also high in mining (40%), construction (39%), and other craft occupations (37%). About 53% of the working population state that smoking is permitted in their immediate work area. Proportions of workers who indicate that smoking is permitted range from 39% among professional workers to 67% among transportation workers. Managerial (66%) and construction employees (65%) are also likely to state that smoking is permitted in their work area. Only 40% of the working population report that there are designated smoking areas at their place of work. Professionals (55%) and mining workers (52%) are most likely to have designated smoking areas. Workers in outdoor (17%), construction (23%), and transportation occupations (26%) are least likely. A large percentage (81%) of the working population favour smoking restrictions. Support for restricting smoking is closely linked to smoking prevalence within an occupational group. About 65% of smokers favour restrictions. The degree of support among smokers for restrictions on smoking in the workplace suggests that many smokers desire environmental constraints on their smoking behaviour.
Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall
Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate
2016-11-01
The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.
Brus, D.J.; Gruijter, de J.J.
2003-01-01
In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the pi estimator that uses the probability sample can be
Takahasi, K; Ishida, S; Nagasaka, K; Kurokawa, M; Asakawa, S
1975-04-01
A table was constructed for use in estimating the mean of the distribution of logarithms of titers based on data obtained with pooled material instead of data from individuals in a sample. A table of standard errors of the estimator was also constructed. Examples showing the utility and applicability of the tables are presented. Several related problems are discussed.
Generalized estimators of avian abundance from count survey data
Directory of Open Access Journals (Sweden)
Royle, J. A.
2004-01-01
Full Text Available I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data-generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be included, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts, and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
Using Intelligent Techniques in Construction Project Cost Estimation: 10-Year Survey
Directory of Open Access Journals (Sweden)
Abdelrahman Osman Elfaki
2014-01-01
Cost estimation is the most important preliminary process in any construction project. Therefore, construction cost estimation has the lion's share of the research effort in construction management. In this paper, we have analysed and studied proposals for construction cost estimation from the last 10 years. To implement this survey, we have proposed and applied a methodology that consists of two parts. The first part concerns data collection, for which we have chosen specialized journals as sources for the surveyed proposals. The second part concerns the analysis of the proposals. To analyse each proposal, the following four questions have been set: Which intelligent technique is used? How have data been collected? How are the results validated? And which construction cost estimation factors have been used? From the results of this survey, two main contributions have been produced. The first contribution is the identification of the research gap in this area, which has not been fully covered by previous proposals for construction cost estimation. The second contribution of this survey is the proposal and highlighting of future directions for forthcoming work, aimed ultimately at finding the optimal approach to construction cost estimation. Moreover, we consider the second part of our methodology to be one of the contributions of this paper, as it has been proposed as a standard benchmark for construction cost estimation proposals.
Xu, Tianhua
2016-01-01
A theoretical analysis of one-tap normalized least-mean-square carrier phase estimation (CPE) is carried out for long-haul high-speed coherent optical fiber communication systems. It is found that the one-tap normalized least-mean-square equalizer shows performance similar to that of traditional differential detection in carrier phase recovery.
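For readers unfamiliar with the building block the abstract analyzes, the following is a minimal sketch of a decision-directed one-tap normalized LMS phase tracker. The modulation format (QPSK), step size, and linear phase-drift model are assumptions made for this illustration, not details taken from the paper.

```python
import numpy as np

def one_tap_nlms(received, mu=0.1):
    """Decision-directed one-tap normalized LMS phase tracker (QPSK assumed)."""
    w = 1.0 + 0j                      # single complex equalizer tap
    out = np.empty_like(received)
    for n, x in enumerate(received):
        y = w * x                     # equalized sample
        # hard QPSK decision on the equalized sample
        d = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)
        e = d - y                     # decision-directed error
        w += mu * e * np.conj(x) / np.abs(x) ** 2  # normalized LMS update
        out[n] = y
    return out

# toy example: QPSK symbols with a slow linear phase drift
rng = np.random.default_rng(0)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 2000)))
received = symbols * np.exp(1j * 0.001 * np.arange(2000))
equalized = one_tap_nlms(received)
```

With a drift of 0.001 rad per symbol and a step size of 0.1, the steady-state tracking lag is roughly drift/mu, small enough that hard decisions on the equalized output recover the transmitted constellation points.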
US Fish and Wildlife Service, Department of the Interior — Our moose population estimate for the surveyed area was calculated to be 1,864 ±1911. This included a low stratum estimate of 241, a medium stratum estimate of...
Estimating cosmological parameters from future gravitational lens surveys
Dobke, Benjamin M; Fassnacht, Christopher D; Auger, Matthew W
2009-01-01
Upcoming ground- and space-based observatories such as the DES, the LSST, the JDEM concepts and the SKA promise to dramatically increase the size of strong gravitational lens samples. A significant fraction of the systems are expected to be time-delay lenses. Many of the existing lensing degeneracies become less of an issue with large samples, since the distributions of a number of parameters are predictable and can be incorporated into an analysis, thus helping to lessen the degeneracy. Assuming a mean galaxy density profile that does not evolve with redshift, a Lambda-CDM cosmology, and Gaussian distributions for bulk parameters describing the lens and source populations, we generate synthetic lens catalogues and examine the relationship between constraints on the Omega_m - Omega_Lambda plane and H_0 with increasing lens sample size. We find that, with sample sizes of ~400 time-delay lenses, useful constraints can be obtained for Omega_m and Omega_Lambda with approximately similar levels of precision as fro...
2015-03-26
Estimating Single and Multiple Target Locations Using K-Means Clustering with Radio Tomographic Imaging in Wireless Sensor Networks. Thesis presented to the Faculty by Jeffrey K. Nishida, B.S.E.E.
Baker, Charles
2012-01-01
One method available to prove the Schauder estimates is Neil Trudinger's method of mollification. In the case of second-order elliptic equations, the method requires little more than mollification and the solid mean value inequality for subharmonic functions. Our goal in this article is to show how the mean value property of subsolutions of the heat equation can be used in a similar fashion to the solid mean value inequality for subharmonic functions in Trudinger's original elliptic treatment, providing a relatively simple derivation of the interior Schauder estimate for second-order parabolic equations.
Grizzly Bear Noninvasive Genetic Tagging Surveys: Estimating the Magnitude of Missed Detections.
Fisher, Jason T; Heim, Nicole; Code, Sandra; Paczkowski, John
2016-01-01
Sound wildlife conservation decisions require sound information, and scientists increasingly rely on remotely collected data over large spatial scales, such as noninvasive genetic tagging (NGT). Grizzly bears (Ursus arctos), for example, are difficult to study at population scales except with noninvasive data, and NGT via hair trapping informs management over much of the grizzly bear's range. Considerable statistical effort has gone into estimating sources of heterogeneity, but detection error (arising when a visiting bear fails to leave a hair sample) has not been independently estimated. We used camera traps to survey grizzly bear occurrence at fixed hair traps and multi-method hierarchical occupancy models to estimate the probability that a visiting bear actually leaves a hair sample with viable DNA. We surveyed grizzly bears via hair trapping and camera trapping for 8 monthly surveys at 50 (2012) and 76 (2013) sites in the Rocky Mountains of Alberta, Canada. We used multi-method occupancy models to estimate site occupancy, probability of detection, and conditional occupancy at a hair trap. We tested the prediction that detection error in NGT studies could be induced by temporal variability within season, leading to underestimation of occupancy. NGT via hair trapping consistently underestimated grizzly bear occupancy at a site when compared to camera trapping. At best, occupancy was underestimated by 50%; at worst, by 95%. The probability of false absence was reduced through successive surveys, but this mainly accounts for error imparted by movement among repeated surveys, not necessarily missed detections by extant bears. The implications of missed detections and biased occupancy estimates for density estimation, which forms the crux of management plans, require consideration. We suggest hair-trap NGT studies should estimate and correct detection error using independent survey methods such as cameras, to ensure the reliability of the data upon which species management and
Energy Technology Data Exchange (ETDEWEB)
Gomez, J.D. [Universidad Autonoma Chapingo, Chapingo (Mexico)]. E-mail: dgomez@correo.chapingo.mx; Etchevers, J.D. [Instituto de Recursos Naturales, Colegio de Postgraduados, Montecillo, Edo. de Mexico (Mexico); Monterroso, A.I. [departamento de Suelos, Universidad Autonoma Chapingo, Chapingo (Mexico); Gay, G. [Centro de Ciencias de la Atmosfera, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Campo, J. [Instituto de Ecologia, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Martinez, M. [Instituto de Recursos Naturales, Montecillo, Edo. de Mexico (Mexico)
2008-01-15
In regions of complex relief and scarce meteorological information it is difficult to apply numerical interpolation techniques and models to produce the reliable maps of climatic variables that are essential for studying natural resources with the new tools of geographic information systems. This paper presents a method for estimating annual and monthly mean values of temperature and precipitation, taking elements from simple interpolation methods and complementing them with some characteristics of more sophisticated methods. To determine temperature, simple linear regression equations were generated associating temperature with the altitude of weather stations in the study region, which had been previously subdivided in accordance with humidity conditions; these equations were then applied to the area's digital elevation model to obtain temperatures. The estimation of precipitation was based on the graphic method, through the analysis of the meteorological systems that affect the regions of the study area throughout the year and considering the influence of mountain ridges on the movement of prevailing winds. Weather stations with data in nearby regions were analyzed according to their position in the landscape, exposure to humid winds, and false color associated with vegetation types. Weather station sites were used to reference the amount of rainfall; interpolation was attained using analogies with false-color satellite images, to which a digital elevation model was incorporated to find similar conditions within the study area.
Dünki, Rudolf M.
2000-11-01
Limited predictability is one of the remarkable features of deterministic chaos, and this feature may be quantified in terms of Lyapunov exponents. Accordingly, Lyapunov-exponent estimates may be expected to follow in a natural way from forecast algorithms. Exploring this idea, we propose a method for estimating the largest Lyapunov exponent from a time series which uses the behavior of so-called simplex forecasts. The method considers the estimation of properties of the distribution of local simplex expansion coefficients. These are also used to define error bars for the Lyapunov-exponent estimates and allow for selective forecasts with improved prediction accuracy. We demonstrate these concepts on standard test examples and three realistic applications to time series, concerning largest-Lyapunov-exponent estimation of an experimentally obtained hyperchaotic NMR signal, brain state differentiation, and stock-market prediction.
Deceuster, John; Etienne, Adélaïde; Robert, Tanguy; Nguyen, Frédéric; Kaufmann, Olivier
2014-04-01
Several techniques are available to estimate the depth of investigation or to identify possible artifacts in dc resistivity surveys. The depth of investigation (DOI) is commonly estimated by applying an arbitrarily chosen cut-off value to a selected indicator (resolution, sensitivity or DOI index). Ranges of cut-off values are recommended in the literature for the different indicators. However, small changes in threshold values may induce strong variations in the estimated depths of investigation. To overcome this problem, we developed a new statistical method to estimate the DOI of dc resistivity surveys based on a modified DOI index approach. This method is composed of 5 successive steps. First, two inversions are performed using different resistivity reference models (0.1 and 10 times the arithmetic mean of the logarithm of the observed apparent resistivity values). Inversion models are extended to the edges of the survey line and to a depth range of three times the pseudodepth of investigation of the largest array spacing used. In step 2, we compute the histogram of a newly defined scaled DOI index. Step 3 consists of fitting a mixture of two Gaussian distributions (G1 and G2) to the cumulative distribution function of the scaled DOI index values. Based on this fitting, step 4 focuses on the computation of an interpretation index (II), defined for every cell j of the model as the relative probability density that the cell j belongs to G1, which describes the Gaussian distribution of the cells with a scaled DOI index close to 0.0. In step 5, a new inversion is performed using a third resistivity reference model (the arithmetic mean of the logarithm of the observed apparent resistivity values). The final electrical resistivity image is produced by using II as alpha-blending values, allowing visual discrimination between well-constrained areas and poorly constrained cells.
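The statistical core of steps 2-4 can be sketched as follows. This is a hedged illustration: it assumes the classic two-reference-model DOI index and a plain EM fit of the two-component mixture; the function names and synthetic data are invented for the example, not taken from the paper.

```python
import numpy as np

def doi_index(m1, m2, ref1, ref2):
    """DOI index from two inversions run with different reference models,
    scaled to [0, 1] by its maximum absolute value."""
    r = (m1 - m2) / (ref1 - ref2)
    return np.abs(r) / np.abs(r).max()

def fit_two_gaussians(x, iters=200):
    """Minimal EM fit of a two-component 1-D Gaussian mixture (step 3)."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sd = np.full(2, x.std()) + 1e-9
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        resp = pdf / pdf.sum(axis=1, keepdims=True)    # E-step: responsibilities
        nk = resp.sum(axis=0)                          # M-step: weights, means, sds
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return w, mu, sd

def interpretation_index(x, w, mu, sd):
    """Step 4: relative probability density that each cell belongs to the
    component centred near 0 (the well-constrained cells)."""
    g = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return g[:, np.argmin(np.abs(mu))] / g.sum(axis=1)

# synthetic scaled DOI indices: 300 well-constrained cells, 300 poorly constrained
rng = np.random.default_rng(0)
scaled = np.concatenate([np.abs(rng.normal(0.05, 0.03, 300)),
                         rng.normal(0.8, 0.1, 300)])
```

Cells with an interpretation index near 1 would then be rendered opaque in the final image, and cells near 0 faded out via alpha blending.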
Should total landings be used to correct estimated catch in numbers or mean-weight-at-age?
DEFF Research Database (Denmark)
Lewy, Peter; Lassen, H.
1997-01-01
Many ICES fish stock assessment working groups have practised Sum Of Products (SOP) correction. This correction stems from a comparison of the total weight of the known landings with the SOP over age of catch in numbers and mean weight-at-age, which ideally should be identical. In case of SOP discrepancies, some countries correct catch in numbers while others correct mean weight-at-age by a common factor, the ratio between landings and SOP. The paper shows that for three sampling schemes the SOP corrections are statistically incorrect and should not be made, since the SOP is an unbiased estimate of the total landings. Calculation of the bias of estimated catch in numbers and mean weight-at-age shows that SOP corrections of either of these estimates may increase the bias. Furthermore, for five demersal and one pelagic North Sea species it is shown that SOP discrepancies greater than 2% from
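The SOP bookkeeping the abstract discusses is simple to state in code. The numbers below are hypothetical, and the sketch shows the common correction factor that the paper argues should not be applied:

```python
import numpy as np

# hypothetical catch-at-age (thousands of fish) and mean weight-at-age (kg)
catch_n = np.array([1200.0, 800.0, 450.0, 150.0])
mean_w = np.array([0.35, 0.62, 0.95, 1.40])

sop = float(np.sum(catch_n * mean_w))   # Sum Of Products over age
landings = 1400.0                       # reported total landings (same units)

factor = landings / sop                 # common SOP correction factor
catch_corrected = catch_n * factor      # option 1: scale catch in numbers
weight_corrected = mean_w * factor      # option 2: scale mean weight-at-age
# either option forces the corrected SOP to equal the reported landings
```

Because the uncorrected SOP is itself an unbiased estimate of the landings, forcing this identity can add bias rather than remove it, which is the paper's point.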
Altyntsev, M. A.; Arbuzov, S. A.; Popov, R. A.; Tsoi, G. V.; Gromov, M. O.
2016-06-01
A dense digital surface model is one of the products generated by using UAV aerial survey data. Today more and more specialized software are supplied with modules for generating such kind of models. The procedure for dense digital model generation can be completely or partly automated. Due to the lack of reliable criterion of accuracy estimation it is rather complicated to judge the generation validity of such models. One of such criterion can be mobile laser scanning data as a source for the detailed accuracy estimation of the dense digital surface model generation. These data may be also used to estimate the accuracy of digital orthophoto plans created by using UAV aerial survey data. The results of accuracy estimation for both kinds of products are presented in the paper.
Feinglass, Joe; Nelson, Cynthia; Lawther, Timothy; Chang, Rowland W.
2003-01-01
OBJECTIVES: Alternative definitions of arthritis in community surveys provide very different estimates of arthritis prevalence among older Americans. This telephone interview study examines prevalence estimates based on the current Behavioral Risk Factor Surveillance System (BRFSS) arthritis case definition. METHODS: Interviews were conducted with 851 Chicago residents age 45 and older. Logistic regression was used to compare the age and sex controlled prevalence of poor health, restricted ac...
Most likely paths to error when estimating the mean of a reflected random walk
Duffy, Ken R
2009-01-01
It is known that simulation of the mean position of a reflected random walk $\{W_n\}$ exhibits non-standard behavior, even for light-tailed increment distributions with negative drift. The Large Deviation Principle (LDP) holds for deviations below the mean, but for deviations at the usual speed above the mean the rate function is null. This paper takes a deeper look at this phenomenon. Conditional on a large sample mean, a complete sample path LDP analysis is obtained. Let $I$ denote the rate function for the one-dimensional increment process. If $I$ is coercive, then given a large simulated mean position, under general conditions our results imply that the most likely asymptotic behavior, $\psi$, of the paths $n^{-1} W_{\lfloor tn\rfloor}$ is to be zero apart from on an interval $[T_0,T_1]\subset[0,1]$ and to satisfy the functional equation ...
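For concreteness, the reflected walk in question follows the Lindley recursion. A minimal simulation with light-tailed, negative-drift increments (Gaussian here, an assumption made purely for illustration) looks like this:

```python
import numpy as np

def reflected_walk(increments):
    """Lindley recursion W_{n+1} = max(W_n + X_{n+1}, 0) with W_0 = 0."""
    w = np.empty(len(increments))
    level = 0.0
    for n, x in enumerate(increments):
        level = max(level + x, 0.0)   # reflection at zero
        w[n] = level
    return w

# light-tailed increments with negative drift, as in the setting of the paper
rng = np.random.default_rng(1)
increments = rng.normal(-0.5, 1.0, 100_000)
walk = reflected_walk(increments)
sample_mean = walk.mean()
```

The sample mean of such a run is the quantity whose large deviations the paper studies: simulated values well above the stationary mean arise from atypical paths of the kind characterized by the sample-path LDP.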
Smith, S J; Perry, R I; Fanning, L P
1991-01-01
The Canadian Department of Fisheries and Oceans conducts annual bottom trawl surveys to monitor changes in the abundance of the major commercially important groundfish populations. Some of these surveys have been in operation for almost 20 yr. The estimates from these surveys often indicate rapid changes in abundance over time beyond that expected from the population dynamics of the fish. Much of this interannual change has been interpreted as variation, the magnitude of which has often made it difficult to measure anything but the most severe effects of fishing, pollution or any other intervention on the population. Recent studies have shown that some of this variation may be attributed to changes in catchability of fish due to the effects of environmental variables on fish distribution. Annual changes in abundance as estimated from such field surveys may be confounded by changes in catchability due to annual changes in environmental conditions. In this study, trawl catches of age 4 Atlantic cod (Gadus morhua) from surveys conducted during March 1979-1988 were compared with concurrent measurements of bottom salinity, temperature and depth. Large catches of age 4 cod are more likely to occur in water characterized as the intermediate cold layer, defined by salinities of 32-33.5 and temperatures < 5°C. This relationship also appears to be modified by depth. We further show that interannual changes in the estimated abundance from the surveys were, in a number of cases, coincident with changes in the proportion of the bottom water composed of the intermediate cold water layer. The implications that these patterns may have for interpreting trends in the estimates of abundance from trawl surveys are discussed.
Sanford, Ward E.; Nelms, David L.; Pope, Jason P.; Selnick, David L.
2015-01-01
Mean long-term hydrologic budget components, such as recharge and base flow, are often difficult to estimate because they can vary substantially in space and time. Mean long-term fluxes were calculated in this study for precipitation, surface runoff, infiltration, total evapotranspiration (ET), riparian ET, recharge, base flow (or groundwater discharge) and net total outflow using long-term estimates of mean ET and precipitation and the assumption that the relative change in storage over that 30-year period is small compared to the total ET or precipitation. Fluxes of these components were first estimated on a number of real-time-gaged watersheds across Virginia. Specific conductance was used to distinguish and separate surface runoff from base flow. Specific-conductance (SC) data were collected every 15 minutes at 75 real-time gages for approximately 18 months between March 2007 and August 2008. Precipitation was estimated for 1971-2000 using PRISM climate data. Precipitation and temperature from the PRISM data were used to develop a regression-based relation to estimate total ET. The proportion of watershed precipitation that becomes surface runoff was related to physiographic province and rock type in a runoff regression equation. A new approach to estimate riparian ET using seasonal SC data gave results consistent with those from other methods. Component flux estimates from the watersheds were transferred to flux estimates for counties and independent cities using the ET and runoff regression equations. Only 48 of the 75 watersheds yielded sufficient data, and data from these 48 were used in the final runoff regression equation. Final results for the study are presented as component flux estimates for all counties and independent cities in Virginia. The method has the potential to be applied in many other states in the U.S. or in other regions or countries of the world where climate and stream flow data are plentiful.
Energy Technology Data Exchange (ETDEWEB)
Amini, Nina H. [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); CNRS, Laboratoire des Signaux et Systemes (L2S) CentraleSupelec, Gif-sur-Yvette (France); Miao, Zibo; Pan, Yu; James, Matthew R. [Australian National University, ARC Centre for Quantum Computation and Communication Technology, Research School of Engineering, Canberra, ACT (Australia); Mabuchi, Hideo [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States)
2015-12-15
The purpose of this paper is to study the problem of generalizing the Belavkin-Kalman filter to the case where the classical measurement signal is replaced by a fully quantum non-commutative output signal. We formulate a least mean squares estimation problem that involves a non-commutative system as the filter processing the non-commutative output signal. We solve this estimation problem within the framework of non-commutative probability. Also, we find the necessary and sufficient conditions which make these non-commutative estimators physically realizable. These conditions are restrictive in practice. (orig.)
Estimation of a multivariate normal mean with a bounded signal to noise ratio
Kortbi, Othmane
2012-01-01
For normal canonical models with $X \sim N_p(\theta, \sigma^{2} I_{p})$ and $S^{2} \sim \sigma^{2}\chi^{2}_{k}$ independent, we consider the problem of estimating $\theta$ under scale-invariant squared error loss $\frac{\|d-\theta\|^{2}}{\sigma^{2}}$, when it is known that the signal-to-noise ratio $\frac{\|\theta\|}{\sigma}$ is bounded above by $m$. Risk analysis is achieved by making use of a conditional risk decomposition and we obtain in particular sufficient conditions for an estimator to dominate either the unbiased estimator $\delta_{UB}(X)=X$, or the maximum likelihood estimator $\delta_{\mathrm{mle}}(X,S^2)$, or both of these benchmark procedures. The given developments bring into play the pivotal role of the boundary Bayes estimator $\delta_{BU}$ associated with a prior on $(\theta,\sigma)$ such that $\theta|\sigma$ is uniformly distributed on the (boundary) sphere of radius $m$ and a non-informative $\frac{1}{\sigma}$ prior measure is placed marginally on $\sigma$. With a series of technical re...
Brus, D J; de Gruijter, J J
2003-04-01
In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the pi estimator that uses the probability sample can be increased by interpolating the values at the nonprobability sample points to the probability sample points, and using these interpolated values as an auxiliary variable in the difference or regression estimator. These estimators are (approximately) unbiased, even when the nonprobability sample is severely biased such as in preferential samples. The gain in precision compared to the pi estimator in combination with Simple Random Sampling is controlled by the correlation between the target variable and interpolated variable. This correlation is determined by the size (density) and spatial coverage of the nonprobability sample, and the spatial continuity of the target variable. In a case study the average ratio of the variances of the simple regression estimator and pi estimator was 0.68 for preferential samples of size 150 with moderate spatial clustering, and 0.80 for preferential samples of similar size with strong spatial clustering. In the latter case the simple regression estimator was substantially more precise than the simple difference estimator.
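The difference and regression estimators the abstract refers to take a familiar design-based form. The sketch below uses synthetic data and invented variable names, with the interpolated surface playing the role of the auxiliary variable whose regional mean is known:

```python
import numpy as np

def difference_estimator(y, z, z_region_mean):
    """Known regional mean of the interpolated (auxiliary) surface plus the
    mean residual observed at the probability-sample points."""
    return z_region_mean + np.mean(y - z)

def regression_estimator(y, z, z_region_mean):
    """Simple regression estimator: the slope is fitted on the probability sample."""
    b = np.cov(y, z, ddof=1)[0, 1] / np.var(z, ddof=1)
    return np.mean(y) + b * (z_region_mean - np.mean(z))

# synthetic example: target variable y observed at probability-sample points,
# z = interpolated auxiliary values at the same points
rng = np.random.default_rng(3)
y = 10.0 + rng.normal(0.0, 2.0, 500)
z = 0.8 * y + rng.normal(0.0, 0.5, 500)   # correlated auxiliary variable
z_region_mean = 8.0                       # regional mean of the auxiliary surface
```

The stronger the correlation between target and interpolated auxiliary, the larger the gain over the plain sample mean, which is the mechanism the abstract quantifies with the 0.68 and 0.80 variance ratios.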
Bias in little owl population estimates using playback techniques during surveys
Directory of Open Access Journals (Sweden)
Zuberogoitia, I.
2011-12-01
To test the efficiency of playback methods for surveying little owl (Athene noctua) populations we carried out two studies: (1) we recorded the replies of radio-tagged little owls to calls in a small area; (2) we recorded call broadcasts to estimate the effectiveness of the method for detecting the presence of little owls. In the first study, we detected an average of 8.12 owls in the 30′ survey period, a number that is close to the real population; we also detected significant little owl movements from the initial location (before the playback) to subsequent locations during the survey period. However, we only detected an average of 2.25 and 5.37 little owls in the first 5′ and 10′, respectively, of the survey time. In the second study, we detected 137 little owl territories in 105 positive sample units. The occupation rate was 0.35, the estimated occupancy was 0.393, and the probability of detection was 0.439. The estimated cumulative probability of detection suggests that a minimum of four sampling occasions would be needed in an extensive survey to detect 95% of the areas occupied by little owls.
Combining Propensity Score Methods and Complex Survey Data to Estimate Population Treatment Effects
Stuart, Elizabeth A.; Dong, Nianbo; Lenis, David
2016-01-01
Complex surveys are often used to estimate causal effects regarding the effects of interventions or exposures of interest. Propensity scores (Rosenbaum & Rubin, 1983) have emerged as one popular and effective tool for causal inference in non-experimental studies, as they can help ensure that groups being compared are similar with respect to a…
Chaimowicz, F. (Flávio); A. Burdorf (Alex)
2015-01-01
Background: The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia
DEFF Research Database (Denmark)
Isanaka, Sheila; Boundy, Ellen O neal; Grais, Rebecca F
2016-01-01
Severe acute malnutrition (SAM) is reported to affect 19 million children worldwide. However, this estimate is based on prevalence data from cross-sectional surveys and can be expected to miss some children affected by an acute condition such as SAM. The burden of acute conditions is more...
Survey non-response in the Netherlands : Effects on prevalence estimates and associations
Van Loon, AJM; Tijhuis, M; Picavet, HSJ; Surtees, PG; Ormel, J
2003-01-01
PURPOSE: Differences in respondent characteristics may lead to bias in prevalence estimates and bias in associations. Both forms of non-response bias are investigated in a study on psychosocial factors and cancer risk, which is a sub-study of a large-scale monitoring survey in the Netherlands. METHO
Khaemba, W.; Stein, A.
2002-01-01
Parameter estimates, obtained from airborne surveys of wildlife populations, often have large bias and large standard errors. Sampling error is one of the major causes of this imprecision and the occurrence of many animals in herds violates the common assumptions in traditional sampling designs like
Can i just check...? Effects of edit check questions on measurement error and survey estimates
Lugtig, Peter; Jäckle, Annette
2014-01-01
Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to
Barker, C.E.; Pawlewicz, M.J.
1993-01-01
In coal samples, published recommendations based on statistical methods suggest 100 measurements are needed to estimate the mean random vitrinite reflectance (Rv-r) to within ±2%. Our survey of published thermal maturation studies indicates that those using dispersed organic matter (DOM) mostly have an objective of acquiring 50 reflectance measurements. This smaller objective size in DOM versus that for coal samples poses a statistical contradiction, because the standard deviations of DOM reflectance distributions are typically larger, indicating that a greater sample size is needed to accurately estimate Rv-r in DOM. However, in studies of thermal maturation using DOM, even 50 measurements can be an unrealistic requirement given the small amount of vitrinite often found in such samples. Furthermore, there is generally a reduced need for assuring precision like that needed for coal applications. Therefore, a key question in thermal maturation studies using DOM is how many measurements of Rv-r are needed to adequately estimate the mean. Our empirical approach to this problem is to compute the reflectance distribution statistics (mean, standard deviation, skewness, and kurtosis) in increments of 10 measurements. This study compares these intermediate computations of Rv-r statistics with a final one computed using all measurements for that sample. Vitrinite reflectance was measured on mudstone and sandstone samples taken from borehole M-25 in the Cerro Prieto, Mexico geothermal system, which was selected because the rocks have a wide range of thermal maturation and comparable humic DOM with depth. The results of this study suggest that after only 20-30 measurements the mean Rv-r is generally known to within 5%, and always to within 12%, of the mean Rv-r calculated using all of the measured particles. Thus, even in the worst case, the precision after measuring only 20-30 particles is in good agreement with the general precision of one decimal place recommended for mean Rv
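The empirical convergence check described above, recomputing the statistics every 10 measurements and comparing against the all-data mean, can be sketched as follows; the reflectance values here are synthetic stand-ins, not data from borehole M-25:

```python
import numpy as np

def running_mean_convergence(rv, step=10):
    """Mean Rv-r recomputed every `step` measurements, expressed as a
    relative deviation (%) from the mean of the full data set."""
    final = rv.mean()
    sizes = np.arange(step, len(rv) + 1, step)
    rel = np.array([abs(rv[:n].mean() - final) / final * 100 for n in sizes])
    return sizes, rel

# hypothetical DOM reflectance measurements (per cent reflectance)
rng = np.random.default_rng(2)
rv = rng.normal(0.9, 0.12, 100)
sizes, rel = running_mean_convergence(rv)
```

Plotting `rel` against `sizes` shows how quickly the running mean settles toward the full-sample value, which is the basis for the 20-30 measurement recommendation.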
Air Kerma Rate estimation by means of in-situ gamma spectrometry: a Bayesian approach.
Cabal, Gonzalo; Kluson, Jaroslav
2010-01-01
Bayesian inference is used to determine the Air Kerma Rate based on an in-situ gamma spectrum measurement performed with an NaI(Tl) scintillation detector. The procedure accounts for uncertainties in the measurement and in the mass energy transfer coefficients needed for the calculation. The WinBUGS program (Spiegelhalter et al., 1999) was used. The results show that the relative uncertainties in the Air Kerma estimate are of about 1%, and that the choice of unfolding procedure may lead to a systematic error in the estimate of 3%.
Air Kerma Rate estimation by means of in-situ gamma spectrometry: A Bayesian approach
Energy Technology Data Exchange (ETDEWEB)
Cabal, Gonzalo [Department of Dosimetry and Applications of Ionizing Radiation, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Brehova 7, 115 19 Prague 1 (Czech Republic); Department of Radiation Dosimetry, Nuclear Physics Institute, Academy of Sciences of the Czech Republic, Na Truhlarce 39/64, 180 86 Prague 8 (Czech Republic)], E-mail: cabal@ujf.cas.cz; Kluson, Jaroslav [Department of Dosimetry and Applications of Ionizing Radiation, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Brehova 7, 115 19 Prague 1 (Czech Republic)
2010-04-15
Bayesian inference is used to determine the Air Kerma Rate based on an in-situ gamma spectrum measurement performed with an NaI(Tl) scintillation detector. The procedure accounts for uncertainties in the measurement and in the mass energy transfer coefficients needed for the calculation. The WinBUGS program (Spiegelhalter et al., 1999) was used. The results show that the relative uncertainties in the Air Kerma estimate are of about 1%, and that the choice of unfolding procedure may lead to a systematic error in the estimate of 3%.
Jha, Raghbendra; Gaiha, Raghav; Sharma, Anurag
2010-01-01
This article reports on mean consumption, poverty (all three FGT measures) and inequality during 2004 for rural India, using National Sample Survey (NSS) data for the 60th Round. Mean consumption at the national level is much higher than the poverty line. However, the Gini coefficient is higher than in recent earlier rounds. The headcount ratio is 22.9 per cent. Mean consumption, all three measures of poverty and the Gini coefficient are computed at the level of 20 states and 63 agro-climatic zones within these states. It is surmised that, despite impressive growth rates, deprivation is pervasive, pockets of severe poverty persist, and inequality is rampant.
Quarterly Regional GDP Flash Estimates by Means of Benchmarking and Chain Linking
Directory of Open Access Journals (Sweden)
Cuevas Ángel
2015-12-01
In this article we propose a methodology for estimating the GDP of a country’s different regions, providing quarterly profiles for the annual official observed data. The article thus offers a new instrument for short-term monitoring that allows analysts to quantify the degree of synchronicity among regional business cycles. Technically, we combine time-series models with benchmarking methods to process short-term quarterly indicators and to estimate quarterly regional GDPs, ensuring their temporal and transversal consistency with the National Accounts data. The methodology addresses the issue of non-additivity, explicitly taking into account the transversal constraints imposed by the chain-linked volume indexes used by the National Accounts, and provides an efficient combination of structural as well as short-term information. The methodology is illustrated by an application to the Spanish economy, providing real-time quarterly GDP estimates, that is, with a minimum compilation delay with respect to the national quarterly GDP. The estimated quarterly data are used to assess the existence of cycles shared among the Spanish regions.
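The benchmarking step can be illustrated in its simplest, pro-rata form: scale a quarterly indicator so each year's quarters sum to the official annual figure. This is a hedged sketch with invented numbers; production systems such as the one described use Denton-type methods and chain-linking to avoid the step discontinuities that plain pro-rating introduces between years:

```python
import numpy as np

def prorate(indicator_q, annual_totals):
    """Pro-rata benchmarking: scale each year's quarterly indicator so the
    four quarters sum exactly to the annual benchmark (temporal consistency).
    indicator_q: length 4*Y; annual_totals: length Y."""
    q = np.asarray(indicator_q, dtype=float).reshape(-1, 4)
    a = np.asarray(annual_totals, dtype=float)
    scaled = q * (a / q.sum(axis=1))[:, None]
    return scaled.ravel()

# Illustrative regional indicator and official annual GDP (not real data)
indicator = [98, 101, 103, 102, 104, 107, 109, 108]
annual = [420.0, 450.0]
bench = prorate(indicator, annual)
print(bench.reshape(-1, 4).sum(axis=1))   # quarters now sum to the benchmarks
```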
Liu, Z; Liu, C; He, B
2006-01-01
This paper presents a novel electrocardiographic inverse approach for imaging the 3-D ventricular activation sequence based on the modeling and estimation of the equivalent current density throughout the entire myocardial volume. The spatio-temporal coherence of the ventricular excitation process is utilized to derive the activation time from the estimated time course of the equivalent current density. At each time instant during the period of ventricular activation, the distributed equivalent current density is noninvasively estimated from body surface potential maps (BSPM) using a weighted minimum norm approach with a spatio-temporal regularization strategy based on the singular value decomposition of the BSPMs. The activation time at any given location within the ventricular myocardium is determined as the time point with the maximum local current density estimate. Computer simulation has been performed to evaluate the capability of this approach to image the 3-D ventricular activation sequence initiated from a single pacing site in a physiologically realistic cellular automaton heart model. The simulation results demonstrate that the simulated "true" activation sequence can be accurately reconstructed with an average correlation coefficient of 0.90, relative error of 0.19, and the origin of ventricular excitation can be localized with an average localization error of 5.5 mm for 12 different pacing sites distributed throughout the ventricles.
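The estimation chain described above (minimum-norm inverse of the surface maps, then activation time as the peak of the local current-density time course) can be sketched on a toy problem. The lead-field matrix, source time courses and truncation level below are illustrative assumptions, and a truncated SVD stands in for the paper's weighted, spatio-temporally regularized scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

n_elec, n_src, n_t = 40, 30, 50
L = rng.standard_normal((n_elec, n_src))   # toy lead-field (transfer) matrix

# Synthetic current density: each source's time course peaks at its activation time
true_t = rng.integers(5, 45, size=n_src)
t = np.arange(n_t)
J_true = np.exp(-0.5 * ((t[None, :] - true_t[:, None]) / 2.0) ** 2)

V = L @ J_true + 0.01 * rng.standard_normal((n_elec, n_t))  # body surface maps

# Truncated-SVD minimum-norm inverse: keep only the dominant singular components
U, s, Vt = np.linalg.svd(L, full_matrices=False)
k = 28
L_pinv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T
J_est = L_pinv @ V

est_t = J_est.argmax(axis=1)   # activation time = peak of local current density
r = np.corrcoef(true_t, est_t)[0, 1]
print(f"correlation between true and estimated activation times: {r:.2f}")
```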
Energy Technology Data Exchange (ETDEWEB)
Burke, Timothy P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Martin, William R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
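The core idea, that a single sampled event contributes a kernel-weighted score to many tally points, can be sketched in 1-D. This is a hedged illustration rather than the paper's mean-free-path KDE: it tallies collision sites of an exponentially attenuated beam with an Epanechnikov kernel and compares against the analytic collision density:

```python
import numpy as np

rng = np.random.default_rng(2)

def epanechnikov(u):
    """Epanechnikov kernel, support |u| <= 1."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def kde_tally(events, points, h):
    """KDE tally: every sampled event scores at all tally points at once,
    f(x) = (1 / (N h)) * sum_i K((x - x_i) / h)."""
    u = (points[:, None] - events[None, :]) / h
    return epanechnikov(u).sum(axis=1) / (events.size * h)

# Collision sites for an exponentially attenuated beam (Sigma_t = 1 per unit length)
events = rng.exponential(scale=1.0, size=50_000)
points = np.linspace(0.2, 3.0, 60)
f = kde_tally(events, points, h=0.1)

# The analytic collision density is exp(-x); compare pointwise
err = np.max(np.abs(f - np.exp(-points)))
print(f"max absolute error over tally points: {err:.3f}")
```

Note how the tally resolution (60 points here) is decoupled from the statistical uncertainty, which depends on the sample size and bandwidth h; that is the advantage over a histogram tally.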
Error threshold estimation by means of the [[7,1,3]] code
Salas, P J; Salas, Pedro J.; Sanz, Angel L.
2004-01-01
The states needed in a quantum computation are strongly affected by decoherence. Several methods have been proposed to control error spreading. They use two main tools: fault-tolerant constructions and concatenated quantum error-correcting codes. In this work, we estimate the threshold conditions necessary to make a sufficiently long quantum computation. The [[7,1,3]]
DEFF Research Database (Denmark)
Borg, R.; Nerup, J.; Nathan, D.M.
2009-01-01
concentrations in mmol/l. The ADAG study determined the relationship between HbA1c and average glucose concentration (AG) and concluded that, for most patients with diabetes, HbA1c can, with reasonable precision, be expressed as an estimated AG in the same units as self-monitoring. Publication date: 2009/11/2...
Methods for estimating the occurrence of polypharmacy by means of a prescription database
DEFF Research Database (Denmark)
Bjerrum, L; Rosholm, J U; Hallas, J
1997-01-01
OBJECTIVE: Concurrent use of multiple drugs (polypharmacy, PP) may cause health risks such as adverse drug reactions, medication errors and poor compliance. The objective of this study, based on data from a prescription database, was to evaluate estimators of PP in the general population. METHODS...
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
Wildland fire probabilities estimated from weather model-deduced monthly mean fire danger indices
Haiganoush K. Preisler; Shyh-Chin Chen; Francis Fujioka; John W. Benoit; Anthony L. Westerling
2008-01-01
The National Fire Danger Rating System indices deduced from a regional simulation weather model were used to estimate probabilities and numbers of large fire events on monthly and 1-degree grid scales. The weather model simulations and forecasts are ongoing experimental products from the Experimental Climate Prediction Center at the Scripps Institution of Oceanography...
Lectures on convexity estimates for mean curvature flow
Institute of Scientific and Technical Information of China (English)
倪磊
2009-01-01
The mean curvature flow is a very important geometric evolution equation. This article is an expository account of an important result by Huisken and Sinestrari on convexity estimates for the mean curvature flow of mean-convex hypersurfaces.
Efficient estimation of time-mean states of ocean models using 4D-Var and implicit time-stepping
Terwisscha van Scheltinga, A.D.; Dijkstra, H.A.
2007-01-01
We propose an efficient method for estimating a time-mean state of an ocean model subject to given observations using implicit time-stepping. The new method uses (i) an implicit implementation of the 4D-Var method to fit the model trajectory to the observations, and (ii) a preprocessor which applies
Brus, D.J.; Gruijter, de J.J.
2012-01-01
This paper launches a hybrid sampling approach, entailing a design-based approach in space followed by a model-based approach in time, for estimating temporal trends of spatial means or totals. The underlying space–time process that generated the soil data is only partly described, viz. by a linear
Somershoe, S.G.; Twedt, D.J.; Reid, B.
2006-01-01
We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.
Gotvald, Anthony J.
2017-01-13
The U.S. Geological Survey, in cooperation with the Georgia Department of Natural Resources, Environmental Protection Division, developed regional regression equations for estimating selected low-flow frequency and mean annual flow statistics for ungaged streams in north Georgia that are not substantially affected by regulation, diversions, or urbanization. Selected low-flow frequency statistics and basin characteristics for 56 streamgage locations within north Georgia and 75 miles beyond the State’s borders in Alabama, Tennessee, North Carolina, and South Carolina were combined to form the final dataset used in the regional regression analysis. Because some of the streamgages in the study recorded zero flow, the final regression equations were developed using weighted left-censored regression analysis to analyze the flow data in an unbiased manner, with weights based on the number of years of record. The set of equations includes the annual minimum 1- and 7-day average streamflow with the 10-year recurrence interval (referred to as 1Q10 and 7Q10), monthly 7Q10, and mean annual flow. The final regional regression equations are functions of drainage area, mean annual precipitation, and relief ratio for the selected low-flow frequency statistics and drainage area and mean annual precipitation for mean annual flow. The average standard error of estimate was 13.7 percent for the mean annual flow regression equation and ranged from 26.1 to 91.6 percent for the selected low-flow frequency equations.The equations, which are based on data from streams with little to no flow alterations, can be used to provide estimates of the natural flows for selected ungaged stream locations in the area of Georgia north of the Fall Line. The regression equations are not to be used to estimate flows for streams that have been altered by the effects of major dams, surface-water withdrawals, groundwater withdrawals (pumping wells), diversions, or wastewater discharges. The regression
Cárdenas-Ayala, V M; Bernal-Pérez, J; Cabrera-Coello, L; Stetler, H C; Pineda-Salgado, J; Guerrero-Reyes, P
1989-01-01
Tuberculosis infection surveys are carried out with the tuberculin skin test (Mantoux), a simple, cheap, valid and reliable procedure for the estimation of prevalence and incidence rates. In 1987 a survey was undertaken among children aged 6-7 years who attended elementary school and had not been vaccinated with BCG in the region of Iguala, México. Of the 6,095 children in this age group, only 531 were unvaccinated; among them the prevalence of infection was 2.5% (confidence limits: 0.1%, 5.3%). On the basis of the findings of Izaguirre et al., who reported 26 years earlier that about 10% of the children of this age group were infected, it can be estimated that the annual risk of infection is about three new infections per 1,000 population per year. Better estimates of the overall tuberculosis incidence rate are needed.
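The annual-risk calculation implied above follows from assuming a constant yearly risk r, so that the prevalence P at mean age A satisfies P = 1 - (1 - r)^A. A sketch using the survey's rounded prevalence (the mean age used here is an assumption):

```python
# Annual risk of tuberculosis infection from tuberculin-survey prevalence,
# assuming a constant risk r over the children's lifetimes:
#   P = 1 - (1 - r)**A  =>  r = 1 - (1 - P)**(1 / A)
prevalence = 0.025   # 2.5% of the unvaccinated 6-7 year olds reacted
mean_age = 6.5       # assumed mean age of the surveyed children

r = 1 - (1 - prevalence) ** (1 / mean_age)
print(f"estimated annual risk of infection: {1000 * r:.1f} per 1,000")
```

With these inputs the result lands in the neighbourhood of the "about three per 1,000" figure the abstract reports.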
Abdallah, Saeed; Psaromiligkos, Ioannis N.
2012-03-01
We analyze the mean-squared error (MSE) performance of widely linear (WL) and conventional subspace-based channel estimation for single-input multiple-output (SIMO) flat-fading channels employing binary phase-shift-keying (BPSK) modulation when the covariance matrix is estimated using a finite number of samples. The conventional estimator suffers from a phase ambiguity that reduces to a sign ambiguity for the WL estimator. We derive closed-form expressions for the MSE of the two estimators under four different ambiguity resolution scenarios. The first scenario is optimal resolution, which minimizes the Euclidean distance between the channel estimate and the actual channel. The second scenario assumes that a randomly chosen coefficient of the actual channel is known and the third assumes that the one with the largest magnitude is known. The fourth scenario is the more realistic case where pilot symbols are used to resolve the ambiguities. Our work demonstrates that there is a strong relationship between the accuracy of ambiguity resolution and the relative performance of WL and conventional subspace-based estimators, and shows that the less information available about the actual channel for ambiguity resolution, or the lower the accuracy of this information, the higher the performance gap in favor of the WL estimator.
Estimated long-term fish and shellfish intake--national health and nutrition examination survey.
Tran, Nga L; Barraj, Leila M; Bi, Xiaoyu; Schuda, Laurie C; Moya, Jacqueline
2013-03-01
Usual intake estimates describe long-term average intake of food and nutrients and food contaminants. The frequencies of fish and shellfish intake over a 30-day period from National Health and Examination Survey (NHANES 1999-2006) were combined with 24-h dietary recall data from NHANES 2003-2004 using a Monte Carlo procedure to estimate the usual intake of fish and shellfish in this study. Usual intakes were estimated for the US population including children 1 to fish intake (consumers only) was highest among children 1 to fish, salmon, and mackerel. Among children and teenage consumers, tuna, salmon, and breaded fish were the most frequently consumed fish; shrimp, scallops, and crabs were the most frequently consumed shellfish. The intake estimates from this study better reflect long-term average intake rates and are preferred to assess long-term intake of nutrients and possible exposure to environmental contaminants from fish and shellfish sources than 2-day average estimates.
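The combination of 30-day frequency data with recall amounts can be sketched as a simple Monte Carlo; this is a hedged toy version of usual-intake estimation, with invented serving data rather than NHANES values:

```python
import numpy as np

rng = np.random.default_rng(3)

def usual_intake(freq_30d, amounts_g, n_sim=10000):
    """Monte Carlo sketch of usual (long-run average) daily intake: draw a
    consumption indicator from the reported 30-day frequency and a
    per-occasion amount from the observed recall amounts."""
    p_day = freq_30d / 30.0                        # probability of eating fish on a day
    eats = rng.random(n_sim) < p_day               # does this simulated day include fish?
    amount = rng.choice(amounts_g, size=n_sim)     # per-occasion serving size (grams)
    return np.mean(eats * amount)                  # grams/day, long-run average

# Illustrative inputs (not NHANES values): fish 4 times in 30 days,
# recall servings of 85-170 g
daily = usual_intake(freq_30d=4, amounts_g=np.array([85.0, 113.0, 140.0, 170.0]))
print(f"usual intake: {daily:.1f} g/day")
```

Averaging over many simulated days is what makes this a long-run estimate, as opposed to the 2-day averages the abstract cautions against.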
Dietary intake estimates from the California Health Interview Survey (CHIS) Fruit and Vegetable Screener are rough estimates of usual intake of fruits and vegetables. They are not as accurate as more detailed methods.
Adaptive external torque estimation by means of tracking a Lyapunov function
Energy Technology Data Exchange (ETDEWEB)
Schaub, H.; Junkins, J.L. [Texas A and M Univ., College Station, TX (United States); Robinett, R.D. [Sandia National Labs., Albuquerque, NM (United States)
1996-03-01
A real-time method is presented to adaptively estimate three-dimensional unmodeled external torques acting on a spacecraft. This is accomplished by forcing the tracking error dynamics to follow the Lyapunov function underlying the feedback control law. For the case where the external torque is constant, the tracking error dynamics are shown to converge asymptotically. The methodology applies not only to the control law used in this paper, but can also be applied to most Lyapunov-derived feedback control laws. The adaptive external torque estimation is very robust in the presence of measurement noise, since a numerical integration is used instead of a numerical differentiation. Spacecraft modeling errors, such as in the inertia matrix, are also compensated for by this method. Several examples illustrate the practical significance of these ideas.
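The idea of estimating a constant unmodeled disturbance by integrating the tracking error (rather than differentiating noisy measurements) can be shown on a scalar analogue; this is an illustrative sketch, not the paper's spacecraft formulation:

```python
# Scalar analogue of adaptive external-torque estimation. Plant:
#   omega_dot = u + d, with constant unknown disturbance d.
# Control u = -k*omega - d_hat and adaptation d_hat_dot = gamma*omega drive
# omega -> 0 and d_hat -> d; the Lyapunov function
#   V = omega**2 / 2 + (d_hat - d)**2 / (2 * gamma)
# has V_dot = -k * omega**2 <= 0 along trajectories.
k, gamma, d_true = 2.0, 5.0, 0.7
dt, steps = 0.001, 20000

omega, d_hat = 0.0, 0.0
for _ in range(steps):
    omega += dt * (-k * omega - d_hat + d_true)   # closed-loop plant (Euler step)
    d_hat += dt * (gamma * omega)                 # integrator-based adaptation

print(f"estimated disturbance: {d_hat:.3f} (true {d_true})")
```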
Gabermann, V
1978-02-01
Oscillographic polarography has been applied to the estimation of mescaline and pellotine. In a 0.5 N NaOH electrolyte, these alkaloids each produce a sharp peak at a different potential within the cathodic region of the oscillogram, making it possible to estimate them at concentrations down to 5×10⁻⁶ g/ml. All forms of Lophophora williamsii were found to contain mescaline and a lower content of pellotine; L. jourdaniana contained equal amounts of both alkaloids; L. diffusa and L. fricii contained pellotine and only traces of mescaline. Plants grown in the greenhouse accumulated the same amount of alkaloids as native plants. Grafting onto rootstock that does not produce appreciable amounts of the alkaloids does not affect the ability of Lophophora to synthesize mescaline and pellotine.
Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features
DEFF Research Database (Denmark)
Jensen, Jesper; Tan, Zheng-Hua
2015-01-01
-) MFCC’s, autoregressive-moving-average (ARMA)-filtered CMSMFCC’s, velocity, and acceleration coefficients. In addition, the method is easily modified to take into account other compressive non-linearities than the logarithm traditionally used for MFCC computation. In terms of MFCC estimation performance......-of-the-art MFCC feature enhancement algorithms within this class of algorithms, while theoretically suboptimal or based on theoretically inconsistent assumptions, perform close to optimally in the MMSE sense....
Models of venous return and their application to estimate the mean systemic filling pressure
Hartog, E.A. den
2007-01-01
Mean systemic filling pressure is the equilibrium pressure in the systemic circulation when the heart is arrested and there is no flow. This pressure is a measure of the stressed volume of the systemic circulation and regarded as the driving pressure for the venous return during steady states [1-3].
Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization
Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W-K. Wong (Wing-Keung)
2016-01-01
textabstractThis paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main
Models of venous return and their application to estimate the mean systemic filling pressure
E.A. den Hartog (Emiel)
1997-01-01
textabstractMean systemic filling pressure is the equilibrium pressure in the systemic circulation when the heart is arrested and there is no flow. This pressure is a measure of the stressed volume of the systemic circulation and regarded as the driving pressure for the venous return during steady s
Estimating ice-melange properties with repeat UAV surveys over Store Glacier, West Greenland
Toberg, Nick; Ryan, Johnny; Christoffersen, Poul; Snooke, Neal; Todd, Joe; Hubbard, Alun
2016-04-01
observed melange height with the model of hydrostatic equilibrium, we estimate the mean thickness to be 126 m. Whereas the mean melange elevation did not change appreciably in our study area from the date observations started on 13 May until it disintegrated on 4-8 June, we found daily melange elevation changes of up to 140% of the observed mean value when tabular icebergs were added to it. Observations showed that this increase in melange thickness halted calving, and that calving did not resume until the melange had thinned and returned to the observed mean value. We found the mean daily speed of the melange to be 46 m/day from 13 May to 4 June, whereas the terminus of the glacier flowed with a mean daily velocity of 16 m/day while the melange was present. The higher mean speed of the melange is explained by the motion of large tabular icebergs, which travelled hundreds of metres into the fjord over the course of a single day. The imagery collected over Store Glacier provides evidence that large tidewater glaciers are stabilized by proglacial ice melange forming in winter. When melange was present, large calving events strengthened the melange by adding to its overall thickness distribution, stopping calving altogether for up to several days following a large calving event and slowing the flow of the glacier to half of the speed observed the previous day. When the melange was suddenly advected down the fjord, with no apparent weakening, the glacier responded by increasing both flow speed and calving rate simultaneously. The data produced from repeat UAV surveys clearly demonstrate the potential of this new and rapidly advancing method of data collection.
What We Mean by Scope and Methods: A Survey of Undergraduate Scope and Methods Courses
Turner, Charles C.; Thies, Cameron G.
2009-01-01
Self-reflective political scientists have extensively reviewed the history of the discipline and argued over its future, but to date there has been little effort to systematically survey undergraduate scope and methods courses (for an exception see Thies and Hogan 2005). This lack of data leaves the discipline unable to assess how much we are…
DEFF Research Database (Denmark)
Skovgård Olsen, Anders; Zhou, Qianqian; Linde, Jens Jørgen
Estimating the expected annual damage (EAD) due to flooding in an urban area is of great interest for urban water managers and other stakeholders. It is a strong indicator for a given area, showing how it will be affected by climate change and how much can be gained by implementing adaptation......-linear relation, which could be attributed to the Danish design standards for drainage systems. Three different methods for estimating the EAD were tested, and the choice of method is less important than accounting for the log-linear shift. This also means that the statistical approximation of the EAD used...... in previous studies appears to be valid and is a good assumption. The EAD estimation can be simplified by using a single unit cost per flooded area, which is multiplied by the extent of the flood. It does, however, depend on the lower threshold chosen in the estimation of the flood extent....
Directory of Open Access Journals (Sweden)
Brion Philippe
2015-12-01
Using as much administrative data as possible is a general trend among most national statistical institutes. Different kinds of administrative sources, from tax authorities or other administrative bodies, are very helpful material in the production of business statistics. However, these sources often have to be completed by information collected through statistical surveys. This article describes the way Insee has implemented such a strategy in order to produce French structural business statistics. The originality of the French procedure is that administrative and survey variables are used jointly for the same enterprises, unlike the majority of multisource systems, in which the two kinds of sources generally complement each other for different categories of units. The idea is to use, as much as possible, the richness of the administrative sources combined with the timeliness of a survey, even if the latter is conducted only on a sample of enterprises. One main issue is the classification of enterprises within the NACE nomenclature, which is a cornerstone variable in producing the breakdown of the results by industry. At a given date, two values of the corresponding code may coexist: the value of the register, not necessarily up to date, and the value resulting from the data collected via the survey, but only for a sample of enterprises. Using all this information together requires the implementation of specific statistical estimators combining some properties of the difference estimators with calibration techniques. This article presents these estimators, as well as their statistical properties, and compares them with those of other methods.
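The difference estimator alluded to above can be sketched on synthetic data: an administrative variable x is known for every unit in the register, the survey variable y is observed only on a sample, and the estimator corrects the administrative total with an expanded sample sum of residuals. All numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

N, n = 10000, 500
x = rng.lognormal(mean=3.0, sigma=0.8, size=N)   # admin variable, known for all units
y = x * 1.05 + rng.normal(0.0, 1.0, size=N)      # survey variable, correlated with x

sample = rng.choice(N, size=n, replace=False)    # simple random sample

# Difference estimator: admin total plus expanded sample-based correction
y_hat_diff = x.sum() + (N / n) * (y[sample] - x[sample]).sum()
# Plain expansion estimator, for comparison
y_hat_exp = (N / n) * y[sample].sum()

print(f"true total      {y.sum():.0f}")
print(f"difference est. {y_hat_diff:.0f}")
print(f"expansion est.  {y_hat_exp:.0f}")
```

Because the residuals y - x have far less variance than y itself, the difference estimator is typically much more precise than plain expansion; calibration techniques refine this further.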
Directory of Open Access Journals (Sweden)
D. C. Nath
2014-01-01
Immunization currently averts an estimated 2-3 million deaths every year in all age groups. Hepatitis B is a major public health problem worldwide. In this study, estimates of hepatitis B vaccine coverage are compared among three sampling plans, namely the 30×30 and 30×7 sampling methods under cluster sampling and a systematic random sampling scheme. The data were taken from the survey “Comparison of Two Survey Methodologies to Estimate Total Vaccination Coverage” sponsored by the Indian Council of Medical Research, New Delhi. The estimated coverage proportions are not significantly different at the 5% level. Both 30×30 and 30×7 sampling are preferable to systematic sampling for estimating hepatitis B vaccine coverage in this study population because they give quicker estimates at lower cost. The 30×7 cluster sampling is the most recommended method for such immunization coverage surveys, especially in a developing country.
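A 30×7-style coverage estimate with a between-cluster variance can be sketched as follows; the data are simulated, not from the ICMR survey:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative 30x7 survey: 30 clusters, 7 children each, 1 = vaccinated
true_p = 0.8
counts = rng.binomial(7, true_p, size=30)        # vaccinated children per cluster

m = 7 * np.ones(30)                              # children examined per cluster
p_hat = counts.sum() / m.sum()                   # overall coverage estimate

# Between-cluster (ratio-estimator) variance, the usual form for EPI-type surveys
k = 30
resid = counts - p_hat * m
var_p = k / (k - 1) * (resid ** 2).sum() / m.sum() ** 2
se = np.sqrt(var_p)
print(f"coverage {p_hat:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")
```

Computing the variance from cluster totals, rather than treating the 210 children as independent, is what makes the estimate design-based for a cluster sample.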
Distance software: design and analysis of distance sampling surveys for estimating population size.
Thomas, Len; Buckland, Stephen T; Rexstad, Eric A; Laake, Jeff L; Strindberg, Samantha; Hedley, Sharon L; Bishop, Jon Rb; Marques, Tiago A; Burnham, Kenneth P
2010-02-01
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modelling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark-recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modelling analysis engine for spatial and habitat modelling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software that implements these methods is described that makes the methods
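The conventional-distance-sampling core (fit a detection function, derive an effective strip width, convert counts to density) can be sketched for the untruncated half-normal case, where the MLE is available in closed form. The distances and transect length are simulated; the Distance software itself handles far more general models:

```python
import numpy as np

rng = np.random.default_rng(6)

# Detected perpendicular distances (m) follow a half-normal when animals are
# uniformly distributed and detectability is g(x) = exp(-x^2 / (2 sigma^2))
sigma_true = 25.0
n = 400
x = np.abs(rng.normal(0.0, sigma_true, size=n))

# MLE for the untruncated half-normal: sigma_hat^2 = mean(x^2)
sigma_hat = np.sqrt(np.mean(x ** 2))

# Effective strip half-width: integral of g(x) from 0 to infinity
esw = sigma_hat * np.sqrt(np.pi / 2.0)

L = 20000.0                               # total transect length (m), illustrative
density = n / (2.0 * esw * L)             # animals per m^2
print(f"sigma_hat = {sigma_hat:.1f} m, ESW = {esw:.1f} m, "
      f"D = {1e6 * density:.1f} per km^2")
```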
DEFF Research Database (Denmark)
Knudsen, Per; Bingham, R.; Andersen, Ole Baltazar
2011-01-01
The Gravity and steady-state Ocean Circulation Explorer (GOCE) satellite mission measures Earth’s gravity field with an unprecedented accuracy at short spatial scales. In doing so, it promises to significantly advance our ability to determine the ocean’s general circulation. In this study......, an initial gravity model from GOCE, based on just 2 months of data, is combined with the recent DTU10MSS mean sea surface to construct a global mean dynamic topography (MDT) model. The GOCE MDT clearly displays the gross features of the ocean’s steady-state circulation. More significantly, the improved...... gravity model provided by the GOCE mission has enhanced the resolution and sharpened the boundaries of those features compared with earlier satellite only solutions. Calculation of the geostrophic surface currents from the MDT reveals improvements for all of the ocean’s major current systems. In the North...
Enhanced Mean Dynamic Topography and Ocean Circulation Estimation Using GOCE Preliminary Models
DEFF Research Database (Denmark)
Knudsen, Per; Bingham, Rory; Andersen, Ole Baltazar
2011-01-01
The Gravity and Ocean Circulation Experiment - GOCE satellite mission measure the Earth gravity field with unprecedented accuracy leading to substantial improvements in the modelling of the ocean circulation and transport. In this study of the performance of GOCE, the new preliminary gravity models...... have been combined with the recent DNSC08MSS mean sea surface model to construct a global GOCE satellite-only mean dynamic topography model. At a first glance, the GOCE MDT display the well known features related to the major ocean current systems. A closer look, however, reveals that the improved...... gravity provided by the GOCE mission has enhanced the resolution and sharpened the boundary of those features. A computation of MDT slopes clearly displays the improvements in the description of the current systems. In the North Atlantic Ocean, the Gulf Stream is very well defined and the Labrador...
Estimation of mean grain size of seafloor sediments using neural network
Digital Repository Service at National Institute of Oceanography (India)
De, C.; Chakraborty, B.
Remote sensing by acoustic means has long been recognized as a rapid and cost-effective method for the characterization and classification of seafloor sediments over a wide area of interest. Acoustic remote sensing essentially relies on the backscatter strength... backscatter data, obtained from common seafloor depth-measurement equipment such as single-beam and multi-beam echo sounders, could be used for this purpose. A number of approaches concerning the characterization and classification of seafloor...
Indian Academy of Sciences (India)
M El Hamma; R Daher
2014-05-01
Using a generalized spherical mean operator, we define a generalized modulus of smoothness in the space $L^2_k(\mathbb{R}^d)$. Based on the Dunkl operator, we define a Sobolev-type space and K-functionals. The main result of the paper is the proof of the equivalence theorem for a K-functional and a modulus of smoothness for the Dunkl transform on $\mathbb{R}^d$.
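The equivalence theorem mentioned here follows the classical pattern relating a K-functional to a modulus of smoothness; schematically, with constants and notation assumed rather than taken from the paper:

```latex
% Two-sided equivalence (c_1, c_2 > 0 independent of f and t):
c_1\,\omega_m(f,t)_{L^2_k}
  \;\le\; K_m(f,t^{2m})_{L^2_k}
  \;\le\; c_2\,\omega_m(f,t)_{L^2_k},
\qquad f \in L^2_k(\mathbb{R}^d),\ t > 0.
```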
Representation of Value and Estimation Meanings in the Terminology of Railroad Sublanguage
Directory of Open Access Journals (Sweden)
Asima M. Turekhanova
2014-03-01
Full Text Available The article provides a retrospective analysis of linguists' views on the technical term, and discusses the linguistic means of expression used in the "railroad" sublanguage within axiological linguistics. Each artificially created artefact is the result of appraising and intellectual human activity and, as a product of this activity, reflects value as a fragment of the technical picture of the world.
On the estimation of the magnetocaloric effect by means of microwave technique
Directory of Open Access Journals (Sweden)
Pavlo Aleshkevych
2012-12-01
Full Text Available A method based on low-field microwave absorption measurements is presented for estimating the relative change of entropy with magnetic field. The method is illustrated on both the polycrystalline Gd5Si2Ge2 alloy and the single-crystalline La0.7Ca0.3MnO3 manganite. It is shown that there is a simple functional relation between magnetization and non-resonant absorption over a narrow temperature range near the magnetic phase transition. Magnetoresistance is assumed to be the dominant mechanism underlying this relation.
Estimating the North Atlantic mean dynamic topography and geostrophic currents with GOCE
DEFF Research Database (Denmark)
Bingham, Rory J.; Knudsen, Per; Andersen, Ole Baltazar
2011-01-01
be derived from them. Because the high degree commission errors of all of the GOCE models are lower than those from the best satellite-only GRACE solution, all of the derived GOCE MDTs are much less noisy than the GRACE MDT. They therefore require less severe filtering and, as a consequence, the strength...... of the currents calculated from them is in better agreement with those from an in-situ drifter-based estimate. Where the comparison is possible, the reduction in MDT noise from the first to the second release is also clear. However, given that some filtering is still required, this translates into only a small...
The ALHAMBRA survey: Estimation of the clustering signal encoded in the cosmic variance
López-Sanjuan, C.; Cenarro, A. J.; Hernández-Monteagudo, C.; Arnalte-Mur, P.; Varela, J.; Viironen, K.; Fernández-Soto, A.; Martínez, V. J.; Alfaro, E.; Ascaso, B.; del Olmo, A.; Díaz-García, L. A.; Hurtado-Gil, Ll.; Moles, M.; Molino, A.; Perea, J.; Pović, M.; Aguerri, J. A. L.; Aparicio-Villegas, T.; Benítez, N.; Broadhurst, T.; Cabrera-Caño, J.; Castander, F. J.; Cepa, J.; Cerviño, M.; Cristóbal-Hornillos, D.; González Delgado, R. M.; Husillos, C.; Infante, L.; Márquez, I.; Masegosa, J.; Prada, F.; Quintana, J. M.
2015-10-01
Aims: The relative cosmic variance (σv) is a fundamental source of uncertainty in pencil-beam surveys and, as a particular case of count-in-cell statistics, can be used to estimate the bias between galaxies and their underlying dark-matter distribution. Our goal is to test the significance of the clustering information encoded in the σv measured in the ALHAMBRA survey. Methods: We measure the cosmic variance of several galaxy populations selected with B-band luminosity at 0.35 ≤ z…
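As a rough sketch of the count-in-cell statistic underlying σv, the relative cosmic variance can be estimated from galaxy counts in independent sub-fields after subtracting the Poisson shot-noise term (this is the standard estimator, not necessarily the exact ALHAMBRA pipeline):

```python
import numpy as np

def relative_cosmic_variance(counts):
    """Relative cosmic variance sigma_v^2 from counts in independent cells.

    Standard count-in-cell estimator with Poisson shot noise subtracted:
        sigma_v^2 = (<N^2> - <N>^2 - <N>) / <N>^2
    counts: 1-D array of galaxy counts, one entry per sub-field.
    """
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    var = counts.var()                # <N^2> - <N>^2
    return (var - mean) / mean ** 2
```

For an unclustered (Poisson) sample the estimator scatters around zero; clustering inflates the cell-to-cell variance above shot noise and yields a positive sigma_v^2.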
Levels of six antibiotics used in China estimated by means of wastewater-based epidemiology.
Yuan, Su-Fen; Liu, Ze-Hua; Huang, Ri-Ping; Yin, Hua; Dang, Zhi
2016-01-01
Due to a lack of proper regulation, information about antibiotic consumption in many countries, such as China, is difficult to obtain. In this study, a simple method based on wastewater-based epidemiology was adopted to estimate antibiotic usage in four megacities of China. Six antibiotics (norfloxacin, ofloxacin, sulfamethoxazole, trimethoprim, erythromycin and roxithromycin), which are the most frequently consumed antibiotics in China, were selected as the targets. Based on our results, Chongqing had the largest total annual consumption of the selected six antibiotics among the four megacities, followed by Guangzhou, then Hong Kong, with Beijing having the least, with values of 4.4 g/y/P, 4.0 g/y/P, 1.6 g/y/P, and 1.3 g/y/P, respectively. Compared with the daily consumption per capita in Italy, the estimated consumption levels of the selected six antibiotics in the four Chinese cities were 12-41 times those of Italy. Our results suggest that the consumption of antibiotics in China is excessive.
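A hedged sketch of the wastewater-based back-calculation described above (parameter names, unit conventions, and the single correction term are illustrative; the paper's model may include additional factors such as in-sewer degradation):

```python
def consumption_per_capita(conc_ng_per_l, flow_l_per_day, population,
                           excretion_fraction, correction=1.0):
    """Back-calculate drug use (mg/day/person) from influent wastewater.

    daily load (mg/day) = concentration (ng/L) * flow (L/day) * 1e-6
    consumption         = load * correction / (excretion * population)
    """
    load_mg_per_day = conc_ng_per_l * flow_l_per_day * 1e-6
    return load_mg_per_day * correction / (excretion_fraction * population)
```

Multiplying the daily figure by 365 and converting mg to g gives values comparable to the g/y/P numbers quoted in the abstract.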
Low Complexity Sparse Bayesian Learning for Channel Estimation Using Generalized Mean Field
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri
2014-01-01
We derive low complexity versions of a wide range of algorithms for sparse Bayesian learning (SBL) in underdetermined linear systems. The proposed algorithms are obtained by applying the generalized mean field (GMF) inference framework to a generic SBL probabilistic model. In the GMF framework, we...... constrain the auxiliary function approximating the posterior probability density function of the unknown variables to factorize over disjoint groups of contiguous entries in the sparse vector; the size of these groups dictates the degree of complexity reduction. The original high-complexity algorithms......
Energy Technology Data Exchange (ETDEWEB)
Nimwegen, Frederika A. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Cutter, David J. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Oxford Cancer Centre, Oxford University Hospitals NHS Trust, Oxford (United Kingdom); Schaapveld, Michael [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Rutten, Annemarieke [Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands); Kooijman, Karen [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Krol, Augustinus D.G. [Department of Radiation Oncology, Leiden University Medical Center, Leiden (Netherlands); Janus, Cécile P.M. [Department of Radiation Oncology, Erasmus MC Cancer Center, Rotterdam (Netherlands); Darby, Sarah C. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Leeuwen, Flora E. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Aleman, Berthe M.P., E-mail: b.aleman@nki.nl [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam (Netherlands)
2015-05-01
Purpose: To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Methods and Materials: Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case–control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. Results: According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Conclusion: Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor
van Nimwegen, Frederika A; Cutter, David J; Schaapveld, Michael; Rutten, Annemarieke; Kooijman, Karen; Krol, Augustinus D G; Janus, Cécile P M; Darby, Sarah C; van Leeuwen, Flora E; Aleman, Berthe M P
2015-05-01
To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case-control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor-intensive representative CT-based method. This simpler method may produce a
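The dose reconstruction described in both records reduces to a simple calculation once the contour fraction is measured; a sketch with illustrative argument values (in the study, the correction factor was calibrated against CT-based dosimetry):

```python
def mean_heart_dose(contour_fraction_in_field, prescribed_dose_gy,
                    correction_factor):
    """Mean heart dose (Gy) per the simulation X-ray method: the fraction
    of the delineated cardiac contour inside the radiation field, times
    the prescribed mediastinal dose, divided by a correction factor
    obtained by comparison with CT-based dosimetry (values here are
    illustrative, not from the study)."""
    return contour_fraction_in_field * prescribed_dose_gy / correction_factor
```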
Norberg, Peder; Gaztanaga, Enrique; Croton, Darren J
2008-01-01
We present a test of different error estimators for 2-point clustering statistics, appropriate for present and future large galaxy redshift surveys. Using an ensemble of very large dark matter LambdaCDM N-body simulations, we compare internal error estimators (jackknife and bootstrap) to external ones (Monte-Carlo realizations). For 3-dimensional clustering statistics, we find that none of the internal error methods investigated is able to reproduce, either accurately or robustly, the errors of the external estimators on 1 to 25 Mpc/h scales. The standard bootstrap overestimates the variance of xi(s) by ~40% on all scales probed, but recovers, in a robust fashion, the principal eigenvectors of the underlying covariance matrix. The jackknife returns the correct variance on large scales, but significantly overestimates it on smaller scales. This scale dependence in the jackknife affects the recovered eigenvectors, which tend to disagree on small scales with the external estimates. Our results have important implic...
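For reference, the delete-one jackknife variance the authors test can be sketched as follows (a generic estimator; the paper's subvolume construction and covariance analysis are more involved):

```python
import numpy as np

def jackknife_errors(delete_one_stats):
    """Delete-one jackknife standard error of a clustering statistic.

    delete_one_stats: array of shape (n_sub, n_bins); row i holds the
    statistic (e.g. xi in separation bins) recomputed with subvolume i
    removed. Returns the jackknife error per bin:
        var = (n - 1)/n * sum_i (x_i - mean)^2
    """
    x = np.asarray(delete_one_stats, dtype=float)
    n = x.shape[0]
    mean = x.mean(axis=0)
    var = (n - 1) / n * ((x - mean) ** 2).sum(axis=0)
    return np.sqrt(var)
```

The bootstrap variant instead resamples subvolumes with replacement and takes the plain variance across resamplings.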
Directory of Open Access Journals (Sweden)
Bausewein Claudia
2007-11-01
Full Text Available Abstract Background The construct "meaning-in-life" (MiL) has recently raised the interest of clinicians working in psycho-oncology and end-of-life care and has become a topic of scientific investigation. Difficulties regarding the measurement of MiL are related to the various theoretical and conceptual approaches and its inter-individual variability. Therefore the "Schedule for Meaning in Life Evaluation" (SMiLE), an individualized instrument for the assessment of MiL, was developed. The aim of this study was to evaluate MiL in a representative sample of the German population. Methods In the SMiLE, the respondents first indicate a minimum of three and a maximum of seven areas which provide meaning to their life before rating their current level of importance and satisfaction of each area. Indices of total weighting (IoW, range 20–100), total satisfaction (IoS, range 0–100), and total weighted satisfaction (IoWS, range 0–100) are calculated. Results In July 2005, 1,004 Germans were randomly selected and interviewed (inclusion rate, 85.3%). 3,521 areas of MiL were listed and assigned to 13 a-posteriori categories. The mean IoS was 81.9 ± 15.1, the mean IoW was 84.6 ± 11.9, and the mean IoWS was 82.9 ± 14.8. In youth (16–19 y/o), "friends" were most important for MiL; in young adulthood (20–29 y/o), "partnership"; in middle adulthood (30–39 y/o), "work"; during retirement (60–69 y/o), "health" and "altruism"; and in advanced age (70 y/o and more), "spirituality/religion" and "nature experience/animals". Conclusion This study is a first nationwide survey on individual MiL in a randomly selected, representative sample. The MiL areas of the age stages seem to correspond with Erikson's stages of psychosocial development.
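From the index definitions quoted in the abstract, the three SMiLE summary indices can be sketched as follows (the mapping of raw ratings onto the 20-100 and 0-100 scales is assumed to have been done already, and the weighting scheme for IoWS is inferred from the index names):

```python
def smile_indices(importance, satisfaction):
    """SMiLE summary indices from per-area ratings.

    importance: weights on the instrument's 20-100 scale;
    satisfaction: ratings on the 0-100 scale (one entry per listed area).
    IoW and IoS are plain means; IoWS is taken here as the
    importance-weighted mean satisfaction.
    """
    iow = sum(importance) / len(importance)
    ios = sum(satisfaction) / len(satisfaction)
    iows = (sum(w * s for w, s in zip(importance, satisfaction))
            / sum(importance))
    return iow, ios, iows
```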
Luy, Marc
2012-05-01
In general, the use of indirect methods is limited to developing countries. Developed countries are usually assumed to have no need to apply such methods because detailed demographic data exist. However, the possibilities of demographic analysis with direct methods are limited by the characteristics of the available macro data on births, deaths, and migration. For instance, in many Western countries, official population statistics do not permit the estimation of mortality by socioeconomic status (SES) or migration background, or the estimation of the relationship between parity and mortality. In order to overcome these shortcomings, I modify and extend the so-called orphanhood method for indirect estimation of adult mortality from survey information on maternal and paternal survival to allow its application to populations of developed countries. The method is demonstrated and tested with data from two independent Italian cross-sectional surveys by estimating overall and SES-specific life expectancy. The empirical applications reveal that the proposed method can be used successfully for estimating levels and trends of mortality differences in developed countries and thus offers new prospects for the analysis of mortality.
Wollenberg, H A; Revzan, K L; Smith, A R
1994-01-01
We examined the applicability of radioelement data from the National Aerial Radiometric Reconnaissance, an element of the National Uranium Resource Evaluation, to estimate terrestrial gamma-ray absorbed dose rates, by comparing dose rates calculated from aeroradiometric surveys of uranium, thorium, and potassium concentrations with dose rates calculated from a radiogeologic database and the distribution of lithologies in California. Gamma-ray dose rates increase generally from north to south following lithological trends, from low values of 25-30 nGy h⁻¹ in the northernmost 1° × 2° quadrangles between 41°N and 42°N to high values of 75-100 nGy h⁻¹ in southeastern California. Lithologic-based estimates of mean dose rates in the quadrangles generally match those from aeroradiometric data, with statewide means of 63 and 60 nGy h⁻¹, respectively. These are intermediate between a population-weighted global average of 51 nGy h⁻¹ reported in 1982 by UNSCEAR and a weighted continental average of 70 nGy h⁻¹, based on the global distribution of rock types. The concurrence of lithologically and aeroradiometrically determined dose rates in California, with its varied geology and topography encompassing settings representative of the continents, indicates that the National Aerial Radiometric Reconnaissance data are applicable to estimates of terrestrial absorbed dose rates from natural gamma emitters.
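The conversion from radioelement concentrations to absorbed dose rate in air can be sketched with commonly quoted UNSCEAR-style coefficients (the coefficients below are standard literature values, assumed here rather than taken from this study):

```python
def terrestrial_dose_rate(k_percent, u_ppm, th_ppm):
    """Outdoor absorbed dose rate in air (nGy/h), 1 m above ground, from
    soil radioelement concentrations: potassium in percent, uranium and
    thorium in ppm. Coefficients are commonly quoted UNSCEAR-style
    conversion factors (assumed, not from the surveyed paper)."""
    return 13.1 * k_percent + 5.7 * u_ppm + 2.5 * th_ppm
```

Typical crustal abundances give values of a few tens of nGy/h, consistent with the ranges quoted in the abstract.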
On the choice of statistical models for estimating occurrence and extinction from animal surveys
Dorazio, R.M.
2007-01-01
In surveys of natural animal populations the number of animals that are present and available to be detected at a sample location is often low, resulting in few or no detections. Low detection frequencies are especially common in surveys of imperiled species; however, the choice of sampling method and protocol also may influence the size of the population that is vulnerable to detection. In these circumstances, probabilities of animal occurrence and extinction will generally be estimated more accurately if the models used in data analysis account for differences in abundance among sample locations and for the dependence between site-specific abundance and detection. Simulation experiments are used to illustrate conditions wherein these types of models can be expected to outperform alternative estimators of population site occupancy and extinction. ?? 2007 by the Ecological Society of America.
Estimating the relevance of predictions from nuclear mean-field models
Reinhard, P -G
2015-01-01
This contribution reviews the present status of the Skyrme-Hartree-Fock (SHF) approach as one of the leading self-consistent mean-field models in the physics of atomic nuclei. It starts with a brief summary of the formalism and the strategy for proper calibration of the SHF functional. The main emphasis lies on an exploration of the reliability of predictions, particularly in the regime of extrapolations. Various strategies are discussed to explore the statistical and systematic errors of SHF, and they are illustrated with examples from actual applications. Variations of model and fit data are used to get an idea of systematic errors. The statistical error is evaluated in a straightforward manner by statistical analysis based on $\chi^2$ fits. This also allows one to evaluate the correlations (covariances) between observables, which provides useful insight into the structure of the model and of the fitting strategy.
Murphy, Louise B; Cisternas, Miriam G; Greenlund, Kurt J; Giles, Wayne; Hannan, Casey; Helmick, Charles G
2017-03-01
To determine the variability of arthritis prevalence in 4 US population health surveys. We estimated annualized arthritis prevalence in 2011-2012, among adults age ≥20 years, using 2 definition methods, both based on self-report: 1) doctor-/health care provider-diagnosed arthritis in the Behavioral Risk Factor Surveillance System (BRFSS), National Health and Nutrition Examination Survey (NHANES), National Health Interview Survey (NHIS), and Medical Expenditure Panel Survey (MEPS); and 2) three arthritis definitions based on International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) criteria in MEPS (National Arthritis Data Workgroup on Arthritis and Other Rheumatic Conditions [NADW-AORC], Clinical Classifications Software [CCS], and Centers for Disease Control and Prevention [CDC]). Diagnosed arthritis prevalence percentages using the surveys were within 3 percentage points of one another (BRFSS 26.2% [99% confidence interval (99% CI) 26.0-26.4], MEPS 26.1% [99% CI 25.0-27.2], NHIS 23.5% [99% CI 22.9-24.1], NHANES 23.0% [99% CI 19.2-26.8]), and those using ICD-9-CM were within 5 percentage points of one another (CCS 25.8% [99% CI 24.6-27.1]; CDC 28.3% [99% CI 27.0-29.6]; and NADW-AORC 30.7% [99% CI 29.4-32.1]). The variation in the estimated number (in millions) affected with diagnosed arthritis was 7.8 (BRFSS 58.5 [99% CI 58.1-59.1], MEPS 59.3 [99% CI 55.6-63.1], NHANES 51.5 [99% CI 37.2-65.5], and NHIS 52.6 [99% CI 50.9-54.4]), and using ICD-9-CM definitions it was 11.1 (CCS 58.7 [99% CI 54.5-62.9], CDC 64.3 [99% CI 59.9-68.6], and NADW 69.9 [99% CI 65.2-74.5]). Most (57-70%) reporting diagnosed arthritis also reported ICD-9-CM arthritis; respondents reporting diagnosed arthritis were older than those meeting ICD-9-CM definitions. Proxy response status affected arthritis prevalence differently across surveys. Public health practitioners and decision makers are frequently charged with choosing a single number to represent arthritis
Energy Technology Data Exchange (ETDEWEB)
Zhang, Peng; Zhou, Ning; Abdollahi, Ali
2013-09-10
A Generalized Subspace-Least Mean Square (GSLMS) method is presented for accurate and robust estimation of oscillation modes from exponentially damped power system signals. The method is based on the orthogonality of the signal and noise eigenvectors of the signal autocorrelation matrix. Performance of the proposed method is evaluated using Monte Carlo simulation and compared with the Prony method. Test results show that the GSLMS is highly resilient to noise and significantly outperforms the Prony method in tracking power system modes under noisy environments.
Conn, Paul B.; Johnson, Devin S.; Ver Hoef, Jay M.; Hooten, Mevin B.; London, Joshua M.; Boveng, Peter L.
2015-01-01
Ecologists often fit models to survey data to estimate and explain variation in animal abundance. Such models typically require that animal density remains constant across the landscape where sampling is being conducted, a potentially problematic assumption for animals inhabiting dynamic landscapes or otherwise exhibiting considerable spatiotemporal variation in density. We review several concepts from the burgeoning literature on spatiotemporal statistical models, including the nature of the temporal structure (i.e., descriptive or dynamical) and strategies for dimension reduction to promote computational tractability. We also review several features as they specifically relate to abundance estimation, including boundary conditions, population closure, choice of link function, and extrapolation of predicted relationships to unsampled areas. We then compare a suite of novel and existing spatiotemporal hierarchical models for animal count data that permit animal density to vary over space and time, including formulations motivated by resource selection and allowing for closed populations. We gauge the relative performance (bias, precision, computational demands) of alternative spatiotemporal models when confronted with simulated and real data sets from dynamic animal populations. For the latter, we analyze spotted seal (Phoca largha) counts from an aerial survey of the Bering Sea where the quantity and quality of suitable habitat (sea ice) changed dramatically while surveys were being conducted. Simulation analyses suggested that multiple types of spatiotemporal models provide reasonable inference (low positive bias, high precision) about animal abundance, but have potential for overestimating precision. Analysis of spotted seal data indicated that several model formulations, including those based on a log-Gaussian Cox process, had a tendency to overestimate abundance. By contrast, a model that included a population closure assumption and a scale prior on total
Harris, Robin B.; Burgess, Jefferey L.; Meza-Montenegro, Maria Mercedes; Gutiérrez-Millán, Luis Enrique; O'Rourke, Mary Kay; Roberge, Jason
2012-01-01
The Binational Arsenic Exposure Survey (BAsES) was designed to evaluate probable arsenic exposures in selected areas of southern Arizona and northern Mexico, two regions with known elevated levels of arsenic in groundwater reserves. This paper describes the methodology of BAsES and the relationship between estimated arsenic intake from beverages and arsenic output in urine. Households from eight communities were selected for their varying groundwater arsenic concentrations in Arizona, USA and...
Kamousi, Baharan; Amini, Ali Nasiri; He, Bin
2007-06-01
The goal of the present study is to employ source imaging methods, such as cortical current density estimation, for the classification of left- and right-hand motor imagery tasks, which may be used for brain-computer interface (BCI) applications. The scalp-recorded EEG was first preprocessed by surface Laplacian filtering, time-frequency filtering, noise normalization and independent component analysis. Then the cortical imaging technique was used to solve the EEG inverse problem. Cortical current density distributions of left and right trials were classified from each other by exploiting the concept of Von Neumann entropy. The proposed method was tested on three human subjects (180 trials each), and a maximum accuracy of 91.5% and an average accuracy of 88% were obtained. The present results confirm the hypothesis that source analysis methods may improve accuracy for classification of motor imagery tasks. These promising results enhance our ability to perform source analysis from single-trial EEG data recorded on the scalp, and may have applications to improved BCI systems.
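The Von Neumann entropy feature can be sketched as follows, treating a trace-normalized spatial covariance of the current-density distribution as a density matrix (this particular construction is an assumption; the paper's exact density matrix may differ):

```python
import numpy as np

def von_neumann_entropy(current_density):
    """Von Neumann entropy of a cortical current-density patch.

    The spatial covariance of the patch is normalized to unit trace so
    that its eigenvalues behave like probabilities, and
    S = -sum(lambda * log(lambda)) is returned.
    """
    x = np.atleast_2d(np.asarray(current_density, dtype=float))
    rho = x @ x.T
    rho /= np.trace(rho)
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]          # drop numerically zero eigenvalues
    return float(-(lam * np.log(lam)).sum())
```

A focal (rank-one) activation gives S = 0, while a spatially spread activation maximizes S, which is what makes the entropy usable as a one-number feature for separating left and right trials.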
An empirical method to estimate the viscosity of mineral oil by means of ultrasonic attenuation.
Ju, Hyeong; Gottlieb, Emanuel; Augenstein, Donald; Brown, Gregor; Tittmann, Bernhard
2010-07-01
This paper presents an empirical method for measuring the viscosity of mineral oil. In a built-in pipeline application, conventional ultrasonic methods using shear reflectance or rheological and acoustical phenomena may fail because of attenuated shear wave propagation and an unpredictable spreading loss caused by protective housings and comparable main flows. The empirical method utilizing longitudinal waves eliminates the unknown spreading loss from attenuation measurements on the object fluid by removing the normalized spreading loss per focal length with the measurement of a reference fluid of a known acoustic absorption coefficient. The ultrasonic attenuation of fresh water as the reference fluid and mineral oil as the object fluid were measured along with the sound speed and effective frequency. The empirical equation for the spreading loss in the reference fluid is determined by high-order polynomial fitting. To estimate the shear viscosity of the mineral oil, a linear fit is applied to the total loss difference between the two fluids, whose slope (the absorption coefficient) is combined with an assumed shear-to-volume viscosity relation. The empirical method predicted the viscosities of two types of the mineral oil with a maximum statistical uncertainty of 8.8% and a maximum systematic error of 12.5% compared with directly measured viscosity using a glass-type viscometer. The validity of this method was examined by comparison with the results from theoretical far-field spreading.
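A sketch of the final estimation step described above: fit the excess attenuation (object minus reference fluid, spreading loss cancelled) against frequency squared, then invert the classical-absorption relation under an assumed volume-to-shear viscosity ratio (parameter names and the dB/m input convention are illustrative, not the paper's):

```python
import numpy as np

def shear_viscosity_from_attenuation(freqs_hz, excess_loss_db_per_m,
                                     density, sound_speed,
                                     volume_to_shear_ratio):
    """Shear viscosity (Pa.s) from excess longitudinal attenuation.

    excess_loss_db_per_m: attenuation of the object fluid minus the
    reference fluid, per metre, measured at freqs_hz. Uses the
    classical-absorption relation
        alpha = 2*pi^2*f^2*(4/3*eta_s + eta_v) / (rho*c^3)
    with an assumed ratio r = eta_v / eta_s.
    """
    alpha_np = np.asarray(excess_loss_db_per_m, float) / 8.686  # dB -> Np
    f2 = np.asarray(freqs_hz, dtype=float) ** 2
    slope = np.polyfit(f2, alpha_np, 1)[0]       # absorption per f^2
    visc_sum = slope * density * sound_speed ** 3 / (2.0 * np.pi ** 2)
    return visc_sum / (4.0 / 3.0 + volume_to_shear_ratio)
```

The linear fit against f^2 plays the role of the paper's linear fit to the total loss difference, whose slope is the absorption coefficient.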
Deterioration estimation of paintings by means of combined 3D and hyperspectral data analysis
Granero-Montagud, Luís.; Portalés, Cristina; Pastor-Carbonell, Begoña.; Ribes-Gómez, Emilio; Gutiérrez-Lucas, Antonio; Tornari, Vivi; Papadakis, Vassilis; Groves, Roger M.; Sirmacek, Beril; Bonazza, Alessandra; Ozga, Izabela; Vermeiren, Jan; van der Zanden, Koen; Föster, Matthias; Aswendt, Petra; Borreman, Albert; Ward, Jon D.; Cardoso, António; Aguiar, Luís.; Alves, Filipa; Ropret, Polonca; Luzón-Nogué, José María.; Dietz, Christian
2013-05-01
Deterioration of artwork, in particular paintings, can be produced by environmental factors such as temperature fluctuations, relative humidity variations, ultraviolet radiation, and biological factors, among others. The effects of these parameters produce changes in both the painting structure and its chemical composition. While well-established analytical methodologies, such as those based on Raman spectroscopy and FTIR spectroscopy, require the extraction of a sample for inspection, other approaches such as hyperspectral imaging and 3D scanning present advantages for in-situ, noninvasive analysis of artwork. In this paper we introduce a novel system, and the related methodology, to acquire, process, generate, and analyze 4D data of paintings. Our system is based on non-contact techniques and is used to develop analytical tools which extract rich 3D and hyperspectral maps of the objects, which are processed to obtain accurate quantitative estimations of the deterioration and degradation present in the piece of art. In particular, the construction of 4D data allows the identification of risk maps on the painting representation, which can assist curators and restorers in evaluating a painting's state and in prioritizing intervention actions.
Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira
Current massage chairs reproduce the massage motion and force designed by a professional masseur, but with this approach they cannot provide a massage force appropriate to the individual user. A professional masseur, by contrast, can deliver an appropriate massage force to many different patients because he or she takes each patient's physical condition into account. Our earlier research proposed an intelligent massage system that applies the masseur's procedure to the massage chair, using estimated skin elasticity and a database relating skin elasticity to massage force. That system, however, cannot adapt the database to an unknown user, because the user's response to the massage cannot be estimated. This paper therefore proposes a method for estimating comfortable and uncomfortable feelings from EEG, using a neural network and the k-means algorithm. The feasibility of the proposed method is verified experimentally.
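The clustering stage of the proposed comfort estimation can be sketched with a plain k-means over EEG feature vectors (the choice of features, such as band powers, and of k are assumptions; the paper combines this with a neural network):

```python
import numpy as np

def kmeans(features, k=2, iters=50, seed=0):
    """Plain k-means over feature vectors (rows of `features`), e.g.
    EEG band-power vectors, to be grouped into comfort/discomfort
    clusters. Returns per-row labels and the cluster centers."""
    rng = np.random.default_rng(seed)
    x = np.asarray(features, dtype=float)
    centers = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        # squared distance of every point to every center
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):       # leave empty clusters untouched
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers
```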
Evaluation of estimation methods for meiofaunal biomass from a meiofaunal survey in Bohai Bay
Institute of Scientific and Technical Information of China (English)
张青田; 王新华; 胡桂坤
2010-01-01
Studies in the coastal area of Bohai Bay, China, from July 2006 to October 2007 suggest that the method of meiofaunal biomass estimation affected the meiofaunal analysis. Conventional estimation methods that use a single mean individual weight value for nematodes to calculate total biomass may cause deviation in the results. A modified estimation method, named the Subsection Count Method (SCM), was also used to calculate meiofaunal biomass. This entails only a slight increase in workload but generates results of g...
Estimation of change in populations and communities from monitoring survey data
Sauer, J.R.; Link, W.A.; Nichols, J.D.; Busch, David E.; Trexler, Joel C.
2003-01-01
Monitoring surveys provide fundamental information for use in environmental decision making by permitting assessment of both current population (or community) status and change in status, by providing a historical context of the present status, and by documenting response to ongoing management. Conservation of species and communities has historically been based upon monitoring information, and prioritization of species and habitats for conservation action often requires reliable, quantitative results. Although many monitoring programs exist for populations, species, and communities, as well as for biotic and abiotic features of the environment, estimation of population and community change from surveys can sometimes be controversial, and demands on monitoring information have increased greatly in recent years. Information is often required at multiple spatial scales for use in geographic information systems, and information needs exist for description of regional patterns of change in populations, communities, and ecosystems. Often, attempts are made to meet these needs using information collected for other purposes or at inappropriate geographic scales, leading to information that is difficult to analyze and interpret. In this chapter, we address some of the constraints and issues associated with estimating change in wildlife species and species groups from monitoring surveys, and use bird surveys as our primary examples.
Yanxia, Zhang; Nanbo, Peng; Yongheng, Zhao; Xue-bing, Wu
2013-01-01
We apply a lazy learning method, the k-nearest neighbor algorithm (kNN), to estimate the photometric redshifts of quasars, based on various datasets from the Sloan Digital Sky Survey (SDSS), UKIRT Infrared Deep Sky Survey (UKIDSS) and Wide-field Infrared Survey Explorer (WISE): the SDSS sample, the SDSS-UKIDSS sample, the SDSS-WISE sample and the SDSS-UKIDSS-WISE sample. The influence of the k value and of different input patterns on the performance of kNN is discussed. kNN achieves its best performance with a different k and input pattern for each dataset. The best result is obtained for the SDSS-UKIDSS-WISE sample. The experimental results show that, in general, the more bands of information available, the better the performance of photometric redshift estimation with kNN. The results also demonstrate that kNN using multiband data can effectively mitigate the catastrophic failures of photometric redshift estimation encountered by many machine learning methods. By comparing the performance of various m...
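The kNN regression idea described in this abstract can be sketched in a few lines. This is a generic illustration with hypothetical colors and redshifts, not the authors' SDSS/UKIDSS/WISE pipeline:

```python
import math

def knn_photoz(train, query, k=3):
    """Estimate a photometric redshift as the mean spectroscopic redshift of the
    k training sources nearest to `query` in color space (Euclidean distance).
    `train` is a list of (colors, z_spec) pairs; a sketch, not the paper's code."""
    nearest = sorted(train, key=lambda tz: math.dist(tz[0], query))[:k]
    return sum(z for _, z in nearest) / k

# Toy example with hypothetical (u-g, g-r) colors
train = [((0.2, 0.1), 0.5), ((0.3, 0.2), 0.6),
         ((1.0, 0.9), 2.0), ((1.1, 1.0), 2.1)]
z_est = knn_photoz(train, (0.25, 0.15), k=2)  # averages the two bluest sources
```

The choice of k and of which bands enter the color vector is exactly the tuning the abstract reports as dataset-dependent.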
Can I Just Check...? Effects of Edit Check Questions on Measurement Error and Survey Estimates
Directory of Open Access Journals (Sweden)
Lugtig Peter
2014-03-01
Full Text Available Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to questions earlier in the same interview to query apparent inconsistencies in responses; dependent interviewing uses responses from prior interviews to query apparent inconsistencies over time. We use data from three waves of the British Household Panel Survey (BHPS) to assess the effects of edit checks on estimates, and data from an experimental study carried out in the context of the BHPS, where survey responses were linked to individual administrative records, to assess the effects on measurement error. The findings suggest that interviewing methods without edit checks underestimate non-labour household income in the lower tail of the income distribution. The effects on estimates derived from total household income, such as poverty rates or transition rates into and out of poverty, are small.
Survey and analysis of the meaning of "naghsh" in a controversial verse from Hafez
Directory of Open Access Journals (Sweden)
هادی اکبرزاده
2012-03-01
Full Text Available This paper seeks to explain and interpret the meaning of "naghsh" in a controversial verse from Hafez: داده ام باز نظر را به تذروی پرواز بازخواند مگرش نقش و شکاری بکند Initially, some points about the word "naghsh" raised by earlier commentators are considered. Many commentators, following the meaning Soodi has given to the word "naghsh", have taken it to be a term related to hunting. The author has found several reasons to reject this interpretation and has concluded that "naghsh baz khandan" in this verse from Hafez means "to perceive the effect of something or someone", "correct perception or understanding of something", etc., and that this word has no relevance to hunting. Key words: "Naghsh", "Naghsh baz khandan", Hunting, Curtain, Music, Hafez.
Crampton, Lisa H.; Brinck, Kevin W.; Pias, Kyle E.; Heindl, Barbara A. P.; Savre, Thomas; Diegmann, Julia S.; Paxton, Eben
2017-01-01
Accurate estimates of the distribution and abundance of endangered species are crucial to determine their status and plan recovery options, but such estimates are often difficult to obtain for species with low detection probabilities or that occur in inaccessible habitats. The Puaiohi (Myadestes palmeri) is a cryptic species endemic to Kauaʻi, Hawai‘i, and restricted to high elevation ravines that are largely inaccessible. To improve current population estimates, we developed an approach to model distribution and abundance of Puaiohi across their range by linking occupancy surveys to habitat characteristics, territory density, and landscape attributes. Occupancy per station ranged from 0.17 to 0.82, and was best predicted by the number and vertical extent of cliffs, cliff slope, stream width, and elevation. To link occupancy estimates with abundance, we used territory mapping data to estimate the average number of territories per survey station (0.44 and 0.66 territories per station in low and high occupancy streams, respectively), and the average number of individuals per territory (1.9). We then modeled Puaiohi occupancy as a function of two remote-sensed measures of habitat (stream sinuosity and elevation) to predict occupancy across its entire range. We combined predicted occupancy with estimates of birds per station to produce a global population estimate of 494 (95% CI 414–580) individuals. Our approach is a model for using multiple independent sources of information to accurately track population trends, and we discuss future directions for modeling abundance of this, and other, rare species.
The estimation of the pollutant emissions on-board vessels by means of numerical methods
Jenaru, A.; Arsenie, P.; Hanzu-Pazara, R.
2016-08-01
Protection of the environment, especially in recent years, has become a constant concern of the states and governments of the world, which are more and more alarmed by the serious problems caused by the continuous deterioration of the environment. The long-term effects of pollution on the environment, aggravated by the lack of penalty regulations, have directed the attention of statesmen to the necessity of elaborating normative acts meant to be effective in the continuous fight against it. Maritime transportation generates approximately 4% of the total CO2 emissions produced by human activities. This paper presents two methods for estimating gas emissions on board a vessel, methods that are very useful for the crews operating them. For the determination and validation of these methods we use measurements from a tanker ship. Its main propulsion engine, a six-cylinder Wärtsilä DU Sulzer RT Flex 50, develops a maximum power of 9720 kW, and the ship has a permanent monitoring system for pollutant emissions. The methods developed here use the values of the polluting elements in the exhaust gases determined at the vessel's departure from the shipyard, in the framework of the acceptance tests. These values were entered into a matrix in the MATHCAD program. This matrix is the starting point of the two mentioned methods: the analytical method and the graphical method. The study also covers the development and validation of an analytical tool for determining emission standards for thermal machines on ships. One of the main objectives of this article is an assessment of the expediency of using alternative fuels for internal combustion engines on vessels.
Directory of Open Access Journals (Sweden)
Chernenkov Yu.V.
2014-12-01
Full Text Available Objective: To examine the health of children born by means of ART, according to data of the Perinatal Center of Saratov region for the last 2 years. Material and Methods: 70 pregnant women and 96 newborns conceived with the use of ART were examined. The causes of premature birth in women with ART, the high incidence of disease among newborns, and stillbirth are considered in the article. Results: The important factors in the breaking of amniotic membranes are identified: maternal age, multiple pregnancy, genital and extragenital pathology, antenatal infections and fetal anomalies. Extremely premature babies not only account for high neonatal disease rates and disability, but are also a key element in reproductive losses. Women with frozen embryo transfer less often experience premature birth, and disease rates and neurological abnormalities are less frequent. Transfer of two or more embryos more often leads to miscarriage. Conclusion: Maternal and child health indicators after the use of ART demonstrate the necessity of improving prenatal diagnostics and of taking measures in monitoring such women and newborns. High quality preimplantation preparation plays a major role in reducing fetal pathology. Indications and contraindications to this procedure should be thoroughly evaluated; no more than 1-2 ova should be transferred.
Verucchi, Carlos; Bossio, José; Bossio, Guillermo; Acosta, Gerardo
2016-12-01
In recent years, progress has been made in developing techniques to detect mechanical faults in actuators driven by induction motors. The latest developments show their capability to detect faults from the analysis of the motor's electrical variables. The techniques are based on Motor Current Signature Analysis (MCSA) and Load Torque Signature Analysis (LTSA), among others. Thus, failures such as misalignment between the motor and the load, progressive gear teeth wear, and mass imbalances have been successfully detected. In the case of misalignment between the motor and the load, whether angular or radial, the results presented in the literature do not consider the characteristics of the coupling device. This work studies a mechanism in which the power transmission between the motor and the load is performed by means of different types of couplings, mainly those most frequently used in industry. Results show that the conclusions drawn for a particular coupling are not necessarily applicable to others. Finally, this paper presents data of interest for the development of algorithms or expert systems for fault detection and diagnosis.
A Survey on Hadoop Assisted K-Means Clustering of Hefty Volume Images
Directory of Open Access Journals (Sweden)
Anil R Surve
2014-03-01
Full Text Available The objects in a remote sensing image, or an overview of them, can be detected or generated directly through the use of the basic K-means clustering method. ENVI and ERDAS IMAGINE are among the software packages that can be used to do this work on PCs. The hurdle in processing large amounts of remote sensing imagery, however, is the limitation of hardware resources and the processing time. Parallel or distributed computing is the right choice in such cases. In this paper, we parallelize the algorithm using Hadoop MapReduce, a distributed computing framework with an open-source programming model. The introductory part explains the color representation of remote sensing images. The RGB pixel values need to be converted to the CIELAB color space, which is more suitable for distinguishing colors. An overview of the traditional K-means is provided, and in the later part the MapReduce programming model and the Hadoop platform for K-means are described. To achieve this, parallelizing the algorithm with customized MapReduce functions in two stages is essential. The map and reduce functions for the algorithm are described by pseudo-code. This method will be useful in many similar remote sensing situations.
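The two-stage map/reduce split described above can be sketched in plain Python. Function and variable names are illustrative; a real job would implement these as Hadoop Mapper/Reducer classes running over CIELAB pixel values:

```python
def kmeans_map(pixel, centroids):
    """Map stage: emit (index of nearest centroid, pixel)."""
    dist2 = lambda p, c: sum((pi - ci) ** 2 for pi, ci in zip(p, c))
    idx = min(range(len(centroids)), key=lambda i: dist2(pixel, centroids[i]))
    return idx, pixel

def kmeans_reduce(pairs, centroids):
    """Reduce stage: recompute each centroid as the mean of its assigned pixels."""
    sums = {i: [0.0] * len(c) for i, c in enumerate(centroids)}
    counts = {i: 0 for i in range(len(centroids))}
    for idx, p in pairs:
        counts[idx] += 1
        for d, v in enumerate(p):
            sums[idx][d] += v
    return [tuple(v / counts[i] for v in sums[i]) if counts[i] else centroids[i]
            for i in range(len(centroids))]

# One K-means iteration over toy 3-channel "pixels" (e.g. CIELAB triples)
pixels = [(0, 0, 0), (1, 1, 1), (10, 10, 10), (11, 11, 11)]
centroids = [(0.0, 0.0, 0.0), (10.0, 10.0, 10.0)]
pairs = [kmeans_map(p, centroids) for p in pixels]
centroids = kmeans_reduce(pairs, centroids)  # -> [(0.5, 0.5, 0.5), (10.5, 10.5, 10.5)]
```

In a distributed run, the framework shuffles the (centroid index, pixel) pairs by key, so each reducer averages one cluster; the loop repeats until the centroids stop moving.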
Estimation of water dam area variations by means of multitemporal remote sensing data
Nikolov, Hristo; Vassilev, Vasil; Borisova, Denitsa; Tsvetkova, Nadya
2014-05-01
In the last decade, one of the resources considered scarce, given the growing global population, is fresh water. Thus the need for careful planning and use of this resource is more than evident. In order to mitigate the effects of drought and to meet potable water needs, water dams are constructed. Together with the benefits they provide, however, they pose a serious flooding risk for the area in which they reside. In this research we proposed and tested an approach for water dam area delineation based solely on remotely sensed data. We show that by processing diachronic multispectral optical data from freely available sources together with additional ones, such as topographic maps, in-situ data, and data from national agencies, we obtained relevant information concerning the current and past status of the Topolnitsa water dam. The proposed method includes the following steps: multispectral data processing up to reflectance; calculation of the widely used water-related indices NDWI and MNDWI; creation of a mask layer implementing linear spectral unmixing for the water area; and finally estimation of the area of the water table and calculation of the volume of the water body. In our previous work testing the pertinence of each spectral band (as well as of a few band ratios) to calculate the turbidity index (results not shown here), the red band was chosen. The good fit of the red band for characterizing the turbidity of Danube Delta waters is not truly surprising. Using similar technology as for the processing of HR EO data, combined with visual interpretation of VHR data and airborne images, the Remote Sensing Application Center - ReSAC has developed a database of the standing water bodies in Bulgaria. The work has continued for over 10 years already, and as a result more than 11,000 objects have been mapped. For each water body a historical record is established of its variation in size over the years, based on the images available. Those records are organized in a GIS database and can be
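The two water indices named in the method are standard per-pixel band ratios (McFeeters' NDWI and Xu's modified NDWI). A minimal sketch, with hypothetical reflectance values:

```python
def ndwi(green, nir):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR); water pixels tend to be positive."""
    return (green - nir) / (green + nir)

def mndwi(green, swir):
    """Modified NDWI (Xu) = (Green - SWIR) / (Green + SWIR); suppresses
    built-up land better than NDWI."""
    return (green - swir) / (green + swir)

# Hypothetical reflectances for a water pixel
w_ndwi = ndwi(0.30, 0.10)    # ~ 0.5
w_mndwi = mndwi(0.30, 0.05)  # ~ 0.71
```

Thresholding these indices (often at 0) gives a first water mask, which the method then refines with linear spectral unmixing.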
Li, Zhanqing; Whitlock, Charles H.; Charlock, Thomas P.
1995-01-01
Global sets of surface radiation budget (SRB) data have been obtained from satellite programs. These satellite-based estimates need validation against ground-truth observations. This study validates the estimates of monthly mean surface insolation contained in two satellite-based SRB datasets against surface measurements made at worldwide radiation stations from the Global Energy Balance Archive (GEBA). One dataset was developed from the Earth Radiation Budget Experiment (ERBE) using the algorithm of Li et al. (ERBE/SRB), and the other from the International Satellite Cloud Climatology Project (ISCCP) using the algorithms of Pinker and Laszlo and of Staylor (GEWEX/SRB). Since the ERBE/SRB data contain the surface net solar radiation only, the values of surface insolation were derived by making use of the surface albedo data contained in the GEWEX/SRB product. The resulting surface insolation has a bias error near zero and a root-mean-square error (RMSE) between 8 and 28 W/sq m. The RMSE is mainly associated with poor representation of surface observations within a grid cell. When the number of surface observations is sufficient, the random error is estimated to be about 5 W/sq m with present satellite-based estimates. In addition to demonstrating the strength of the retrieval method, the small random error demonstrates how well the ERBE derives the monthly mean fluxes at the top of the atmosphere (TOA). A larger scatter is found for the comparison of transmissivity than for that of insolation. Month-to-month comparison of insolation reveals a weak seasonal trend in bias error with an amplitude of about 3 W/sq m. As for the insolation data from the GEWEX/SRB, larger bias errors of 5-10 W/sq m are evident, with stronger seasonal trends and almost identical RMSEs.
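The bias and RMSE used throughout this validation are the usual first two error statistics; a minimal sketch with illustrative values (not GEBA data):

```python
import math

def bias_and_rmse(estimates, observations):
    """Mean error (bias) and root-mean-square error between satellite
    estimates and ground observations, e.g. insolation in W/m^2."""
    errors = [e - o for e, o in zip(estimates, observations)]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(err ** 2 for err in errors) / len(errors))
    return bias, rmse

# Illustrative monthly means: errors of +5, +5, -10 W/m^2
bias, rmse = bias_and_rmse([200.0, 210.0, 190.0], [195.0, 205.0, 200.0])
# bias = 0.0, RMSE = sqrt(50) ~ 7.07 W/m^2
```

Note how errors of opposite sign cancel in the bias but not in the RMSE, which is why the study can report a near-zero bias alongside an 8-28 W/sq m RMSE.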
New Survey Questions and Estimators for Network Clustering with Respondent-Driven Sampling Data
Verdery, Ashton M; Siripong, Nalyn; Abdesselam, Kahina; Bauldry, Shawn
2016-01-01
Respondent-driven sampling (RDS) is a popular method for sampling hard-to-survey populations that leverages social network connections through peer recruitment. While RDS is most frequently applied to estimate the prevalence of infections and risk behaviors of interest to public health, like HIV/AIDS or condom use, it is rarely used to draw inferences about the structural properties of social networks among such populations because it does not typically collect the necessary data. Drawing on recent advances in computer science, we introduce a set of data collection instruments and RDS estimators for network clustering, an important topological property that has been linked to a network's potential for diffusion of information, disease, and health behaviors. We use simulations to explore how these estimators, originally developed for random walk samples of computer networks, perform when applied to RDS samples with characteristics encountered in realistic field settings that depart from random walks. In partic...
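Network clustering, the topological property these estimators target, is commonly summarized by the local clustering coefficient: the fraction of a node's neighbor pairs that are themselves connected. A plain-Python sketch on a toy undirected graph (the RDS estimators in the paper are more involved, since they must reweight for the sampling design):

```python
def local_clustering(adj, node):
    """Local clustering coefficient of `node` in an undirected graph.
    `adj` maps each node to the set of its neighbors."""
    nbrs = sorted(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0  # undefined for fewer than two neighbors; report 0 by convention
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Triangle a-b-c plus a pendant node d attached to a
adj = {'a': {'b', 'c', 'd'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}, 'd': {'a'}}
cc_a = local_clustering(adj, 'a')  # 1 of 3 neighbor pairs connected -> 1/3
```

Averaging this quantity over sampled nodes, with design-based weights, is the kind of estimand the survey instruments are built to support.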
Automatic sign language analysis: a survey and the future beyond lexical meaning.
Ong, Sylvie C W; Ranganath, Surendra
2005-06-01
Research in automatic analysis of sign language has largely focused on recognizing the lexical (or citation) form of sign gestures as they appear in continuous signing, and developing algorithms that scale well to large vocabularies. However, successful recognition of lexical signs is not sufficient for a full understanding of sign language communication. Nonmanual signals and grammatical processes which result in systematic variations in sign appearance are integral aspects of this communication but have received comparatively little attention in the literature. In this survey, we examine data acquisition, feature extraction and classification methods employed for the analysis of sign language gestures. These are discussed with respect to issues such as modeling transitions between signs in continuous signing, modeling inflectional processes, signer independence, and adaptation. We further examine works that attempt to analyze nonmanual signals and discuss issues related to integrating these with (hand) sign gestures. We also discuss the overall progress toward a true test of sign recognition systems--dealing with natural signing by native signers. We suggest some future directions for this research and also point to contributions it can make to other fields of research. Web-based supplemental materials (appendices) which contain several illustrative examples and videos of signing can be found at www.computer.org/publications/dlib.
Arcella, D; Leclercq, C
2005-01-01
The procedure for the safety evaluation of flavourings adopted by the European Commission in order to establish a positive list of these substances is a stepwise approach which was developed by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) and amended by the Scientific Committee on Food. Within this procedure, a per capita amount based on industrial poundage data of flavourings is calculated to estimate the dietary intake by means of the maximised survey-derived daily intake (MSDI) method. This paper reviews the MSDI method in order to check whether it can provide the conservative intake estimates needed in the first steps of a stepwise procedure. Scientific papers and opinions dealing with the MSDI method were reviewed. Concentration levels reported by the industry were compared with estimates obtained with the MSDI method. It appeared that, in some cases, these estimates could be orders of magnitude (up to 5) lower than those calculated considering concentration levels provided by the industry and regular consumption of flavoured foods and beverages. A critical review was performed of two studies which had been used to support the statement that MSDI is a conservative method for assessing exposure to flavourings among high consumers. Special attention was given to the factors that affect exposure at high percentiles, such as brand loyalty and portion sizes. It is concluded that these studies may not be suitable for validating the MSDI method used to assess intakes of flavourings by European consumers, due to shortcomings in the assumptions made and in the data used. Exposure assessment is an essential component of risk assessment. The present paper suggests that the MSDI method is not sufficiently conservative. There is therefore a clear need either to use an alternative method to estimate exposure to flavourings in the procedure, or to limit intakes to the levels at which safety was assessed.
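The MSDI calculation itself is a simple per capita ratio. The sketch below uses the commonly cited JECFA convention that the annual production volume is consumed by a fraction of the population (0.6) every day of the year; the 0.6 factor and the population figure are general assumptions for illustration, not values taken from this paper:

```python
def msdi_ug_per_person_per_day(annual_volume_kg, population, eaters_fraction=0.6):
    """Maximised survey-derived daily intake: annual production volume spread
    over the assumed consuming fraction of the population, in ug/person/day."""
    micrograms = annual_volume_kg * 1e9  # kg -> ug
    return micrograms / (population * eaters_fraction * 365)

# Hypothetical flavouring: 100 kg/year for a population of 320 million
intake = msdi_ug_per_person_per_day(100, 320e6)  # ~ 1.43 ug/person/day
```

Because the whole population is in the denominator, brand-loyal high consumers of a few flavoured products can easily exceed this average by the orders of magnitude the paper describes.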
Out-of-field activity in the estimation of mean lung attenuation coefficient in PET/MR
Energy Technology Data Exchange (ETDEWEB)
Berker, Yannick, E-mail: yberker@ukaachen.de [Department of Physics of Molecular Imaging Systems, Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen (Germany); Salomon, André [X-Ray Imaging Systems, Philips Research, Eindhoven (Netherlands); Kiessling, Fabian [Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen (Germany); Schulz, Volkmar [Department of Physics of Molecular Imaging Systems, Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen (Germany); Philips Technologie GmbH Innovative Technologies, Research Laboratories, Aachen (Germany)
2014-01-11
In clinical PET/MR, photon attenuation is a source of potentially severe image artifacts. Correction approaches include those based on MR image segmentation, in which image voxels are classified and assigned predefined attenuation coefficients to obtain an attenuation map. In whole-body imaging, however, mean lung attenuation coefficients (LAC) can vary by a factor of 2, and the choice of an inappropriate mean LAC can have a significant impact on PET quantification. Previously, we proposed a method combining MR image segmentation, tissue classification and Maximum Likelihood reconstruction of Attenuation and Activity (MLAA) to estimate mean LAC values. In this work, we quantify the influence of out-of-field (OOF) accidental coincidences when acquiring data in a single bed position. We therefore carried out GATE simulations of realistic, whole-body activity and attenuation distributions derived from data of three patients. A bias of 15% was found and significantly reduced by removing OOF accidentals from our data, suggesting that OOF accidentals are the major contributor to the bias. We found approximately equal contributions from OOF scatter and OOF randoms, and present results after correction of the bias by rescaling of results. Results using temporal subsets suggest that 30-second acquisitions may be sufficient for estimating mean LAC with less than 5% uncertainty if the mean bias can be corrected for. -- Highlights: • Variability of lung attenuation complicates PET attenuation correction in PET/MR. • Maximum Likelihood Reconstruction of Attenuation and Activity combined with MR image segmentation. • GATE simulations of PET acquisitions in a realistic scanner model. • Bias in full-body simulations explained by accidentals from outside the FOV.
Directory of Open Access Journals (Sweden)
Margarita Choulga
2014-03-01
Full Text Available Lakes influence the structure of the atmospheric boundary layer and, consequently, the local weather and local climate. Their influence should be taken into account in numerical weather prediction (NWP) and climate models through parameterisation. For parameterisation, data on lake characteristics external to the model are also needed. The most important parameter is the lake depth. The Global Lake Database (GLDB) was developed to parameterise lakes in NWP and climate modelling. The main purpose of this study is to upgrade GLDB by using indirect estimates of the mean depth for lakes in the boreal zone, depending on their geological origin. For this, the Tectonic Plates Map, geological and geomorphological maps, and the map of Quaternary deposits were used. Data from the maps were processed by an innovative algorithm, resulting in 141 geological regions in which lakes were considered to be of kindred origin. To obtain a typical mean lake depth for each of the selected regions, statistics from GLDB were compiled and analysed. The main result of the study is a new version of GLDB that includes estimates of the typical mean lake depth. Potential users of the product are NWP and climate models.
Raichoor, A.; Mei, S.; Erben, T.; Hildebrandt, H.; Huertas-Company, M.; Ilbert, O.; Licitra, R.; Ball, N. M.; Boissier, S.; Boselli, A.; Chen, Y.-T.; Côté, P.; Cuillandre, J.-C.; Duc, P. A.; Durrell, P. R.; Ferrarese, L.; Guhathakurta, P.; Gwyn, S. D. J.; Kavelaars, J. J.; Lançon, A.; Liu, C.; MacArthur, L. A.; Muller, M.; Muñoz, R. P.; Peng, E. W.; Puzia, T. H.; Sawicki, M.; Toloba, E.; Van Waerbeke, L.; Woods, D.; Zhang, H.
2014-12-01
The Next Generation Virgo Cluster Survey (NGVS) is an optical imaging survey covering 104 deg2 centered on the Virgo cluster. Currently, the complete survey area has been observed in the u*giz bands and one third in the r band. We present the photometric redshift estimation for the NGVS background sources. After a dedicated data reduction, we perform accurate photometry, with special attention to precise color measurements through point-spread function homogenization. We then estimate the photometric redshifts with the Le Phare and BPZ codes. We add a new prior that extends to i_AB = 12.5 mag. When using the u*griz bands, our photometric redshifts for 15.5 ≤ i ≲ 23 mag or z_phot ≲ 1 galaxies have a bias |Δz| < 0.02, less than 5% outliers, and a scatter σ_outl.rej. and an individual error on z_phot that increase with magnitude (from 0.02 to 0.05 and from 0.03 to 0.10, respectively). When using the u*giz bands over the same magnitude and redshift range, the lack of the r band increases the uncertainties (more outliers around z_phot ~ 0.3, and z_phot err. ~ 0.15). We also present a joint analysis of the photometric redshift accuracy as a function of redshift and magnitude. We assess the quality of our photometric redshifts by comparison to spectroscopic samples and by verifying that the angular auto- and cross-correlation functions w(θ) of the entire NGVS photometric redshift sample across redshift bins are in agreement with expectations.
Estimating occupancy and predicting numbers of gray wolf packs in Montana using hunter surveys
Rich, Lindsey N.; Russell, Robin E.; Glenn, Elizabeth M.; Mitchell, Michael S.; Gude, Justin A.; Podruzny, Kevin M.; Sime, Carolyn A.; Laudon, Kent; Ausband, David E.; Nichols, James D.
2013-01-01
Reliable knowledge of the status and trend of carnivore populations is critical to their conservation and management. Methods for monitoring carnivores, however, are challenging to conduct across large spatial scales. In the Northern Rocky Mountains, wildlife managers need a time- and cost-efficient method for monitoring gray wolf (Canis lupus) populations. Montana Fish, Wildlife and Parks (MFWP) conducts annual telephone surveys of >50,000 deer and elk hunters. We explored how survey data on hunters' sightings of wolves could be used to estimate the occupancy and distribution of wolf packs and predict their abundance in Montana for 2007–2009. We assessed model utility by comparing our predictions to MFWP minimum known number of wolf packs. We minimized false positive detections by identifying a patch as occupied if 2–25 wolves were detected by ≥3 hunters. Overall, estimates of the occupancy and distribution of wolf packs were generally consistent with known distributions. Our predictions of the total area occupied increased from 2007 to 2009 and predicted numbers of wolf packs were approximately 1.34–1.46 times the MFWP minimum counts for each year of the survey. Our results indicate that multi-season occupancy models based on public sightings can be used to monitor populations and changes in the spatial distribution of territorial carnivores across large areas where alternative methods may be limited by personnel, time, accessibility, and budget constraints.
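The false-positive filter described above (a patch counts as occupied only if 2-25 wolves were reported by at least 3 hunters) is straightforward to sketch; the data structure and names here are illustrative, not MFWP's actual survey format:

```python
def occupied_patches(reports, min_hunters=3, lo=2, hi=25):
    """`reports` maps patch id -> list of reported group sizes, one per hunter.
    A patch is flagged occupied when >= min_hunters reports fall in [lo, hi]."""
    occupied = set()
    for patch, sizes in reports.items():
        plausible = sum(1 for n in sizes if lo <= n <= hi)
        if plausible >= min_hunters:
            occupied.add(patch)
    return occupied

# Toy data: patch2 has implausible sizes; patch3 has too few corroborating hunters
reports = {'patch1': [3, 4, 5], 'patch2': [1, 30, 4], 'patch3': [2, 2]}
occ = occupied_patches(reports)  # -> {'patch1'}
```

Requiring multiple independent, plausibly sized reports trades some detections for a much lower false-positive rate, which is what makes the occupancy estimates usable at statewide scale.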
Almeida, J Sanchez
2012-01-01
(Abridged) This paper explores the use of k-means clustering as a tool for automated unsupervised classification of massive stellar spectral catalogs. The classification criteria are defined by the data and the algorithm, with no prior physical framework. We work with a representative set of stellar spectra associated with the SDSS SEGUE and SEGUE-2 programs. We classify the original spectra as well as the spectra with the continuum removed. The second set contains only spectral lines and is less dependent on uncertainties in the flux calibration. The classification of the spectra with continuum renders 16 major classes. Roughly speaking, stars are split according to their colors, with enough finesse to distinguish dwarfs from giants of the same effective temperature, but with difficulty separating stars of different metallicities. Overall, there is no one-to-one correspondence between the classes we derive and the MK types. The classification of spectra without continuum renders 13 classes, the colo...
Detection of the seasonal cyclic movement in historic buildings by means of surveying techniques
Directory of Open Access Journals (Sweden)
Valle-Melón, J. M.
2011-03-01
Full Text Available As in other engineering structures, historic buildings are conditioned by atmospheric changes which affect their size and shape. These effects follow a more or less cyclic pattern and do not normally put the stability of such buildings in jeopardy since they are part of their natural dynamics. Nevertheless, the study of these effects provides valuable information to understand the behavior of both the building and the materials it is made of.
This paper arose from the project of geometric monitoring of a presumably unstable historic building: the church of Santa María la Blanca in Agoncillo (La Rioja, Spain), which is being observed with conventional surveying equipment. The computations of the different epochs show several movements that can be explained as due to seasonal cycles.
Risser, Dennis W.; Thompson, Ronald E.; Stuckey, Marla H.
2008-01-01
A method was developed for making estimates of long-term, mean annual ground-water recharge from streamflow data at 80 streamflow-gaging stations in Pennsylvania. The method relates mean annual base-flow yield derived from the streamflow data (as a proxy for recharge) to the climatic, geologic, hydrologic, and physiographic characteristics of the basins (basin characteristics) by use of a regression equation. Base-flow yield is the base flow of a stream divided by the drainage area of the basin, expressed in inches of water basinwide. Mean annual base-flow yield was computed for the period of available streamflow record at continuous streamflow-gaging stations by use of the computer program PART, which separates base flow from direct runoff on the streamflow hydrograph. Base flow provides a reasonable estimate of recharge for basins where streamflow is mostly unaffected by upstream regulation, diversion, or mining. Twenty-eight basin characteristics were included in the exploratory regression analysis as possible predictors of base-flow yield. Basin characteristics found to be statistically significant predictors of mean annual base-flow yield during 1971-2000 at the 95-percent confidence level were (1) mean annual precipitation, (2) average maximum daily temperature, (3) percentage of sand in the soil, (4) percentage of carbonate bedrock in the basin, and (5) stream channel slope. The equation for predicting recharge was developed using ordinary least-squares regression. The standard error of prediction for the equation on log-transformed data was 9.7 percent, and the coefficient of determination was 0.80. The equation can be used to predict long-term, mean annual recharge rates for ungaged basins, providing that the explanatory basin characteristics can be determined and that the underlying assumption is accepted that base-flow yield derived from PART is a reasonable estimate of ground-water recharge rates. For example, application of the equation for 370
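The regression step can be illustrated with a hedged sketch: a log-linear model fit by ordinary least squares, using two hypothetical basin characteristics and synthetic noiseless data (the real equation uses five predictors and measured base-flow yields).

```python
import numpy as np

# Hypothetical basin characteristics: mean annual precipitation (in) and % sand
precip = np.array([38.0, 42.0, 45.0, 40.0, 50.0, 36.0])
sand   = np.array([20.0, 35.0, 15.0, 25.0, 30.0, 10.0])

# Synthetic base-flow yield following an exact log-linear relation (for the sketch)
true_beta = np.array([-2.0, 0.8, 0.15])   # intercept and slopes in log space
X = np.column_stack([np.ones_like(precip), np.log(precip), np.log(sand)])
log_yield = X @ true_beta

# Ordinary least squares on the log-transformed data, as in the study
beta, *_ = np.linalg.lstsq(X, log_yield, rcond=None)
predicted_yield = np.exp(X @ beta)        # back-transform to inches of water
```

The back-transform from log space is why the study reports its standard error of prediction in percent rather than in inches.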
Naesset, Erik; Gobakken, Terje; Bollandsas, Ole Martin; Gregoire, Timothy G.; Nelson, Ross; Stahl, Goeran
2013-01-01
Airborne scanning LiDAR (Light Detection and Ranging) has emerged as a promising tool to provide auxiliary data for sample surveys aiming at estimation of above-ground tree biomass (AGB), with potential applications in REDD forest monitoring. For larger geographical regions such as counties, states or nations, it is not feasible to collect airborne LiDAR data continuously ("wall-to-wall") over the entire area of interest. Two-stage cluster survey designs have therefore been demonstrated by which LiDAR data are collected along selected individual flight-lines treated as clusters and with ground plots sampled along these LiDAR swaths. Recently, analytical AGB estimators and associated variance estimators that quantify the sampling variability have been proposed. Empirical studies employing these estimators have shown a seemingly equal or even larger uncertainty of the AGB estimates obtained with extensive use of LiDAR data to support the estimation as compared to pure field-based estimates employing estimators appropriate under simple random sampling (SRS). However, comparison of uncertainty estimates under SRS and sophisticated two-stage designs is complicated by large differences in the designs and assumptions. In this study, probability-based principles to estimation and inference were followed. We assumed designs of a field sample and a LiDAR-assisted survey of Hedmark County (HC) (27,390 km2), Norway, considered to be more comparable than those assumed in previous studies. The field sample consisted of 659 systematically distributed National Forest Inventory (NFI) plots and the airborne scanning LiDAR data were collected along 53 parallel flight-lines flown over the NFI plots. We compared AGB estimates based on the field survey only assuming SRS against corresponding estimates assuming two-phase (double) sampling with LiDAR and employing model-assisted estimators. We also compared AGB estimates based on the field survey only assuming two-stage sampling (the NFI
The ALHAMBRA survey : Estimation of the clustering signal encoded in the cosmic variance
López-Sanjuan, C; Hernández-Monteagudo, C; Arnalte-Mur, P; Varela, J; Viironen, K; Fernández-Soto, A; Martínez, V J; Alfaro, E; Ascaso, B; del Olmo, A; Díaz-García, L A; Hurtado-Gil, Ll; Moles, M; Molino, A; Perea, J; Pović, M; Aguerri, J A L; Aparicio-Villegas, T; Benítez, N; Broadhurst, T; Cabrera-Caño, J; Castander, F J; Cepa, J; Cerviño, M; Cristóbal-Hornillos, D; Delgado, R M González; Husillos, C; Infante, L; Márquez, I; Masegosa, J; Prada, F; Quintana, J M
2015-01-01
The relative cosmic variance ($\sigma_v$) is a fundamental source of uncertainty in pencil-beam surveys and, as a particular case of count-in-cell statistics, can be used to estimate the bias between galaxies and their underlying dark-matter distribution. Our goal is to test the significance of the clustering information encoded in the $\sigma_v$ measured in the ALHAMBRA survey. We measure the cosmic variance of several galaxy populations selected with $B$-band luminosity at $0.35 \leq z < 1.05$ as the intrinsic dispersion in the number density distribution derived from the 48 ALHAMBRA subfields. We compare the observational $\sigma_v$ with the cosmic variance of the dark matter expected from the theory, $\sigma_{v,\rm dm}$. This provides an estimation of the galaxy bias $b$. The galaxy bias from the cosmic variance is in excellent agreement with the bias estimated by two-point correlation function analysis in ALHAMBRA. This holds for different redshift bins, for red and blue subsamples, and for several ...
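The count-in-cell estimator implied above can be sketched as follows. The subfield counts and the assumed dark-matter variance are illustrative, and the shot-noise correction shown is one standard convention, not necessarily the paper's exact estimator.

```python
import numpy as np

def relative_cosmic_variance(counts):
    """Intrinsic relative dispersion of counts-in-cells with the Poisson
    (shot-noise) term subtracted: sigma_v^2 = (var(N) - <N>) / <N>^2."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    var = counts.var()          # population variance over the subfields
    return np.sqrt((var - mean) / mean**2)

# Hypothetical galaxy counts in 4 cells (stand-ins for the 48 ALHAMBRA subfields)
sigma_v = relative_cosmic_variance([5, 15, 10, 10])
sigma_v_dm = 0.10               # assumed dark-matter cosmic variance from theory
bias = sigma_v / sigma_v_dm     # galaxy bias b = sigma_v / sigma_v,dm
```

Subtracting the mean removes the Poisson contribution, so what remains is the dispersion due to clustering alone.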
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, specific guidance for making sample size decisions is lacking. Our aim was to guide the design of multiplier-method population size estimation studies that use respondent-driven sampling surveys, so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from both M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing this against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey.
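The core arithmetic (N = M / P, a delta-method standard error inflated by a design effect, and a sample size for a target relative error) can be sketched as below; all numbers are hypothetical.

```python
import math

def multiplier_estimate(M, p_hat, n, deff=1.0):
    """Population size N = M / P with a delta-method standard error, inflating
    the binomial variance of p_hat by the RDS design effect."""
    N_hat = M / p_hat
    var_p = deff * p_hat * (1 - p_hat) / n
    se_N = M * math.sqrt(var_p) / p_hat**2   # delta method: |dN/dP| * se(P)
    return N_hat, se_N

def sample_size_for_relative_se(p_hat, deff, rel_se):
    """Survey size needed so that se(N)/N meets a target relative error."""
    return math.ceil(deff * (1 - p_hat) / (p_hat * rel_se**2))

# Hypothetical numbers: M = 5000 unique objects distributed, 25% of n = 400
# respondents report receiving one, assumed design effect of 2
N_hat, se_N = multiplier_estimate(M=5000, p_hat=0.25, n=400, deff=2.0)
n_req = sample_size_for_relative_se(p_hat=0.25, deff=2.0, rel_se=0.10)
```

Note how the required n grows as P shrinks, which is exactly why the abstract recommends designs that push P upward (longer reference periods, more objects distributed).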
Ott, Michela; Pozzi, Francesca; Tavella, Mauro
This paper illustrates how the issue of "creativity raising" is currently tackled by teachers in Italy and what is, in their view, the potential role of ICT to support creativity development. By referring to the results of a small-scale survey conducted among Italian teachers, and starting from the meaning and value they attribute to the concept of "creativity", the paper provides an overview of teachers' prevailing attitudes towards the issue and reports on the kinds of actions they usually carry out within their own classes.
Stanton-Bandiero, M P
1998-01-01
This study is an extension of a qualitative study involving military nurses in World War II, Korea, Vietnam and Operation Desert Storm. Common themes and shared meanings identified in the previous qualitative study were investigated using a broad sample of military nurses who had served at various times and in different branches of the service. The present investigation used a survey to gather data, and the results tended to validate the earlier finding that the experiences of military nurses in times of war transcend many factors, including time and branch of service.
The transition to early fatherhood: National estimates based on multiple surveys
Directory of Open Access Journals (Sweden)
H. Elizabeth Peters
2008-04-01
This study provides systematic information about the prevalence of early male fertility and the relationship between family background characteristics and early parenthood across three widely used data sources: the 1979 and 1997 National Longitudinal Surveys of Youth and the 2002 National Survey of Family Growth. We provide descriptive statistics on early fertility by age, sex, race, cohort, and data set. Because each data set includes birth cohorts with varying early fertility rates, prevalence estimates for early male fertility are relatively similar across data sets. Associations between background characteristics and early fertility in regression models are less consistent across data sets. We discuss the implications of these findings for scholars doing research on early male fertility.
Kumar, N Daniel
2008-01-01
Machine learning techniques are utilised in several areas of astrophysical research today. This dissertation addresses the application of ML techniques to two classes of problems in astrophysics, namely, the analysis of individual astronomical phenomena over time and the automated, simultaneous analysis of thousands of objects in large optical sky surveys. Specifically investigated are (1) techniques to approximate the precise orbits of the satellites of Jupiter and Saturn given Earth-based observations as well as (2) techniques to quickly estimate the distances of quasars observed in the Sloan Digital Sky Survey. Learning methods considered include genetic algorithms, particle swarm optimisation, artificial neural networks, and radial basis function networks. The first part of this dissertation demonstrates that GAs and PSOs can both be efficiently used to model functions that are highly non-linear in several dimensions. It is subsequently demonstrated in the second part that ANNs and RBFNs can be used as ef...
Energy Technology Data Exchange (ETDEWEB)
Hwang, H.-L.; Rollow, J.
2000-05-01
The 1995 American Travel Survey (ATS) collected information from approximately 80,000 U.S. households about their long distance travel (one-way trips of 100 miles or more) during the year of 1995. It is the most comprehensive survey of where, why, and how U.S. residents travel since 1977. ATS is a joint effort by the U.S. Department of Transportation (DOT) Bureau of Transportation Statistics (BTS) and the U.S. Department of Commerce Bureau of Census (Census); BTS provided the funding and supervision of the project, and Census selected the samples, conducted interviews, and processed the data. This report documents the technical support for the ATS provided by the Center for Transportation Analysis (CTA) in Oak Ridge National Laboratory (ORNL), which included the estimation of trip distances as well as data quality editing and checking of variables required for the distance calculations.
Montiel-Company, José María; Subirats-Roig, Cristian; Flores-Martí, Pau; Bellot-Arcís, Carlos; Almerich-Silla, José Manuel
2016-11-01
The aim of this study was to examine the validity and reliability of the Maslach Burnout Inventory-Human Services Survey (MBI-HSS) as a tool for assessing the prevalence and level of burnout in dental students in Spanish universities. The survey was adapted from English to Spanish. A sample of 533 dental students from 15 Spanish universities and a control group of 188 medical students self-administered the survey online, using the Google Drive service. The test-retest reliability or reproducibility showed an Intraclass Correlation Coefficient of 0.95. The internal consistency of the survey was 0.922. Testing the construct validity showed two components with an eigenvalue greater than 1.5, which explained 51.2% of the total variance. Factor I (36.6% of the variance) comprised the items that estimated emotional exhaustion and depersonalization. Factor II (14.6% of the variance) contained the items that estimated personal accomplishment. The cut-off point for the existence of burnout achieved a sensitivity of 92.2%, a specificity of 92.1%, and an area under the curve of 0.96. Comparison of the total dental students sample and the control group of medical students showed significantly higher burnout levels for the dental students (50.3% vs. 40.4%). In this study, the MBI-HSS was found to be viable, valid, and reliable for measuring burnout in dental students. Since the study also found that the dental students suffered from high levels of this syndrome, these results suggest the need for preventive burnout control programs.
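Internal consistency of the kind reported above (0.922) is commonly computed as Cronbach's alpha. A sketch with hypothetical item scores follows; the formula is standard, but this is not the authors' code.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: respondents x items.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 4 respondents x 3 items; perfectly consistent items give alpha = 1
consistent = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
alpha = cronbach_alpha(consistent)
```

When items measure the same construct, the total-score variance dominates the sum of item variances and alpha approaches 1; uncorrelated items drive it toward 0.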
Directory of Open Access Journals (Sweden)
Flávio Chaimowicz
The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia surveys, ascertained their risk of bias, and present the best estimate of the occurrence of dementia in Brazil. We carried out an electronic search of PubMed, Latin-American databases, and a Brazilian thesis database for surveys focusing on dementia prevalence in Brazil. The systematic review was registered at PROSPERO (CRD42014008815). Among the 35 studies found, 15 analyzed population-based random samples. However, most of them utilized inadequate diagnostic criteria. Six studies without these limitations were further analyzed to assess the risk of selection, attrition, outcome and population bias, as well as several statistical issues. All the studies presented moderate or high risk of bias in at least two domains due to the following features: high non-response, inaccurate cut-offs, and doubtful accuracy of the examiners. Two studies had limited external validity due to high rates of illiteracy or low income. The three studies with adequate generalizability and the lowest risk of bias presented a prevalence of dementia between 7.1% and 8.3% among subjects aged 65 years and older. However, after adjustment for the accuracy of screening, the best available evidence points towards a figure between 15.2% and 16.3%. The risk of bias may strongly limit the generalizability of dementia prevalence estimates in developing countries. Extrapolations that have already been made for Brazil and Latin America were based on a prevalence that should have been adjusted for screening accuracy or not used at all due to severe bias. Similar evaluations regarding other developing countries are needed in order to verify the scope of these limitations.
Directory of Open Access Journals (Sweden)
Padoin Cintia V
2009-02-01
Objective: Children whose parents have psychiatric disorders experience an increased risk of developing psychiatric disorders, and have higher rates of developmental problems and mortality. Assessing the size of this population is important for planning preventive strategies which target these children. Methods: National survey data (CCHS 1.2) was used to estimate the number of children exposed to parental psychiatric disorders. Disorders were diagnosed using the World Mental Health Composite International Diagnostic Interview (WMH-CIDI) (12-month prevalence). Data on the number of children below 12 years of age in the home, and the relationship of the respondents with the children, was used to estimate exposure. Parent-child relations were identified, as was single parenthood. Using a design-based analysis, the number of children exposed to parental psychiatric disorders was calculated. Results: Almost 570,000 children under 12 live in households where the survey respondent met criteria for one or more mood, anxiety or substance use disorders in the previous 12 months, corresponding to 12.1% of Canadian children under the age of 12. Almost 3/4 of these children have parents that report receiving no mental health care in the 12 months preceding the survey. For 17% of all Canadian children under age 12, the individual experiencing a psychiatric disorder is the only parent in the household. Conclusion: The high number of children exposed causes major concern and has important implications. Although these children will not necessarily experience adversities, they possess an elevated risk of accidents, mortality, and of developing psychiatric disorders. We expect these estimates will promote further research and stimulate discussion at both health policy and planning tables.
Uncertainties estimation in surveying measurands: application to lengths, perimeters and areas
Covián, E.; Puente, V.; Casero, M.
2017-10-01
The present paper develops a series of methods for the estimation of uncertainty when measuring certain measurands of interest in surveying practice, such as point elevations at a given planimetric position within a triangle mesh, 2D and 3D lengths (including perimeters of enclosures), 2D areas (horizontal surfaces) and 3D areas (natural surfaces). The basis for the proposed methodology is the law of propagation of variance–covariance, which, applied to the corresponding model for each measurand, allows calculating the resulting uncertainty from known measurement errors. The methods are tested first in a small example, with a limited number of measurement points, and then in two real-life measurements. In addition, the proposed methods have been incorporated to commercial software used in the field of surveying engineering and focused on the creation of digital terrain models. The aim of this evolution is, firstly, to comply with the guidelines of the BIPM (Bureau International des Poids et Mesures), as the international reference agency in the field of metrology, in relation to the determination and expression of uncertainty; and secondly, to improve the quality of the measurement by indicating the uncertainty associated with a given level of confidence. The conceptual and mathematical developments for the uncertainty estimation in the aforementioned cases were conducted by researchers from the AssIST group at the University of Oviedo, eventually resulting in several different mathematical algorithms implemented in the form of MATLAB code. Based on these prototypes, technicians incorporated the referred functionality to commercial software, developed in C++. As a result of this collaboration, in early 2016 a new version of this commercial software was made available, which will be the first, as far as the authors are aware, that incorporates the possibility of estimating the uncertainty for a given level of confidence when computing the aforementioned surveying
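The law of propagation of variance–covariance for the simplest measurand, a 2D length, can be sketched as follows; the coordinates and the 1 cm coordinate uncertainties are assumed for illustration.

```python
import numpy as np

def length_2d_with_uncertainty(p1, p2, cov):
    """Length of a 2D segment and its standard uncertainty via the law of
    propagation of variance: var(L) = J Sigma J^T, J = dL/d(x1, y1, x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    L = np.hypot(x2 - x1, y2 - y1)
    # Jacobian of L with respect to the four coordinates
    J = np.array([-(x2 - x1) / L, -(y2 - y1) / L, (x2 - x1) / L, (y2 - y1) / L])
    var_L = J @ cov @ J
    return L, np.sqrt(var_L)

# Hypothetical surveyed points with independent 1 cm coordinate uncertainties
sigma = 0.01                              # metres
cov = np.diag([sigma**2] * 4)             # uncorrelated x1, y1, x2, y2
L, u_L = length_2d_with_uncertainty((0.0, 0.0), (3.0, 4.0), cov)
```

Perimeters and areas follow the same pattern with larger Jacobians; correlations between points (a non-diagonal covariance matrix) feed straight through the same formula.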
Abrams, S M; Harpster, H W; Wangsness, P J; Shenk, J S; Keck, E; Rosenberger, J L
1987-06-01
Sixty test forages (alfalfa, timothy, bromegrass, and orchardgrass mixtures), of differing cuttings and maturities, were harvested as hay in each of 2 yr (30/yr) from three locations. Each of the 60 hays was chopped and fed to four growing sheep to determine voluntary intake. The duration of the trial was 2 yr with five experimental periods per year. In each period, immediately prior to feeding the test forages, intake of the same standard alfalfa hay (standard forage) was measured for every sheep. Use of intake of the standard forage as a covariate reduced mean square error by 38%. Regression of least squares means of intake of the test forages on chemical composition uniformly yielded higher coefficients of determination when means were generated from an analysis of variance that included intake of the standard forage as a covariate. This procedure can be used to increase the accuracy of estimates of mean voluntary intake or to reduce the number of animals needed to attain the same accuracy that would be achieved without use of the covariate.
Directory of Open Access Journals (Sweden)
Göran Ståhl
2016-02-01
This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where models play a core role: model-assisted, model-based, and hybrid estimation. The first two are well known, whereas the third has only recently been introduced in forest surveys. Hybrid inference mixes design-based and model-based inference, since it relies on a probability sample of auxiliary data and a model predicting the target variable from the auxiliary data. We review studies on large-area forest surveys based on model-assisted, model-based, and hybrid estimation, and discuss advantages and disadvantages of the approaches. We conclude that no general recommendations can be made about whether model-assisted, model-based, or hybrid estimation should be preferred. The choice depends on the objective of the survey and the possibilities to acquire appropriate field and remotely sensed data. We also conclude that modelling approaches can only be successfully applied for estimating target variables such as growing stock volume or biomass, which are adequately related to commonly available remotely sensed data, and thus purely field-based surveys remain important for several important forest parameters. Keywords: Design-based inference, Model-assisted estimation, Model-based inference, Hybrid inference, National forest inventory, Remote sensing, Sampling
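The model-assisted idea can be sketched with the classic difference estimator: predict the target variable for every population unit from auxiliary data, then correct with the sample mean of the residuals. The data and the linear working model here are illustrative assumptions.

```python
import numpy as np

def model_assisted_mean(y_sample, yhat_sample, yhat_population):
    """Model-assisted (difference) estimator: population mean of the model
    predictions plus the sample mean of the residuals, which corrects model bias."""
    return yhat_population.mean() + (y_sample - yhat_sample).mean()

# Hypothetical auxiliary variable (e.g. LiDAR canopy height) known for all N units
x_pop = np.array([5.0, 7.0, 9.0, 11.0, 13.0, 15.0])
y_pop = 2.0 * x_pop + 3.0              # synthetic biomass, exactly linear here
sample_idx = np.array([0, 2, 4])       # a probability sample of field plots

# Fit a working model on the sample only, predict for the whole population
coef = np.polyfit(x_pop[sample_idx], y_pop[sample_idx], 1)
yhat_pop = np.polyval(coef, x_pop)
est = model_assisted_mean(y_pop[sample_idx], yhat_pop[sample_idx], yhat_pop)
```

Because the residual correction is design-based, the estimator stays approximately unbiased even when the working model is wrong; a good model simply shrinks the variance.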
Estimating Power Outage Cost based on a Survey for Industrial Customers
Yoshida, Yoshikuni; Matsuhashi, Ryuji
A survey was conducted on power outage cost for industrial customers. 5139 factories, which are designated energy management factories in Japan, reported their power consumption and the loss of production value that a one-hour power outage on a summer weekday would cause. The median unit cost of power outage across all sectors is estimated at 672 yen/kWh. The sector of services for amusement and hobbies and the sector of manufacture of information and communication electronics equipment have relatively higher unit costs of power outage. Direct damage cost from power outage in all sectors reaches 77 billion yen. Then, utilizing input-output analysis, we estimated the indirect damage cost caused by the knock-on effects of production halts. Indirect damage cost in all sectors reaches 91 billion yen. The sector of wholesale and retail trade has the largest direct damage cost. The sector of manufacture of transportation equipment has the largest indirect damage cost.
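The indirect-damage step via input-output analysis can be sketched with a toy two-sector Leontief model; the coefficient matrix and the demand loss are hypothetical, not the survey's figures.

```python
import numpy as np

# Hypothetical 2-sector technical-coefficient matrix A and final-demand loss f:
# an outage removes f of final output, and the total production required to
# supply f is x = (I - A)^{-1} f, so the ripple (indirect) loss is x - f.
A = np.array([[0.2, 0.1],
              [0.3, 0.4]])
f = np.array([100.0, 50.0])      # direct production loss by sector (e.g. billion yen)

x = np.linalg.solve(np.eye(2) - A, f)   # total output needed to meet f
indirect = x.sum() - f.sum()            # damage beyond the direct production halt
```

The Leontief inverse captures how a halt in one sector propagates through its suppliers, which is how an indirect loss (91 billion yen above) can exceed the direct one.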
Color-Redshift Relations and Photometric Redshift Estimations of Quasars in Large Sky Surveys
Institute of Scientific and Technical Information of China (English)
Xue-Bing Wu; Wei Zhang; Xu Zhou
2004-01-01
With a recently constructed composite quasar spectrum and the χ2 minimization technique, we describe a general method for estimating the photometric redshifts of a large sample of quasars by deriving theoretical color-redshift relations and comparing the theoretical colors with the observed ones. We estimated the photometric redshifts from the 5-band SDSS photometric data of 18678 quasars in the first major data release of SDSS and compared them with their spectroscopic redshifts. The difference is less than 0.1 for 47% of the quasars and less than 0.2 for 68%. Based on the calculation of the theoretical color-color diagrams of stars, galaxies and quasars both on the SDSS system and on the BATC system, we expect that we would be able to select candidates of high redshift quasars more efficiently with the latter than with the former, provided the BATC survey can detect objects with magnitudes fainter than 21.
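The χ2 minimization over a color-redshift relation can be sketched as below; the linear color-redshift relations are purely illustrative stand-ins for the template colors derived from the composite quasar spectrum.

```python
import numpy as np

def photoz_chi2(colors_obs, sigma_obs, z_grid, color_model):
    """Pick the redshift whose template colors minimise
    chi^2(z) = sum_i ((c_obs_i - c_model_i(z)) / sigma_i)^2."""
    chi2 = np.array([(((colors_obs - color_model(z)) / sigma_obs) ** 2).sum()
                     for z in z_grid])
    return z_grid[chi2.argmin()]

# Hypothetical monotonic color-redshift relations for 4 colors (u-g, g-r, r-i, i-z)
def color_model(z):
    return np.array([0.5 + 0.8 * z, 0.2 + 0.5 * z, 0.1 + 0.3 * z, 0.05 + 0.2 * z])

z_grid = np.arange(0.0, 5.0, 0.01)
z_true = 1.50
z_phot = photoz_chi2(color_model(z_true), sigma_obs=0.05 * np.ones(4),
                     z_grid=z_grid, color_model=color_model)
```

Real relations are non-monotonic because emission lines move through the filters, which is what produces the catastrophic-outlier tail beyond the 0.1-0.2 accuracies quoted above.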
Color-redshift Relations and Photometric Redshift Estimations of Quasars in Large Sky Surveys
Wu, X B; Zhou, X; Wu, Xue-Bing; Zhang, Wei; Zhou, Xu
2004-01-01
With a recently constructed composite quasar spectrum and the \chi^2 minimization technique, we demonstrated a general method to estimate the photometric redshifts of a large sample of quasars by deriving the theoretical color-redshift relations and comparing the theoretical colors with the observed ones. We estimated the photometric redshifts from the 5-band SDSS photometric data of 18678 quasars in the first major data release of SDSS and compared them with the spectroscopic redshifts. The redshift difference is smaller than 0.1 for 47% of quasars and 0.2 for 68% of them. Based on the calculation of the theoretical color-color diagrams of stars, galaxies and quasars in both the SDSS and BATC photometric systems, we expected that with the BATC system of 15 intermediate filters we would be able to select candidates of high redshift quasars more efficiently than in the SDSS, provided the BATC survey could detect objects with magnitudes fainter than 21.
Directory of Open Access Journals (Sweden)
Danijel Nestić
2010-04-01
The aim of this paper is to estimate the size of, changes in, and main factors contributing to gender-based wage differentials in Croatia. It utilizes microdata from the Labor Force Surveys of 1998 and 2008 and applies both OLS and quantile regression techniques to assess the gender wage gap across the wage distribution. The average unadjusted gender wage gap is found to be relatively low and declining. This paper argues that employed women in Croatia possess higher-quality labor market characteristics than men, especially in terms of education, but receive much lower rewards for these characteristics. The Machado-Mata decomposition technique is used to estimate the gender wage gap as the sole effect of differing rewards. The results suggest that due to differing rewards the gap exceeds 20 percent on average - twice the size of the unadjusted gap - and that it increased somewhat between 1998 and 2008. The gap is found to be the highest at the lower-to-middle part of the wage distribution.
Abraham, Katharine G; Presser, Stanley; Helms, Sara
2009-01-01
The authors argue that both the large variability in survey estimates of volunteering and the fact that survey estimates do not show the secular decline common to other social capital measures are caused by the greater propensity of those who do volunteer work to respond to surveys. Analyses of the American Time Use Survey (ATUS)--the sample for which is drawn from the Current Population Survey (CPS)--together with the CPS volunteering supplement show that CPS respondents who become ATUS respondents report much more volunteering in the CPS than those who become ATUS nonrespondents. This difference is replicated within subgroups. Consequently, conventional adjustments for nonresponse cannot correct the bias. Although nonresponse leads to estimates of volunteer activity that are too high, it generally does not affect inferences about the characteristics of volunteers.
Directory of Open Access Journals (Sweden)
George W Williams
2014-10-01
The current practice of lowering mean arterial pressure (MAP) during endoscopic sinus surgery (ESS) is common, but unproven with regard to peer-reviewed literature. The controlled hypotension induced is aimed at improving the surgical field and lowering blood loss. Lower mean arterial pressures, especially during prolonged surgeries, may result in end-organ hypoperfusion. The authors reviewed all patients who underwent outpatient endoscopic sinus surgery for the diagnosis of chronic sinusitis from January 1, 2012 to December 31, 2012 at Memorial Hermann Hospital - Texas Medical Centre. We individually reviewed the case sheets of every patient and documented blood loss as recorded on the anaesthesia record or in the surgical procedure note, among other variables. A total of 326 patients were included in this study. The median estimated blood loss (EBL) was found to be 50 ml. The multivariate regression analysis between the three MAP groups showed that EBL was higher in the lower MAP groups: the average EBL in the two lower MAP groups (MAP below 75, including the MAP 65-70 group) was 42% higher than in the MAP >75 group when other variables were fixed. Hence we found a trend toward higher blood loss with lower MAP. The authors conclude that lower MAP does not result in lower EBL in endoscopic sinus surgery. Furthermore, increases in BMI and in crystalloid administered during anaesthetic management of these cases correlate with increased estimated blood loss.
O'Hagan, Anthony; Stevenson, Matt; Madan, Jason
2007-10-01
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
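The variance-components idea behind the ANOVA-based method can be sketched as follows. The two runs of two patients are deliberately tiny hypothetical data, and the formulas are the standard one-way ANOVA moment estimators rather than the authors' exact derivation.

```python
import numpy as np

def variance_components(runs):
    """One-way ANOVA decomposition for R model runs of n patients each:
    within-run (patient-level) variance and between-run (parameter) variance."""
    runs = np.asarray(runs, dtype=float)
    R, n = runs.shape
    var_within = runs.var(axis=1, ddof=1).mean()
    # The variance of run means contains sigma_b^2 + sigma_w^2 / n;
    # subtract the Monte Carlo noise part to isolate sigma_b^2.
    var_between = runs.mean(axis=1).var(ddof=1) - var_within / n
    return var_within, var_between

# Hypothetical per-patient costs: two runs of two simulated patients
runs = [[1.0, 3.0], [5.0, 7.0]]
sigma_w2, sigma_b2 = variance_components(runs)

# Variance of the overall mean for R runs of n patients each
R, n = 2, 2
var_mean = sigma_b2 / R + sigma_w2 / (R * n)
```

Splitting the variance this way shows why adding patients per run only shrinks the second term: once it is small relative to sigma_b^2/R, extra runs, not extra patients, are what reduce the PSA error, which is the basis of the optimal sample-size formulae.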
Bertrand-Krajewski, J L
2004-01-01
In order to replace traditional sampling and analysis techniques, turbidimeters can be used to estimate TSS concentration in sewers, by means of sensor- and site-specific empirical equations established by linear regression of on-site turbidity values T against TSS concentrations C measured in corresponding samples. As the ordinary least-squares method is not able to account for measurement uncertainties in both the T and C variables, an appropriate regression method is used to solve this difficulty and to evaluate correctly the uncertainty in TSS concentrations estimated from measured turbidity. The regression method is described, including detailed calculations of the variances and covariance in the regression parameters. An example of application is given for a calibrated turbidimeter used in a combined sewer system, with data collected during three dry weather days. In order to show how the established regression could be used, an independent 24-hour-long dry weather turbidity data series recorded at 2 min time intervals is used, transformed into estimated TSS concentrations, and compared to TSS concentrations measured in samples. The comparison appears satisfactory and suggests that turbidity measurements could replace traditional samples. Further developments, including wet weather periods and other types of sensors, are suggested.
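A common errors-in-variables alternative to ordinary least squares is Deming regression, sketched below with hypothetical turbidity/TSS pairs; the paper's own regression method may differ in detail.

```python
import numpy as np

def deming_regression(x, y, delta=1.0):
    """Errors-in-variables straight-line fit, with delta = var(error in y) /
    var(error in x). Unlike OLS, it accounts for uncertainty in both variables."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = ((x - x.mean()) ** 2).sum()
    syy = ((y - y.mean()) ** 2).sum()
    sxy = ((x - x.mean()) * (y - y.mean())).sum()
    slope = (syy - delta * sxx +
             np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Hypothetical paired turbidity (NTU) and TSS (mg/L) calibration measurements
turbidity = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
tss = 2.0 * turbidity + 1.0          # exact line, for the sketch
slope, intercept = deming_regression(turbidity, tss)
```

With noisy x values, OLS slopes are attenuated toward zero; the delta parameter lets the fit reflect the actual ratio of the two measurement uncertainties.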
Directory of Open Access Journals (Sweden)
Zheng Hui
2011-04-01
Background: As many respiratory viruses are responsible for influenza-like symptoms, accurate measures of the disease burden are not available and estimates are generally based on statistical methods. The objective of this study was to estimate absenteeism rates and hours lost due to seasonal influenza, and to compare these estimates with estimates of absenteeism attributable to the two H1N1 pandemic waves that occurred in 2009. Methods: Key absenteeism variables were extracted from Statistics Canada's monthly Labour Force Survey (LFS). Absenteeism and the proportion of hours lost due to own illness or disability were modelled as a function of trend, seasonality and proxy variables for influenza activity from 1998 to 2009. Results: Hours lost due to the H1N1/09 pandemic strain were elevated compared to seasonal influenza, accounting for a loss of 0.2% of potential hours worked annually. In comparison, an estimated 0.08% of hours worked annually were lost due to seasonal influenza illnesses. Absenteeism rates due to influenza were estimated at 12% per year for seasonal influenza over the 1997/98 to 2008/09 seasons, and 13% for the two H1N1/09 pandemic waves. Employees who took time off due to a seasonal influenza infection took an average of 14 hours off; for the pandemic strain, the average absence was 25 hours. Conclusions: This study confirms that absenteeism due to seasonal influenza has typically ranged from 5% to 20%, with higher rates associated with multiple circulating strains. Absenteeism rates for the 2009 pandemic were similar to those occurring for seasonal influenza, but employees took more time off due to the pandemic strain than was typical for seasonal influenza.
Cótchico, M. A.; Renee, L. K.; De Jongh, M. E.; Padron, E.; Hernandez Perez, P. A.; Perez, N. M.
2016-12-01
La Palma Island, the fifth largest (706 km²) and second highest (2,423 m a.s.l.) of the Canary Islands, is located at the northwestern end of the archipelago. Subaerial volcanic activity on La Palma started 2.0 Ma ago and during the last 123 ka has taken place exclusively in the southern part of the island, where Cumbre Vieja volcano, the most active basaltic volcano in the Canaries, has been constructed. The major volcano-structural and geomorphological feature of Cumbre Vieja is a north-south rift zone 20 km long, with vents also at the northwest and northeast, reaching up to 1,950 m in elevation and covering an area of 220 km². Nowadays there are no visible gas emissions from fumaroles or hot springs at Cumbre Vieja; diffuse CO2 degassing monitoring is therefore an important geochemical tool for its volcanic surveillance. Recent studies have shown that enhanced endogenous contributions of deep-seated CO2 might have been responsible for higher diffuse CO2 efflux values (Padrón et al., 2015). We report here the latest results of the diffuse CO2 emission survey at Cumbre Vieja volcano. Surface CO2 efflux measurements were taken using the accumulation chamber method over the period 1997-2016 to evaluate their spatial distribution across this 220 km² volcano and the diffuse CO2 emission rate from Cumbre Vieja. Surface CO2 efflux values ranged from non-detectable up to 94 g m⁻² d⁻¹ in the last survey. Spatial distribution maps were constructed following the sequential Gaussian simulation (sGs) procedure. The spatial distribution of diffuse CO2 emission values did not seem to be controlled by the main structural features of the volcano, since the highest values were measured in the southern part. The diffuse CO2 emission for the 2016 survey has been estimated at about 739 ± 30 t d⁻¹. The 2016 emission rate is slightly higher than the estimated average for Cumbre Vieja volcano (493 t d⁻¹), but within the observed background range for this volcanic system over the
Crowe, S; Seal, A; Grijalva-Eternod, C.; Kerac, M
2014-01-01
Tackling childhood malnutrition is a global health priority. A key indicator is the estimated prevalence of malnutrition, measured by nutrition surveys. Most aspects of survey design are standardised, but data ‘cleaning criteria’ are not. These aim to exclude extreme values which may represent measurement or data-entry errors. The effect of different cleaning criteria on malnutrition prevalence estimates was unknown. We applied five commonly used data cleaning criteria (WHO 2006; EPI-Info; WH...
U.S. Geological Survey, Department of the Interior — This tabular data set represents the mean annual natural groundwater recharge, in millimeters, compiled for every MRB_E2RF1 catchment of selected Major River Basins...
Zipkin, Elise F.; Leirness, Jeffery B.; Kinlan, Brian P.; O'Connell, Allan F.; Silverman, Emily D.
2014-01-01
Determining appropriate statistical distributions for modeling animal count data is important for accurate estimation of abundance, distribution, and trends. In the case of sea ducks along the U.S. Atlantic coast, managers want to estimate local and regional abundance to detect and track population declines, to define areas of high and low use, and to predict the impact of future habitat change on populations. In this paper, we used a modified marked point process to model survey data that recorded flock sizes of Common eiders, Long-tailed ducks, and Black, Surf, and White-winged scoters. The data come from an experimental aerial survey, conducted by the United States Fish & Wildlife Service (USFWS) Division of Migratory Bird Management, during which east-west transects were flown along the Atlantic Coast from Maine to Florida during the winters of 2009–2011. To model the number of flocks per transect (the points), we compared the fit of four statistical distributions (zero-inflated Poisson, zero-inflated geometric, zero-inflated negative binomial and negative binomial) to data on the number of species-specific sea duck flocks that were recorded for each transect flown. To model the flock sizes (the marks), we compared the fit of flock size data for each species to seven statistical distributions: positive Poisson, positive negative binomial, positive geometric, logarithmic, discretized lognormal, zeta and Yule–Simon. Akaike’s Information Criterion and Vuong’s closeness tests indicated that the negative binomial and discretized lognormal were the best distributions for all species for the points and marks, respectively. These findings have important implications for estimating sea duck abundances as the discretized lognormal is a more skewed distribution than the Poisson and negative binomial, which are frequently used to model avian counts; the lognormal is also less heavy-tailed than the power law distributions (e.g., zeta and Yule–Simon), which are
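The distribution-comparison step can be sketched with two of the candidate distributions; the synthetic counts below are illustrative, and the zero-inflated and discretized variants used in the paper are omitted for brevity:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
counts = stats.nbinom.rvs(2, 0.2, size=500, random_state=rng)  # overdispersed "flocks"

# Poisson (1 parameter): the MLE of the rate is simply the sample mean.
lam = counts.mean()
aic_pois = 2 * 1 - 2 * stats.poisson.logpmf(counts, lam).sum()

# Negative binomial (2 parameters): maximize the log-likelihood numerically,
# starting from method-of-moments values.
def nb_nll(params):
    n, p = params
    if n <= 0 or not 0 < p < 1:
        return np.inf
    return -stats.nbinom.logpmf(counts, n, p).sum()

m, v = counts.mean(), counts.var(ddof=1)
p0 = m / v
res = optimize.minimize(nb_nll, x0=[m * p0 / (1 - p0), p0], method="Nelder-Mead")
aic_nb = 2 * 2 + 2 * res.fun
```

For overdispersed counts (variance well above the mean, as is typical of flocking birds) the negative binomial wins on AIC despite its extra parameter, which is the pattern the study reports.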
Klughammer, Christof; Schreiber, Ulrich
2015-01-01
Theoretical prediction of effective mean PAR in optically dense samples is complicated by various optical effects, including light scattering and reflections. Direct information on the mean rate of photon absorption by PS II is provided by the kinetics of the fluorescence rise induced upon onset of strong actinic illumination (the O-I1 rise). A recently introduced kinetic multi-color PAM fluorometer was applied to study the relationship between initial slope and cell density in the relatively simple model system of Chlorella suspensions. Use was made of a curve-fitting routine originally developed for assessment of the wavelength-dependent absorption cross-section of PS II, σ II(λ), in dilute suspensions. The model underlying analysis of the O-I1 rise kinetics is outlined, and data on the relationship between fitted values of σ II(λ) and PAR in dilute samples are presented. With increasing cell density, the lowering of the apparent cross-section, σ II′(λ), with respect to σ II(λ) relates to a decrease of the effective mean PAR′(λ) relative to the incident PAR(λ). When ML and AL are applied in the same direction, the decline of σ II′(λ)/σ II(λ) with increasing optical density is less steep than that of the theoretically predicted PAR′(λ)/PAR(λ); it approaches a value of 0.5 when the same colors of ML and AL are used, in agreement with theory. These observations open the way for estimating mean PAR in optically dense samples via measurements of σ II′(λ)/σ II(λ).
Roberge, Jason; O'Rourke, Mary Kay; Meza-Montenegro, Maria Mercedes; Gutiérrez-Millán, Luis Enrique; Burgess, Jefferey L; Harris, Robin B
2012-04-01
The Binational Arsenic Exposure Survey (BAsES) was designed to evaluate probable arsenic exposures in selected areas of southern Arizona and northern Mexico, two regions with known elevated levels of arsenic in groundwater reserves. This paper describes the methodology of BAsES and the relationship between estimated arsenic intake from beverages and arsenic output in urine. Households from eight communities were selected for their varying groundwater arsenic concentrations in Arizona, USA and Sonora, Mexico. Adults responded to questionnaires and provided dietary information. A first morning urine void and water from all household drinking sources were collected. Associations between urinary arsenic concentration (total, organic, inorganic) and estimated level of arsenic consumed from water and other beverages were evaluated through crude associations and by random effects models. Median estimated total arsenic intake from beverages among participants from Arizona communities ranged from 1.7 to 14.1 µg/day compared to 0.6 to 3.4 µg/day among those from Mexico communities. In contrast, median urinary inorganic arsenic concentrations were greatest among participants from Hermosillo, Mexico (6.2 µg/L) whereas a high of 2.0 µg/L was found among participants from Ajo, Arizona. Estimated arsenic intake from drinking water was associated with urinary total arsenic concentration (p < 0.001), urinary inorganic arsenic concentration (p < 0.001), and urinary sum of species (p < 0.001). Urinary arsenic concentrations increased between 7% and 12% for each one percent increase in arsenic consumed from drinking water. Variability in arsenic intake from beverages and urinary arsenic output yielded counter intuitive results. Estimated intake of arsenic from all beverages was greatest among Arizonans yet participants in Mexico had higher urinary total and inorganic arsenic concentrations. Other contributors to urinary arsenic concentrations should be evaluated.
Dose estimation based on a behavior survey of residents around the JCO facility.
Fujimoto, K; Yonehara, H; Yamaguchi, Y; Endo, A
2001-09-01
NIRS staff interviewed residents of the evacuated zone around the JCO facility in Tokai-mura on 19 and 20 November 1999 to obtain the following parameters every 30 minutes from 10:35 A.M. on 30 September to 6:15 A.M. on 1 October: the distance from the precipitation tank, the type of house, positions in the house, and wall materials and their thickness, in order to estimate individual doses due to the accident. Ambient dose equivalents were obtained from monitoring data recorded during the accident. In addition, computer calculations were conducted to evaluate the conversion factor from ambient dose equivalent to effective dose equivalent, as well as the shielding effect of the house or factory, so as to estimate the effective dose equivalent to the residents. The estimated individual doses based on the behavior survey ranged from zero to 21 mSv. The individual doses were reported to the residents during a second visit to each house and factory at the end of January 2000.
Zhang, Yu; Seo, Dong-Jun
2017-03-01
This paper presents novel formulations of mean field bias (MFB) and local bias (LB) correction schemes that incorporate a conditional bias (CB) penalty. These schemes are based on the operational MFB and LB algorithms in the National Weather Service (NWS) Multisensor Precipitation Estimator (MPE). By incorporating the CB penalty in the cost function of the exponential smoothers, we derive augmented versions of the recursive estimators of MFB and LB. Two extended versions of the MFB algorithm are presented: one incorporating the spatial variation of gauge locations only (MFB-L), and a second integrating both gauge locations and the CB penalty (MFB-X). These two MFB schemes and the extended LB scheme (LB-X) are assessed relative to the original MFB and LB algorithms (referred to as MFB-O and LB-O, respectively) through a retrospective experiment over a radar domain in north-central Texas and through a synthetic experiment over the Mid-Atlantic region. The former experiment indicates that introducing the CB penalty to the MFB formulation leads to small but consistent improvements in bias and CB, while its impacts on hourly correlation and root mean square error (RMSE) are mixed. Incorporating the CB penalty in the LB formulation tends to improve the RMSE at high rainfall thresholds, but its impacts on bias are also mixed. The synthetic experiment suggests that the beneficial impacts are more conspicuous at low gauge density (9 per 58,000 km²) and tend to diminish at higher gauge density. The improvement at high rainfall intensity is partly an outcome of the conservativeness of the extended LB scheme, which arises in part from the more frequent presence of negative eigenvalues in the extended covariance matrix, leading to no, or smaller, incremental changes to the smoothed rainfall amounts.
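The baseline exponential smoother that MFB-O-style estimators build on can be sketched as below; the CB-penalty augmentation that is the paper's contribution is not reproduced here, and the numbers are toy values:

```python
def update_mfb(num, den, gauge_sum, radar_sum, alpha=0.95):
    """One step of a recursive mean-field-bias estimator: the bias is the
    ratio of exponentially smoothed gauge totals to smoothed collocated
    radar totals; alpha is the smoother's memory factor."""
    num = alpha * num + gauge_sum
    den = alpha * den + radar_sum
    bias = num / den if den > 0 else 1.0
    return num, den, bias

# If the radar persistently reads half the gauge value, the bias estimate
# converges to 2; multiplying the radar field by it removes the mean field bias.
num = den = 0.0
for _ in range(50):
    num, den, bias = update_mfb(num, den, gauge_sum=2.0, radar_sum=1.0)
```

The ratio-of-smoothed-sums form keeps the estimator stable when hourly gauge or radar totals are zero, which is why operational schemes smooth the numerator and denominator separately rather than smoothing the hourly ratios.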
Directory of Open Access Journals (Sweden)
C. Suresh Raju
2007-10-01
Estimation of precipitable water (PW) in the atmosphere from ground-based Global Positioning System (GPS) data essentially involves modeling the zenith hydrostatic delay (ZHD) in terms of surface pressure (Ps) and subtracting it from the corresponding zenith tropospheric delay (ZTD) to estimate the zenith wet (non-hydrostatic) delay (ZWD). This further involves establishing an appropriate model connecting PW and ZWD, which in its simplest case is assumed to be similar to that of the ZHD. But when temperature variations are large, accurate estimation of PW requires accounting for the variation of the proportionality constant connecting PW and ZWD. For this purpose a water-vapor-weighted mean temperature (Tm) has been defined in many investigations; it has to be modeled on a regional basis. For estimating PW over the Indian region from GPS data, a region-specific model for Tm in terms of surface temperature (Ts) is developed using radiosonde measurements from eight India Meteorological Department (IMD) stations spread over the subcontinent within a latitude range of 8.5°–32.6° N. Following a similar procedure, Tm-based models are also evolved for each of these stations, and the features of these site-specific models are compared with those of the region-specific model. The applicability of the region-specific and site-specific Tm-based models in retrieving PW from GPS data recorded at the IGS sites Bangalore and Hyderabad is tested by comparing the retrieved values of PW with those estimated from the altitude profile of water vapor measured using radiosonde. The values of ZWD estimated at 00:00 UTC and 12:00 UTC are used to test the validity of the models by estimating PW using the models and comparing it with that obtained from radiosonde data. The region-specific Tm-based model is found to be on par with, if not better than, a
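The retrieval chain described above can be sketched end to end. The Saastamoinen ZHD model, the refractivity constants, and the Bevis-style relation Tm = 70.2 + 0.72 Ts are standard placeholders; the paper's point is precisely that the last of these should be replaced by a region-specific fit for India:

```python
import math

def precipitable_water_mm(ztd_m, p_hpa, ts_k, lat_deg, h_km=0.0):
    """PW (mm) from a GPS zenith total delay (m), surface pressure (hPa)
    and surface temperature (K)."""
    # Saastamoinen zenith hydrostatic delay (m)
    zhd = 0.0022768 * p_hpa / (
        1 - 0.00266 * math.cos(2 * math.radians(lat_deg)) - 0.00028 * h_km)
    zwd = ztd_m - zhd                       # zenith wet delay (m)
    tm = 70.2 + 0.72 * ts_k                 # weighted mean temperature (K)
    k3, k2p, rho_w, r_v = 3.739e3, 0.221, 1000.0, 461.5   # SI refractivity consts
    pi_factor = 1e6 / (rho_w * r_v * (k3 / tm + k2p))     # ~0.15-0.17, dimensionless
    return pi_factor * zwd * 1000.0

# Typical tropical values give a plausible PW of roughly 30 mm.
pw = precipitable_water_mm(ztd_m=2.5, p_hpa=1013.0, ts_k=300.0, lat_deg=13.0)
```

Because pi_factor depends on Tm, any bias in the Tm(Ts) model maps directly into a proportional bias in PW, which is why large temperature variations motivate the regional recalibration.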
Janmohamed, Amynah; Doledec, David
2017-07-01
To compare administrative coverage data with results from household coverage surveys for vitamin A supplementation (VAS) and deworming campaigns conducted during 2010-2015 in 12 African countries. Paired t-tests examined differences between administrative and survey coverage for 52 VAS and 34 deworming dyads. Independent t-tests measured VAS and deworming coverage differences between data sources for door-to-door and fixed-site delivery strategies, and VAS coverage differences between the 6- to 11-month and 12- to 59-month age groups. For VAS, administrative coverage was higher than survey estimates in 47 of 52 (90%) campaign rounds, with a mean difference of 16.1% (95% CI: 9.5-22.7; P < 0.001). For deworming, administrative coverage exceeded survey estimates in 31 of 34 (91%) comparisons, with a mean difference of 29.8% (95% CI: 16.9-42.6; P < 0.001). Mean ± SD differences in coverage between administrative and survey data were 12.2% ± 22.5% for the door-to-door delivery strategy and 25.9% ± 24.7% for the fixed-site model (P = 0.06). For deworming, mean ± SD differences in coverage between data sources were 28.1% ± 43.5% and 33.1% ± 17.9% for door-to-door and fixed-site distribution, respectively (P = 0.64). VAS administrative coverage was higher than survey estimates in 37 of 49 (76%) comparisons for the 6- to 11-month age group and 45 of 48 (94%) comparisons for the 12- to 59-month age group. Reliance on health facility data alone for calculating VAS and deworming coverage may mask low coverage and prevent measures to improve programmes. Countries should periodically validate administrative coverage estimates with population-based methods. © 2017 John Wiley & Sons Ltd.
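The core comparison is a paired t-test across campaign rounds; the sketch below uses invented coverage figures, not the study's data:

```python
import numpy as np
from scipy import stats

# One pair per campaign round: administrative vs survey coverage (%),
# illustrative values only.
admin  = np.array([95., 88., 92., 97., 90., 85., 99., 93.])
survey = np.array([78., 75., 80., 82., 74., 70., 85., 76.])

t, p = stats.ttest_rel(admin, survey)    # paired t-test on the dyads
mean_diff = (admin - survey).mean()      # administrative inflation, in points
```

Pairing matters here: each round has its own campaign conditions, so differencing within rounds removes round-to-round variation and isolates the systematic gap between the two data sources.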
Lin, Feng; Chen, Xinguang
2010-02-01
In order to find better strategies for tobacco control, it is often critical to know the transitional probabilities among various stages of tobacco use. Traditionally, such probabilities are estimated by analyzing data from longitudinal surveys, which are often time-consuming and expensive to conduct. Since cross-sectional surveys are much easier to conduct, it would be far more practical and useful to estimate transitional probabilities from cross-sectional survey data if possible; however, no previous research has attempted to do this. In this paper, we propose a method to estimate transitional probabilities from cross-sectional survey data. The method is novel and is based on a discrete event system framework. In particular, we introduce state probabilities and transitional probabilities to conventional discrete event system models, and derive equations that can be used to estimate the transitional probabilities. We test the method using cross-sectional data from the National Survey on Drug Use and Health. The estimated transitional probabilities can be used to predict future smoking behavior for decision-making, planning and evaluation of tobacco control programs. The method also allows a sensitivity analysis that can be used to find the most effective means of tobacco control. Since far more cross-sectional than longitudinal survey data are in existence, the impact of this new method is expected to be significant.
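The underlying idea, that a stationary Markov chain links the state distributions seen at successive ages within one cross-sectional survey, can be illustrated with a least-squares recovery of the transition matrix. This is a generic sketch, not the paper's discrete-event-system formulation, and the three-state model and numbers are invented:

```python
import numpy as np

# Hypothetical smoking states: never / current / former smoker.
P_true = np.array([[0.90, 0.10, 0.00],
                   [0.00, 0.80, 0.20],
                   [0.00, 0.15, 0.85]])   # row-stochastic transition matrix

# Cross-sectional data: state proportions at successive ages (noise-free toy).
pis = [np.array([1.0, 0.0, 0.0])]
for _ in range(6):
    pis.append(pis[-1] @ P_true)
pis = np.array(pis)

# Under the assumption pi_{t+1} = pi_t @ P, stack the age-specific
# distributions and solve for P by least squares.
P_hat, *_ = np.linalg.lstsq(pis[:-1], pis[1:], rcond=None)
```

With real survey data the age distributions carry sampling noise, so a constrained fit (rows non-negative and summing to one) would replace the plain least-squares solve.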
Albano, Giuseppina; Giorno, Virginia; Román-Román, Patricia; Román-Román, Sergio; Torres-Ruiz, Francisco
2015-01-01
A modified Gompertz diffusion process is considered to model tumor dynamics. The infinitesimal mean of this process includes non-homogeneous terms describing the effect of therapy treatments able to modify the natural growth rate of the process. Specifically, therapies with an effect on cell growth and/or cell death are assumed to modify the birth and death parameters of the process. This paper proposes a methodology to estimate the time-dependent functions representing the effect of a therapy when one of the functions is known or can be previously estimated. This is the case of therapies that are jointly applied, when experimental data are available from either an untreated control group or from groups treated with single and combined therapies. Moreover, this procedure allows us to establish the nature (or, at least, the prevalent effect) of a single therapy in vivo. To accomplish this, we suggest a criterion based on the Kullback-Leibler divergence (or relative entropy). Some simulation studies are performed and an application to real data is presented.
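The Kullback-Leibler selection criterion is easy to illustrate in the Gaussian case, for which the divergence has a closed form; the paper applies it to distributions arising from the Gompertz diffusion process, so the normal densities and numbers here are only a stand-in:

```python
import math

def kl_normal(m0, s0, m1, s1):
    """Closed-form KL(N(m0, s0^2) || N(m1, s1^2))."""
    return math.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

observed    = (5.0, 1.2)    # treated-group tumor-size summary (illustrative)
candidate_a = (5.3, 1.3)    # e.g. prevalent effect on cell growth
candidate_b = (7.5, 1.1)    # e.g. prevalent effect on cell death

kl_a = kl_normal(*observed, *candidate_a)
kl_b = kl_normal(*observed, *candidate_b)
prevalent = "a" if kl_a < kl_b else "b"   # choose the closer candidate model
```

The criterion picks the candidate effect whose implied distribution diverges least from the observed one, which is how the relative-entropy rule identifies the prevalent effect of a therapy.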
Use of miniroutes and Breeding Bird Survey data to estimate abundance
Robbins, C.S.; Dowell, B.A.
1986-01-01
1. Information on relative abundance is easily obtained and adds greatly to the value of an atlas project. 2. The Breeding Bird Survey (BBS) provides annual counts (birds per 50 roadside stops) that can be used to: (1) map relative abundance by physiographic region within a state or province, (2) map relative abundance on a more local scale using results from individual routes, or (3) compute estimates of total state populations of a species. Where BBS coverage is too scanty to permit mapping, extra temporary routes may be established to provide additional information for the atlas; or, if continuing coverage is anticipated, additional permanent random routes can be assigned by the U.S. Fish and Wildlife Service. 3. Miniroutes of 15 or more stops can be established in individual atlas blocks to serve the dual purposes of providing efficient uniform coverage and providing information on relative abundance. Miniroutes can also be extracted from BBS routes to supplement special atlas coverage, or vice versa; but the data from the BBS will not be confined to individual atlas blocks. 4. Advantages of 15- or 20-stop Miniroutes over 25-stop Miniroutes are several: the ability to run two per morning and the lower variability among Miniroute results. Also, many 5-km atlas blocks do not have enough secondary roads to accommodate 25 stops at half-mile intervals. Disadvantages of 15-stop Miniroutes starting at sunrise are the smaller numbers of birds recorded, the missing of the very productive dawn-chorus period (Robbins 1981), and the missing of crepuscular species (rails, woodcock, owls, and goatsuckers). 5. Advantages of recording counts of individuals rather than checking only species presence at Miniroute stops are that: (1) relative abundance can be mapped rather than frequency only (a measure of frequency is already available in the number of blocks recording each species); (2) population change can be measured over a period of years when the next atlas is made; and (3
Höing, Andrea; Quinten, Marcel C; Indrawati, Yohana Maria; Cheyne, Susan M; Waltert, Matthias
2013-02-01
Estimating population densities of key species is crucial for many conservation programs. Density estimates provide baseline data and enable monitoring of population size. Several different survey methods are available, and the choice of method depends on the species and study aims. Few studies have compared the accuracy and efficiency of different survey methods for large mammals, particularly for primates. Here we compare estimates of density and abundance of Kloss' gibbons (Hylobates klossii) using two of the most common survey methods: line transect distance sampling and triangulation. Line transect surveys (survey effort: 155.5 km) produced a total of 101 auditory and visual encounters and a density estimate of 5.5 gibbon clusters (groups or subgroups of primate social units)/km². Triangulation conducted from 12 listening posts during the same period yielded a similar density estimate of 5.0 clusters/km². Coefficients of variation of the cluster density estimates were slightly higher for triangulation (0.24) than for line transects (0.17), implying somewhat lower precision for detecting changes in cluster density; depending on study aims and site conditions, the triangulation method may also be appropriate.
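A minimal sketch of the line-transect half of the comparison, assuming a half-normal detection function (the standard distance-sampling default); the synthetic distances are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)

def line_transect_density(perp_distances_km, effort_km):
    """Distance-sampling density estimate with a half-normal detection
    function g(y) = exp(-y^2 / (2 s^2)): the MLE of s^2 is mean(y^2),
    the effective strip half-width is mu = s * sqrt(pi / 2), and
    density = n / (2 * L * mu)."""
    y = np.asarray(perp_distances_km)
    s = np.sqrt(np.mean(y ** 2))
    mu = s * np.sqrt(np.pi / 2)
    return y.size / (2 * effort_km * mu)

# Synthetic survey loosely scaled to the study: ~100 encounters over 155.5 km.
distances = np.abs(rng.normal(0.0, 0.06, 101))        # perpendicular distances (km)
density = line_transect_density(distances, 155.5)     # clusters per km^2
```

The effective strip half-width mu replaces a fixed truncation distance: it is the width of the strip that, if detection inside it were perfect, would yield the same expected number of detections.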
Hicke, J. A.
2005-12-01
Downwelling surface solar radiation is an important factor driving plant productivity, and clouds and aerosols are major factors responsible for interannual variability in downwelling radiation. Global ecosystem models require spatially extensive data sets that vary interannually to capture effects that potentially drive changes in ecosystem function. Representative global solar radiation data sets include National Centers for Environmental Prediction (NCEP) reanalyses and Goddard Institute for Space Studies (GISS) calculations that included satellite observations of cloud properties. The CASA light-use efficiency model, which utilizes solar radiation and satellite-derived vegetation information, was run with the two solar radiation data sets to explore how differences affect estimated net primary production (NPP). Mean global NCEP solar radiation exceeded that from GISS by 16%, likely as a result of lower cloudiness within the NCEP reanalyses compared to satellite observations. Neither data set resulted in a significant trend in growing season radiation over the study period (1984-2000). Locally, relative differences were up to 40% in the mean and 10% in the trend of solar radiation and NPP, and varied in sign across the globe.
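Why a 16% difference in radiation propagates almost directly into NPP is clear from the structure of a light-use-efficiency model; the sketch below uses illustrative parameter values, not CASA's actual coefficients or environmental scalars:

```python
def lue_npp(solar_mj_m2, fpar, eps_max=0.389, t_scalar=0.8, w_scalar=0.9):
    """Light-use-efficiency NPP (g C m^-2): absorbed PAR times a maximum
    efficiency down-regulated by temperature and moisture scalars.
    PAR is approximated as half of incident solar radiation."""
    par = 0.5 * solar_mj_m2
    return par * fpar * eps_max * t_scalar * w_scalar

npp_giss = lue_npp(5000.0, 0.6)          # toy annual solar total (MJ m^-2)
npp_ncep = lue_npp(5000.0 * 1.16, 0.6)   # NCEP radiation ~16% higher
# NPP is linear in radiation, so the 16% forcing gap carries straight through.
```

In the real model the scalars themselves vary in space and time, so the local relative differences of up to 40% reported in the text need not match the radiation differences exactly.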
Hanna, Joseph; Cordery, Damien V; Steel, David G; Davis, Walter; Harrold, Timothy C
2017-04-20
Over the past decade there have been substantial changes in landline and mobile phone ownership, with a substantial increase in the proportion of mobile-only households. Estimates of daily smoking rates for the mobile phone only (MPO) population have been found to be substantially higher than for the rest of the population, and telephone surveys that use a dual sampling frame (landline and mobile phones) are now considered best practice. Smoking is seen as an undesirable behaviour, and measuring such behaviours via an interviewer may lead to lower estimates in telephone-based surveys than in self-administered approaches. This study aims to assess whether the higher daily smoking estimates observed for the mobile phone only population can be explained by administrative features of surveys, after accounting for differences between the phone ownership population groups. Data on New South Wales (NSW) residents aged 18 years or older from the NSW Population Health Survey (PHS), a telephone survey, and the National Drug Strategy Household Survey (NDSHS), a self-administered survey, were combined, with weights adjusted to match the 2013 population. Design-adjusted prevalence estimates and odds ratios were calculated using survey analysis procedures available in SAS 9.4. Both the PHS and NDSHS gave the same estimate for daily smoking (12%) and similar estimates for MPO users (20% and 18%, respectively). Pooled data showed that daily smoking was 19% for MPO users, compared to 10% for dual phone owners and 12% for landline-only users. Prevalence estimates for MPO users across both surveys were consistently higher than for other phone ownership groups, and these differences persisted even after adjustment for the mode of collection and demographic factors. Daily smoking rates were consistently higher for the mobile phone only population and this was not driven by the mode of survey collection. This supports
An emperor penguin population estimate: the first global, synoptic survey of a species from space.
Fretwell, Peter T; Larue, Michelle A; Morin, Paul; Kooyman, Gerald L; Wienecke, Barbara; Ratcliffe, Norman; Fox, Adrian J; Fleming, Andrew H; Porter, Claire; Trathan, Phil N
2012-01-01
Our aim was to estimate the population of emperor penguins (Aptenodytes forsteri) using a single synoptic survey. We examined the whole continental coastline of Antarctica using a combination of medium resolution and Very High Resolution (VHR) satellite imagery to identify emperor penguin colony locations. Where colonies were identified, VHR imagery was obtained in the 2009 breeding season. The remotely-sensed images were then analysed using a supervised classification method to separate penguins from snow, shadow and guano. Actual counts of penguins from eleven ground truthing sites were used to convert these classified areas into numbers of penguins using a robust regression algorithm. We found four new colonies and confirmed the location of three previously suspected sites, giving a total of 46 emperor penguin breeding colonies. We estimated the breeding population of emperor penguins at each colony during 2009 and provide a population estimate of ~238,000 breeding pairs (compared with the last previously published count of 135,000-175,000 pairs). Based on published values of the relationship between breeders and non-breeders, this translates to a total population of ~595,000 adult birds. There is a growing consensus in the literature that global and regional emperor penguin populations will be affected by changing climate, a driver thought to be critical to their future survival. However, a complete understanding is severely limited by the lack of detailed knowledge about much of their ecology, and importantly by a poor understanding of their total breeding population. To address the second of these issues, our work now provides a comprehensive estimate of the total breeding population that can be used in future population models and will provide a baseline for long-term research.
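The final step, converting classified penguin-pixel area into bird counts via robust regression against the ground-truthing sites, can be sketched with a Huber-loss fit; the area/count pairs below are invented, and the paper's exact robust algorithm is not specified here:

```python
import numpy as np
from scipy.optimize import least_squares

# Ground-truthing sites: classified penguin-pixel area (m^2) vs actual counts.
area   = np.array([120., 300., 450., 800., 1500., 2200., 60., 980., 40., 3100., 500.])
counts = np.array([260., 640., 900., 1700., 3100., 4700., 130., 2050., 90., 6500., 5000.])
# The last site is a deliberate outlier (e.g. guano misclassified as penguins).

def residuals(beta):
    return beta[0] + beta[1] * area - counts

# A Huber loss down-weights the outlier that would distort ordinary least squares.
fit = least_squares(residuals, x0=[0.0, 2.0], loss="huber", f_scale=100.0)
b0, b1 = fit.x
colony_estimate = b0 + b1 * 40000.0   # apply the calibration to one colony's area
```

Robustness matters precisely because a few ground-truth sites with mixed guano or shadow pixels could otherwise bias the area-to-count conversion for every colony on the continent.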
Mean-annual erosion potential for Colorado and New Mexico
U.S. Geological Survey, Department of the Interior — The U.S. Geological Survey Data Series provides raster data representing an estimate of the mean-annual erosion potential of a 30-meter raster cell in Colorado and...
Pearson, E.; Smith, M. W.; Klaar, M. J.; Brown, L. E.
2017-09-01
High resolution topographic surveys such as those provided by Structure-from-Motion (SfM) contain a wealth of information that is not always exploited in the generation of Digital Elevation Models (DEMs). In particular, several authors have related sub-metre scale topographic variability (or 'surface roughness') to sediment grain size by deriving empirical relationships between the two. In fluvial applications, such relationships permit rapid analysis of the spatial distribution of grain size over entire river reaches, providing improved data to drive three-dimensional hydraulic models, allowing rapid geomorphic monitoring of sub-reach river restoration projects, and enabling more robust characterisation of riverbed habitats. However, comparison of previously published roughness-grain-size relationships shows substantial variability between field sites. Using a combination of over 300 laboratory and field-based SfM surveys, we demonstrate the influence of inherent survey error, irregularity of natural gravels, particle shape, grain packing structure, sorting, and form roughness on roughness-grain-size relationships. Roughness analysis from SfM datasets can accurately predict the diameter of smooth hemispheres, though natural, irregular gravels result in a higher roughness value for a given diameter and different grain shapes yield different relationships. A suite of empirical relationships is presented as a decision tree which improves predictions of grain size. By accounting for differences in patch facies, large improvements in D50 prediction are possible. SfM is capable of providing accurate grain size estimates, although further refinement is needed for poorly sorted gravel patches, for which c-axis percentiles are better predicted than b-axis percentiles.
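A roughness-to-grain-size relationship of the kind described above can be sketched as a simple empirical power law. The functional form and coefficients below are hypothetical placeholders for the per-facies relationships the study fits:

```python
def d50_from_roughness(sigma_z_m, a=2.5, b=1.0):
    """Empirical roughness-to-grain-size relation D50 = a * sigma_z**b.
    Coefficients a, b are hypothetical; in practice they are fitted per
    patch facies from SfM roughness and measured grain-size data."""
    return a * sigma_z_m ** b

# Hypothetical detrended sub-metre roughness values (m) for three patches
d50_mm = [d50_from_roughness(s) * 1000 for s in (0.01, 0.02, 0.04)]
print([round(v) for v in d50_mm])
```

The decision-tree approach in the paper amounts to selecting different (a, b) pairs according to patch facies before applying a relation of this shape.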
Gibson, Dustin G; Pereira, Amanda; Farrenkopf, Brooke A; Labrique, Alain B; Pariyo, George W; Hyder, Adnan A
2017-05-05
National and subnational level surveys are important for monitoring disease burden, prioritizing resource allocation, and evaluating public health policies. As mobile phone access and ownership become more common globally, mobile phone surveys (MPSs) offer an opportunity to supplement traditional public health household surveys. The objective of this study was to systematically review the current landscape of MPSs to collect population-level estimates in low- and middle-income countries (LMICs). Primary and gray literature from 7 online databases were systematically searched for studies that deployed MPSs to collect population-level estimates. Titles and abstracts were screened on primary inclusion and exclusion criteria by two research assistants. Articles that met primary screening requirements were read in full and screened for secondary eligibility criteria. Articles included in review were grouped into the following three categories by their survey modality: (1) interactive voice response (IVR), (2) short message service (SMS), and (3) human operator or computer-assisted telephone interviews (CATI). Data were abstracted by two research assistants. The conduct and reporting of the review conformed to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. A total of 6625 articles were identified through the literature review. Overall, 11 articles were identified that contained 19 MPS (CATI, IVR, or SMS) surveys to collect population-level estimates across a range of topics. MPSs were used in Latin America (n=8), the Middle East (n=1), South Asia (n=2), and sub-Saharan Africa (n=8). Nine articles presented results for 10 CATI surveys (10/19, 53%). Two articles discussed the findings of 6 IVR surveys (6/19, 32%). Three SMS surveys were identified from 2 articles (3/19, 16%). Approximately 63% (12/19) of MPS were delivered to mobile phone numbers collected from previously administered household surveys. The majority of MPS (11
A catalog of bulge, disk, and total stellar mass estimates for the Sloan Digital Sky Survey
Mendel, J Trevor; Palmer, Michael; Ellison, Sara L; Patton, David R
2013-01-01
We present a catalog of bulge, disk, and total stellar mass estimates for ~660,000 galaxies in the Legacy area of the Sloan Digital Sky Survey Data Release 7. These masses are based on a homogeneous catalog of g- and r-band photometry described by Simard et al. (2011), which we extend here with bulge+disk and Sersic profile photometric decompositions in the SDSS u, i, and z bands. We discuss the methodology used to derive stellar masses from these data via fitting to broadband spectral energy distributions (SEDs), and show that the typical statistical uncertainty on total, bulge, and disk stellar mass is ~0.15 dex. Despite relatively small formal uncertainties, we argue that SED modeling assumptions, including the choice of synthesis model, extinction law, initial mass function, and details of stellar evolution likely contribute an additional 60% systematic uncertainty in any mass estimate based on broadband SED fitting. We discuss several approaches for identifying genuine bulge+disk systems based on both th...
Directory of Open Access Journals (Sweden)
Housila P. Singh
2013-05-01
Full Text Available In this paper a double (or two-phase) sampling version of the Singh and Tailor (2005) estimator has been suggested, along with its properties under large-sample approximation. It is shown that the estimator due to Kawathekar and Ajgaonkar (1984) is a member of the proposed class of estimators. Realistic conditions have been obtained under which the proposed estimator is better than the usual unbiased estimator, the usual double-sampling ratio (t_Rd) and product (t_Pd) estimators, and the Kawathekar and Ajgaonkar (1984) estimator. This is also demonstrated through an empirical study.
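Under the usual two-phase setup (a large first-phase sample measuring only a cheap auxiliary variable x, and a subsample also measuring the study variable y), the double-sampling ratio and product estimators named above can be sketched as follows; all data are hypothetical:

```python
from statistics import mean

def ratio_estimator_2phase(y2, x2, x1):
    """Double-sampling ratio estimator t_Rd = ybar * (xbar' / xbar),
    where xbar' comes from the large phase-one sample and ybar, xbar
    from the phase-two subsample."""
    return mean(y2) * (mean(x1) / mean(x2))

def product_estimator_2phase(y2, x2, x1):
    """Double-sampling product estimator t_Pd = ybar * (xbar / xbar'),
    preferred when y and x are negatively correlated."""
    return mean(y2) * (mean(x2) / mean(x1))

# Hypothetical data: x observed on all units, y only on the subsample
x_phase1 = [10, 12, 9, 14, 11, 13, 10, 12]
x_phase2 = [10, 14, 11]
y_phase2 = [21, 29, 24]
t_rd = ratio_estimator_2phase(y_phase2, x_phase2, x_phase1)
t_pd = product_estimator_2phase(y_phase2, x_phase2, x_phase1)
print(round(t_rd, 2), round(t_pd, 2))
```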
Granato, Gregory E.
2009-01-01
Research Council, 2004). The USGS maintains the National Water Information System (NWIS), a distributed network of computers and file servers used to store and retrieve hydrologic data (Mathey, 1998; U.S. Geological Survey, 2008). NWISWeb is an online version of this database that includes water data from more than 24,000 streamflow-gaging stations throughout the United States (U.S. Geological Survey, 2002, 2008). Information from NWISWeb is commonly used to characterize streamflows at gaged sites and to help predict streamflows at ungaged sites. Five computer programs were developed for obtaining and analyzing streamflow from the National Water Information System (NWISWeb). The programs were developed as part of a study by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, to develop a stochastic empirical loading and dilution model. The programs were developed because reliable, efficient, and repeatable methods are needed to access and process streamflow information and data. The first program is designed to facilitate the downloading and reformatting of NWISWeb streamflow data. The second program is designed to facilitate graphical analysis of streamflow data. The third program is designed to facilitate streamflow-record extension and augmentation to help develop long-term statistical estimates for sites with limited data. The fourth program is designed to facilitate statistical analysis of streamflow data. The fifth program is a preprocessor to create batch input files for the U.S. Environmental Protection Agency DFLOW3 program for calculating low-flow statistics. These computer programs were developed to facilitate the analysis of daily mean streamflow data for planning-level water-quality analyses but also are useful for many other applications pertaining to streamflow data and statistics. These programs and the associated documentation are included on the CD-ROM accompanying this report. This report and the appendixes on the
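The record-extension step mentioned above (the third program) can be illustrated with MOVE.1 (Maintenance of Variance Extension, type 1), a common streamflow record-extension technique; whether the USGS program uses exactly this method is not stated here, and the flow values are hypothetical:

```python
from statistics import mean, stdev

def move1(x_concurrent, y_concurrent, x_long):
    """MOVE.1 record extension: estimate flows at a short-record site (y)
    from a long-record index site (x), preserving the variance of y:
        y_hat = my + (sy / sx) * (x - mx)"""
    mx, my = mean(x_concurrent), mean(y_concurrent)
    sx, sy = stdev(x_concurrent), stdev(y_concurrent)
    return [my + (sy / sx) * (x - mx) for x in x_long]

# Hypothetical concurrent daily mean flows (cfs) at index and study sites
x_conc = [100, 150, 200, 250, 300]
y_conc = [40, 55, 80, 95, 130]
extended = move1(x_conc, y_conc, x_long=[120, 400])
print([round(v, 1) for v in extended])
```

Unlike ordinary regression, MOVE.1 scales by the ratio of standard deviations rather than the regression slope, so the extended record is not artificially smoothed toward the mean.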
Directory of Open Access Journals (Sweden)
Jan Grzybek
2014-08-01
Full Text Available The content of lead, cadmium, and nickel in dry fruit bodies of 34 species of macromycetes collected in Poland from 72 natural habitats was estimated by means of Atomic Absorption Spectroscopy (AAS).
Energy Technology Data Exchange (ETDEWEB)
Raspor, Biserka; Dragun, Zrinka; Erk, Marijana; Ivankovic, Dusica; Pavicic, Jasenka
2004-10-15
A study performed over 12 months with caged mussels Mytilus galloprovincialis in the coastal marine zone, which is under urban pressure, reveals a temporal variation of digestive gland mass, which causes 'biological dilution' of cytosolic metallothionein (MT) and trace metal (Cd, Cu, Zn, Fe, Mn) concentrations. The dilution effect was corrected by expressing the cytosolic MT and metal concentrations as the tissue content. Consequently, the changes of the average digestive gland mass coincide with the changes of MT and trace metal contents. From February to June, MT contents are nearly twice and trace metal contents nearly three times higher than those of the other months. The period of increased average digestive gland mass, of MT and trace metal contents probably overlaps with the sexual maturation of mussels (gametogenesis) and enhanced food availability. Since natural factors contribute more to the MT content than the sublethal levels of Cd, the digestive gland of M. galloprovincialis is not considered as a tissue of choice for estimating Cd exposure by means of MTs.
Goffe, Louis; Rushton, Stephen; White, Martin; Adamson, Ashley; Adams, Jean
2017-09-22
Out-of-home meals have been characterised as delivering excessively large portions that can lead to high energy intake. Regular consumption is linked to weight gain and diet related diseases. Consumption of out-of-home meals is associated with socio-demographic and anthropometric factors, but the relationship between habitual consumption of such meals and mean daily energy intake has not been studied in both adults and children in the UK. We analysed adult and child data from waves 1-4 of the UK National Diet and Nutrition Survey using generalized linear modelling. We investigated whether individuals who report a higher habitual consumption of meals out in a restaurant or café, or takeaway meals at home had a higher mean daily energy intake, as estimated by a four-day food diary, whilst adjusting for key socio-demographic and anthropometric variables. Adults who ate meals out at least weekly had a higher mean daily energy intake consuming 75-104 kcal more per day than those who ate these meals rarely. The equivalent figures for takeaway meals at home were 63-87 kcal. There was no association between energy intake and frequency of consumption of meals out in children. Children who ate takeaway meals at home at least weekly consumed 55-168 kcal more per day than those who ate these meals rarely. Additionally, in children, there was an interaction with socio-economic position, where greater frequency of consumption of takeaway meals was associated with higher mean daily energy intake in those from less affluent households than those from more affluent households. Higher habitual consumption of out-of-home meals is associated with greater mean daily energy intake in the UK. More frequent takeaway meal consumption in adults and children is associated with greater daily energy intake and this effect is greater in children from less affluent households. Interventions seeking to reduce energy content through reformulation or reduction of portion sizes in restaurants
Directory of Open Access Journals (Sweden)
Artur Andriolo
2005-09-01
Full Text Available The marsh deer (Blastocerus dichotomus) population in Brazil is drastically reduced. The objective of this work was to estimate the abundance of marsh deer in the Paraná River basin and to discuss the methodology applied. The results provide information to support further analysis of the impact of the Porto Primavera flooding lake on this population. Sixty-nine animals were recorded by aerial survey using line-transect distance-sampling methodology. Animals were widely distributed throughout the study area. The uncorrected data yielded an estimated density of 0.0035 ind/ha and a population size of 636 individuals. Correcting g(0) for animals that could have been missed gave a density of 0.0049 ind/ha and an abundance of 896 (CV=0.27) individuals. The methodology was applied successfully to survey marsh deer. The result is important for evaluating the status of the marsh deer in the area and for future analysis of the impact of the flooding dam.
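The correction described above follows the standard line-transect form D = n / (2 * L * w * g(0)), where g(0) < 1 accounts for animals missed on the trackline. A sketch with hypothetical effort, strip-width and detection values (not the survey's actual parameters):

```python
def density(n, effort_km, esw_km, g0=1.0):
    """Line-transect density estimate D = n / (2 * L * esw * g0);
    g0 < 1 corrects for animals missed on the trackline."""
    return n / (2.0 * effort_km * esw_km * g0)

# 69 sightings as in the abstract; effort, effective strip width, g0
# and study area are hypothetical illustration values
n, effort, esw, area_km2 = 69, 500.0, 0.10, 1800.0
d_uncorr = density(n, effort, esw)           # animals per km^2
d_corr = density(n, effort, esw, g0=0.71)
print(round(d_uncorr, 2), round(d_corr * area_km2))
```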
Yamanaka, Yusuke; Tanioka, Yuichiro
2017-08-01
Large sector collapses and landslides have the potential to cause significant disasters. Estimating the topography and conditions, such as volume, before the collapse is thus important for analyzing the behavior of moving collapsed material and hazard risks. This study considers three historical volcanic sector collapses in Japan that caused tsunamis: the collapses of the Komagatake Volcano in 1640, Oshima-Oshima Island in 1741, and Unzen-Mayuyama Volcano in 1792. Numerical simulations of the tsunamis generated by each event were first carried out based on assumed collapse scenarios. The primary objective of this study is to present conditions related to the topography before the events based on inverse models of the topography from those results and tsunami survey data. The Oshima-Oshima Tsunami, which is the subject of many previous studies, was first simulated to validate the model accuracy and evaluate how run-up heights changed during the simulation as the topographic conditions changed. The run-up height was especially sensitive to the collapsed volume and frictional acceleration affecting the collapsed material; however, the observed run-up heights could be reproduced with high accuracy using proper conditions of frictional acceleration for the scenarios, even if they were not exact. A minimum requirement for the collapsed volume to generate the observed run-up height was introduced and quantitatively evaluated using the results of numerical tsunami simulations. The minimum volumes of the collapses of the Komagatake and Unzen-Mayuyama volcanoes were estimated to be approximately 1.2 and 0.3 km3, respectively.
Estimating flood discharge using witness movies in post-flood hydrological surveys
Le Coz, Jérôme; Hauet, Alexandre; Le Boursicaud, Raphaël; Pénard, Lionel; Bonnifait, Laurent; Dramais, Guillaume; Thollet, Fabien; Braud, Isabelle
2015-04-01
The estimation of streamflow rates based on post-flood surveys is of paramount importance for the investigation of extreme hydrological events. Major uncertainties usually arise from the absence of information on the flow velocities and from the limited spatio-temporal resolution of such surveys. Nowadays, after each flood occurring in populated areas, home movies taken from bridges, river banks or even drones are shared by witnesses through Internet platforms like YouTube. Provided that some topography data and additional information are collected, image-based velocimetry techniques can be applied to some of these movies in order to estimate flood discharges. As a contribution to recent post-flood surveys conducted in France, we developed and applied a method for estimating velocities and discharges based on the Large Scale Particle Image Velocimetry (LSPIV) technique. Since the seminal work of Fujita et al. (1998), LSPIV applications to river flows have been reported by a number of authors and LSPIV can now be considered a mature technique. However, its application to non-professional movies taken by flood witnesses remains challenging and required some practical developments. The steps for applying LSPIV analysis to a flood home movie are as follows: (i) select a video of interest; (ii) contact the author for agreement and extra information; (iii) conduct a field topography campaign to georeference Ground Control Points (GCPs), water level and cross-sectional profiles; (iv) preprocess the video before LSPIV analysis: correct lens distortion, align the images, etc.; (v) orthorectify the images to correct perspective effects and establish the physical size of pixels; (vi) proceed with the LSPIV analysis to compute the surface velocity field; and (vii) compute discharge according to a user-defined velocity coefficient. Two case studies in French mountainous rivers during extreme floods are presented. The movies were collected on YouTube and field topography
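Steps (vi)-(vii) above reduce to converting pixel displacements into surface velocities and scaling by a depth-averaging coefficient. A minimal sketch; the frame interval, pixel size, displacements, coefficient and cross-section area are all hypothetical:

```python
def surface_velocity(disp_px, pixel_size_m, dt_s):
    """Convert an LSPIV pixel displacement into a surface velocity (m/s)."""
    return disp_px * pixel_size_m / dt_s

def discharge(surface_velocities, area_m2, alpha=0.85):
    """Q = alpha * mean surface velocity * wetted cross-section area;
    alpha is the user-defined velocity (depth-averaging) coefficient."""
    v_mean = sum(surface_velocities) / len(surface_velocities)
    return alpha * v_mean * area_m2

# Hypothetical: tracers move 35-45 px between frames 0.5 s apart, 2 cm/px
vels = [surface_velocity(d, 0.02, 0.5) for d in (35, 40, 45, 40)]
q = discharge(vels, area_m2=38.0)  # m^3/s
print(round(q, 1))
```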
Wardrop, Nicola A.; Dzodzomenyo, Mawuli; Aryeetey, Genevieve; Hill, Allan G.; Bain, Robert E. S.; Wright, Jim
2017-08-01
Packaged water consumption is growing in low- and middle-income countries, but the magnitude of this phenomenon and its environmental consequences remain unclear. This study aims to quantify both the volumes of packaged water consumed relative to household water requirements and the associated plastic waste generated for three West African case study countries. Data from household expenditure surveys for Ghana, Nigeria and Liberia were used to estimate the volumes of packaged water consumed and thereby quantify the plastic waste generated in households with and without solid waste disposal facilities. In Ghana, Nigeria and Liberia respectively, 11.3 (95% confidence interval: 10.3-12.4), 10.1 (7.5-12.5), and 0.38 (0.31-0.45) Ml/day of sachet water were consumed. This generated over 28,000 tonnes/yr of plastic waste, of which 20%, 63% and 57% was among households lacking formal waste disposal facilities in Ghana, Nigeria and Liberia respectively. Reported packaged water consumption provided sufficient water to meet daily household drinking-water requirements for 8.4%, less than 1% and 1.6% of households in Ghana, Nigeria and Liberia respectively. These findings quantify packaged water's contribution to household water needs in our study countries, particularly Ghana, but indicate significant subsequent environmental repercussions.
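The volume-to-plastic-waste conversion can be sketched as follows; the 0.5 l sachet size and the per-sachet plastic mass are illustrative assumptions, not the study's actual parameters:

```python
def plastic_waste_tonnes_per_year(megalitres_per_day,
                                  sachet_volume_l=0.5,
                                  sachet_plastic_g=1.8):
    """Convert a daily sachet-water volume into annual plastic mass.
    The 0.5 l sachet and 1.8 g of plastic per sachet are assumptions."""
    sachets_per_day = megalitres_per_day * 1e6 / sachet_volume_l
    grams_per_year = sachets_per_day * sachet_plastic_g * 365
    return grams_per_year / 1e6  # grams -> tonnes

# Daily sachet volumes reported for Ghana, Nigeria and Liberia (Ml/day)
total = sum(plastic_waste_tonnes_per_year(v) for v in (11.3, 10.1, 0.38))
print(round(total))
```

With these assumed sachet parameters the reported volumes scale to a figure of the same order as the abstract's >28,000 tonnes/yr.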
Estimates from a complex survey
Directory of Open Access Journals (Sweden)
Maria Helena de Sousa
2003-10-01
Full Text Available OBJECTIVE: To evaluate the impact of the sampling design and the effect of weighting on data from the 1996 Brazilian National Survey on Demography and Health (PNDS-96). METHODS: Secondary data analysis was performed on the São Paulo state sample, comprising 1,355 interviewed women. The sampling design of the National Survey of Household Sampling (PNAD) was taken as a reference, with the municipality as the primary sampling unit. The ratio estimator and Taylor's approximation for the variance were calculated over the primary sampling units and under several weighting schemes. Confidence intervals, design effects (Deff) and biases were the indicators used to evaluate precision and validity. RESULTS: Across the four procedures, the largest point estimate of prevalence differed from the smallest by no more than 10%, and the widths of the confidence intervals differed by less than 20%. Condom use and injectable contraceptive use were the variables with design effects above 1.5 and biases above 0.20. CONCLUSIONS: Cluster sampling affected the precision of the estimates for two of the six variables; weighting had no major impact on the estimates.
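The design effect (Deff) reported above compares the variance under the cluster design with that under simple random sampling. Kish's approximation gives a quick sketch; the cluster size and intracluster correlation below are hypothetical:

```python
def design_effect(mean_cluster_size, icc):
    """Kish approximation: Deff = 1 + (m - 1) * rho, where m is the
    average cluster size and rho the intracluster correlation."""
    return 1.0 + (mean_cluster_size - 1) * icc

def effective_sample_size(n, deff):
    """Size of a simple random sample with the same precision."""
    return n / deff

# Hypothetical cluster size and ICC; n = 1,355 women as in the abstract
deff = design_effect(mean_cluster_size=20, icc=0.03)
ess = effective_sample_size(1355, deff)
print(round(deff, 2), round(ess))
```

A Deff above 1.5, as found for two variables in the survey, means the clustered sample is worth noticeably fewer observations than its nominal size.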
Snell, Tom; Knapp, Martin; Healey, Andrew; Guglani, Sacha; Evans-Lacko, Sara; Fernandez, Jose-Luis; Meltzer, Howard; Ford, Tamsin
2013-01-01
Background: Approximately one in ten children aged 5-15 in Britain has a conduct, hyperactivity or emotional disorder. Methods: The British Child and Adolescent Mental Health Surveys (BCAMHS) identified children aged 5-15 with a psychiatric disorder, and their use of health, education and social care services. Service costs were estimated for each…
DEFF Research Database (Denmark)
Jørgensen, Ole A.
while the minimum mesh size in the cod end in the commercial trawls is 140 mm, and the survey catches are converted to potential commercial by-catches. The conversion is based on a number of assumptions and the results should be considered as indicative. The total by-catch in weight is estimated to be 13...
Johnson, Andrew O.; Mink, Michael D.; Harun, Nusrat; Moore, Charity G.; Martin, Amy B.; Bennett, Kevin J.
2008-01-01
Objectives: The purpose of this study was to compare national estimates of drug use and exposure to violence between rural and urban teens. Methods: Twenty-eight dependent variables from the 2003 Youth Risk Behavior Survey were used to compare violent activities, victimization, suicidal behavior, tobacco use, alcohol use, and illegal drug use…
Qing, Siyu
2014-01-01
The National Science Foundation (NSF) Survey of Doctorate Recipients (SDR) collects information on a sample of individuals in the United States with PhD degrees. A significant portion of the sampled individuals appear in multiple survey years and can be linked across time. Survey weights in each year are created and adjusted for oversampling and…
Using aerial surveys to estimate density and distribution of harbour porpoises in Dutch waters
Scheidat, M.; Verdaat, J.P.; Aarts, G.M.
2012-01-01
To investigate harbour porpoise density and distribution in Dutch waters, dedicated line transect distance sampling aerial surveys were conducted from May 2008 to March 2010. In total 10,557 km were covered on survey effort during 16 survey days in February to May, August, November and December. Usi
Bourdon, K H; Rae, D S; Locke, B Z; Narrow, W E; Regier, D A
1992-01-01
The National Institute of Mental Health Epidemiologic Catchment Area Survey is a comprehensive, community-based survey of mental disorders and use of services by adults, ages 18 and older. Diagnoses are based on the criteria in the "Diagnostic and Statistical Manual of Mental Disorders," third edition, and were obtained in five communities in the United States through lay-interviewer administration of the National Institute of Mental Health Diagnostic Interview Schedule. Results from the survey provide the public health field with data on the prevalence and incidence of specific mental disorders in the community, unbiased by the treatment status of the sample. The population with disorders is estimated, and the survey findings that respond to some of the most common requests for information about the epidemiology of mental disorders in the United States are highlighted briefly. Based on the survey, it is estimated that one of every five persons in the United States suffers from a mental disorder in any 6-month period, and that one of every three persons suffers a disorder in his or her lifetime. Fewer than 20 percent of those with a recent mental disorder seek help for their problem, according to the survey. High rates of comorbid substance abuse and mental disorders were found, particularly among those who had sought treatment for their disorders.
Mekki, Insaf; Jaiez, Zeineb; Jacob, Frédéric
2014-05-01
Soil water content (SWC) is an important driver for a number of soil, water and energy fluxes at different temporal and spatial scales. Non-invasive electromagnetic induction sensors such as the EM38, which measure the soil apparent electrical conductivity (ECa), have been widely used to infer spatial and temporal patterns of soil properties. The objective of this study was to explore the opportunity of estimating and mapping SWC from in-situ data collected in different fields under dry and wet soil conditions in a hilly landscape. The experiment was carried out during two campaigns, under dry and wet conditions, representing the major soil associations, land uses and topographic attributes of the cultivated semiarid Mediterranean Lebna catchment, northeastern Tunisia. The temporal evolution of SWC followed a dry-wet-dry pattern. Gravimetric soil water content sampling and ECa surveys with the EM38 (Geonics Ltd., Ontario, Canada) were performed simultaneously. ECa measurements, geo-referenced with GPS, were collected by raising the EM38 so as to sample various depths of the soil. The EM38 was placed in both horizontal and vertical dipole modes on a PVC stand 150 cm above the soil surface. The number of investigated points varied between n=70 in February and n=38 in October 2012. Results showed that SWC differences related to soil spatial variability led to differences in averaged ECa values, and that ECa changed substantially as SWC changed. The relationship between SWC and ECa was tested with linear regression, separately for the vertical and horizontal modes, using all possible sets of surveys. The correlation coefficient between ECa and SWC was lower for the horizontal mode than for the vertical mode. Coefficients of determination of linear regressions between SWC in 0-100 cm soil depth and ECa in the vertical mode were r²=0.74 in February 2013 and r²=0.52 in October 2012. The lowest correlations were found in horizontal mode when SWC
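The SWC-ECa calibration described above is an ordinary least-squares fit. A minimal sketch with hypothetical paired readings (the real survey reported r² = 0.74 and 0.52 for the vertical mode):

```python
from statistics import mean

def linreg(x, y):
    """Ordinary least squares y = a + b*x, returning (a, b, r_squared)."""
    mx, my = mean(x), mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical paired readings: ECa (mS/m, vertical dipole) vs. SWC (%)
eca = [20, 35, 50, 65, 80]
swc = [12, 17, 24, 27, 33]
a, b, r2 = linreg(eca, swc)
print(round(b, 3), round(r2, 2))
```

Once fitted per campaign, the (a, b) pair maps a full ECa survey grid onto an SWC map.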
Simpson, Judy M.; Zwar, Nicholas; Hosseinzadeh, Hassan; Jorm, Louisa
2017-01-01
Background Estimating multimorbidity (presence of two or more chronic conditions) using administrative data is becoming increasingly common. We investigated (1) the concordance of identification of chronic conditions and multimorbidity using self-report survey and administrative datasets; (2) characteristics of people with multimorbidity ascertained using different data sources; and (3) whether the same individuals are classified as multimorbid using different data sources. Methods Baseline survey data for 90,352 participants of the 45 and Up Study, a cohort study of residents of New South Wales, Australia, aged 45 years and over, were linked to prior two-year pharmaceutical claims and hospital admission records. Concordance of eight self-report chronic conditions (reference) with claims and hospital data was examined using sensitivity (Sn), positive predictive value (PPV), and kappa (κ). The characteristics of people classified as multimorbid were compared using logistic regression modelling. Results Agreement was found to be highest for diabetes in both hospital and claims data (κ = 0.79, 0.78; Sn = 79%, 72%; PPV = 86%, 90%). The prevalence of multimorbidity was highest using self-report data (37.4%), followed by claims data (36.1%) and hospital data (19.3%). Combining all three datasets identified a total of 46,683 (52%) people with multimorbidity, with half of these identified using a single dataset only, and up to 20% identified on all three datasets. Characteristics of persons with and without multimorbidity were generally similar. However, the age gradient was more pronounced and people speaking a language other than English at home were more likely to be identified as multimorbid by administrative data. Conclusions Different individuals, with different combinations of conditions, are identified as multimorbid when different data sources are used. As such, caution should be applied when ascertaining morbidity from a single data source as the agreement
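The concordance statistics used above (sensitivity, PPV, Cohen's kappa) can be computed from a 2x2 table. The counts below are hypothetical but chosen so the results echo the claims-data values reported for diabetes:

```python
def agreement_stats(tp, fp, fn, tn):
    """Sensitivity, PPV and Cohen's kappa for a 2x2 concordance table
    (positives: administrative data; reference: self-report)."""
    n = tp + fp + fn + tn
    sn = tp / (tp + fn)                     # sensitivity vs. self-report
    ppv = tp / (tp + fp)                    # positive predictive value
    po = (tp + tn) / n                      # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance
    kappa = (po - pe) / (1 - pe)
    return sn, ppv, kappa

# Hypothetical diabetes counts (self-report vs. claims)
sn, ppv, kappa = agreement_stats(tp=720, fp=80, fn=280, tn=8920)
print(round(sn, 2), round(ppv, 2), round(kappa, 2))
```

Kappa discounts the agreement expected by chance, which is why it can be far below the raw agreement when one class dominates.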
Crowe, Sonya; Seal, Andrew; Grijalva-Eternod, Carlos; Kerac, Marko
2014-01-01
Tackling childhood malnutrition is a global health priority. A key indicator is the estimated prevalence of malnutrition, measured by nutrition surveys. Most aspects of survey design are standardised, but data 'cleaning criteria' are not. These aim to exclude extreme values which may represent measurement or data-entry errors. The effect of different cleaning criteria on malnutrition prevalence estimates was unknown. We applied five commonly used data cleaning criteria (WHO 2006; EPI-Info; WHO 1995 fixed; WHO 1995 flexible; SMART) to 21 national Demographic and Health Survey datasets. These included a total of 163,228 children, aged 6-59 months. We focused on wasting (low weight-for-height), a key indicator for treatment programmes. Choice of cleaning criteria had a marked effect: SMART were least inclusive, resulting in the lowest reported malnutrition prevalence, while WHO 2006 were most inclusive, resulting in the highest. Across the 21 countries, the proportion of records excluded was 3 to 5 times greater when using SMART compared to WHO 2006 criteria, resulting in differences in the estimated prevalence of total wasting of between 0.5 and 3.8%, and differences in severe wasting of 0.4-3.9%. The magnitude of difference was associated with the standard deviation of the survey sample, a statistic that can reflect both population heterogeneity and data quality. Using these results to estimate case-loads for treatment programmes resulted in large differences for all countries. Wasting prevalence and caseload estimations are strongly influenced by choice of cleaning criterion. Because key policy and programming decisions depend on these statistics, variations in analytical practice could lead to inconsistent and potentially inappropriate implementation of malnutrition treatment programmes. We therefore call for mandatory reporting of cleaning criteria use so that results can be compared and interpreted appropriately. International consensus is urgently needed
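The contrast between inclusive and restrictive cleaning criteria can be sketched as follows. The WHO 2006 fixed range (WHZ within [-5, 5]) follows the published flags; the SMART rule is simplified here to +/-3 z-scores around the observed survey mean, and the z-scores are hypothetical:

```python
from statistics import mean

def flag_who2006(whz):
    """WHO 2006 fixed-range flags: exclude WHZ outside [-5, 5]."""
    return [z for z in whz if -5 <= z <= 5]

def flag_smart(whz, width=3.0):
    """Simplified SMART-style flags: exclude WHZ beyond +/- width
    z-scores of the observed survey mean (a sketch of the criteria)."""
    m = mean(whz)
    return [z for z in whz if m - width <= z <= m + width]

def wasting_prev(kept):
    """Proportion of retained records with WHZ < -2 (total wasting)."""
    return sum(z < -2 for z in kept) / len(kept)

# Hypothetical weight-for-height z-scores from one survey
whz = [-6.2, -4.8, -3.1, -2.4, -1.5, -1.0, -0.2, 0.5, 1.8, 4.6, 6.0]
kept_who, kept_smart = flag_who2006(whz), flag_smart(whz)
print(len(kept_who), len(kept_smart))
print(round(wasting_prev(kept_who), 3), round(wasting_prev(kept_smart), 3))
```

Because the SMART-style window excludes more extreme (often wasted) records, the two criteria yield different prevalence estimates from the same data, which is the paper's central point.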
Directory of Open Access Journals (Sweden)
Alexandre Bernardino Lopes
2012-03-01
Full Text Available The use of geoid models to estimate the Mean Dynamic Topography was stimulated with the launching of the GRACE satellite system, since its models present unprecedented precision and space-time resolution. In the present study, besides the DNSC08 mean sea level model, the following geoid models were used with the objective of computing the MDTs: EGM96, EIGEN-5C and EGM2008. In the method adopted, geostrophic currents for the South Atlantic were computed based on the MDTs. In this study it was found that the degree and order of the geoid models directly affect the determination of the MDT and currents. The presence of noise in the MDT requires the use of efficient filtering techniques, such as the filter based on Singular Spectrum Analysis, which presents significant advantages in relation to conventional filters. Geostrophic currents resulting from geoid models were compared with the HYCOM hydrodynamic numerical model. In conclusion, results show that MDTs and respective geostrophic currents calculated with EIGEN-5C and EGM2008 models are similar to the results of the numerical model, especially regarding the main large scale features such as boundary currents and the retroflection at the Brazil-Malvinas Confluence.
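The geostrophic step described above follows directly from horizontal gradients of the MDT. A minimal sketch assuming a regular grid in metres and the standard geostrophic balance; the grid values and latitude are invented, not taken from the study:

```python
import numpy as np

# Sketch: geostrophic surface currents from a gridded MDT (eta, metres).
# u = -(g/f) * d(eta)/dy ;  v = (g/f) * d(eta)/dx ;  f = 2*Omega*sin(lat)

g = 9.81                 # gravity, m/s^2
omega = 7.2921e-5        # Earth rotation rate, rad/s

def geostrophic(eta, dy, dx, lat_deg):
    f = 2 * omega * np.sin(np.radians(lat_deg))   # Coriolis parameter
    deta_dy, deta_dx = np.gradient(eta, dy, dx)   # gradients along y (rows), x (cols)
    u = -(g / f) * deta_dy
    v = (g / f) * deta_dx
    return u, v

# Toy MDT tilted northward by 1 cm per 100 km, evaluated at 35 degrees S
ny, nx = 4, 5
dy = dx = 100e3
eta = np.tile(np.arange(ny)[:, None] * 0.01, (1, nx))
u, v = geostrophic(eta, dy, dx, lat_deg=-35.0)
```

The resulting zonal current of roughly a centimetre per second illustrates why noise in the MDT, amplified by differentiation, must be filtered before the currents are interpreted.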
Nelson, Erik J; Hughes, John; Oakes, J Michael; Pankow, James S; Kulasingam, Shalini L
2014-09-01
Federally funded surveys of human papillomavirus (HPV) vaccine uptake are important for pinpointing geographically based health disparities. Although national and state level data are available, local (ie, county and postal code level) data are not due to small sample sizes, confidentiality concerns, and cost. Local level HPV vaccine uptake data may be feasible to obtain by targeting specific geographic areas through social media advertising and recruitment strategies, in combination with online surveys. Our goal was to use Facebook-based recruitment and online surveys to estimate local variation in HPV vaccine uptake among young men and women in Minnesota. From November 2012 to January 2013, men and women were recruited via a targeted Facebook advertisement campaign to complete an online survey about HPV vaccination practices. The Facebook advertisements were targeted to recruit men and women by location (25 mile radius of Minneapolis, Minnesota, United States), age (18-30 years), and language (English). Of the 2079 men and women who responded to the Facebook advertisements and visited the study website, 1003 (48.2%) enrolled in the study and completed the survey. The average advertising cost per completed survey was US $1.36. Among those who reported their postal code, 90.6% (881/972) of the participants lived within the previously defined geographic study area. Receipt of 1 dose or more of HPV vaccine was reported by 65.6% (351/535) of women and 13.0% (45/347) of men. These results differ from previously reported Minnesota state level estimates (53.8% for young women and 20.8% for young men) and from national estimates (34.5% for women and 2.3% for men). This study shows that recruiting a representative sample of young men and women based on county and postal code location to complete a survey on HPV vaccination uptake via the Internet is a cost-effective and feasible strategy. This study also highlights the need for local estimates to assess the variation in HPV
National Oceanic and Atmospheric Administration, Department of Commerce — The GOA/AI Bottom Trawl Estimate database contains abundance estimates for the Alaska Biennial Bottom Trawl Surveys conducted in the Gulf of Alaska and the Aleutian...
An estimate of hernia prevalence in Sierra Leone from a nationwide community survey
Patel, Hiten D; Groen, Reinou S; Kamara, Thaim B; Samai, Mohamed; Farahzad, Mina M; Cassidy, Laura D; Kushner, Adam L; Wren, Sherry M
2016-01-01
Purpose A large number of unrepaired inguinal hernias is expected in sub-Saharan Africa where late presentation often results in incarceration, strangulation, or giant scrotal hernias. However, no representative population-based data are available to quantify the prevalence of hernias. We present data on groin masses in Sierra Leone to estimate prevalence, barriers to care, and associated disability. Methods A cluster randomized, cross-sectional household survey of 75 clusters of 25 households with 2 respondents each was designed to calculate the prevalence of and disability caused by groin hernias in Sierra Leone using a verbal head-to-toe examination. Barriers to hernia repairs were assessed by asking participants the main reason for delay in surgical care. Results Information was obtained from 3645 respondents in 1843 households, of which 1669 (46%) were male and included in the study. In total, 117 males or 7.01% (95% CI 5.64-8.38) reported a soft or reducible swelling likely representing a hernia, with four men having two masses. Of the 93.2% who indicated the need for health care, only 22.2% underwent a procedure, citing limited funds (59.0%) as the major barrier to care. On disability assessment, 20.2% were not able to work secondary to the groin swelling. Conclusions The results indicate groin masses represent a major burden for the male population in Sierra Leone. Improving access to surgical care for adult patients with hernias and early intervention for children will be vital to address the burden of disease and prevent complications or limitations of daily activity. PMID:24241327
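As a rough check of the reported interval, a naive (Wald) 95% confidence interval for 117 reducible swellings among 1669 men can be computed in a few lines. The sketch deliberately ignores the cluster design effect, which is one reason the survey's published interval (5.64-8.38%) is wider than this one:

```python
import math

# Sketch: simple Wald 95% CI for a prevalence estimate, ignoring the
# survey's cluster design (which inflates the variance).

def wald_ci(cases, n, z=1.96):
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)     # standard error of a simple proportion
    return p - z * se, p + z * se

lo, hi = wald_ci(117, 1669)
print(f"prevalence {117/1669:.2%}, naive 95% CI {lo:.2%}-{hi:.2%}")
```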
Maes, K.; Nimmen, K. Van; Lourens, E.; Rezayat, A.; Guillaume, P.; Roeck, G. De; Lombaert, G.
2016-06-01
This paper presents a verification of a joint input-state estimation algorithm using data obtained from in situ experiments on a footbridge. The estimation of the input and the system states is performed in a minimum-variance unbiased way, based on a limited number of response measurements and a system model. A dynamic model of the footbridge is obtained using a detailed finite element model that is updated using a set of experimental modal characteristics. The joint input-state estimation algorithm is used for the identification of two impact, harmonic, and swept sine forces applied to the bridge deck. In addition to these forces, unknown stochastic forces, such as wind loads, are acting on the structure. These forces, as well as measurement errors, give rise to uncertainty in the estimated forces and system states. Quantification of the uncertainty requires determination of the power spectral density of the unknown stochastic excitation, which is identified from the structural response under ambient loading. The verification involves comparing the estimated forces with the actual, measured forces. Although a good overall agreement is obtained between the estimated and measured forces, modeling errors prohibit a proper distinction between multiple forces applied to the structure for the case of harmonic and swept sine excitation.
Tsvetkov, Yu. P.; Brekhov, O. M.; Bondar, T. N.; Filippov, S. V.; Petrov, V. G.; Tsvetkova, N. M.; Frunze, A. Kh.
2014-03-01
Two global analytical models of the main magnetic field of the Earth (MFE) have been used to determine their potential in deriving an anomalous MFE from balloon magnetic surveys conducted at altitudes of ~30 km. The daily mean spherical harmonic model (DMSHM) constructed from satellite data on the day of balloon magnetic surveys was analyzed. This model for the day of magnetic surveys was shown to be almost free of errors associated with secular variations and can be recommended for deriving an anomalous MFE. The error of the enhanced magnetic model (EMM) was estimated depending on the number of harmonics used in the model. The model limited by the first 13 harmonics was shown to be able to lead to errors in the main MFE of around 15 nT. The EMM developed to n = m = 720 and constructed on the basis of satellite and ground-based magnetic data fails to adequately simulate the anomalous MFE at altitudes of 30 km. To construct a representative model developed to n = m = 720, ground-based magnetic data should be replaced by data of balloon magnetic surveys for altitudes of ~30 km. The results of investigations were confirmed by a balloon experiment conducted by Pushkov Institute of Terrestrial Magnetism, Ionosphere, and Radio Wave Propagation of the Russian Academy of Sciences and the Moscow Aviation Institute.
Nikitenko, Yaroslav
2015-01-01
The directional precision of the sample mean estimator was calculated analytically for the offset exponential and normal distributions in three-dimensional space both for a finite sample and for limiting cases. It was shown that the spherical projection of the sample mean of the shifted exponential distribution has connections with modified Bessel functions and with hypergeometric functions. It was shown explicitly how the distribution of the sample mean of the exponential pdf converges near the mode to the normal distribution. Approximation formulae for the distribution of the sample mean of the shifted exponential distribution and for its directional precision and for the precision of the estimation of the direction of shift of the normal distribution were obtained.
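The directional precision in question can be probed numerically. Below is a hedged Monte Carlo sketch, assuming independent exponential coordinates recentred and shifted along a chosen mean direction; the shift vector, sample size, and trial count are arbitrary choices for illustration, not the paper's setup:

```python
import numpy as np

# Sketch: Monte Carlo estimate of how tightly the direction of the sample
# mean clusters around the true shift direction for an offset exponential
# distribution in 3D. All parameters are invented.
rng = np.random.default_rng(0)
shift = np.array([1.0, 0.0, 0.0])       # true direction of shift
n, trials = 30, 2000

angles = []
for _ in range(trials):
    noise = rng.exponential(1.0, (n, 3)) - 1.0     # zero-mean exponential noise
    m = (noise + shift).mean(axis=0)               # sample mean of shifted sample
    cosang = m @ shift / (np.linalg.norm(m) * np.linalg.norm(shift))
    angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

spread = float(np.std(angles))          # angular spread about the true direction
```

The spread shrinks roughly like 1/sqrt(n), consistent with the convergence of the sample-mean distribution to a normal law near the mode.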
Mosewich, Amber D.; Hadd, Valerie; Crocker, Peter R. E.; Zumbo, Bruno D.
2013-01-01
Quality of life (QoL) is affected by issues specific to illness trajectory and thus, may differ, and potentially take on different meanings, at different stages in the cancer process. A widely used measure of QoL is the SF-36 Health Survey (SF-36; Ware 1993); therefore, support for its appropriateness in a given population is imperative. The…
Directory of Open Access Journals (Sweden)
Faulkner Nathan
2009-10-01
Full Text Available Abstract Background The existing estimates of there being 250,000 - 350,000 children of problem drug users in the UK (ACMD, 2003) and 780,000 - 1.3 million children of adults with an alcohol problem (AHRSE, 2004) are extrapolations of treatment data alone or estimates from other countries, hence updated, local and broader estimates are needed. Methods The current work identifies profiles where the risk of harm to children could be increased by patterns of parental substance use and generates new estimates following secondary analysis of five UK national household surveys. Results The Health Survey for England (HSfE) and General Household Survey (GHS) (both 2004) generated consistent estimates - around 30% of children under-16 years (3.3 - 3.5 million) in the UK lived with at least one binge drinking parent, 8% with at least two binge drinkers and 4% with a lone (binge drinking) parent. The National Psychiatric Morbidity Survey (NPMS) indicated that in 2000, 22% (2.6 million) lived with a hazardous drinker and 6% (705,000) with a dependent drinker. The British Crime Survey (2004) and NPMS (2000) indicated that 8% (up to 978,000) of children lived with an adult who had used illicit drugs within that year, 2% (up to 256,000) with a class A drug user and 7% (up to 873,000) with a class C drug user. Around 335,000 children lived with a drug dependent user, 72,000 with an injecting drug user, 72,000 with a drug user in treatment and 108,000 with an adult who had overdosed. Elevated or cumulative risk of harm may have existed for the 3.6% (around 430,000) children in the UK who lived with a problem drinker who also used drugs and 4% (half a million) where problem drinking co-existed with mental health problems. Stronger indicators of harm emerged from the Scottish Crime Survey (2000), according to which 1% of children (around 12,000) had witnessed force being used against an adult in the household by their partner whilst drinking alcohol and 0.6% (almost 6000
State of the Practice in Software Effort Estimation: A Survey and Literature Review
Trendowicz, Adam; Münch, Jürgen; Jeffery, Ross
2014-01-01
Part 7: Project Management; International audience; Effort estimation is a key factor for software project success, defined as delivering software of agreed quality and functionality within schedule and budget. Traditionally, effort estimation has been used for planning and tracking project resources. Effort estimation methods founded on those goals typically focus on providing exact estimates and usually do not support objectives that have recently become important within the software indust...
DEFF Research Database (Denmark)
Sparrevohn, Claus Reedtz
2013-01-01
For many overfished marine stocks, recreational fishing continues even though recovery plans are implemented and commercial landings regulated. In such cases, unbiased and precise estimates of recreational harvest are important for successful management. Harvest estimation often relies on intervi...
Dietary intake estimates derived from the Multifactor Screener are rough estimates of usual intake of fruits and vegetables, fiber, calcium, servings of dairy, and added sugar. These estimates are not as accurate as those from more detailed methods (e.g., 24-hour recalls).
Directory of Open Access Journals (Sweden)
A. Akbulut
2012-04-01
Full Text Available In this study, Particle Swarm Optimization is applied for the estimation of the channel state transition probabilities. Unlike most other studies, where the channel state transition probabilities are assumed to be known and/or constant, in this study, these values are realistically considered to be time-varying parameters, which are unknown to the secondary users of the cognitive radio systems. The results of this study demonstrate the following: without any a priori information about the channel characteristics, even in a very transient environment, it is quite possible to achieve reasonable estimates of channel state transition probabilities with a practical and simple implementation.
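A minimal PSO sketch for this kind of estimation problem, assuming a two-state Markov channel whose states are observed directly; the true probabilities, swarm settings, and sequence length are invented for illustration (here the likelihood even has a closed-form maximum, but the swarm search mirrors the paper's approach for settings without one):

```python
import math
import random

# Sketch: Particle Swarm Optimization estimating the two free transition
# probabilities (p01, p10) of a two-state Markov channel model from an
# observed state sequence, by minimising the negative log-likelihood.
random.seed(1)

def count_transitions(seq):
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for t in zip(seq, seq[1:]):
        counts[t] += 1
    return counts

def neg_log_lik(params, counts):
    p01, p10 = params
    probs = {(0, 0): 1 - p01, (0, 1): p01, (1, 0): p10, (1, 1): 1 - p10}
    return -sum(n * math.log(max(probs[t], 1e-12)) for t, n in counts.items())

def pso(counts, n_particles=20, iters=60):
    pos = [[random.random(), random.random()] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: neg_log_lik(p, counts))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                # inertia + cognitive + social velocity update
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 1e-6), 1 - 1e-6)
            if neg_log_lik(pos[i], counts) < neg_log_lik(pbest[i], counts):
                pbest[i] = pos[i][:]
                if neg_log_lik(pbest[i], counts) < neg_log_lik(gbest, counts):
                    gbest = pbest[i][:]
    return gbest

# Simulate a channel with true p01 = 0.2, p10 = 0.4, then recover them.
state, seq = 0, []
for _ in range(3000):
    seq.append(state)
    flip = random.random()
    state = (1 if flip < 0.2 else 0) if state == 0 else (0 if flip < 0.4 else 1)
p01, p10 = pso(count_transitions(seq))
```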
Lo, Ching F.
1999-01-01
The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of precision intervals.
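The interval computation can be illustrated for an ordinary least-squares response surface. This is not the paper's neural-network integration, only the standard confidence/prediction-interval algebra it builds on, with an approximate t-value of 2 and invented data:

```python
import numpy as np

# Sketch: confidence and prediction intervals at a query point x0 for a
# quadratic least-squares response surface (t-value approximated as 2).

def fit_with_intervals(x, y, x0, t=2.0):
    X = np.column_stack([np.ones_like(x), x, x ** 2])   # quadratic design matrix
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(x) - X.shape[1])          # residual variance
    X0 = np.array([1.0, x0, x0 ** 2])
    h = X0 @ np.linalg.inv(X.T @ X) @ X0                # leverage at x0
    yhat = X0 @ beta
    conf = t * np.sqrt(s2 * h)           # half-width of CI for the mean response
    pred = t * np.sqrt(s2 * (1 + h))     # half-width of PI for a new observation
    return yhat, conf, pred

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 1.0 + 2.0 * x + 0.5 * x ** 2 + rng.normal(0.0, 0.1, 30)   # invented data
yhat, conf, pred = fit_with_intervals(x, y, 0.5)
```

The prediction interval is always the wider of the two, since it adds the noise variance of a single new observation to the uncertainty of the fitted mean.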
National estimates of Australian gambling prevalence: findings from a dual‐frame omnibus survey
Youssef, G. J.; Jackson, A. C.; Pennay, D. W.; Francis, K. L.; Pennay, A.; Lubman, D. I.
2016-01-01
Abstract Background, aims and design The increase in mobile telephone‐only households may be a source of bias for traditional landline gambling prevalence surveys. Aims were to: (1) identify Australian gambling participation and problem gambling prevalence using a dual‐frame (50% landline and 50% mobile telephone) computer‐assisted telephone interviewing methodology; (2) explore the predictors of sample frame and telephone status; and (3) explore the degree to which sample frame and telephone status moderate the relationships between respondent characteristics and problem gambling. Setting and participants A total of 2000 adult respondents residing in Australia were interviewed from March to April 2013. Measurements Participation in multiple gambling activities and Problem Gambling Severity Index (PGSI). Findings Estimates were: gambling participation [63.9%, 95% confidence interval (CI) = 61.4–66.3], problem gambling (0.4%, 95% CI = 0.2–0.8), moderate‐risk gambling (1.9%, 95% CI = 1.3–2.6) and low‐risk gambling (3.0%, 95% CI = 2.2–4.0). Relative to the landline frame, the mobile frame was more likely to gamble on horse/greyhound races [odds ratio (OR) = 1.4], casino table games (OR = 5.0), sporting events (OR = 2.2), private games (OR = 1.9) and the internet (OR = 6.5); less likely to gamble on lotteries (OR = 0.6); and more likely to gamble on five or more activities (OR = 2.4), display problem gambling (OR = 6.4) and endorse PGSI items (OR = 2.4‐6.1). Only casino table gambling (OR = 2.9) and internet gambling (OR = 3.5) independently predicted mobile frame membership. Telephone status (landline frame versus mobile dual users and mobile‐only users) displayed similar findings. Finally, sample frame and/or telephone status moderated the relationship between gender, relationship status, health and problem gambling (OR = 2.9–7.6). Conclusion Given expected future increases in the
Barzaghi, Riccardo; Carrion, Daniela; Pepe, Massimiliano; Prezioso, Giuseppina
2016-01-01
Recent studies on the influence of the anomalous gravity field in GNSS/INS applications have shown that neglecting the impact of the deflection of vertical in aerial surveys induces horizontal and vertical errors in the measurement of an object that is part of the observed scene; these errors can vary from a few tens of centimetres to over one meter. The works reported in the literature refer to vertical deflection values based on global geopotential model estimates. In this paper we compared this approach with the one based on local gravity data and collocation methods. In particular, denoted by ξ and η, the two mutually-perpendicular components of the deflection of the vertical vector (in the north and east directions, respectively), their values were computed by collocation in the framework of the Remove-Compute-Restore technique, applied to the gravity database used for estimating the ITALGEO05 geoid. Following this approach, these values have been computed at different altitudes that are relevant in aerial surveys. The (ξ, η) values were then also estimated using the high degree EGM2008 global geopotential model and compared with those obtained in the previous computation. The analysis of the differences between the two estimates has shown that the (ξ, η) global geopotential model estimate can be reliably used in aerial navigation applications that require the use of sensors connected to a GNSS/INS system only above a given height (e.g., 3000 m in this paper) that must be defined by simulations. PMID:27472333
Directory of Open Access Journals (Sweden)
Riccardo Barzaghi
2016-07-01
Full Text Available Recent studies on the influence of the anomalous gravity field in GNSS/INS applications have shown that neglecting the impact of the deflection of vertical in aerial surveys induces horizontal and vertical errors in the measurement of an object that is part of the observed scene; these errors can vary from a few tens of centimetres to over one meter. The works reported in the literature refer to vertical deflection values based on global geopotential model estimates. In this paper we compared this approach with the one based on local gravity data and collocation methods. In particular, denoted by ξ and η, the two mutually-perpendicular components of the deflection of the vertical vector (in the north and east directions, respectively), their values were computed by collocation in the framework of the Remove-Compute-Restore technique, applied to the gravity database used for estimating the ITALGEO05 geoid. Following this approach, these values have been computed at different altitudes that are relevant in aerial surveys. The (ξ, η) values were then also estimated using the high degree EGM2008 global geopotential model and compared with those obtained in the previous computation. The analysis of the differences between the two estimates has shown that the (ξ, η) global geopotential model estimate can be reliably used in aerial navigation applications that require the use of sensors connected to a GNSS/INS system only above a given height (e.g., 3000 m in this paper) that must be defined by simulations.
Evaluation of post-mortem estimated dental age versus real age: a retrospective 21-year survey
DEFF Research Database (Denmark)
Reppien, Kirsa; Sejrsen, Birgitte; Lynnerup, Niels
2006-01-01
The aim of the study was to evaluate the reliability of methods used for forensic dental age estimation. We analysed all cases over the last 21 years (1984-2004) of unidentified bodies that were examined for identification purposes (including age assessment), and of which secure identification was subsequently achieved. In total, the study included 51 cases and 7 different methods had been used for dental age estimation, with the Bang/Ramm and the Gustafson/Johanson methods being the most frequently applied. The age estimates had usually been recorded as 10-year intervals. Factual ages at death were … the estimated age, and in six cases by more than 6 years. The average difference between factual age at death and estimated age was 4.5 years. The four subadults in the material were all correctly estimated within an age range of +/-3 years. Our study showed that forensic odontological age estimates…
DEFF Research Database (Denmark)
Jensen, Jesper; Tan, Zheng-Hua
2014-01-01
the logarithmic which is usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non-linearity rather than a logarithmic one is demonstrated.
DEFF Research Database (Denmark)
Jin, Shuanggen; Feng, Guiping; Andersen, Ole Baltazar
2014-01-01
and geostrophic current estimates from satellite gravimetry and altimetry are investigated and evaluated in China's marginal seas. The cumulative error in MDT from GOCE is reduced from 22.75 to 9.89 cm when compared to the Gravity Recovery and Climate Experiment (GRACE) gravity field model ITG-Grace2010 results...
McDonald, Lyman L.; Garner, Gerald W.; Garner, Gerald W.; Amstrup, Steven C.; Laake, Jeffrey L.; Manly, Bryan F.J.; McDonald, Lyman L.; Robertson, Donna G.
1999-01-01
The U.S. Marine Mammal Protection Act (MMPA) and International Agreement on the Conservation of Polar Bears mandate that boundaries and sizes of polar bear (Ursus maritimus) populations be known so they can be managed at optimum sustainable levels. However, data to estimate polar bear numbers for the Chukchi/Bering Sea and Beaufort Sea populations in Alaska are limited. We evaluated aerial line transect methodology for assessing the size of these Alaskan polar bear populations during pilot studies in spring 1987 and summer 1994. In April and May 1987 we flew 12,239 km of transect lines in the northern Bering, Chukchi, and western Beaufort seas. In June 1994 we flew 6,244 km of transect lines in a primary survey unit using a helicopter, and 5,701 km of transect lines in a secondary survey unit using a fixed-wing aircraft in the Beaufort Sea. We examined visibility bias in aerial transect surveys, double counts by independent observers, single-season mark-resight methods, the suitability of using polar bear sign to stratify the study area, and adaptive sampling methods. Fifteen polar bear groups were observed during the 1987 study. Probability of detecting bears decreased with increasing perpendicular distance from the transect line, and probability of detecting polar bear groups likely increased with increasing group size. We estimated population density in high density areas to be 446 km²/bear. In 1994, 15 polar bear groups were observed by independent front and rear seat observers on transect lines in the primary survey unit. Density estimates ranged from 284 km²/bear to 197 km²/bear depending on the model selected. Low polar bear numbers scattered over large areas of polar ice in 1987 indicated that spring is a poor time to conduct aerial surveys. Based on the 1994 survey we determined that ship-based helicopter or land-based fixed-wing aerial surveys conducted at the ice-edge in late summer-early fall may produce robust density estimates for polar bear
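The density figures above come from line-transect (distance sampling) estimators. Below is a hedged sketch assuming a half-normal detection function, with invented perpendicular sighting distances and an effort of 6,244 km of transect (as flown in the 1994 primary survey unit); it is not the model-selection procedure the authors used:

```python
import math

# Sketch: line-transect density estimation with a half-normal detection
# function g(x) = exp(-x^2 / (2 sigma^2)). sigma is estimated by maximum
# likelihood from perpendicular distances; the effective strip half-width
# is sigma * sqrt(pi/2). The distances below are invented.

def density(distances_km, effort_km):
    n = len(distances_km)
    sigma2 = sum(x * x for x in distances_km) / n   # half-normal MLE of sigma^2
    esw = math.sqrt(sigma2 * math.pi / 2)           # effective strip half-width
    return n / (2 * effort_km * esw)                # groups per km^2

dists = [0.1, 0.3, 0.05, 0.6, 0.2, 0.15, 0.4, 0.25, 0.1, 0.3,
         0.2, 0.5, 0.05, 0.35, 0.45]                # 15 sightings, km
d = density(dists, effort_km=6244)
print(f"{1/d:.0f} km^2 per group")
```

Reporting the reciprocal (area per group) matches the km²/bear convention used in the abstract.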
Energy Technology Data Exchange (ETDEWEB)
Leonhardt, J. (Bergbau-Forschung GmbH - Forschungsinstitut des Steinkohlenbergbauvereins, Essen (Germany, F.R.). Hauptabteilung Markscheidewesen und Gebirgsschlagverhuetung)
1989-05-01
The author reports on a development project aimed at removing the factor of human failure in the detection of hazards as far as possible by making the detection process itself more objective. This concerns, in particular, those hazards which cannot be detected immediately in the pit; for their detection, mine surveying documents and mining charts need to be consulted. The project addressed hazards grouped into those resulting from subterranean water, from fire areas, and from zones of increased pressure; for each group, models were developed, EDP programmes were produced and tested, and hazard detection criteria were determined. The methodology for data acquisition from mine surveying documents was defined and tested under practical conditions. (MOS).
Survey of electric utility demand for western coal. [1972-6 actual; 1976-1985 estimated
Energy Technology Data Exchange (ETDEWEB)
Asbury, J.G.; Kim, H.R.; Kouvalis, A.
1977-01-01
This report presents the results of a survey of electric utility demand for western coal. The sources of survey information are: (1) Federal Power Commission Form 423 data on utility coal purchases covering the period July 1972 through June 1976 and (2) direct survey data on utility coal-purchase intentions for power plants to be constructed by 1985. Price and quantity data for western coal consumed in existing plants have been assembled and presented to illustrate price and market-share trends in individual consuming regions over recent years. Coal source, quality, and quantity data are presented for existing and planned generating plants.
Directory of Open Access Journals (Sweden)
Noelia Hernández
2017-01-01
Full Text Available Although much research has taken place in WiFi indoor localization systems, their accuracy can still be improved. When designing this kind of system, fingerprint-based methods are a common choice. The problem with fingerprint-based methods comes with the need of site surveying the environment, which is effort consuming. In this work, we propose an approach, based on support vector regression, to estimate the received signal strength at non-site-surveyed positions of the environment. Experiments, performed in a real environment, show that the proposed method could be used to improve the resolution of fingerprint-based indoor WiFi localization systems without increasing the site survey effort.
Hernández, Noelia; Ocaña, Manuel; Alonso, Jose M; Kim, Euntai
2017-01-13
Although much research has taken place in WiFi indoor localization systems, their accuracy can still be improved. When designing this kind of system, fingerprint-based methods are a common choice. The problem with fingerprint-based methods comes with the need of site surveying the environment, which is effort consuming. In this work, we propose an approach, based on support vector regression, to estimate the received signal strength at non-site-surveyed positions of the environment. Experiments, performed in a real environment, show that the proposed method could be used to improve the resolution of fingerprint-based indoor WiFi localization systems without increasing the site survey effort.
Hernández, Noelia; Ocaña, Manuel; Alonso, Jose M.; Kim, Euntai
2017-01-01
Although much research has taken place in WiFi indoor localization systems, their accuracy can still be improved. When designing this kind of system, fingerprint-based methods are a common choice. The problem with fingerprint-based methods comes with the need of site surveying the environment, which is effort consuming. In this work, we propose an approach, based on support vector regression, to estimate the received signal strength at non-site-surveyed positions of the environment. Experiments, performed in a real environment, show that the proposed method could be used to improve the resolution of fingerprint-based indoor WiFi localization systems without increasing the site survey effort. PMID:28098773
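The regression step described above can be sketched without the original toolchain. The authors use support vector regression; the stand-in below is a Gaussian-kernel ridge regression (a closely related kernel method) so that the sketch needs only NumPy, and all fingerprint coordinates and RSS values are invented:

```python
import numpy as np

# Sketch: predicting received signal strength (RSS) at positions that were
# not site-surveyed, from a handful of surveyed fingerprints. Gaussian-kernel
# ridge regression stands in for the paper's support vector regression.

def rbf(a, b, gamma=0.5):
    # pairwise squared distances -> Gaussian kernel matrix
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(train_xy, train_rss, query_xy, lam=1e-3):
    K = rbf(train_xy, train_xy)
    alpha = np.linalg.solve(K + lam * np.eye(len(train_xy)), train_rss)
    return rbf(query_xy, train_xy) @ alpha

xy = np.array([[0, 0], [0, 2], [2, 0], [2, 2],
               [1, 0], [0, 1], [2, 1], [1, 2]], float)    # surveyed positions (m)
rss = np.array([-40, -50, -55, -65, -47, -45, -60, -58], float)  # dBm fingerprints
pred = fit_predict(xy, rss, np.array([[1.0, 1.0]]))       # unsurveyed centre point
```

Densifying the fingerprint grid this way is what lets the localization system gain resolution without extra site-survey effort.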
Lakhan, Ram; Ekúndayò, Olúgbémiga T
2015-01-01
The Indian population suffers from a significant burden of mental illness. Prevalence rates and their association with age and other demographic indicators are needed for planning purposes. This study attempted to calculate the age-wise prevalence of mental illness for rural and urban settings, and its association with age. Data published in the National Sample Survey Organization (2002) report on disability are used for the analysis. Spearman correlation for strength of association, a z-test for difference in prevalence, and regression statistics for predicting the prevalence rate of mental illness are used. The overall population has a prevalence of mental illness of 14.9/1000. It is higher in rural settings (17.1/1000) than urban (12.7/1000) (P < 0.001). A strong correlation with age is found in both rural (ϱ = 0.910, P = 0.001) and urban (ϱ = 0.940, P = 0.001) settings. Results of this study confirm other epidemiological research in India. Large-population epidemiological studies are recommended.
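The z-test mentioned above compares two prevalences. A sketch using a pooled two-proportion z statistic; the sample sizes below are invented for illustration (the NSSO survey's are different), so only the shape of the computation is meaningful:

```python
import math

# Sketch: pooled two-proportion z-test for a difference between two
# prevalences, e.g. rural (17.1/1000) vs urban (12.7/1000). Counts invented.

def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled prevalence
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

z = two_prop_z(1710, 100_000, 1270, 100_000)   # 17.1 vs 12.7 per 1000
```

A z value this large corresponds to P < 0.001, consistent with the significance reported in the abstract.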
Spratt, Jerome D.
1986-01-01
The 1985-86 spawning biomass estimate of Pacific herring, Clupea harengus pallasi, in San Francisco Bay is 49,000 tons. The relatively small population increases during 1984 and 1985 indicate that the population is rebuilding slowly from the 1983-84 season when only 40,000 tons of herring spawned. Spawning-ground surveys in Tomales Bay were inconclusive. Herring normally spawn in eelgrass, Zostera marina, beds; this season herring spawned unexpectedly in deeper water, disrupting our...
Directory of Open Access Journals (Sweden)
Singh Baldev
2011-08-01
Full Text Available Abstract Background Dog population management is required in many locations to minimise the risks dog populations may pose to human health and to alleviate animal welfare problems. In many cities in India, Animal Birth Control (ABC) projects have been adopted to provide population management. Measuring the impact of such projects requires assessment of dog population size among other relevant indicators. Methods This paper describes a simple mark-resight survey methodology that can be used with little investment of resources to monitor the number of roaming dogs in areas that are currently subject to ABC, provided the numbers, dates and locations of the dogs released following the intervention are reliably recorded. We illustrate the method by estimating roaming dog numbers in three cities in Rajasthan, India: Jaipur, Jodhpur and Jaisalmer. In each city the dog populations were either currently subject to ABC or had been very recently subject to such an intervention and hence a known number of dogs had been permanently marked with an ear-notch to identify them as having been operated. We conducted street surveys to record the current percentage of dogs in each city that are ear-notched and used an estimate for the annual survival of ear-notched dogs to calculate the current size of each marked population. Results Dividing the size of the marked population by the fraction of the dogs that are ear-notched we estimated the number of roaming dogs to be 36,580 in Jaipur, 24,853 in Jodhpur and 2,962 in Jaisalmer. Conclusions The mark-resight survey methodology described here is a simple way of providing population estimates for cities with current or recent ABC programmes that include visible marking of dogs. Repeating such surveys on a regular basis will further allow for evaluation of ABC programme impact on population size and reproduction in the remaining unsterilised dog population.
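The estimator reduces to simple arithmetic: the number of marked (ear-notched) dogs still alive divided by the notched fraction observed on the street. A sketch with invented release counts, survival rate, and resight fraction, not the study's actual inputs:

```python
# Sketch: mark-resight roaming-dog estimate. Marked population still alive
# divided by the street-survey notched fraction. All inputs invented.

def roaming_dogs(released, annual_survival, years_since, notched_fraction):
    marked_alive = released * annual_survival ** years_since   # surviving marked dogs
    return marked_alive / notched_fraction                     # scale up to total

estimate = roaming_dogs(released=12000, annual_survival=0.7,
                        years_since=2, notched_fraction=0.16)
```

Because the result scales inversely with the notched fraction, accurate street-survey counts matter as much as the release records themselves.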
You, Sukkyung; Furlong, Michael; Felix, Erika; O'Malley, Meagan
2015-01-01
Social-emotional health influences youth developmental trajectories and there is growing interest among educators to measure the social-emotional health of the students they serve. This study replicated the psychometric characteristics of the Social Emotional Health Survey (SEHS) with a diverse sample of high school students (Grades 9-12; N =…
Alberotanza, L.; Lechi, G. M.
1977-01-01
Surveys employing a two channel Daedalus infrared scanner and multispectral photography were performed. The spring waning tide, the velocity of the water mass, and the types of suspended matter were among the topics studied. Temperature, salinity, sediment transport, and ebb stream velocity were recorded. The bottom topography was correlated with the dynamic characteristics of the sea surface.
Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.
1981-01-01
The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% was suitable grazing land.
Directory of Open Access Journals (Sweden)
В. В. Зубарев
1999-05-01
Features of modern microelectronic information-display devices that are most effective for use in airborne equipment are considered. A comparative analysis of their properties and characteristics is carried out, chiefly information capacity, response speed, reliability, readiness coefficient, ergonomics and efficiency. A wide database of various indicator design versions is presented.
Kawakita, Hideyo; Shinnaka, Yoshiharu; Jehin, Emmanuel; Decock, Alice; Hutsemekers, Damien; Manfroid, Jean
2016-10-01
Since molecules having identical protons can be classified into nuclear-spin isomers (e.g., ortho-H2O and para-H2O for water) and their inter-conversions by radiative and non-destructive collisional processes are believed to be very slow, the ortho-to-para abundance ratios (OPRs) of cometary volatiles such as H2O, NH3 and CH4 in the coma have been considered primordial characteristics of cometary molecules [1]. These ratios are usually interpreted as nuclear-spin temperatures, although the real meaning of OPRs is strongly debated. Recent progress in laboratory studies of nuclear-spin conversion in the gas and solid phases [2,3] revealed short-time nuclear-spin conversion for water, and we have to reconsider the interpretation of observed OPRs of cometary volatiles. We have already surveyed OPRs of NH2 in more than 20 comets using large-aperture telescopes with high-resolution spectrographs (UVES/VLT, HDS/Subaru, etc.) in the optical wavelength region [4]. The observed OPRs of ammonia, estimated from the OPRs of NH2, cluster around ~1.1 (cf. 1.0 as a high-temperature limit), indicative of ~30 K nuclear-spin temperatures. We present our latest results for OPRs of cometary NH2 and discuss the real meaning of OPRs of cometary ammonia, in relation to OPRs of water in the cometary coma. Chemical processes in the inner coma may play an important role in achieving non-equilibrated OPRs of cometary volatiles in the coma. This work was financially supported by MEXT Supported Program for the Strategic Research Foundation at Private Universities, 2014–2018 (No. S1411028) (HK) and by Grant-in-Aid for JSPS Fellows, 15J10864 (YS). References: [1] Mumma & Charnley, 2011, Annu. Rev. Astro. Astrophys. 49, 471. [2] Hama & Watanabe, 2013, Chem. Rev. 113, 8783. [3] Hama et al., 2008, Science 351, 6268. [4] Shinnaka et al., 2011, ApJ 729, 81.
Energy Technology Data Exchange (ETDEWEB)
Metzger, Brian D. [Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08542 (United States); Kaplan, David L. [Physics Department, University of Wisconsin-Milwaukee, Milwaukee, WI 53211 (United States); Berger, Edo, E-mail: bmetzger@astro.princeton.edu, E-mail: kaplan@uwm.edu, E-mail: eberger@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)
2013-02-20
Identifying the electromagnetic counterparts of gravitational wave (GW) sources detected by upcoming networks of advanced ground-based interferometers will be challenging, due in part to the large number of unrelated astrophysical transients within the ~10-100 deg^2 sky localizations. A potential way to greatly reduce the number of such false positives is to limit detailed follow-up to only those candidates near galaxies within the GW sensitivity range of ~200 Mpc for binary neutron star mergers. Such a strategy is currently hindered by the fact that galaxy catalogs are grossly incomplete within this volume. Here, we compare two methods for completing the local galaxy catalog: (1) a narrowband Hα imaging survey and (2) an H I emission line radio survey. Using Hα fluxes, stellar masses (M*), and star formation rates (SFRs) from galaxies in the Sloan Digital Sky Survey (SDSS), combined with H I data from the GALEX Arecibo SDSS Survey and the Herschel Reference Survey, we estimate that an Hα survey with a luminosity sensitivity of L_Hα = 10^40 erg s^-1 at 200 Mpc could achieve a completeness of f_SFR^Hα ≈ 75% with respect to total SFR, but only f_M*^Hα ≈ 33% with respect to M* (due to lack of sensitivity to early-type galaxies). These numbers are significantly lower than those achieved by an idealized spectroscopic survey due to the loss of Hα flux resulting from resolving out nearby galaxies and the inability to correct for the underlying stellar continuum. An H I survey with sensitivity similar to the proposed WALLABY survey on ASKAP could achieve f_SFR^HI ≈ 80% and f_M*^HI ≈ 50%, somewhat higher than that of the Hα survey. Finally, both Hα and H I surveys should achieve ≳50% completeness with respect to the host galaxies of
Directory of Open Access Journals (Sweden)
Truin Gert-Jan
2011-10-01
Background: Considering the changes in dental healthcare, such as the increasing assertiveness of patients, the introduction of new dental professionals, and regulated competition, it is becoming more important that general dental practitioners (GDPs) take patients' views into account. The aim of the study was to compare patients' views on organizational aspects of general dental practices with those of GDPs and with GDPs' estimation of patients' views. Methods: In a survey study, patients and GDPs provided their views on organizational aspects of a general dental practice. In a second, separate survey, GDPs were invited to estimate patients' views on 22 organizational aspects of a general dental practice. Results: For 4 of the 22 aspects, patients and GDPs had the same views, and GDPs estimated patients' views reasonably well: 'Dutch-speaking GDP', 'guarantee on treatment', 'treatment by the same GDP', and 'reminder of routine oral examination'. For 2 aspects ('quality assessment' and 'accessibility for disabled patients'), patients and GDPs had the same standards, although the GDPs underestimated the patients' standards. Patients had higher standards than GDPs for 7 aspects and lower standards than GDPs for 8 aspects. Conclusion: On most aspects GDPs and patients have different views, except for socially desirable aspects. Given the increasing assertiveness of patients, it is startling that GDPs correctly estimated only half of the patients' views. The findings of this study can assist GDPs in adapting their organizational services to better meet the preferences of their patients and to improve communication with patients.
Effect of co-operative fuzzy c-means clustering on estimates of three-parameter AVA inversion
Indian Academy of Sciences (India)
Rajesh R Nair; Suresh Ch Kandpal
2010-04-01
We determine the degree of variation of model fitness, to a true model based on amplitude variation with angle (AVA) methodology, for a synthetic gas hydrate model, using co-operative fuzzy c-means clustering constrained to a rock physics model. When a homogeneous starting model is used, with only a traditional least squares optimization scheme for inversion, the variance of the parameters is found to be comparatively high. In this co-operative methodology, the output from the least squares inversion is fed as an input to the fuzzy scheme. Tests with co-operative inversion using fuzzy c-means with a damped least squares technique and constraints derived from an empirical relationship based on a rock properties model show improved stability, model fitness and variance for all three parameters in comparison with the standard inversion alone.
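For readers unfamiliar with the clustering step, a minimal plain fuzzy c-means sketch follows (1-D data, deterministic initialisation). It omits the paper's co-operative coupling to damped least squares and the rock-physics constraints; the data are illustrative:

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=100):
    """Plain 1-D fuzzy c-means: alternately update memberships u_ik and
    cluster centres. Initialise centres evenly across the data range."""
    lo, hi = min(points), max(points)
    centres = [lo + (hi - lo) * i / (c - 1) for i in range(c)]
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in points:
            d = [abs(x - v) or 1e-12 for v in centres]  # avoid zero distance
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(c))
                      for i in range(c)])
        # Centre update: mean of points weighted by u_ik^m
        centres = [sum(u[k][i] ** m * points[k] for k in range(len(points))) /
                   sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(c)]
    return sorted(centres)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print(fuzzy_c_means(data))  # centres near the two cluster means
```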
Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach
Chadha, V. K.; Laxminarayan, R.; Arinaminpathy, N.
2017-01-01
SUMMARY BACKGROUND: There is an urgent need for improved estimations of the burden of tuberculosis (TB). OBJECTIVE: To develop a new quantitative method based on mathematical modelling, and to demonstrate its application to TB in India. DESIGN: We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and prevalence of smear-positive TB. We first compared model estimates for annual infections per smear-positive TB case using previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. RESULTS: Study model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8–156.3). Results show differences in urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. CONCLUSIONS: Simple models of TB transmission, in conjunction with necessary data, can offer approaches to burden estimation that complement those currently being used. PMID:28284250
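The quantities in the abstract above can be connected by simple steady-state identities. The sketch below is one hedged reading, not the paper's full transmission model, and all inputs (ARTI, prevalence, mean infectious duration) are hypothetical:

```python
def annual_infections_per_case(arti, prevalence_per_100k):
    """Annual infections generated per prevalent smear-positive case,
    from the annual risk of tuberculous infection (ARTI, a fraction)
    and smear-positive prevalence per 100,000. Steady-state identity."""
    infections_per_100k = arti * 100_000
    return infections_per_100k / prevalence_per_100k

def incidence_from_prevalence(prevalence_per_100k, mean_duration_years):
    """At steady state: incidence = prevalence / mean disease duration."""
    return prevalence_per_100k / mean_duration_years

# Hypothetical inputs: ARTI 1.5%/yr, prevalence 250/100k, duration 2 yr.
print(annual_infections_per_case(0.015, 250))  # 6.0 infections/case/yr
print(incidence_from_prevalence(250, 2.0))     # 125.0 per 100k/yr
```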
DEFF Research Database (Denmark)
Berg, Casper Willestofte; Nielsen, Anders; Kristensen, Kasper
2014-01-01
Indices of abundance from fishery-independent trawl surveys constitute an important source of information for many fish stock assessments. Indices are often calculated using area stratified sample means on age-disaggregated data, and finally treated in stock assessment models as independent … observations. We evaluate a series of alternative methods for calculating indices of abundance from trawl survey data (delta-lognormal, delta-gamma, and Tweedie using Generalized Additive Models) as well as different error structures for these indices when used as input in an age-based stock assessment model … the different indices produced. The stratified mean method is found much more imprecise than the alternatives based on GAMs, which are found to be similar. Having time-varying index variances is found to be of minor importance, whereas the independence assumption is not only violated but has significant impact
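The "area stratified sample mean" baseline the authors compare against can be sketched as an area-weighted average of per-stratum mean catch rates. The stratum data below are illustrative:

```python
def stratified_mean_index(strata):
    """Area-weighted stratified mean catch rate:
    sum_h (A_h / A_total) * mean(catch rates in stratum h)."""
    total_area = sum(area for area, _ in strata)
    return sum(area / total_area * (sum(catches) / len(catches))
               for area, catches in strata)

# (area_km2, catch-per-haul observations) per stratum -- hypothetical data
strata = [(100.0, [2.0, 4.0]),   # stratum mean 3.0
          (300.0, [1.0, 3.0])]   # stratum mean 2.0
print(stratified_mean_index(strata))  # 0.25*3.0 + 0.75*2.0 = 2.25
```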
Batt, Angela L; Wathen, John B; Lazorchak, James M; Olsen, Anthony R; Kincaid, Thomas M
2017-02-23
U.S. EPA conducted a national statistical survey of fish tissue contamination at 540 river sites (representing 82 954 river km) in 2008-2009, and analyzed samples for 50 persistent organic pollutants (POPs), including 21 PCB congeners, 8 PBDE congeners, and 21 organochlorine pesticides. The survey results were used to provide national estimates of contamination for these POPs. PCBs were the most abundant, being measured in 93.5% of samples. Summed concentrations of the 21 PCB congeners had a national weighted mean of 32.7 μg/kg and a maximum concentration of 857 μg/kg, and exceeded the human health cancer screening value of 12 μg/kg in 48% of the national sampled population of river km, and in 70% of the urban sampled population. PBDEs (92.0%), chlordane (88.5%) and DDT (98.7%) were also detected frequently, although at lower concentrations. Results were examined by subpopulations of rivers, including urban or nonurban and three defined ecoregions. PCBs, PBDEs, and DDT occur at significantly higher concentrations in fish from urban rivers versus nonurban; however, the distribution varied more among the ecoregions. Wildlife screening values previously published for bird and mammalian species were converted from whole fish to fillet screening values, and used to estimate risk for wildlife through fish consumption.
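Design-weighted survey statistics of the kind reported above (a weighted mean concentration and the weighted fraction of the sampled population exceeding a screening value, with weights representing river km) can be sketched as follows; the data are illustrative, not the survey's:

```python
def weighted_stats(conc, weights, screening_value):
    """Design-weighted mean concentration and weighted exceedance fraction.
    Each weight is the number of river km the sampled site represents."""
    wtot = sum(weights)
    mean = sum(c * w for c, w in zip(conc, weights)) / wtot
    exceed = sum(w for c, w in zip(conc, weights)
                 if c > screening_value) / wtot
    return mean, exceed

conc = [5.0, 20.0, 40.0, 900.0]         # summed PCBs, ug/kg (hypothetical)
weights = [100.0, 300.0, 400.0, 200.0]  # river km represented per site
print(weighted_stats(conc, weights, 12.0))  # (202.5, 0.9)
```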
McCormick, J. L.; Whitney, D.; Schill, D. J.; Quist, Michael
2015-01-01
Accuracy of angler-reported data on steelhead, Oncorhynchus mykiss (Walbaum), harvest in Idaho, USA, was quantified by comparing data recorded on angler harvest permits to the numbers that the same group of anglers reported in an off-site survey. Anglers could respond to the off-site survey using mail or Internet; if they did not respond using these methods, they were called on the telephone. A majority of anglers responded through the mail, and the probability of responding by Internet decreased with increasing age of the respondent. The actual number of steelhead harvested did not appear to influence the response type. Anglers in the autumn 2012 survey overreported harvest by 24%, whereas anglers in the spring 2013 survey under-reported steelhead harvest by 16%. The direction of reporting bias may have been a function of actual harvest, where anglers harvested on average 2.6 times more fish during the spring fishery than the autumn. Reporting bias that is a function of actual harvest can have substantial management and conservation implications because the fishery will be perceived to be performing better at lower harvest rates and worse when harvest rates are higher. Thus, these findings warrant consideration when designing surveys and evaluating management actions.
Metzger, Brian D; Berger, Edo
2012-01-01
Identifying the electromagnetic counterparts of gravitational wave (GW) sources detected by upcoming networks of advanced ground-based interferometers will be challenging due in part to the large number of unrelated astrophysical transients within the ~10-100 square degree sky localizations. A potential way to greatly reduce the number of such false positives is to limit detailed follow-up to only those candidates near galaxies within the GW sensitivity range of ~200 Mpc for binary neutron star mergers. Such a strategy is currently hindered by the fact that galaxy catalogs are grossly incomplete within this volume. Here we compare two methods for completing the local galaxy catalog: (1) a narrow-band H-alpha imaging survey; and (2) an HI emission line radio survey. Using H-alpha fluxes, stellar masses (M_star), and star formation rates (SFR) from galaxies in the Sloan Digital Sky Survey (SDSS), combined with HI data from the GALEX Arecibo SDSS Survey and the Herschel Reference Survey, we estimate that a H-alp...
On-board capacity estimation of lithium iron phosphate batteries by means of half-cell curves
Marongiu, Andrea; Nlandi, Nsombo; Rong, Yao; Sauer, Dirk Uwe
2016-08-01
This paper presents a novel methodology for the on-board estimation of the actual battery capacity of lithium iron phosphate batteries. The approach is based on the detection of the actual degradation mechanisms by collecting plateau information. The tracked degradation modes are employed to change the characteristics of the fresh electrode voltage curves (mutual position and dimension), to reconstruct the full voltage curve and therefore to obtain the total capacity. The work presents a model which describes the relation between the single degradation modes and the electrode voltage curve characteristics. The model is then implemented in a novel battery management system structure for aging tracking and on-board capacity estimation. The working principle of the new algorithm is validated with data obtained from lithium iron phosphate cells aged under different operating conditions. The results show that both during charge and discharge the algorithm is able to correctly track the actual battery capacity with an error of approximately 1%. Finally, the use of the obtained results to recalibrate a hysteresis model in the battery management system is presented, demonstrating the benefit of the tracked aging information for additional purposes.
Roozen, N. B.; Leclère, Q.; Ege, K.; Gerges, Y.
2017-03-01
This paper presents a new wave fitting approach to estimate the frequency-dependent material properties of thin isotropic plate structures from an experimentally obtained vibrational field, exciting the plate at a single point. The method projects the measurement data onto an analytical image-source model, in which Hankel functions describe the wave fields emanating from the point of excitation, including the wave fields reflected from the edges of the finite plate. By minimizing the error between the projected field and the measured field, varying the complex wavenumber and the source strengths of the image sources, an optimum fit is sought. The source strengths of the image sources thus need not be determined theoretically but are instead estimated from the fit to the experimental data (avoiding the difficulty of theoretically assessing the reflection coefficient of the plate edges). The approach uses a complex wavenumber fit, enabling determination of the dynamic stiffness of the plate structure and its damping properties as a function of frequency. The method is especially suited to plates with a sufficient amount of damping, excited at high frequencies.
A new method for improved hub height mean wind speed estimates using short-term hub height data
Energy Technology Data Exchange (ETDEWEB)
Lackner, Matthew A.; Rogers, Anthony L.; Manwell, James F.; McGowan, Jon G. [Wind Energy Center, Department of Mechanical and Industrial Engineering, University of Massachusetts Amherst, 160 Governors Dr., Amherst, MA 01003 (United States)
2010-10-15
The estimation of the wind resource at the hub height of a wind turbine is one of the primary goals of site assessment. Because the measurement heights of meteorological towers (met towers) are typically significantly lower than turbine hub heights, a shear model is generally needed to extrapolate the measured wind resource at the lower measurement height to the hub height of the turbine. This paper presents methods for improving the estimate of the hub height wind resource from met tower data through the use of ground-based remote sensing devices. The methods leverage the two major advantages of these devices: their portability and their ability to measure at the wind turbine hub height. Specifically, the methods rely on augmenting the one year of met tower measurements with short-term measurements from a ground-based remote sensing device. The results indicate that the methods presented are capable of producing substantial improvements in the accuracy and uncertainty of shear extrapolation predictions. The results suggest that the typical site assessment process can be reevaluated, and alternative strategies that utilize ground-based remote sensing devices can be incorporated to significantly improve the process. (author)
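The shear extrapolation step the paper seeks to improve on is commonly a power-law fit between two met-tower measurement heights, then applied at hub height. A minimal sketch with hypothetical measurements:

```python
import math

def power_law_alpha(v1, z1, v2, z2):
    """Shear exponent from two heights: alpha = ln(v2/v1) / ln(z2/z1)."""
    return math.log(v2 / v1) / math.log(z2 / z1)

def extrapolate(v_ref, z_ref, z_hub, alpha):
    """Power-law extrapolation of mean wind speed to hub height."""
    return v_ref * (z_hub / z_ref) ** alpha

# Hypothetical met-tower data: 6.0 m/s at 40 m, 6.5 m/s at 60 m; hub 100 m.
a = power_law_alpha(6.0, 40.0, 6.5, 60.0)            # alpha ~ 0.197
print(round(extrapolate(6.5, 60.0, 100.0, a), 2))    # ~7.19 m/s at hub
```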
Bettens, Frédéric; Grenez, Francis; Schoentgen, Jean
2005-01-01
The article presents an analysis of vocal dysperiodicities in connected speech produced by dysphonic speakers. The processing is based on a comparison of the present speech fragment with future and past fragments. The size of the dysperiodicity estimate is zero for periodic speech signals, and a small increase in vocal dysperiodicity is guaranteed to produce a small increase in the estimate. No spurious noise boosting occurs owing to cycle insertion and omission errors, or phonetic segment boundary artifacts. Additional objectives of the study were to investigate whether deviations from periodicity are larger or more commonplace in connected speech than in sustained vowels, and whether sentences that comprise frequent voice onsets and offsets are noisier than sentences that comprise few. The corpora contain sustained vowels as well as grammatically and phonetically matched sentences. An acoustic marker that correlates with the perceived degree of hoarseness summarizes the size of the dysperiodicities. The marker values for sustained vowels were highly correlated with those for connected speech, and the marker values for sentences that comprise few voiced/unvoiced transients were highly correlated with the marker values for sentences that comprise many.
Alonso, Jordi; Vilagut, Gemma; Chatterji, Somnath; Heeringa, Steven; Schoenbaum, Michael; Üstün, T. Bedirhan; Rojas-Farreras, Sonia; Angermeyer, Matthias; Bromet, Evelyn; Bruffaerts, Ronny; de Girolamo, Giovanni; Gureje, Oye; Haro, Josep Maria; Karam, Aimee N.; Kovess, Viviane; Levinson, Daphna; Liu, Zhaorui; Mora, Maria Elena Medina; Ormel, J.; Posada-Villa, Jose; Uda, Hidenori; Kessler, Ronald C.
2010-01-01
Background The methodology commonly used to estimate disease burden, featuring ratings of severity of individual conditions, has been criticized for ignoring comorbidity. A methodology that addresses this problem is proposed and illustrated here with data from the WHO World Mental Health Surveys. Although the analysis is based on self-reports about one’s own conditions in a community survey, the logic applies equally well to analysis of hypothetical vignettes describing comorbid condition profiles. Methods Face-to-face interviews in 13 countries (six developing, nine developed; n = 31,067; response rate = 69.6%) assessed 10 classes of chronic physical and 9 of mental conditions. A visual analog scale (VAS) was used to assess overall perceived health. Multiple regression analysis with interactions for comorbidity was used to estimate associations of conditions with VAS. Simulation was used to estimate condition-specific effects. Results The best-fitting model included condition main effects and interactions of types by numbers of conditions. Neurological conditions, insomnia, and major depression were rated most severe. Adjustment for comorbidity reduced condition-specific estimates with substantial between-condition variation (.24–.70 ratios of condition-specific estimates with and without adjustment for comorbidity). The societal-level burden rankings were quite different from the individual-level rankings, with the highest societal-level rankings associated with conditions having high prevalence rather than high individual-level severity. Conclusions Plausible estimates of disorder-specific effects on VAS can be obtained using methods that adjust for comorbidity. These adjustments substantially influence condition-specific ratings. PMID:20553636
Energy Technology Data Exchange (ETDEWEB)
Neves, Marcelo Azevedo; Andrade Junior, Rubens de [Universidade Federal, Rio de Janeiro, RJ (Brazil). Dept. de Eletrotecnica. Lab. de Aplicacoes de Supercondutores (LASUP); Costa, Giancarlo Cordeiro da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Metodos Computacionais em Engenharia; Pereira, Agnaldo Souza; Nicolsky, Roberto [Universidade Federal, Rio de Janeiro, RJ (Brazil). Inst. de Fisica
2002-09-01
This work presents a mean field estimation of J_C as a bulk characteristic of YBCO blocks. That average J_C allows a good fit of the finite-element-method simulation of the levitation forces to experimental results. The agreement is quite sufficient for the levitation requirements of device projects, at short gaps and with a zero-field-cooling process, within the Bean model. The physical characterization for this estimation was made by measuring the interaction force between the PM and one YBCO block in 1-D and mapping the trapped magnetic field in those blocks in 2-D. (author)
Ortland, David A.
2017-04-01
Satellites provide a global view of the structure in the fields that they measure. In the mesosphere and lower thermosphere, the dominant features in these fields at low zonal wave number are contained in the zonal mean, quasi-stationary planetary waves, and tide components. Due to the nature of the satellite sampling pattern, stationary, diurnal, and semidiurnal components are aliased and spectral methods are typically unable to separate the aliased waves over short time periods. This paper presents a data processing scheme that is able to recover the daily structure of these waves and the zonal mean state. The method is validated by using simulated data constructed from a mechanistic model, and then applied to Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) temperature measurements. The migrating diurnal tide extracted from SABER temperatures for 2009 has a seasonal variability with peak amplitude (20 K at 95 km) in February and March and minimum amplitude (less than 5 K at 95 km) in early June and early December. Higher frequency variability includes a change in vertical structure and amplitude during the major stratospheric warming in January. The migrating semidiurnal tide extracted from SABER has variability on a monthly time scale during January through March, minimum amplitude in April, and largest steady amplitudes from May through September. Modeling experiments were performed that show that much of the variability on seasonal time scales in the migrating tides is due to changes in the mean flow structure and the superposition of the tidal responses to water vapor heating in the troposphere and ozone heating in the stratosphere and lower mesosphere.
Estimation of illicit drug use in the main cities of Colombia by means of urban wastewater analysis
Energy Technology Data Exchange (ETDEWEB)
Bijlsma, Lubertus; Botero-Coy, Ana M. [Research Institute for Pesticides and Water (IUPA), University Jaume I, Castellón (Spain); Rincón, Rolando J. [Chemistry Department, Faculty of Sciences, University Antonio Nariño (Colombia); Peñuela, Gustavo A. [Grupo GDCON, Facultad de Ingeniería, Universidad de Antioquia, 70 # 52-21, Medellin (Colombia); Hernández, Félix, E-mail: felix.hernandez@uji.es [Research Institute for Pesticides and Water (IUPA), University Jaume I, Castellón (Spain)
2016-09-15
Wastewater-based epidemiology (WBE) relies on the principle that traces of compounds to which a population is exposed, or which it consumes, are excreted unchanged or as metabolites in urine and/or feces, and ultimately end up in the sewer network. Measuring target metabolic residues, i.e. biomarkers, in raw urban wastewater allows identifying the exposure to or use of substances of interest in a community. To date, the most popular application of WBE is the estimation of illicit drug use, and studies have been made mainly across Europe, which has allowed estimating and comparing drug use in many European cities. However, until now a comprehensive study applying WBE to the most frequently consumed illicit drugs had not been performed in South American countries. In this work, we applied this approach to samples from Colombia, selecting two of the most populated cities: Bogotá and Medellin. Several biomarkers were selected to estimate drug use of cocaine, cannabis, amphetamine, methamphetamine, MDMA (ecstasy), heroin and ketamine. Composite samples (24-h) were collected at the corresponding municipal wastewater treatment plants. Sample treatment was performed on location by applying solid-phase extraction (SPE). Before SPE, the samples were spiked with appropriate isotope-labelled internal standards. In parallel, samples (spiked with the analytes under study at two concentration levels) were also processed for quality control. Analysis of influent wastewater was made by liquid chromatography-tandem mass spectrometry with a triple quadrupole analyzer. Data shown in this paper reveal a high use of cocaine by the population of the selected Colombian cities, particularly in Medellin, while the use of other illicit drugs was low. The relevance of using quality control samples, particularly in collaborative studies such as those presented in this work, where research groups from different countries participate and where the samples had to be shipped overseas, is highlighted in this
Directory of Open Access Journals (Sweden)
Sabitha Gauni
2014-03-01
In the field of wireless communication, there is always a demand for reliability, improved range and speed. Many wireless networks such as OFDM, CDMA2000 and WCDMA provide a solution to this problem when incorporated with multiple-input multiple-output (MIMO) technology. Due to the complexity of its signal processing, MIMO is highly expensive in terms of area consumption. In this paper, a method of MIMO receiver design is proposed to reduce the area consumed by the processing elements involved in complex signal processing: a solution for area reduction in the MIMO maximum likelihood estimation (MLE) receiver using sorted QR decomposition (SQRD) and a unitary transformation method is analysed. It provides a unified approach, reduces ISI, and offers better performance at low cost. The receiver pre-processor architecture based on minimum mean square error (MMSE) is compared while using iterative SQRD and the unitary transformation method for vectoring. Unitary transformations are transformations of matrices which preserve the Hermitian nature of the matrix, and the multiplication and addition relationships between the operators. This helps to reduce the computational complexity significantly. The dynamic range of all variables is tightly bounded, and the algorithm is well suited to fixed-point arithmetic.
Energy Technology Data Exchange (ETDEWEB)
Fassinou, Wanignon Ferdinand; Koua, Kamenan Blaise; Toure, Siaka [Laboratoire d' Energie Solaire, UFR-SSMT, Universite de Cocody (Cote d' Ivoire), 22BP582 Abidjan 22 (Ivory Coast); Sako, Aboubakar; Fofana, Alhassane [Laboratoire de Physique de l' Atmosphere et de Mecanique des Fluides, UFR-SSMT, Universite de Cocody (Cote d' Ivoire), 22BP582 Abidjan 22 (Ivory Coast)
2010-12-15
High heating value (HHV) is an important property which characterises the energy content of a fuel, whether solid, liquid or gaseous. This is particularly important for vegetable oils and biodiesel fuels, which are expected to replace fossil oils. Estimating the HHV of vegetable oils and biodiesels from their fatty acid composition is the aim of this paper. The comparison between the HHVs predicted by the method and those obtained experimentally gives an average bias error of -0.84% and an average absolute error of 1.71%. These values show the utility, validity and applicability of the method to vegetable oils and their derivatives. (author)
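One hedged reading of a composition-based HHV estimate is a mass-fraction-weighted combination of per-fatty-acid heating values. The paper's actual correlation is not given in the abstract, and the per-acid HHV values below are illustrative placeholders, not its coefficients:

```python
def oil_hhv(composition, hhv_table):
    """Mass-fraction-weighted HHV estimate (MJ/kg) for a vegetable oil
    from its fatty acid profile. Illustrative sketch only."""
    return sum(frac * hhv_table[acid] for acid, frac in composition.items())

# Assumed per-acid HHVs (MJ/kg) and a hypothetical oil profile.
hhv_table = {"palmitic": 39.1, "oleic": 39.6, "linoleic": 39.3}
profile = {"palmitic": 0.10, "oleic": 0.60, "linoleic": 0.30}
print(round(oil_hhv(profile, hhv_table), 2))  # 39.46 MJ/kg
```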
Fotilas, Panayiotis; Batzias, Athanasios F.
2009-08-01
A methodological framework developed in the form of an algorithmic procedure (including 20 activity stages and 10 decision nodes) has been applied for multicriteria ranking of models. The criteria used are: fitting to experimental data, agreement with theoretical aspects, model simplicity, experimental falsifiability, progressiveness, and relation to other ISs, as proved by a common path/rationale of deduction. An implementation is presented referring to the selection of the ideal pore structure of anodized aluminium among the alternatives: cylindrical (A1), truncated-cone-like (A2), trumpet-like (A3), vesica-like (A4), multiple-base (A5), and tilted-cylinder-like (A6). Alternative A2 (implying a corresponding specific surface estimation of the anodic film) was ranked first, and the solution proved to be robust.
Naranjo, Ramon C.
2013-01-01
Biochemical reactions that occur in the hyporheic zone are highly dependent on the time that solutes are in contact with sediments of the riverbed. In this investigation, we developed a 2-D longitudinal flow and solute-transport model to estimate the spatial distribution of mean residence time in the hyporheic zone. The flow model was calibrated using observations of temperature and pressure, and the mean residence times were simulated using the age-mass approach for steady-state flow conditions. The approach used in this investigation includes the mixing of different ages and flow paths of water through advection and dispersion. Uncertainty of flow and transport parameters was evaluated using standard Monte Carlo and the generalized likelihood uncertainty estimation method. Results of parameter estimation support the presence of a low-permeability zone in the riffle area that induced horizontal flow at a shallow depth within the riffle area. This establishes shallow and localized flow paths and limits deep vertical exchange. For the optimal model, mean residence times were found to be relatively long (9.0–40.0 days). The uncertainty of hydraulic conductivity resulted in a mean interquartile range (IQR) of 13 days across all piezometers and was reduced by 24% with the inclusion of temperature and pressure observations. To a lesser extent, uncertainty in streambed porosity and dispersivity resulted in mean IQRs of 2.2 and 4.7 days, respectively. Alternative conceptual models demonstrate the importance of accounting for the spatial distribution of hydraulic conductivity in simulating mean residence times in a riffle-pool sequence.
Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey
2015-04-01
Electric fields induced in the conducting Earth during magnetic storms drive currents in power transmission grids, telecommunication lines or buried pipelines. These geomagnetically induced currents (GIC) can cause severe service disruptions. The prediction of GIC is thus of great importance for the public and industry. A key step in the prediction of the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we developed a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a model of the magnetospheric source. The latter is described by low-degree spherical harmonics; its temporal evolution is derived from observatory magnetic data. Time series of the electric field can be computed for every location on Earth's surface. The actual electric field, however, is known to be perturbed by galvanic effects, arising from very local near-surface heterogeneities or topography, which cannot be included in the conductivity model. Galvanic effects are commonly accounted for with a real-valued time-independent distortion matrix, which linearly relates measured and computed electric fields. Using data of various magnetic storms that occurred between 2000 and 2003, we estimated distortion matrices for observatory sites onshore and on the ocean bottom. Strong correlations between modelled and measured fields validate our method. The distortion matrix estimates prove to be reliable, as they are accurately reproduced for different magnetic storms. We further show that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of electric field time series during magnetic storms. Since the required computational resources are negligible, our approach is suitable for a real-time prediction of GIC. For this purpose, a reliable forecast of the source field, e.g. based on data from satellites
Computing poverty measures with survey data
Philippe Van Kerm
2009-01-01
I discuss estimation of poverty measures from household survey data in Stata and show how to derive analytic standard errors that take into account survey design features. Where needed, standard errors are adjusted for the estimation of the poverty line as a fraction of the mean or median income. The linearization approach based on influence functions is generally applicable to many estimators.
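The key point of the abstract (the poverty line itself is estimated, so its sampling variability must enter the standard error) can be illustrated outside Stata. The sketch below uses a simple bootstrap in place of the influence-function linearization, ignores survey weights and design features, and runs on made-up incomes; the headcount ratio and the 60%-of-median line are illustrative choices, not the paper's.

```python
import random

def headcount_ratio(incomes, fraction=0.6):
    """Share of units with income below a line set at `fraction` of the median."""
    s = sorted(incomes)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    line = fraction * median
    return sum(1 for y in incomes if y < line) / n

def bootstrap_se(incomes, reps=500, seed=42):
    """Resampling SE that re-estimates the poverty line in every replicate,
    so the extra variability from the estimated line is captured."""
    rng = random.Random(seed)
    stats = [headcount_ratio([rng.choice(incomes) for _ in incomes])
             for _ in range(reps)]
    m = sum(stats) / reps
    return (sum((s - m) ** 2 for s in stats) / (reps - 1)) ** 0.5

incomes = [250, 400, 520, 610, 700, 820, 900, 1100, 1500, 2400]
print(headcount_ratio(incomes))  # median 760, line 456 -> 2 of 10 below
print(bootstrap_se(incomes))
```

Holding the line fixed across replicates would understate the SE, which is exactly the adjustment the linearization approach formalises.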
Estimating sea floor dynamics in the Southern North Sea to improve bathymetric survey planning
Dorst, Leendert Louis
2009-01-01
Safe nautical charts require a carefully designed bathymetric survey policy, especially in shallow sandy seas that potentially have dynamic sea floor patterns. Bathymetric resurveying at sea is a costly process with limited resources, though. A pattern on the sea floor known as tidal sand waves is c
The estimation of sea floor dynamics from bathymetric surveys of a sand wave area
Dorst, Leendert; Roos, Pieter C.; Hulscher, Suzanne J.M.H.; Lindenbergh, R.C.
2009-01-01
The analysis of series of offshore bathymetric surveys provides insight into the morphodynamics of the sea floor. This knowledge helps to improve resurvey policies for the maintenance of port approaches and nautical charting, and to validate morphodynamic models. We propose a method for such an anal
Albanese, Brett; Owers, Katharine A.; Weiler, Deborah A.; Pruitt, William
2011-01-01
There is an ongoing need to monitor the status of imperiled fishes in the southeastern United States using effective methods. Visual surveys minimize harm to target species, but few studies have specifically examined their effectiveness compared to other methods or accounted for imperfect species de
Estimated Use of Water in the United States in 1975. Geological Survey Circular 765.
Murray, C. Richard; Reeves, E. Bodette
The United States Geological Survey has compiled data on water use in this country every fifth year since 1950. This document is the most recent of this series and presents data on water withdrawn for use in the United States in 1975. In the introduction, recent and present water use studies are discussed along with a description of the…
Directory of Open Access Journals (Sweden)
Rune Palerud
2008-12-01
Full Text Available The project “Environmental Monitoring and Modelling of Aquaculture in the Philippines”, known as EMMA, was undertaken by the National Integrated Fisheries Technology Development Centre (NIFTDC) of the Bureau of Fisheries and Aquatic Resources (BFAR) and Akvaplan-niva AS of Tromsø, Norway. The project was funded by the Norwegian Agency for Development Cooperation (NORAD). This project tested survey equipment for monitoring the impact of aquaculture on the water column and sediment. Baseline surveys were undertaken, as the goal of the study was to develop suitable aquaculture monitoring techniques and adapt predictive models to assist in identifying risk areas for aquaculture and allow planned development of sustainable aquaculture. Three locations were chosen as case studies: Bolinao, Pangasinan (marine site), Dagupan (brackish water site), and Taal Lake (freshwater site). Production surveys were also undertaken to estimate production and nutrient outputs to the water bodies in order to link aquaculture production with the severity and extent of impacts. Different methodologies for the estimation of production were tested to find a cost-effective and accurate methodology.
Energy Technology Data Exchange (ETDEWEB)
Ramiro, A.; Nunez, M.; Reyes, J. J.; Gonzalez, J. F.; Sabio, E.; Gonzalez-Garcia, C. M.; Ganan, J.; Roman, S.
2004-07-01
In a previous work, we found correlation expressions that permit estimation of the mean monthly values of daily diffuse and direct solar irradiation on a horizontal surface as a function of some weather parameters. In this work, the incident radiation on a horizontal surface has been estimated for thirty zones of Extremadura by means of weather data from existing stations located in these zones and their orography. The weather data used were the monthly average values of the maximum temperatures and the sunshine fraction, obtained from measurements carried out at the weather stations during the period 1985-2002. The results are presented as interactive maps in ArcView, associated with a conventional database. (Author)
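The paper's correlation expressions are not reproduced in the abstract. As a hedged illustration of the general approach, here is the classic Angström-Prescott form relating global irradiation to the sunshine fraction; the coefficients a and b below are generic textbook values, not the Extremadura fits, and the paper's actual expressions also involve maximum temperature.

```python
def angstrom_prescott(H0, sunshine_fraction, a=0.25, b=0.50):
    """Estimate monthly mean daily global irradiation H on a horizontal
    surface from extraterrestrial irradiation H0 [MJ/m2/day] and the
    sunshine fraction n/N. a, b are site-specific regression coefficients
    (defaults are commonly quoted generic values)."""
    return H0 * (a + b * sunshine_fraction)

# Example: H0 = 35 MJ/m2/day, 70% of the possible sunshine hours
print(round(angstrom_prescott(35.0, 0.70), 2))  # 35 * (0.25 + 0.35) = 21.0
```

Site-specific fits of a and b against station measurements, as done in the paper for each zone, are what make such correlations usable on maps.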
Pinotti, M; Paone, N
1996-06-01
A laser Doppler anemometer (LDA) was used to obtain the mean velocity and the Reynolds stress fields in the inner channels of a well-known centrifugal vaneless pump (Bio-pump). Effects of the excessive flow resistance against which an occlusive pump operates in some surgical situations, such as cardiopulmonary bypass, are illustrated. The velocity vector field obtained from LDA measurements reveals that the constraint-forced vortex provides pumping action in a restricted area in the core of the pump. In such situations, recirculating zones dominate the flow and consequently increase the damage to blood cells and raise the risk of thrombus formation in the device. Reynolds normal and shear stress fields were obtained in the entry flow for the channel formed by two rotating cones to illustrate the effects of flow disturbances on the potential for blood cell damage.
Directory of Open Access Journals (Sweden)
C. Lanni
2012-11-01
Full Text Available Topographic index-based hydrological models have gained wide use to describe the hydrological control on the triggering of rainfall-induced shallow landslides at the catchment scale. A common assumption in these models is that a spatially continuous water table occurs simultaneously across the catchment. However, during a rainfall event isolated patches of subsurface saturation form above an impeding layer and their hydrological connectivity is a necessary condition for lateral flow initiation at a point on the hillslope.
Here, a new hydrological model is presented, which allows us to account for the concept of hydrological connectivity while keeping the simplicity of the topographic index approach. A dynamic topographic index is used to describe the transient lateral flow that is established at a hillslope element when the rainfall amount exceeds a threshold value allowing for (a) development of a perched water table above an impeding layer, and (b) hydrological connectivity between the hillslope element and its own upslope contributing area. A spatially variable soil depth is the main control of hydrological connectivity in the model. The hydrological model is coupled with the infinite slope stability model and with a scaling model for the rainfall frequency–duration relationship to determine the return period of the critical rainfall needed to cause instability on three catchments located in the Italian Alps, where a survey of soil depth spatial distribution is available. The model is compared with a quasi-dynamic model in which the dynamic nature of the hydrological connectivity is neglected. The results show a better performance of the new model in predicting observed shallow landslides, implying that soil depth spatial variability and connectivity bear a significant control on shallow landsliding.
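The infinite slope stability model mentioned above is usually expressed as a factor of safety. This sketch uses the standard textbook form with a perched water table, not necessarily the paper's exact parameterisation, and the input values are invented.

```python
import math

def factor_of_safety(c, phi_deg, slope_deg, soil_depth, wet_depth,
                     gamma_s=19.0, gamma_w=9.81):
    """Infinite-slope factor of safety (textbook form).
    c          : effective cohesion [kPa]
    phi_deg    : soil friction angle [deg]
    slope_deg  : slope angle [deg]
    soil_depth : vertical soil depth z [m]
    wet_depth  : perched water table thickness h [m]
    gamma_s/w  : soil / water unit weights [kN/m3]
    FS < 1 indicates instability."""
    b = math.radians(slope_deg)
    p = math.radians(phi_deg)
    resisting = c + ((gamma_s * soil_depth - gamma_w * wet_depth)
                     * math.cos(b) ** 2 * math.tan(p))
    driving = gamma_s * soil_depth * math.sin(b) * math.cos(b)
    return resisting / driving

# Dry, gentle slope -> stable; fully saturated, steep slope -> failing
print(round(factor_of_safety(2.0, 30, 25, 1.0, 0.0), 2))
print(round(factor_of_safety(2.0, 30, 40, 1.0, 1.0), 2))
```

Coupling this with a hydrological model amounts to letting the wet depth h vary in time and space, which is what drives the return-period analysis in the abstract.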
Directory of Open Access Journals (Sweden)
Yan Cui
2016-03-01
Full Text Available Introduction: HIV incidence is an important measure for monitoring the development of the epidemic, but it is difficult to ascertain. We combined serial HIV prevalence and mortality data to estimate HIV incidence among key affected populations (KAPs) in China. Methods: Serial cross-sectional surveys were conducted among KAPs from 2010 to 2014. Trends in HIV prevalence were assessed by the Cochran-Armitage test, adjusted by risk group. HIV incidence was estimated from a mathematical model that describes the relationship between changes in HIV incidence with HIV prevalence and mortality. Results: The crude HIV prevalence for the survey samples remained stable at 1.1 to 1.2% from 2010 to 2014. Among drug users (DUs), HIV prevalence declined from 4.48 to 3.29% (p<0.0001), and among men who have sex with men (MSM), HIV prevalence increased from 5.73 to 7.75% (p<0.0001). Changes in HIV prevalence among female sex workers (FSWs) and male patients of sexually transmitted disease clinics were more modest but remained statistically significant (all p<0.0001). The MSM population had the highest incidence estimates at 0.74% in 2011, 0.59% in 2012, 0.57% in 2013 and 0.53% in 2014. Estimates of the annual incidence for DUs and FSWs were very low and may not be reliable. Conclusions: Serial cross-sectional prevalence data from representative samples may be another approach to construct approximate estimates of national HIV incidence among key populations. We observed that the MSM population had the highest incidence for HIV among high-risk groups in China, and we suggest that interventions targeting MSM are urgently needed to curb the growing HIV epidemic.
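The abstract does not reproduce the study's mathematical model, so the sketch below shows only a generic prevalence-balance approximation for deriving incidence from serial prevalence and mortality; the equation and all numbers are illustrative assumptions, not the paper's.

```python
def incidence_from_prevalence(p0, p1, mortality):
    """Approximate annual incidence among the susceptible population from the
    change in prevalence, with new infections replacing deaths among the
    infected:
        I = (p1 - p0 + m * p0) / (1 - p0)
    This generic balance equation is an assumption for illustration; it is
    not necessarily the exact model used in the study."""
    return (p1 - p0 + mortality * p0) / (1 - p0)

# Hypothetical numbers: prevalence 5.7% -> 6.2%, 3% annual mortality among HIV+
print(round(100 * incidence_from_prevalence(0.057, 0.062, 0.03), 2))
```

The intuition matches the abstract: a rising prevalence despite mortality implies a substantial inflow of new infections, which is why the MSM group shows the highest estimated incidence.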
Asquith, William H.; Barbie, Dana L.
2014-01-01
In 2013, the U.S. Geological Survey (USGS) operated more than 500 continuous streamgages (streamflow-gaging stations) in Texas. In cooperation with the Texas Water Development Board, the USGS evaluated mean annual streamflow data for 38 selected streamgages that were active as of water year 2012. The 38 streamgages have annual mean streamflow data considered natural and unregulated. Collected annual mean streamflow data for a single streamgage ranged from 49 to 97 cumulative years. The nonparametric Kendall’s tau statistical test was used to detect monotonic trends in annual mean streamflow over time. The monotonic trend analysis detected 2 statistically significant upward trends (0.01 one-tailed significance level), 1 statistically significant downward trend (0.01 one-tailed significance level), and 35 instances of no statistically significant trend (0.02 two-tailed significance level). The Theil slope estimate of a regression slope of annual mean streamflow with time was computed for the three stations where trends in streamflow were detected: 2 increasing Theil slopes were measured (+0.40 and +2.72 cubic feet per second per year, respectively), and 1 decreasing Theil slope (–0.24 cubic feet per second per year) was measured.
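Both statistics named in the abstract are simple to compute. A pure-Python sketch of Kendall's tau and the Theil slope (without the significance test, and on invented streamflow data) is:

```python
from itertools import combinations

def kendall_tau(t, y):
    """Kendall's tau-a for a monotonic-trend test of y against time t
    (t is assumed strictly increasing, as for annual series)."""
    pairs = list(combinations(range(len(t)), 2))
    s = sum((y[j] > y[i]) - (y[j] < y[i]) for i, j in pairs)
    return s / len(pairs)

def theil_slope(t, y):
    """Theil-Sen estimate: the median of all pairwise slopes."""
    slopes = sorted((y[j] - y[i]) / (t[j] - t[i])
                    for i, j in combinations(range(len(t)), 2))
    n = len(slopes)
    return slopes[n // 2] if n % 2 else 0.5 * (slopes[n // 2 - 1] + slopes[n // 2])

years = [2000, 2001, 2002, 2003, 2004, 2005]
flow  = [10.0, 11.5, 11.0, 13.0, 12.5, 14.0]   # invented annual mean streamflow, cfs
print(kendall_tau(years, flow), theil_slope(years, flow))
```

Both estimators are rank- or median-based, which is why the USGS favours them for streamflow records: a single flood or drought year does not dominate the trend.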
Survey of State-Level Cost and Benefit Estimates of Renewable Portfolio Standards
Energy Technology Data Exchange (ETDEWEB)
Heeter, J.; Barbose, G.; Bird, L.; Weaver, S.; Flores-Espino, F.; Kuskova-Burns, K.; Wiser, R.
2014-05-01
Most renewable portfolio standards (RPS) have five or more years of implementation experience, enabling an assessment of their costs and benefits. Understanding RPS costs and benefits is essential for policymakers evaluating existing RPS policies, assessing the need for modifications, and considering new policies. This study provides an overview of methods used to estimate RPS compliance costs and benefits, based on available data and estimates issued by utilities and regulators. Over the 2010-2012 period, average incremental RPS compliance costs in the United States were equivalent to 0.8% of retail electricity rates, although substantial variation exists around this average, both from year-to-year and across states. The methods used by utilities and regulators to estimate incremental compliance costs vary considerably from state to state and a number of states are currently engaged in processes to refine and standardize their approaches to RPS cost calculation. The report finds that state assessments of RPS benefits have most commonly attempted to quantitatively assess avoided emissions and human health benefits, economic development impacts, and wholesale electricity price savings. Compared to the summary of RPS costs, the summary of RPS benefits is more limited, as relatively few states have undertaken detailed benefits estimates, and then only for a few types of potential policy impacts. In some cases, the same impacts may be captured in the assessment of incremental costs. For these reasons, and because methodologies and level of rigor vary widely, direct comparisons between the estimates of benefits and costs are challenging.
Fan, Z H
2003-01-01
Sunyaev-Zel'dovich Effect (SZE) cluster surveys are anticipated to yield tight constraints on cosmological parameters such as the equation of state of dark energy. In this paper, we study the impact of relativistic corrections of the thermal SZE on the cluster number counts expected from a cosmological model and thus, assuming that other cosmological parameters are known to high accuracies, on the determination of the $w$ parameter and $\sigma_8$ from a SZE cluster survey, where $w=p/\rho$ with $p$ the pressure and $\rho$ the density of dark energy, and $\sigma_8$ is the rms of the extrapolated linear density fluctuation smoothed over $8\,h^{-1}\,\mathrm{Mpc}$. For the purpose of illustrating the effects of relativistic corrections, our analyses mainly focus on ...
Wernly, John F.; Zajd, Jr., Henry J.; Coon, William F.
2016-10-05
During 2015, the U.S. Geological Survey, in cooperation with the City of Ithaca, New York, and the New York State Department of State, conducted a bathymetric survey of the lower Sixmile Creek reservoir in Tompkins County, New York. A former water-supply reservoir for the City of Ithaca, the reservoir is no longer a functional component of Ithaca’s water-supply system, having been replaced by a larger reservoir less than a mile upstream in 1911. Excessive sedimentation has substantially reduced the reservoir’s water-storage capacity and made the discharge gate at the base of the 30-foot dam, which creates the reservoir, inoperable. U.S. Geological Survey personnel collected bathymetric data by using an acoustic Doppler current profiler. Across more than half of the approximately 14-acre reservoir, depths were manually measured because of interference from aquatic vegetation with the acoustic Doppler current profiler. City of Ithaca personnel created a bottom-elevation surface from these depth data. A second surface was created from depths that were manually measured by City of Ithaca personnel during 1938. Surface areas and storage capacities were computed at 1-foot increments of elevation for both bathymetric surveys. The results indicate that the current storage capacity of the reservoir at its normal water-surface elevation is about 84 acre-feet and that sediment accumulated between 1938 and 2015 has decreased the reservoir’s capacity by about 68 acre-feet. This sediment load is attributed to annual inputs from the watershed above the reservoir, as well as from an episodic landslide that filled a large part of the reservoir along its northern edge in 1949.
Directory of Open Access Journals (Sweden)
Gilly A. Hendrie
2017-01-01
Full Text Available There are few dietary assessment tools that are scientifically developed and freely available online. The Commonwealth Scientific and Industrial Research Organisation (CSIRO) Healthy Diet Score survey asks questions about the quantity, quality, and variety of foods consumed. On completion, individuals receive a personalised Diet Score—reflecting their overall compliance with the Australian Dietary Guidelines. Over 145,000 Australians have completed the survey since it was launched in May 2015. The average Diet Score was 58.8 out of a possible 100 (SD = 12.9). Women scored higher than men; older adults higher than younger adults; and normal weight adults higher than obese adults. It was most common to receive feedback about discretionary foods (73.8% of the sample), followed by dairy foods (55.5%) and healthy fats (47.0%). Results suggest that Australians’ diets are not consistent with the recommendations in the guidelines. The combination of using technology and providing the tool free of charge has attracted a lot of traffic to the website, providing valuable insights into what Australians report to be eating. The use of technology has also enhanced the user experience, with individuals receiving immediate and personalised feedback. This survey tool will be useful to monitor population diet quality and understand the degree to which Australians’ diets comply with dietary guidelines.
Estimating $\\beta$ from redshift-space distortions in the 2dF galaxy survey
Hatton, S J
1999-01-01
Given the failure of existing models for redshift-space distortions to provide a highly accurate measure of the beta-parameter, and the ability of forthcoming surveys to obtain data with very low random errors, it becomes necessary to develop better models for these distortions. Here we review the failures of the commonly used velocity dispersion models and present an empirical method for extracting beta from the quadrupole statistic that has little systematic offset over a wide range of beta and cosmologies. This empirical model is then applied to an ensemble of mock 2dF southern strip catalogues to illustrate the technique and see how accurately we can recover the true value of beta. We compare this treatment with the error we expect to find due to the finite volume of the survey. We find that non-linear effects reduce the range of scales over which beta can be fitted, and introduce covariances between nearby modes in excess of those introduced by the convolution with the survey window function. The result ...
Hendrie, Gilly A.; Baird, Danielle; Golley, Rebecca K.; Noakes, Manny
2017-01-01
There are few dietary assessment tools that are scientifically developed and freely available online. The Commonwealth Scientific and Industrial Research Organisation (CSIRO) Healthy Diet Score survey asks questions about the quantity, quality, and variety of foods consumed. On completion, individuals receive a personalised Diet Score—reflecting their overall compliance with the Australian Dietary Guidelines. Over 145,000 Australians have completed the survey since it was launched in May 2015. The average Diet Score was 58.8 out of a possible 100 (SD = 12.9). Women scored higher than men; older adults higher than younger adults; and normal weight adults higher than obese adults. It was most common to receive feedback about discretionary foods (73.8% of the sample), followed by dairy foods (55.5%) and healthy fats (47.0%). Results suggest that Australians’ diets are not consistent with the recommendations in the guidelines. The combination of using technology and providing the tool free of charge has attracted a lot of traffic to the website, providing valuable insights into what Australians report to be eating. The use of technology has also enhanced the user experience, with individuals receiving immediate and personalised feedback. This survey tool will be useful to monitor population diet quality and understand the degree to which Australians’ diets comply with dietary guidelines. PMID:28075355
Older adults' beliefs about physician-estimated life expectancy: a cross-sectional survey
Directory of Open Access Journals (Sweden)
Bynum Debra L
2006-02-01
Full Text Available Abstract Background Estimates of life expectancy assist physicians and patients in medical decision-making. The time-delayed benefits of many medical treatments make an older adult's life expectancy estimate particularly important for physicians. The purpose of this study is to assess older adults' beliefs about physician-estimated life expectancy. Methods We performed a mixed qualitative-quantitative cross-sectional study in which 116 healthy adults aged 70+ were recruited from two local retirement communities. We interviewed them regarding their beliefs about physician-estimated life expectancy in the context of a larger study on cancer screening beliefs. Semi-structured interviews of 80 minutes average duration were performed in private locations convenient to participants. Demographic characteristics as well as cancer screening beliefs and beliefs about life expectancy were measured. Two independent researchers reviewed the open-ended responses and recorded the most common themes. The research team resolved disagreements by consensus. Results This article reports the life-expectancy results portion of the larger study. The study group (n = 116) comprised healthy, well-educated older adults, with almost a third over 85 years old, and none meeting criteria for dementia. Sixty-four percent (n = 73) felt that their physicians could not correctly estimate their life expectancy. Sixty-six percent (n = 75) wanted their physicians to talk with them about their life expectancy. The themes that emerged from our study indicate that discussions of life expectancy could help older adults plan for the future, maintain open communication with their physicians, and provide them knowledge about their medical conditions. Conclusion The majority of the healthy older adults in this study were open to discussions about life expectancy in the context of discussing cancer screening tests, despite awareness that their physicians' estimates could be inaccurate.
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
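A minimal sketch of the threshold-and-count idea follows: assuming the power samples are exponentially distributed (Gaussian noise in each quadrature), a single below-threshold counter yields a one-pass noise power estimate. The distributional assumption and the numbers are illustrative, not HRMS specifics.

```python
import math
import random

def noise_power_threshold_count(samples, threshold):
    """Estimate mean noise power from the fraction of power samples falling
    below a fixed threshold. For exponentially distributed power with mean
    sigma2:  P(x < T) = 1 - exp(-T / sigma2)  =>  sigma2 = -T / ln(1 - p_hat).
    One counter per threshold suffices, so a single pass over the data is
    enough; several thresholds in parallel widen the usable dynamic range."""
    p_hat = sum(1 for x in samples if x < threshold) / len(samples)
    if p_hat in (0.0, 1.0):
        raise ValueError("threshold outside the estimator's dynamic range")
    return -threshold / math.log(1.0 - p_hat)

rng = random.Random(1)
true_power = 2.0
data = [rng.expovariate(1.0 / true_power) for _ in range(20000)]
print(round(noise_power_threshold_count(data, threshold=2.0), 2))  # near 2.0
```

The `ValueError` branch shows the dynamic-range limitation the abstract addresses: a threshold that all (or no) samples fall below carries no information, which motivates running several counters at staggered thresholds.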
Anderson, D Mark; Elsea, David
2015-12-01
In this note, we use data from the national and state Youth Risk Behavior Surveys for the period 1999 through 2011 to estimate the relationship between the Meth Project, an anti-methamphetamine advertising campaign, and meth use among high school students. During this period, a total of eight states adopted anti-meth advertising campaigns. After accounting for pre-existing downward trends in meth use, we find little evidence that the campaign curbed meth use in the full sample. We do find, however, some evidence that the Meth Project may have decreased meth use among White high school students.
Schaffrin, Burkhard
2008-02-01
In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
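For comparison, the Tykhonov-Phillips (R-HAPS) solution referred to above has the familiar ridge form. The sketch below uses a generic Gauss-Markov notation (observations y, design matrix A, weight matrix P) that is assumed here for illustration, not copied from the paper:

```latex
% Generic Gauss-Markov model y = A\xi + e with weight matrix P (assumed notation).
% Tykhonov-Phillips regularization (R-HAPS): \alpha > 0 weights the regularizer R,
% with R = S^{-1} whenever the "substitute matrix" S is invertible.
\hat{\xi}_{\alpha} = \left( A^{\mathsf{T}} P A + \alpha R \right)^{-1} A^{\mathsf{T}} P y
% As \alpha \to 0 this tends to the unbiased least-squares (BLUUE) solution;
% \alpha > 0 introduces bias but can lower the mean squared error risk,
% which is the trade-off the alpha-weighted S-homBLE exploits.
```

The point of the comparison in the abstract is that the stochastic derivation (S-homBLE) and this algebraic regularization coincide in form when R is chosen as the inverse of S.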
Directory of Open Access Journals (Sweden)
Patrick D. SHAW
2010-08-01
Full Text Available Runoff or water yield is an important input to the Steady-State Water Chemistry (SSWC) model for estimating critical loads of acidity. Herein, we present site-specific water yield estimates for a large number of lakes (779) across three provinces of western Canada (Manitoba, Saskatchewan, and British Columbia) using an isotope mass balance (IMB) approach. We explore the impact of applying site-specific hydrology as compared to use of regional runoff estimates derived from gridded datasets in assessing critical loads of acidity to these lakes. In general, the average water yield derived from IMB is similar to the long-term average runoff; however, IMB results suggest a much larger range in hydrological settings of the lakes, attributed to spatial heterogeneity in watershed characteristics and landcover. The comparison of critical loads estimates from the two methods suggests that use of average regional runoff data in the SSWC model may overestimate critical loads for the majority of lakes due to systematic skewness in the actual runoff distributions. Implications for use of site-specific hydrology in regional critical loads assessments across western Canada are discussed.
Estimating adolescent risk for hearing loss based on data from a large school-based survey
I. Vogel (Ineke); H. Verschuure (Hans); C.P.B. van der Ploeg (Catharina); J. Brug (Hans); H. Raat (Hein)
2010-01-01
Objectives. We estimated whether and to what extent a group of adolescents were at risk of developing permanent hearing loss as a result of voluntary exposure to high-volume music, and we assessed whether such exposure was associated with hearing-related symptoms. Methods. In 2007, 1512
Estimating adolescent risk for hearing loss based on data from a large school-based survey
Vogel, L.; Verschuure, H.; Ploeg, C.P.B. van der; Brug, J.; Raat, H.
2010-01-01
Objectives. We estimated whether and to what extent a group of adolescents were at risk of developing permanent hearing loss as a result of voluntary exposure to high-volume music, and we assessed whether such exposure was associated with hearing-related symptoms. Methods. In 2007, 1512 adolescents
Song, X. P.; Potapov, P.; Adusei, B.; King, L.; Khan, A.; Krylov, A.; Di Bella, C. M.; Pickens, A. H.; Stehman, S. V.; Hansen, M.
2016-12-01
Reliable and timely information on agricultural production is essential for ensuring world food security. Freely available medium-resolution satellite data (e.g. Landsat, Sentinel) offer the possibility of improved global agriculture monitoring. Here we develop and test a method for estimating in-season crop acreage using a probability sample of field visits and producing wall-to-wall crop type maps at national scales. The method is first illustrated for soybean cultivated area in the US for 2015. A stratified, two-stage cluster sampling design was used to collect field data to estimate national soybean area. The field-based estimate employed historical soybean extent maps from the U.S. Department of Agriculture (USDA) Cropland Data Layer to delineate and stratify U.S. soybean growing regions. The estimated 2015 U.S. soybean cultivated area based on the field sample was 341,000 km2 with a standard error of 23,000 km2. This result is 1.0% lower than USDA's 2015 June survey estimate and 1.9% higher than USDA's 2016 January estimate. Our area estimate was derived in early September, about 2 months ahead of harvest. To map soybean cover, the Landsat image archive for the year 2015 growing season was processed using an active learning approach. Overall accuracy of the soybean map was 84%. The field-based sample estimated area was then used to calibrate the map such that the soybean acreage of the map derived through pixel counting matched the sample-based area estimate. The strength of the sample-based area estimation lies in the stratified design that takes advantage of the spatially explicit cropland layers to construct the strata. The success of the mapping was built upon an automated system which transforms Landsat images into standardized time-series metrics. The developed method produces reliable and timely information on soybean area in a cost-effective way and could be implemented in an operational mode. The approach has also been applied for other crops in
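The stratified area estimator and the map calibration step described above can be sketched as follows. The design is collapsed here to a one-stage stratified sample (the study used two-stage cluster sampling) and every number is invented for illustration.

```python
def stratified_total(strata):
    """Design-based total and standard error for a stratified random sample.
    strata: list of (N_h, sample_values), where N_h is the number of units
    in stratum h and sample_values are the measured per-unit crop areas.
    Simplified one-stage version of the two-stage design in the text."""
    total, var = 0.0, 0.0
    for N_h, y in strata:
        n_h = len(y)
        mean = sum(y) / n_h
        s2 = sum((v - mean) ** 2 for v in y) / (n_h - 1)
        total += N_h * mean
        var += N_h ** 2 * (1 - n_h / N_h) * s2 / n_h   # with fpc
    return total, var ** 0.5

# Two hypothetical strata: (stratum size, sampled per-unit soybean area in km2)
strata = [(1000, [0.9, 1.1, 1.0, 0.8, 1.2]),
          (4000, [0.1, 0.0, 0.2, 0.1])]
est, se = stratified_total(strata)
print(round(est), round(se))

# Calibrating a wall-to-wall map so its pixel-count area matches the
# sample-based estimate, as described in the abstract:
map_pixel_area = 1500.0                 # hypothetical pixel-counted map area
print(round(est / map_pixel_area, 3))   # ratio applied to the map area
```

Stratifying on prior cropland layers concentrates the sample where the crop actually occurs, which is what drives the small standard error relative to the total in the study.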
Lee, S. K.; Fatoyinbo, T. E.; Lagomasino, D.; Osmanoglu, B.; Feliciano, E. A.
2015-12-01
The Digital Terrain Model (DTM) in forest areas is invaluable information for various environmental, hydrological and ecological studies, for example, watershed delineation, vegetation canopy height, water dynamic modeling, and forest biomass and carbon estimation. There are few solutions for extracting bare-earth Digital Elevation Model (DEM) information. Airborne lidar systems are widely and successfully used for estimating bare-earth DEMs with centimeter-order accuracy and high spatial resolution. However, the high cost of operation and small image coverage prevent the use of airborne lidar sensors at large or global scale. Although ICESat/GLAS (Ice, Cloud, and Land Elevation Satellite/Geoscience Laser Altimeter System) lidar data sets have been available for global DTM estimation at relatively lower cost, the large footprint size of 70 m and the 172 m spacing between footprints are insufficient for various applications. In this study we propose to extract a higher-resolution bare-earth DEM over vegetated areas from the combination of interferometric complex coherence from single-pass TanDEM-X (TDX) data at HH polarization and a Digital Surface Model (DSM) derived from high-resolution WorldView (WV) images by means of the random volume over ground (RVoG) model. The RVoG model is widely and successfully used for polarimetric SAR interferometry (Pol-InSAR) forest canopy height inversion. The bare-earth DEM is obtained from the complex volume decorrelation in the RVoG model together with the DSM estimated by stereo-photogrammetric techniques. Forest canopy height can then be estimated by subtracting the bare-earth model from the DSM. Finally, a DTM from an airborne lidar system was used to validate the bare-earth DEM and forest canopy height estimates.
US Fish and Wildlife Service, Department of the Interior — Protocol for distance sampling surveys of coyotes at the Rocky Mountain Arsenal National Wildlife Refuge (RMA). Line transects are used to estimate the density of...
Diagnosis, prevalence estimation and burden measurement in population surveys of headache
DEFF Research Database (Denmark)
Steiner, Timothy J; Gururaj, Gopalakrishna; Andrée, Colette
2014-01-01
initiatives to improve and standardize methods in use for cross-sectional studies. One requirement is for a survey instrument with proven cross-cultural validity. This report describes the development of such an instrument. Two of the authors developed the initial version, which was used with adaptations...... of headache-attributed burden: symptom burden; health-care utilization; disability and productive time losses; impact on education, career and earnings; perception of control; interictal burden; overall individual burden; effects on relationships and family dynamics; effects on others, including household...
Use of microwave digestion for estimation of heavy metal content of soils in a geochemical survey.
McGrath, D
1998-07-01
A procedure for the rapid and safe analysis of soils with widely differing organic matter contents has been investigated and validated. Surface soils, totalling 295 and sampled on a grid basis, representing 22% of the land-base of the Republic of Ireland, have been analysed for cadmium, chromium, copper, nickel, lead and zinc. Soil concentrations of cadmium, chromium, lead and nickel exhibit patterns of regionalised elevation. Implications of this elevation are considered in relation to sewage sludge application to land, future requirement for baseline surveys and concerns over concentrations in food products.
Total infrared luminosity estimation from local galaxies in AKARI all sky survey
Solarz, A; Pollo, A
2016-01-01
We aim to use a new and improved version of the AKARI all-sky survey catalogue of far-infrared sources to recalibrate the formula for deriving the total infrared luminosity. We cross-match the faint source catalogue (FSC) of IRAS with the new AKARI-FIS and obtain a sample of 2430 objects. We then calculate the total infrared (TIR) luminosity $L_{\\textrm{TIR}}$ from the Sanders et al. (1996) formula and compare it with the total infrared luminosity from the AKARI FIS bands to obtain new coefficients for the general relation converting FIR luminosity from the AKARI bands to the TIR luminosity.
Morehouse, Kim M; Nyman, Patricia J; McNeal, Timothy P; Dinovi, Michael J; Perfetti, Gracia A
2008-03-01
Furan is a suspected human carcinogen that is formed in some processed foods at low ng/g levels. Recent improvements in analytical methodology and scientific instrumentation have made it possible to accurately measure the amount of furan in a wide variety of foods. Results from the analysis of more than 300 processed foods are presented. Furan was found at levels ranging from non-detectable (LOD 0.2-0.9 ng/g) to over 100 ng/g. Exposure estimates for several adult food types were calculated, with brewed coffee being the major source of furan in the adult diet (0.15 µg/kg body weight/day). Estimates of mean exposure to furan for different subpopulations were calculated. For consumers 2 years and older, the intake is estimated to be about 0.2 µg/kg body weight/day.
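The exposure arithmetic behind the coffee figure is straightforward; the concentration and daily intake below are hypothetical values, chosen only so that a 70 kg adult reproduces the 0.15 µg/kg body weight/day estimate quoted above:

```python
def daily_exposure(conc_ng_per_g, intake_g_per_day, body_weight_kg):
    """Dietary exposure in µg per kg body weight per day.
    ng/g * g/day = ng/day; divide by 1000 for µg, then by body weight."""
    return conc_ng_per_g * intake_g_per_day / 1000.0 / body_weight_kg

# Hypothetical: 60 ng/g furan in brewed coffee, 175 g consumed per day, 70 kg adult
coffee = daily_exposure(60.0, 175.0, 70.0)  # 0.15 µg/kg bw/day
```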
Plans-Rubió, Pedro
2012-02-01
The necessary herd immunity blocking the transmission of an infectious agent in the population is established when the prevalence of protected individuals is higher than a critical value, called the herd immunity threshold. The establishment of herd immunity in the population can be determined using the vaccination coverage and seroepidemiological surveys. The vaccination coverage associated with herd immunity (V(c)) can be determined from the herd immunity threshold and vaccine effectiveness. This method requires a vaccine-specific effectiveness evaluation, and it can be used only for the herd immunity assessment of vaccinated communities in which the infectious agent is not circulating. The prevalence of positive serological results associated with herd immunity can be determined from the herd immunity threshold, in terms of prevalence of antibodies (p(c)) and serological test performance. Herd immunity is established when the prevalence of antibodies is higher than p(c). This method can be used to assess the establishment of herd immunity in different population groups, both when the infectious agent is circulating and when it is not possible to assess vaccine effectiveness. The herd immunity assessment in Catalonia, Spain, showed that the additional vaccination coverage required to establish herd immunity was 3-6% for measles, mumps and varicella and 11% for poliovirus type III in school children, 17-59% for diphtheria in youth and adults, and 25-46% for pertussis in school children, youth and adults.
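The two assessment rules can be written down directly. The coverage rule V(c) = herd immunity threshold / vaccine effectiveness follows the abstract; the seroprevalence rule below additionally assumes the standard Rogan-Gladen relation between true and apparent prevalence, since the abstract does not spell out the test-performance adjustment. All numeric inputs are hypothetical:

```python
def coverage_for_herd_immunity(h_threshold, vaccine_effectiveness):
    """Vaccination coverage V(c) needed to reach the herd immunity threshold."""
    return h_threshold / vaccine_effectiveness

def seropositive_threshold(h_threshold, sensitivity, specificity):
    """Apparent seroprevalence p(c) matching a true protected prevalence equal
    to the threshold (Rogan-Gladen forward relation; an assumption here)."""
    return h_threshold * sensitivity + (1 - h_threshold) * (1 - specificity)

# Hypothetical measles-like inputs: threshold 92%, effectiveness 97%
vc = coverage_for_herd_immunity(0.92, 0.97)          # ~0.948 coverage needed
pc = seropositive_threshold(0.92, 0.98, 0.99)        # ~0.902 seropositive
```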
The Giant Gemini GMOS survey of z>4.4 quasars - I. Measuring the mean free path across cosmic time
Worseck, Gábor; O'Meara, John M; Becker, George D; Ellison, Sara; Lopez, Sebastian; Meiksin, Avery; Ménard, Brice; Murphy, Michael T; Fumagalli, Michele
2014-01-01
We have obtained spectra of 163 quasars at $z_\\mathrm{em}>4.4$ with the Gemini Multi Object Spectrometers on the Gemini North and South telescopes, the largest publicly available sample of high-quality, low-resolution spectra at these redshifts. From this homogeneous data set, we generated stacked quasar spectra in three redshift intervals at $z\\sim 5$. We have modelled the flux below the rest-frame Lyman limit ($\\lambda_\\mathrm{r}<912$\\AA) to assess the mean free path $\\lambda_\\mathrm{mfp}^{912}$ of the intergalactic medium to HI-ionizing radiation. At mean redshifts $z_\\mathrm{q}=4.56$, 4.86 and 5.16, we measure $\\lambda_\\mathrm{mfp}^{912}=(22.2\\pm 2.3, 15.1\\pm 1.8, 10.3\\pm 1.6)h_{70}^{-1}$ proper Mpc with uncertainties dominated by sample variance. Combining our results with $\\lambda_\\mathrm{mfp}^{912}$ measurements from lower redshifts, the data are well modelled by a simple power-law $\\lambda_\\mathrm{mfp}^{912}=A[(1+z)/5]^\\eta$ with $A=(37\\pm 2)h_{70}^{-1}$ Mpc and $\\eta = -5.4\\pm 0.4$ between $z=2.3$...
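The quoted power-law fit can be evaluated directly. This snippet only restates the fitted relation $\lambda_\mathrm{mfp}^{912}=A[(1+z)/5]^\eta$ with the published best-fit coefficients; no new physics is assumed:

```python
def mfp_912(z, A=37.0, eta=-5.4):
    """Mean free path to HI-ionizing photons, in h70^-1 proper Mpc,
    from the power-law fit lambda_mfp = A * ((1+z)/5)**eta."""
    return A * ((1.0 + z) / 5.0) ** eta

# At z = 4, (1+z)/5 = 1 and the fit returns A = 37 Mpc exactly; at z = 4.56
# it gives ~21 Mpc, consistent with the measured 22.2 +/- 2.3 Mpc.
low_z = mfp_912(4.0)
mid_z = mfp_912(4.56)
```

The steep index eta = -5.4 is the point of the measurement: the opacity of the intergalactic medium to ionizing photons rises much faster with redshift than the (1+z)^3 expected from pure cosmological expansion.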
DEFF Research Database (Denmark)
SURVEY is a widely used method within the social sciences, the humanities, psychology and health research, among other fields. Outside the research world, many organisations, such as consulting firms, public institutions and marketing departments in private companies, also work...... with surveys. This book goes through all phases of survey work and gives a practical introduction to: • designing the study and selecting samples, • formulating questionnaires and collecting and coding data, • methods for analysing the results...
Rau, Markus Michael; Hoyle, Ben; Paech, Kerstin; Seitz, Stella
2017-04-01
Photometric redshift uncertainties are a major source of systematic error for ongoing and future photometric surveys. We study different sources of redshift error caused by choosing a suboptimal redshift histogram bin width and propose methods to resolve them. The selection of a too large bin width is shown to oversmooth small-scale structure of the radial distribution of galaxies. This systematic error can significantly shift cosmological parameter constraints by up to 6σ for the dark energy equation-of-state parameter w. Careful selection of bin width can reduce this systematic by a factor of up to 6 as compared with commonly used current binning approaches. We further discuss a generalized resampling method that can correct systematic and statistical errors in cosmological parameter constraints caused by uncertainties in the redshift distribution. This can be achieved without any prior assumptions about the shape of the distribution or the form of the redshift error. Our methodology allows photometric surveys to obtain unbiased cosmological parameter constraints using a minimum number of spectroscopic calibration data. For a DES-like galaxy clustering forecast, we obtain unbiased results with respect to errors caused by suboptimal histogram bin width selection, using only 5k representative spectroscopic calibration objects per tomographic redshift bin.
DEFF Research Database (Denmark)
Nord-Larsen, Thomas; Schumacher, Johannes
2012-01-01
for deciduous forest and negatively biased for coniferous forest. Species type specific (coniferous, deciduous, or mixed forest) models reduced root mean squared error by 3–12% and removed the bias. In application, model predictions will be improved by stratification into deciduous and coniferous forest using e...
[Chickenpox case estimation in acyclovir pharmacy survey and early bioterrorism detection].
Sugawara, Tamie; Ohkusa, Yasushi; Kawanohara, Hirokazu; Taniguchi, Kiyosu; Okabe, Nobuhiko
2011-11-01
Potential health hazards and bioterrorism threats require early detection. Smallpox cases caused by terrorists could, for example, present with fever and vesicular exanthema, be misdiagnosed as chickenpox, and be treated with acyclovir. We have constructed a real-time pharmacy surveillance system using information technology (IT) to monitor acyclovir prescriptions. We collected the number of acyclovir prescriptions from 5138 pharmacies using the Application Service Provider (ASP) system to estimate the number of cases. We then compared the number of children under 15 years old given acyclovir from pharmacy surveillance with sentinel surveillance for chickenpox under the Infectious Disease Control Law. The estimated number of children under 15 years old prescribed acyclovir in pharmacy surveillance resembled sentinel surveillance results and showed a similar seasonal chickenpox pattern; the correlation coefficient was 0.8575. The estimated numbers of adults (older than 15 but under 65 years old) and elderly (older than 65) prescribed acyclovir showed no clear seasonal pattern. Pharmacy surveillance for acyclovir thus establishes a baseline and can be used to detect unusual chickenpox outbreaks. A bioterrorism attack using smallpox virus could potentially be detected when acyclovir prescriptions for adults suddenly increase without outbreaks in children or the elderly. This application of acyclovir prescription monitoring is, to our knowledge, the first of its kind anywhere.
Panigada, Simone; Lauriano, Giancarlo; Donovan, Greg; Pierantonio, Nino; Cañadas, Ana; Vázquez, José Antonio; Burt, Louise
2017-07-01
Systematic, effective monitoring of animal population parameters underpins successful conservation strategy and wildlife management, but it is often neglected in many regions, including much of the Mediterranean Sea. Nonetheless, a series of systematic multispecies aerial surveys was carried out in the seas around Italy to gather important baseline information on cetacean occurrence, distribution and abundance. The monitored areas included the Pelagos Sanctuary, the Tyrrhenian Sea, portions of the Seas of Corsica and Sardinia, the Ionian Sea and the Gulf of Taranto. Overall, approximately 48,000 km were flown in spring, summer or winter between 2009 and 2014, covering an area of 444,621 km2. The most commonly observed species were the striped dolphin and the fin whale, with 975 and 83 recorded sightings, respectively. Other sighted cetacean species were the common bottlenose dolphin, the Risso's dolphin, the sperm whale, the pilot whale and the Cuvier's beaked whale. Uncorrected model- and design-based estimates of density and abundance for striped dolphins and fin whales were produced, resulting in a best estimate (model-based) of around 95,000 striped dolphins (CV=11.6%; 95% CI=92,900-120,300) occurring in the combined area of the Pelagos Sanctuary, the Central Tyrrhenian Sea and the Western Seas of Corsica and Sardinia in summer 2010. Estimates were also obtained for each individual study region and year. An initial attempt to estimate perception bias for striped dolphins is also provided. The preferred uncorrected best estimate (design-based) for fin whales in the same areas in summer 2010 was around 665 (CV=33.1%; 95% CI=350-1260). Estimates are also provided for the individual study regions and years. The results represent baseline data to develop efficient, long-term, systematic monitoring programmes, essential to evaluate trends, as required by a number of national and international frameworks, and stress the need to ensure that surveys are undertaken regularly and
Rau, Markus Michael; Paech, Kerstin; Seitz, Stella
2016-01-01
Photometric redshift uncertainties are a major source of systematic error for ongoing and future photometric surveys. We study different sources of redshift error caused by common suboptimal binning techniques and propose methods to resolve them. The selection of a too large bin width is shown to oversmooth small scale structure of the radial distribution of galaxies. This systematic error can significantly shift cosmological parameter constraints by up to $6 \\, \\sigma$ for the dark energy equation of state parameter $w$. Careful selection of bin width can reduce this systematic by a factor of up to 6 as compared with commonly used current binning approaches. We further discuss a generalised resampling method that can correct systematic and statistical errors in cosmological parameter constraints caused by uncertainties in the redshift distribution. This can be achieved without any prior assumptions about the shape of the distribution or the form of the redshift error. Our methodology allows photometric surve...
Energy Technology Data Exchange (ETDEWEB)
Bayliss, Matthew.B. [MIT, MKI; Zengo, Kyle [Colby Coll.; Ruel, Jonathan [Harvard U., Phys. Dept.; Benson, Bradford A. [Fermilab; Bleem, Lindsey E. [Argonne; Bocquet, Sebastian [Argonne; Bulbul, Esra [MIT, MKI; Brodwin, Mark [Missouri U., Kansas City; Capasso, Raffaella [Munich, Tech. U., Universe; Chiu, I-non [Taiwan, Natl. Tsing Hua U.; McDonald, Michael [MIT, MKI; Rapetti, David [NASA, Ames; Saro, Alex [Munich, Tech. U., Universe; Stalder, Brian [Inst. Astron., Honolulu; Stark, Antony A. [Harvard-Smithsonian Ctr. Astrophys.; Strazzullo, Veronica [Munich, Tech. U., Universe; Stubbs, Christopher W. [Harvard-Smithsonian Ctr. Astrophys.; Zenteno, Alfredo [Cerro-Tololo InterAmerican Obs.
2016-12-08
The velocity distribution of galaxies in clusters is not universal; rather, galaxies are segregated according to their spectral type and relative luminosity. We examine the velocity distributions of different populations of galaxies within 89 Sunyaev Zel'dovich (SZ) selected galaxy clusters spanning $ 0.28 < z < 1.08$. Our sample is primarily drawn from the SPT-GMOS spectroscopic survey, supplemented by additional published spectroscopy, resulting in a final spectroscopic sample of 4148 galaxy spectra---2868 cluster members. The velocity dispersion of star-forming cluster galaxies is $17\\pm4$% greater than that of passive cluster galaxies, and the velocity dispersion of bright ($m < m^{*}-0.5$) cluster galaxies is $11\\pm4$% lower than the velocity dispersion of our total member population. We find good agreement with simulations regarding the shape of the relationship between the measured velocity dispersion and the fraction of passive vs. star-forming galaxies used to measure it, but we find a small offset between this relationship as measured in data and simulations, which suggests that our dispersions are systematically low by as much as 3\\% relative to simulations. We argue that this offset could be interpreted as a measurement of the effective velocity bias that describes the ratio of our observed velocity dispersions and the intrinsic velocity dispersion of dark matter particles in a published simulation result. Measuring velocity bias in this way suggests that large spectroscopic surveys can improve dispersion-based mass-observable scaling relations for cosmology even in the face of velocity biases, by quantifying and ultimately calibrating them out.
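A minimal sketch of comparing subpopulation velocity dispersions. The peculiar velocities below are invented, and real cluster analyses use robust dispersion estimators (e.g. the biweight) and stack members across many clusters rather than the plain sample standard deviation shown here:

```python
import statistics

def dispersion(velocities_km_s):
    # sample standard deviation as a simple line-of-sight dispersion estimate
    return statistics.stdev(velocities_km_s)

# Hypothetical peculiar velocities (km/s) for two subpopulations in one cluster
passive = [-900, -450, -100, 0, 150, 400, 850]
star_forming = [-1200, -600, -150, 50, 300, 700, 1100]

# Ratio > 1 mirrors the segregation reported above: star-forming members
# are dynamically "hotter" than the passive population.
ratio = dispersion(star_forming) / dispersion(passive)
```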
Green, M A; Wright, J C
1985-05-01
It has been clearly demonstrated that the rectal cooling curve does not obey Newton's Law, which is exponential. The first success in modelling rectal cooling mathematically was achieved by Marshall and Hoare [1]. An amendment was made to the simple exponential curve which led to a good mathematical model, exhibiting the three main sections of rectal cooling, i.e. lag, linear and quasi-exponential. The resultant method of postmortem interval estimation required a knowledge of the body mass and height. The present study has led to a totally different amendment to Newton's Law, which provides a means of postmortem interval estimation from body temperature data only. The derivation of the method, with a background on Newton's Law follows.
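The abstract does not give the amended formula itself, so the sketch below only contrasts Newton's exponential law with a generic Marshall-Hoare-style double exponential that reproduces the characteristic lag phase (zero initial cooling rate); the parameter values are hypothetical and not the paper's:

```python
import math

def newton(t, T0, Ta, k):
    """Newton's law: simple exponential approach to ambient temperature Ta."""
    return Ta + (T0 - Ta) * math.exp(-k * t)

def double_exponential(t, T0, Ta, k, A=1.25):
    """Marshall-Hoare-type model; choosing p = A*k/(A-1) makes the initial
    slope zero, producing the lag plateau absent from Newton's law."""
    p = A * k / (A - 1)
    return Ta + (T0 - Ta) * (A * math.exp(-k * t) - (A - 1) * math.exp(-p * t))

# Hypothetical: T0 = 37 C, ambient 20 C, k = 0.1 per hour
lagged = double_exponential(1.0, 37.0, 20.0, 0.1)  # cools slower at first...
simple = newton(1.0, 37.0, 20.0, 0.1)              # ...than the pure exponential
```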
Directory of Open Access Journals (Sweden)
G. Di Baldassarre
2006-01-01
Several hydrological analyses need to be founded on a reliable estimate of the design storm, which is the expected rainfall depth corresponding to a given duration and probability of occurrence, usually expressed in terms of return period. The annual series of precipitation maxima for storm duration ranging from 15 min to 1 day, observed at a dense network of raingauges sited in northern central Italy, are analyzed using an approach based on L-moments. The analysis investigates the statistical properties of rainfall extremes and detects significant relationships between these properties and the mean annual precipitation (MAP). On the basis of these relationships, we developed a regional model for estimating the rainfall depth for a given storm duration and recurrence interval in any location of the study region. The applicability of the regional model was assessed through Monte Carlo simulations. The uncertainty of the model for ungauged sites was quantified through an extensive cross-validation.
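The L-moment machinery underlying such regional analyses starts from sample probability-weighted moments. This sketch computes the first two sample L-moments and the L-CV (the kind of statistic regionalised against MAP) for a hypothetical annual-maximum series; the regional model itself is not reproduced:

```python
def sample_l_moments(data):
    """First two sample L-moments (l1, l2) via probability-weighted moments:
    b0 is the mean, b1 weights the ascending order statistics by their rank."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((j / (n - 1)) * x[j] for j in range(n)) / n  # j = 0..n-1
    l1 = b0
    l2 = 2.0 * b1 - b0
    return l1, l2

# Hypothetical annual-maximum rainfall depths (mm) for one gauge and duration
l1, l2 = sample_l_moments([42.0, 55.0, 38.0, 61.0, 47.0])
lcv = l2 / l1  # L-CV: a robust, bounded analogue of the coefficient of variation
```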
Short GMC lifetimes: an observational estimate with the PdBI Arcsecond Whirlpool Survey (PAWS)
Meidt, Sharon E; Dobbs, Clare L; Pety, Jerome; Thompson, Todd A; Garcia-Burillo, Santiago; Leroy, Adam K; Schinnerer, Eva; Colombo, Dario; Querejeta, Miguel; Kramer, Carsten; Schuster, Karl F; Dumas, Gaelle
2015-01-01
We describe and execute a novel approach to observationally estimate the lifetimes of giant molecular clouds (GMCs). We focus on the cloud population between the two main spiral arms in M51 (the inter-arm region) where cloud destruction via shear and star formation feedback dominates over formation processes. By monitoring the change in GMC number densities and properties from one side of the inter-arm to the other, we estimate the lifetime as a fraction of the inter-arm travel time. We find that GMC lifetimes in M51's inter-arm are finite and short, 20 to 30 Myr. Such short lifetimes suggest that cloud evolution is influenced by environment, in which processes can disrupt GMCs after a few free-fall times. Over most of the region under investigation shear appears to regulate the lifetime. As the shear timescale increases with galactocentric radius, we expect cloud destruction to switch primarily to star formation feedback at larger radii. We identify a transition from shear- to feedback-dominated disruption t...
DEFF Research Database (Denmark)
Brix, Lau; Christoffersen, Christian P. V.; Kristiansen, Martin Søndergaard
of the aorta. Methods: 2D phase contrast flow images of the aorta were acquired from a patient with an enlarged pulmonary artery on a Philips Achieva 1.5T CMR system. The cardiac motion was removed from the data set using the Cornelius/Kanade registration algorithm. The time resolved flow data...... promising because it saves time for post-processing. However, the k-means cluster approach is not comprehensive for quantitative flow estimations as it is but seems feasible for a subsequent segmentation algorithm like deformable contours (i.e. snakes). Future work may overcome this manual part and make...
DEFF Research Database (Denmark)
Sparrevohn, Claus Reedtz; Nielsen, Jan; Storr-Paulsen, Marie
, as all recreational fishermen have to purchase a personal non-transferable and time limited national license before fishing. However, this list will not include those fishing illegally without a license. Therefore, two types of recall surveys with their own questionnaires and group of respondents were...... carried out. The first survey - the license list survey – was carried out once in 2009 and twice in 2010. This survey had a sampling frame corresponding to the list of persons that had purchased a license within the last 12 months. Respondents were asked to provide detailed information on catch and effort...... per ICES area and quarter. In order to also estimate the fraction of fishermen that fished without a valid license, a second survey, called – the Omnibus survey-, was carried out four times. This survey targeted the entire Danish population between 16 and 74 of age...
Almasi, A.; Blom, A.; Heitkönig, I.M.A.; Kpanou, J.B.; Prins, H.H.T.
2001-01-01
A survey of apes was carried out between October 1996 and May 1997 in the Dzanga sector of the Dzanga-Ndoki National Park, Central African Republic (CAR), to estimate gorilla (Gorilla gorilla gorilla) and chimpanzee (Pan troglodytes) densities. The density estimates were based on nest counts. The st
DEFF Research Database (Denmark)
Nielsen, J. Rasmus; Kristensen, Kasper; Lewy, Peter
2014-01-01
Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes...
Directory of Open Access Journals (Sweden)
Kott Phillip S.
2014-09-01
This article describes a two-step calibration-weighting scheme for a stratified simple random sample of hospital emergency departments. The first step adjusts for unit nonresponse. The second increases the statistical efficiency of most estimators of interest. Both use a measure of emergency-department size and other useful auxiliary variables contained in the sampling frame. Although many survey variables are roughly a linear function of the measure of size, response is better modeled as a function of the log of that measure. Consequently the log of size is a calibration variable in the nonresponse-adjustment step, while the measure of size itself is a calibration variable in the second calibration step. Nonlinear calibration procedures are employed in both steps. We show with 2010 DAWN data that estimating variances as if a one-step calibration weighting routine had been used when there were in fact two steps can, after appropriately adjusting the finite-population correction, produce standard-error estimates that tend to be slightly conservative.
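A single linear (GREG-style) calibration step conveys the core mechanics: adjust design weights so a weighted total matches a control total known from the frame. The paper's procedure uses two steps and nonlinear calibration; the weights, size measures and control total below are hypothetical:

```python
def calibrate(design_weights, x, control_total):
    """Linear calibration: w_i = d_i * (1 + lam * x_i), with lam chosen so
    the calibrated weighted total of x equals the control total exactly."""
    t_hat = sum(d * xi for d, xi in zip(design_weights, x))
    denom = sum(d * xi * xi for d, xi in zip(design_weights, x))
    lam = (control_total - t_hat) / denom
    return [d * (1.0 + lam * xi) for d, xi in zip(design_weights, x)]

d = [10.0, 10.0, 10.0]          # hypothetical design weights
size = [100.0, 250.0, 400.0]    # hypothetical measure of ED size
w = calibrate(d, size, 8000.0)  # hypothetical frame total of the size measure
```

After calibration the weighted total of `size` hits 8000 exactly; estimators of variables correlated with size inherit the efficiency gain, which is the rationale the abstract describes for the second step.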
The Single Cigarette Economy in India--a Back of the Envelope Survey to Estimate its Magnitude.
Lal, Pranay; Kumar, Ravinder; Ray, Shreelekha; Sharma, Narinder; Bhattarcharya, Bhaktimay; Mishra, Deepak; Sinha, Mukesh K; Christian, Anant; Rathinam, Arul; Singh, Gurbinder
2015-01-01
Sale of single cigarettes is an important factor in early experimentation, initiation and persistence of tobacco use, and a vital factor in the smoking epidemic in India as it is globally. Single cigarettes also promote the sale of illicit cigarettes and neutralise the effect of pack warnings and effective taxation, making tobacco more accessible and affordable to minors. This is, to our knowledge, the first study to estimate the size of the single-stick market in India. In February 2014, a 10-jurisdiction survey was conducted across India to estimate the sale of cigarettes in packs and sticks, by brand and price, over a full business day. We estimate that nearly 75% of all cigarettes are sold as single sticks annually, which translates to nearly half a billion US dollars, or 30 percent of India's excise revenues from all cigarettes. This is the price which consumers pay but which is not captured through tax, and which therefore pervades an informal economy. Tracking the retail price of single cigarettes is an efficient way to determine the willingness to pay of cigarette smokers and is a possible method for setting tax rates in the absence of any other rationale.
López-Sanjuan, C; Hernández-Monteagudo, C; Varela, J; Molino, A; Arnalte-Mur, P; Ascaso, B; Castander, F J; Fernández-Soto, A; Huertas-Company, M; Márquez, I; Martínez, V J; Masegosa, J; Moles, M; Pović, M; Aguerri, J A L; Alfaro, E; Benítez, N; Broadhurst, T; Cabrera-Caño, J; Cepa, J; Cerviño, M; Cristóbal-Hornillos, D; Del Olmo, A; Delgado, R M González; Husillos, C; Infante, L; Perea, J; Prada, F; Quintana, J M
2014-01-01
Our goal is to estimate empirically, for the first time, the cosmic variance that affects merger fraction studies based on close pairs. We compute the merger fraction from photometric redshift close pairs with 10h^-1 kpc <= rp <= 50h^-1 kpc and Dv <= 500 km/s, and measure it in the 48 sub-fields of the ALHAMBRA survey. We study the distribution of the measured merger fractions, that follow a log-normal function, and estimate the cosmic variance sigma_v as the intrinsic dispersion of the observed distribution. We develop a maximum likelihood estimator to measure a reliable sigma_v and avoid the dispersion due to the observational errors (including the Poisson shot noise term). The cosmic variance of the merger fraction depends mainly on (i) the number density of the populations under study, both for the principal (n_1) and the companion (n_2) galaxy in the close pair, and (ii) the probed cosmic volume V_c. We find a significant dependence on neither the search radius used to define close companions, t...
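A minimal version of the maximum-likelihood idea described above: treat each observed log merger fraction as drawn from N(mu, sigma_v^2 + err_i^2) with known per-field observational errors, and profile out mu on a grid over sigma_v. The data, errors and grid bounds are hypothetical, and the Poisson shot-noise term the authors include is folded into err_i here:

```python
import math

def sigma_v_mle(log_f, err, grid_max=1.0, steps=2000):
    """Maximum-likelihood intrinsic dispersion, assuming
    log f_i ~ N(mu, sigma_v^2 + err_i^2) with known err_i."""
    best_ll, best_sv = float("-inf"), 0.0
    for k in range(steps + 1):
        sv = grid_max * k / steps
        var = [sv * sv + e * e for e in err]
        # inverse-variance-weighted mean is the MLE of mu at fixed sigma_v
        mu = sum(lf / v for lf, v in zip(log_f, var)) / sum(1.0 / v for v in var)
        ll = sum(-0.5 * math.log(2 * math.pi * v) - (lf - mu) ** 2 / (2 * v)
                 for lf, v in zip(log_f, var))
        if ll > best_ll:
            best_ll, best_sv = ll, sv
    return best_sv

# Hypothetical log merger fractions in a few sub-fields, equal errors of 0.05 dex
sv = sigma_v_mle([-1.1, -0.8, -1.4, -0.9, -1.3], [0.05] * 5)
```

Because the observational variance is subtracted in quadrature inside the likelihood, the recovered sigma_v is smaller than the raw scatter of the observations, which is exactly the separation of cosmic variance from measurement noise that the abstract describes.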
Bayliss, Matthew. B.; Zengo, Kyle; Ruel, Jonathan; Benson, Bradford A.; Bleem, Lindsey E.; Bocquet, Sebastian; Bulbul, Esra; Brodwin, Mark; Capasso, Raffaella; Chiu, I.-non; McDonald, Michael; Rapetti, David; Saro, Alex; Stalder, Brian; Stark, Antony A.; Strazzullo, Veronica; Stubbs, Christopher W.; Zenteno, Alfredo
2017-03-01
The velocity distribution of galaxies in clusters is not universal; rather, galaxies are segregated according to their spectral type and relative luminosity. We examine the velocity distributions of different populations of galaxies within 89 Sunyaev Zel’dovich (SZ) selected galaxy clusters spanning 0.28 < z < 1.08. The velocity dispersion of star-forming cluster galaxies is 17±4% greater than that of passive cluster galaxies, and the velocity dispersion of bright (m < m*−0.5) cluster galaxies is 11±4% lower than the velocity dispersion of our total member population. We find good agreement with simulations regarding the shape of the relationship between the measured velocity dispersion and the fraction of passive versus star-forming galaxies used to measure it, but we find a small offset between this relationship as measured in data and simulations, which suggests that our dispersions are systematically low by as much as 3% relative to simulations. We argue that this offset could be interpreted as a measurement of the effective velocity bias that describes the ratio of our observed velocity dispersions and the intrinsic velocity dispersion of dark matter particles in a published simulation result. Measuring velocity bias in this way suggests that large spectroscopic surveys can improve dispersion-based mass-observable scaling relations for cosmology even in the face of velocity biases, by quantifying and ultimately calibrating them out.
Pleasants, John M.; Zalucki, Myron P.; Oberhauser, Karen S.; Brower, Lincoln P.; Taylor, Orley R.; Thogmartin, Wayne E.
2017-01-01
To assess the change in the size of the eastern North American monarch butterfly summer population, studies have used long-term data sets of counts of adult butterflies or eggs per milkweed stem. Despite the observed decline in the monarch population as measured at overwintering sites in Mexico, these studies found no decline in summer counts in the Midwest, the core of the summer breeding range, leading to a suggestion that the cause of the monarch population decline is not the loss of Midwest agricultural milkweeds but increased mortality during the fall migration. Using these counts to estimate population size, however, does not account for the shift of monarch activity from agricultural fields to non-agricultural sites over the past 20 years, as a result of the loss of agricultural milkweeds due to the near-ubiquitous use of glyphosate herbicides. We present the counter-hypotheses that the proportion of the monarch population present in non-agricultural habitats, where counts are made, has increased and that counts reflect both population size and the proportion of the population observed. We use data on the historical change in the proportion of milkweeds, and thus monarch activity, in agricultural fields and non-agricultural habitats to show why using counts can produce misleading conclusions about population size. We then separate out the shifting proportion effect from the counts to estimate the population size and show that these corrected summer monarch counts show a decline over time and are correlated with the size of the overwintering population. In addition, we present evidence against the hypothesis of increased mortality during migration. The milkweed limitation hypothesis for monarch decline remains supported and conservation efforts focusing on adding milkweeds to the landscape in the summer breeding region have a sound scientific basis.
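The correction proposed above can be sketched as dividing the observed count by the (time-varying) proportion of monarch activity in the surveyed non-agricultural habitat; the proportions and counts below are hypothetical and only illustrate why flat raw counts can mask a decline:

```python
def corrected_count(observed_count, prop_in_nonag):
    """Adjust a non-agricultural-habitat count for the shifting share of
    monarch activity found in that habitat type."""
    return observed_count / prop_in_nonag

# Hypothetical: identical raw counts per survey, but non-ag habitat held 30%
# of monarch activity historically versus 90% after agricultural milkweed loss.
early = corrected_count(12.0, 0.30)  # implies a larger true population then
late = corrected_count(12.0, 0.90)   # implies a smaller population now
```

With the same raw count, the corrected population estimate falls by almost a factor of three, which is the direction of the effect the authors report once the habitat shift is accounted for.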
SHORT GMC LIFETIMES: AN OBSERVATIONAL ESTIMATE WITH THE PdBI ARCSECOND WHIRLPOOL SURVEY (PAWS)
Energy Technology Data Exchange (ETDEWEB)
Meidt, Sharon E.; Hughes, Annie; Schinnerer, Eva; Colombo, Dario; Querejeta, Miguel [Max-Planck-Institut für Astronomie / Königstuhl 17 D-69117 Heidelberg (Germany); Dobbs, Clare L. [School of Physics and Astronomy, University of Exeter, Stocker Road, Exeter EX4 4QL (United Kingdom); Pety, Jérôme [Institut de Radioastronomie Millimétrique, 300 Rue de la Piscine, F-38406 Saint Martin d’Hères (France); Thompson, Todd A. [Department of Astronomy, The Ohio State University, 140 W. 18th Ave., Columbus, OH 43210 (United States); García-Burillo, Santiago [Observatorio Astronómico Nacional—OAN, Observatorio de Madrid Alfonso XII, 3, E-28014 Madrid (Spain); Leroy, Adam K. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Kramer, Carsten [Instituto Radioastronomía Milimétrica, Av. Divina Pastora 7, Nucleo Central, E-18012 Granada (Spain); Schuster, Karl F.; Dumas, Gaëlle [Observatoire de Paris, 61 Avenue de l’Observatoire, F-75014 Paris (France)
2015-06-10
We describe and execute a novel approach to observationally estimate the lifetimes of giant molecular clouds (GMCs). We focus on the cloud population between the two main spiral arms in M51 (the inter-arm region) where cloud destruction via shear and star formation feedback dominates over formation processes. By monitoring the change in GMC number densities and properties across the inter-arm, we estimate the lifetime as a fraction of the inter-arm travel time. We find that GMC lifetimes in M51's inter-arm are finite and short, 20–30 Myr. Over most of the region under investigation shear appears to regulate the lifetime. As the shear timescale increases with galactocentric radius, we expect cloud destruction to switch primarily to feedback at larger radii. We identify a transition from shear- to feedback-dominated disruption, finding that shear is more efficient at dispersing clouds, whereas feedback transforms the population, e.g., by fragmenting high-mass clouds into lower mass pieces. Compared to the characteristic timescale for molecular hydrogen in M51, our short lifetimes suggest that gas can remain molecular while clouds disperse and reassemble. We propose that galaxy dynamics regulates the cycling of molecular material from diffuse to bound (and ultimately star-forming) objects, contributing to long observed molecular depletion times in normal disk galaxies. We also speculate that, in extreme environments like elliptical galaxies and concentrated galaxy centers, star formation can be suppressed when the shear timescale is short enough that some clouds will not survive to form stars.
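The core inference, a finite lifetime deduced from how the cloud population thins out over the inter-arm crossing, can be illustrated under a simple assumed exponential-survival model; the function and numbers below are illustrative stand-ins, not the PAWS analysis itself:

```python
import math

def lifetime_from_decline(n_entry, n_exit, travel_time_myr):
    """Mean cloud lifetime assuming exponential survival N(t) = N0*exp(-t/tau),
    given number densities at entry to and exit from the inter-arm region."""
    if n_exit >= n_entry:
        raise ValueError("expected a net decline across the inter-arm region")
    return travel_time_myr / math.log(n_entry / n_exit)

# Illustrative numbers only: a strong drop in cloud number density over a
# ~90 Myr inter-arm crossing yields lifetimes of order tens of Myr.
tau = lifetime_from_decline(n_entry=100, n_exit=5, travel_time_myr=90.0)
# tau ≈ 30 Myr, of the same order as the 20-30 Myr quoted for M51
```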
Wald, D. J.; Jaiswal, K. S.; Marano, K.; Hearne, M.; Earle, P. S.; So, E.; Garcia, D.; Hayes, G. P.; Mathias, S.; Applegate, D.; Bausch, D.
2010-12-01
The U.S. Geological Survey (USGS) has begun publicly releasing earthquake alerts for significant earthquakes around the globe based on estimates of potential casualties and economic losses. These estimates should significantly enhance the utility of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system that has been providing estimated ShakeMaps and computing population exposures to specific shaking intensities since 2007. Quantifying earthquake impacts and communicating loss estimates (and their uncertainties) to the public has been the culmination of several important new and evolving components of the system. First, the operational PAGER system now relies on empirically-based loss models that account for estimated shaking hazard, population exposure, and employ country-specific fatality and economic loss functions derived using analyses of losses due to recent and past earthquakes. In some countries, our empirical loss models are informed in part by PAGER’s semi-empirical and analytical loss models, and building exposure and vulnerability data sets, all of which are being developed in parallel to the empirical approach. Second, human and economic loss information is now portrayed as a supplement to existing intensity/exposure content on both PAGER summary alert (available via cell phone/email) messages and web pages. Loss calculations also include estimates of the economic impact with respect to the country’s gross domestic product. Third, in order to facilitate rapid and appropriate earthquake responses based on our probable loss estimates, in early 2010 we proposed a four-level Earthquake Impact Scale (EIS). Instead of simply issuing median estimates for losses—which can be easily misunderstood and misused—this scale provides ranges of losses from which potential responders can gauge expected overall impact from strong shaking. EIS is based on two complementary criteria: the estimated cost of damage, which is most suitable for U
Directory of Open Access Journals (Sweden)
P. De Vita
2012-04-01
In this study, an engineering geological analysis for the assessment of the rock failure susceptibility of a high, steep, rocky coast was developed by means of non-contact geostructural surveys. The methodology was applied to a 6-km coastal cliff located in the Gulf of Tigullio (Northern Tyrrhenian Sea) between Rapallo and Chiavari.
The method is based on the geostructural characterisation of outcropping rock masses through meso- and macroscale stereoscopic analyses of digital photos that were taken continuously from a known distance from the coastline. The results of the method were verified through direct surveys of accessible sample areas. The rock failure susceptibility of the coastal sector was assessed by analysing the fundamental rock slope mechanisms of instability, and the results were implemented in a Geographic Information System (GIS).
The proposed method is useful for rock failure susceptibility assessments in high, steep, rocky coastal areas, where accessibility is limited due to cliffs or steep slopes. Moreover, the method can be applied to private properties or any other area where a complete and systematic analysis of rock mass structural features cannot be achieved.
Compared to direct surveys and to other non-contact methods based on digital terrestrial photogrammetry, the proposed procedure provided good quality data of the structural features of the rock mass at a low cost. Therefore, the method could be applied to similar coastal areas with a high risk of rock failure occurrence.
Steiner, Timothy J; Gururaj, Gopalakrishna; Andrée, Colette; Katsarava, Zaza; Ayzenberg, Ilya; Yu, Sheng-Yuan; Al Jumah, Mohammed; Tekle-Haimanot, Redda; Birbeck, Gretchen L; Herekar, Arif; Linde, Mattias; Mbewe, Edouard; Manandhar, Kedar; Risal, Ajay; Jensen, Rigmor; Queiroz, Luiz Paulo; Scher, Ann I; Wang, Shuu-Jiun; Stovner, Lars Jacob
2014-01-08
The global burden of headache is very large, but knowledge of it is far from complete and still needs to be gathered. Published population-based studies have used variable methodology, which has influenced findings and made comparisons difficult. The Global Campaign against Headache is undertaking initiatives to improve and standardize the methods in use for cross-sectional studies. One requirement is for a survey instrument with proven cross-cultural validity. This report describes the development of such an instrument. Two of the authors developed the initial version, which was used with adaptations in population-based studies in China, Ethiopia, India, Nepal, Pakistan, Russia, Saudi Arabia, Zambia and 10 countries in the European Union. The resultant evolution of this instrument was reviewed by an expert consensus group drawn from all world regions. The final output was the Headache-Attributed Restriction, Disability, Social Handicap and Impaired Participation (HARDSHIP) questionnaire, designed for application by trained lay interviewers. HARDSHIP is a modular instrument incorporating demographic enquiry, diagnostic questions based on ICHD-3 beta criteria, and enquiries into each of the following as components of headache-attributed burden: symptom burden; health-care utilization; disability and productive time losses; impact on education, career and earnings; perception of control; interictal burden; overall individual burden; effects on relationships and family dynamics; effects on others, including household partner and children; quality of life; wellbeing; and obesity as a comorbidity. HARDSHIP has already demonstrated validity and acceptability in multiple languages and cultures. Modules may be included or omitted, and others (e.g., on additional comorbidities) added, according to the purpose of the study and the resources (especially time) available.
Walker, Kate; Seaman, Shaun R; De Angelis, Daniela; Presanis, Anne M; Dodds, Julie P; Johnson, Anne M; Mercey, Danielle; Gill, O Noel; Copas, Andrew J
2011-10-01
Hard-to-reach population subgroups are typically investigated using convenience sampling, which may give biased estimates. Combining information from such surveys, a probability survey and clinic surveillance can potentially minimize the bias. We developed a methodology to estimate the prevalence of undiagnosed HIV infection among men who have sex with men (MSM) in England and Wales aged 16-44 years in 2003, making fuller use of the available data than earlier work. We performed a synthesis of three data sources: genitourinary medicine clinic surveillance (11 380 tests), a venue-based convenience survey including anonymous HIV testing (3702 MSM) and a general population sexual behaviour survey (134 MSM). A logistic regression model to predict undiagnosed infection was fitted to the convenience survey data and then applied to the MSM in the population survey to estimate the prevalence of undiagnosed infection in the general MSM population. This estimate was corrected for selection biases in the convenience survey using clinic surveillance data. A sensitivity analysis addressed uncertainty in our assumptions. The estimated prevalence of undiagnosed HIV in MSM was 2.4% [95% confidence interval (CI) 1.7-3.0%], and between 1.6% (95% CI 1.1-2.0%) and 3.3% (95% CI 2.4-4.1%) depending on assumptions; corresponding to 5500 (3390-7180), 3610 (2180-4740) and 7570 (4790-9840) men, and undiagnosed fractions of 33, 24 and 40%, respectively. Our estimates are consistent with earlier work that did not make full use of the data sources. Reconciling data from multiple sources, including probability-, clinic- and venue-based convenience samples, can reduce bias in estimates. This methodology could be applied in other settings to take full advantage of multiple imperfect data sources.
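A minimal sketch of the central modelling step — fit a logistic model for undiagnosed infection on the convenience sample, then average its predictions over the probability-survey MSM — using simulated stand-in data (the covariate, coefficients and sample sizes are assumptions; the published work additionally corrects for selection bias using clinic surveillance, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated stand-in for the venue-based convenience survey ------------
# One assumed covariate plus intercept; y is the anonymous HIV test result.
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-3.5, 0.8])            # assumed, for simulation only
y = rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ true_beta)))

# Fit logistic regression by Newton-Raphson (no ML library required).
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    grad = X.T @ (y - p)
    hess = (X * (p * (1 - p))[:, None]).T @ X
    beta = beta + np.linalg.solve(hess, grad)

# Apply the fitted model to the (simulated) probability-survey MSM and
# average the predicted probabilities to estimate population prevalence.
X_pop = np.column_stack([np.ones(134), rng.normal(size=134)])
prevalence = float(np.mean(1.0 / (1.0 + np.exp(-(X_pop @ beta)))))
```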
Jansen, Rob T P; Laeven, Mark; Kardol, Wim
2002-06-01
The analytical processes in clinical laboratories should be considered non-stationary, non-ergodic and probably non-stochastic processes. Both the process mean and the process standard deviation vary, and the variation can be different at different levels of concentration. This behavior is shown in five examples of different analytical systems: alkaline phosphatase on the Hitachi 911 analyzer (Roche), vitamin B12 on the Access analyzer (Beckman), prothrombin time and activated partial thromboplastin time on the STA Compact analyzer (Roche) and PO2 on the ABL 520 analyzer (Radiometer). A model is proposed to assess the status of a process. An exponentially weighted moving average and standard deviation were used to estimate the process mean and standard deviation. Process means were estimated overall and for each control level. The process standard deviation was estimated in terms of within-run standard deviation. Limits were defined in accordance with state-of-the-art- or biological variance-derived cut-offs. The examples given are real, not simulated, data. Individual control sample results were normalized to a target value and target standard deviation, and the normalized values were used in the exponentially weighted algorithm. The weighting factor was based on a process time constant, which was estimated from the period between two calibration or maintenance procedures. The proposed system was compared with the Westgard rules. The Westgard rules perform well, despite the underlying presumption of ergodicity; this is mainly due to the starting rule 1:2s, which proves essential to prevent a large number of rule violations. The probability of reporting a test result with an analytical error that exceeds the total allowable error was calculated for the proposed system as well as for the Westgard rules, and the proposed method performed better. The proposed algorithm was implemented in a computer program running on computers to which the analyzers were connected.
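The exponentially weighted running estimates of process mean and standard deviation can be sketched as follows; the weighting factor `lam` and the normalised control values are assumptions for illustration, not the paper's calibrated, time-constant-derived values:

```python
# Sketch of an exponentially weighted moving mean and SD for a
# non-stationary analytical process. Control results are assumed already
# normalised: z = (result - target) / target_sd. The weighting factor
# `lam` is an assumption here; the paper derives it from a process time
# constant (the period between calibration/maintenance procedures).

def ewma_mean_sd(z_values, lam=0.1):
    """Running exponentially weighted estimates of process mean and SD."""
    mean, var = 0.0, 1.0          # start at the in-control target
    means, sds = [], []
    for z in z_values:
        mean = (1 - lam) * mean + lam * z
        var = (1 - lam) * var + lam * (z - mean) ** 2
        means.append(mean)
        sds.append(var ** 0.5)
    return means, sds

# A sustained +2 SD shift is tracked gradually, in contrast to the
# all-or-nothing flagging of discrete control rules:
means, sds = ewma_mean_sd([0.1, -0.2, 0.0, 2.1, 1.9, 2.2, 2.0, 1.8])
# the running mean drifts upward after the shift at the fourth result
```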
Hamchevici, Carmen; Udrea, Ion
2013-11-01
The concept of the basin-wide Joint Danube Survey (JDS) was launched by the International Commission for the Protection of the Danube River (ICPDR) as a tool for investigative monitoring under the Water Framework Directive (WFD), with a frequency of 6 years. The first JDS was carried out in 2001, and its success in providing key information for characterisation of the Danube River Basin District as required by the WFD led to the organisation of the second JDS in 2007, which was the world's biggest river research expedition in that year. The present paper presents an approach for improving the survey strategy for the next planned survey, JDS3 (2013), by means of several multivariate statistical techniques. In order to design the optimum structure in terms of parameters and sampling sites, principal component analysis (PCA), factor analysis (FA) and cluster analysis were applied to JDS2 data for 13 selected physico-chemical elements and one biological element measured at 78 sampling sites located on the main course of the Danube. Results from PCA/FA showed that most of the dataset variance (above 75%) was explained by five varifactors loaded with 8 out of 14 variables: physical (transparency and total suspended solids), relevant nutrients (N-nitrates and P-orthophosphates), feedback effects of primary production (pH, alkalinity and dissolved oxygen) and algal biomass. Taking into account the representation of the factor scores given by FA versus sampling sites, and the major groups generated by the clustering procedure, the spatial network of the next survey could be carefully tailored, reducing the number of sampling sites by more than 30%. This target-oriented sampling strategy based on the selected multivariate statistics can provide a strong reduction in the dimensionality of the original data, and in the corresponding costs as well, without any loss of information.
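The PCA step — ranking components by explained variance and keeping enough to pass a threshold such as 75% — can be sketched on a random stand-in for the 78-site × 14-variable JDS2 matrix (the data below are simulated; only the procedure mirrors the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in for the JDS2 matrix: 78 sites x 14 standardised
# variables, with one induced correlation so a leading component emerges.
data = rng.normal(size=(78, 14))
data[:, 1] = data[:, 0] + 0.1 * rng.normal(size=78)

# PCA via SVD of the column-standardised matrix.
Z = (data - data.mean(axis=0)) / data.std(axis=0)
_, s, _ = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)
cumulative = np.cumsum(explained)

# Smallest number of components explaining at least 75% of the variance,
# mirroring the paper's "above 75%" criterion for retained varifactors.
k = int(np.searchsorted(cumulative, 0.75) + 1)
```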
Blackmon, Heath; Demuth, Jeffery P
2016-02-01
The pace and direction of evolution in response to selection, drift, and mutation are governed by the genetic architecture that underlies trait variation. Consequently, much of evolutionary theory is predicated on assumptions about whether genes can be considered to act in isolation, or in the context of their genetic background. Evolutionary biologists have disagreed, sometimes heatedly, over which assumptions best describe evolution in nature. Methods for estimating genetic architectures that favor simpler (i.e., additive) models contribute to this debate. Here we address one important source of bias, model selection in line cross analysis (LCA). LCA estimates genetic parameters conditional on the best model chosen from a vast model space using relatively few line means. Current LCA approaches often favor simple models and ignore uncertainty in model choice. To address these issues we introduce Software for Analysis of Genetic Architecture (SAGA), which comprehensively assesses the potential model space, quantifies model selection uncertainty, and uses model weighted averaging to accurately estimate composite genetic effects. Using simulated data and previously published LCA studies, we demonstrate the utility of SAGA to more accurately define the components of complex genetic architectures, and show that traditional approaches have underestimated the importance of epistasis.
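The model-weighted averaging that SAGA performs can be illustrated with standard Akaike weights; the AIC values and effect estimates below are hypothetical, and SAGA's actual implementation (its information criterion and model space) may differ in detail:

```python
import math

def akaike_weights(aic_values):
    """Model weights from AIC: w_i = exp(-delta_i/2) / sum_j exp(-delta_j/2),
    where delta_i is each model's AIC difference from the best model."""
    best = min(aic_values)
    rel = [math.exp(-(a - best) / 2) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

def model_averaged(estimates, aic_values):
    """Weighted-average estimate of a genetic effect across candidate models."""
    w = akaike_weights(aic_values)
    return sum(wi * ei for wi, ei in zip(w, estimates))

# Hypothetical additive-effect estimates from three candidate models: the
# average leans toward the best-supported model without ignoring the rest.
effect = model_averaged([1.0, 1.4, 0.9], [100.0, 101.2, 104.5])
```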
Mokdad, Fatiha; Haddad, Boualem
2017-06-01
In this paper, two new infrared precipitation estimation approaches based on the concept of k-means clustering are first proposed, named the NAW-Kmeans and the GPI-Kmeans methods. They are then adapted to the southern Mediterranean basin, where the subtropical climate prevails. The infrared data (10.8 μm channel) acquired by the MSG-SEVIRI sensor in winter and spring 2012 are used. Tests are carried out in eight areas distributed over northern Algeria: Sebra, El Bordj, Chlef, Blida, Bordj Menael, Sidi Aich, Beni Ourthilane, and Beni Aziz. The validation is performed by comparing the estimated rainfall to rain gauge observations collected by the National Office of Meteorology in Dar El Beida (Algeria). Despite the complexity of the subtropical climate, the obtained results indicate that the NAW-Kmeans and GPI-Kmeans approaches gave satisfactory results for the considered rain rates. The proposed schemes also improve precipitation estimation performance compared to the original NAW (Negri, Adler, and Wetzel) and GPI (GOES Precipitation Index) algorithms.
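The clustering core shared by both proposed schemes can be sketched as a plain k-means partition of 10.8 μm brightness temperatures into cold (potentially raining) and warm (cloud-free) pixels; the temperatures below are simulated and the two-cluster setup is a simplification of the published methods:

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans_1d(x, k, iters=50):
    """Plain k-means on a 1-D array (here: 10.8 um brightness temperatures)."""
    centers = np.sort(rng.choice(x, size=k, replace=False))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers, labels

# Simulated IR scene: cold (potentially raining) cloud tops vs warm
# cloud-free surface pixels, in kelvin.
temps = np.concatenate([rng.normal(210, 5, 300), rng.normal(280, 5, 700)])
centers, labels = kmeans_1d(temps, k=2)
rain_cluster = int(np.argmin(centers))      # coldest cluster flagged as rain
rain_fraction = float(np.mean(labels == rain_cluster))
# rain_fraction recovers the simulated 30% share of cold pixels
```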
Kojić, Nevena Eremić; Derić, Mirjana; Dejanović, Jadranka
2014-01-01
This study was done in order to evaluate the effect of serum levels of total cholesterol, triglycerides, low-density lipoprotein-cholesterol and high-density lipoprotein-cholesterol on the change in 10-year coronary heart disease risk distribution. The study included 110 subjects of both genders (71 female and 39 male), aged 29 to 73, treated at the Outpatient Department of Atherosclerosis Prevention, Centre for Laboratory Medicine, Clinical Centre Vojvodina. The 10-year coronary heart disease risk was estimated at the first examination and after one year of treatment by means of the Framingham, PROCAM and SCORE coronary risk scores and their modifications (Framingham Adult Treatment Panel III, Framingham Weibull, PROCAM NS and PROCAM Cox Hazards). Age, gender, systolic and diastolic blood pressure, smoking, positive family history and left ventricular hypertrophy are risk factors involved in the estimation of coronary heart disease risk besides lipid parameters. There were no significant differences in nutritional status, smoking habits, or systolic and diastolic pressure, and no development of diabetes mellitus or cardiovascular incidents during the one-year follow-up. However, there was a significant reduction in cholesterol level, and in the estimated 10-year risk and risk category according to the Framingham, Framingham ATP III, Framingham Weibull and SCORE scores, compared with the risk at the beginning of the study. Our results show that the correction of lipid levels after one year of treatment leads to a significant redistribution of 10-year coronary heart disease risk estimated by means of seven different coronary risk scores. This should encourage patients and doctors to persist with preventive measures.
Chari, Amalavoyal V; Engberg, John; Ray, Kristin N; Mehrotra, Ateev
2015-06-01
To provide nationally representative estimates of the opportunity costs of informal elder-care in the United States. Data from the 2011 and 2012 American Time Use Survey. Wage is used as the measure of an individual's value of time (opportunity cost), with wages being imputed for nonworking individuals using a selection-corrected regression methodology. The total opportunity costs of informal elder-care amount to $522 billion annually, while the costs of replacing this care by unskilled and skilled paid care are $221 billion and $642 billion, respectively. Informal caregiving remains a significant phenomenon in the United States with a high opportunity cost, although it remains more economical (in the aggregate) than skilled paid care. © Health Research and Educational Trust.
Energy Technology Data Exchange (ETDEWEB)
Ganot, Noam; Gal-Yam, Avishay; Ofek, Eran O.; Sagiv, Ilan; Waxman, Eli; Lapid, Ofer [Department of Particle Physics and Astrophysics, Faculty of Physics, The Weizmann Institute of Science, Rehovot 76100 (Israel); Kulkarni, Shrinivas R.; Kasliwal, Mansi M. [Cahill Center for Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Ben-Ami, Sagi [Smithsonian Astrophysical Observatory, Harvard-Smithsonian Ctr. for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Chelouche, Doron; Rafter, Stephen [Physics Department, Faculty of Natural Sciences, University of Haifa, 31905 Haifa (Israel); Behar, Ehud; Laor, Ari [Physics Department, Technion Israel Institute of Technology, 32000 Haifa (Israel); Poznanski, Dovi; Nakar, Ehud; Maoz, Dan [School of Physics and Astronomy, Tel Aviv University, 69978 Tel Aviv (Israel); Trakhtenbrot, Benny [Institute for Astronomy, ETH Zurich, Wolfgang-Pauli-Strasse 27 Zurich 8093 (Switzerland); Neill, James D.; Barlow, Thomas A.; Martin, Christofer D., E-mail: noam.ganot@gmail.com [California Institute of Technology, 1200 East California Boulevard, MC 278-17, Pasadena, CA 91125 (United States); Collaboration: ULTRASAT Science Team; WTTH consortium; GALEX Science Team; Palomar Transient Factory; and others
2016-03-20
The radius and surface composition of an exploding massive star, as well as the explosion energy per unit mass, can be measured using early UV observations of core-collapse supernovae (SNe). We present the first results from a simultaneous GALEX/PTF search for early ultraviolet (UV) emission from SNe. Six SNe II and one Type II superluminous SN (SLSN-II) are clearly detected in the GALEX near-UV (NUV) data. We compare our detection rate with theoretical estimates based on early, shock-cooling UV light curves calculated from models that fit existing Swift and GALEX observations well, combined with volumetric SN rates. We find that our observations are in good agreement with calculated rates assuming that red supergiants (RSGs) explode with fiducial radii of 500 R⊙, explosion energies of 10^51 erg, and ejecta masses of 10 M⊙. Exploding blue supergiants and Wolf–Rayet stars are poorly constrained. We describe how such observations can be used to derive the progenitor radius, surface composition, and explosion energy per unit mass of such SN events, and we demonstrate why UV observations are critical for such measurements. We use the fiducial RSG parameters to estimate the detection rate of SNe during the shock-cooling phase (<1 day after explosion) for several ground-based surveys (PTF, ZTF, and LSST). We show that the proposed wide-field UV explorer ULTRASAT mission is expected to find >85 SNe per year (∼0.5 SN per deg^2), independent of host galaxy extinction, down to an NUV detection limit of 21.5 mag AB. Our pilot GALEX/PTF project thus convincingly demonstrates that a dedicated, systematic SN survey in the NUV band is a compelling method to study how massive stars end their lives.
Energy Technology Data Exchange (ETDEWEB)
Belzer, David B.
2004-09-04
This report examines measurement issues related to the amount of electricity used by the commercial sector in the U.S. and the implications for historical trends in commercial building electricity intensity (kWh/sq. ft. of floor space). The report compares two Energy Information Administration sources of data related to commercial buildings: the Commercial Building Energy Consumption Survey (CBECS) and the reporting by utilities of sales to commercial customers (survey Form-861). Over the past two decades these sources suggest significantly different trend rates of growth of electricity intensity, with the supply (utility)-based estimate growing much faster than the estimate based only upon the CBECS. The report undertakes various data adjustments in an attempt to reconcile the differences between these two sources. These adjustments deal with: 1) periodic reclassifications of industrial vs. commercial electricity usage at the state level, and 2) the amount of electricity used by non-enclosed equipment (non-building use) that is classified as commercial electricity sales. After applying these adjustments, there is a reasonably good correspondence between the two sources over the past four CBECS (beginning with 1992). However, as yet there is no satisfactory explanation of the differences between the two sources over longer periods that include the 1980s.
Bassani, Diego Garcia; Padoin, Cintia Vontobel; Veldhuizen, Scott
2008-11-01
Children exposed to parental psychiatric disorders have an increased risk of several psychiatric disorders, impaired development, behavioural problems, injuries, physical illness and mortality. Even though this high-risk group has been shown to benefit from health promotion and preventive interventions, estimates of the size of the population at risk are not available. Estimating the number of exposed children using adult survey data will likely generate valuable information for health policy, planning, and advocacy. In this paper, the authors present a method to indirectly estimate the size of this population using secondary data. A Canadian adult health survey and the Census were combined to estimate the prevalence of exposure of children less than 12 years to parental and non-parental psychiatric disorders. A method to combine census and survey data is presented and tested under varying degrees of data availability. Results are compared to the actual number of children exposed to parental psychiatric disorders and discussed. The most accurate estimates were obtained when the most complete survey was combined with relatively detailed census information. Incomplete survey simulations produced substantial underestimates of the prevalence of exposure even when combined with detailed census information.
Cisternas, Miriam G.; Murphy, Louise; Sacks, Jeffrey J.; Solomon, Daniel H.; Pasta, David J.; Helmick, Charles G.
2015-01-01
Objective: To provide a contemporary estimate of osteoarthritis (OA) by comparing the accuracy and prevalence of alternative definitions of OA. Methods: The Medical Expenditure Panel Survey (MEPS) household component (HC) records respondent-reported medical conditions as open-ended responses; professional coders translate these responses into ICD-9-CM codes for the medical conditions files. Using these codes and other data from the MEPS-HC medical conditions files, we constructed three case definitions of OA and assessed them against medical provider diagnoses of ICD-9-CM 715 [osteoarthrosis and allied disorders] in a MEPS subsample. The three definitions were: 1) strict = ICD-9-CM 715; 2) expanded = ICD-9-CM 715, 716 [other and unspecified arthropathies], or 719 [other and unspecified disorders of joint]; and 3) probable = strict OR expanded + respondent-reported prior diagnosis of OA or other arthritis excluding rheumatoid arthritis (RA). Results: Sensitivity and specificity of the three definitions were: strict, 34.6% and 97.5%; expanded, 73.8% and 90.5%; and probable, 62.9% and 93.5%. Conclusion: The strict definition of OA (ICD-9-CM 715) excludes many individuals with OA. The probable definition of OA has the optimal combination of sensitivity and specificity relative to the two other MEPS-based definitions and yields a national annual estimate of 30.8 million adults with OA (13.4% of the US adult population) for 2008-2011. PMID:26315529
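The reported operating points follow directly from the standard definitions; the confusion-matrix counts below are hypothetical values chosen to reproduce the strict definition's published 34.6%/97.5% figures, since the actual subsample counts are not given in the abstract:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity of a case definition against the gold
    standard (here: medical-provider ICD-9-CM 715 diagnoses)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts constructed to reproduce the strict definition's
# reported operating point; the real MEPS subsample counts are not given.
sens, spec = sens_spec(tp=346, fn=654, tn=975, fp=25)
# sens = 0.346, spec = 0.975
```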
Roberts, Greg; Bryant, Diane
2012-01-01
This study used data from the Early Childhood Longitudinal Survey, Kindergarten Class of 1998 –1999, to (a) estimate mathematics achievement trends through 5th grade in the population of students who are English-language proficient by the end of kindergarten, (b) compare trends across primary language groups within this English-language proficient group, (c) evaluate the effect of low socioeconomic status (SES) for English-language proficient students and within different primary language groups, and (d) estimate language-group trends in specific mathematics skill areas. The group of English-language proficient English-language learners (ELLs) was disaggregated into native Spanish speakers and native speakers of Asian languages, the 2 most prevalent groups of ELLs in the United States. Results of multilevel latent variable growth modeling suggest that primary language may be less salient than SES in explaining the mathematics achievement of English-language proficient ELLs. The study also found that mathematics-related school readiness is a key factor in explaining subsequent achievement differences and that the readiness gap is prevalent across the range of mathematics-related skills. PMID:21574702
Del Gobbo, Costanza; Colucci, Renato R.; Forte, Emanuele; Triglav Čekada, Michaela; Zorn, Matija
2016-08-01
It is well known that small glaciers of the mid latitudes, and especially those located at low altitude, respond suddenly to climate changes on both local and global scales. For this reason their monitoring, as well as the evaluation of their extent and volume, is essential. We present a ground penetrating radar (GPR) dataset acquired on September 23 and 24, 2013 on the Triglav glacier to identify layers with different characteristics (snow, firn, ice, debris) within the glacier and to define the extent and volume of the actual ice. By integrating and interpolating the whole GPR dataset in 3D, we estimate that at the moment of data acquisition the ice area was 3800 m2 and the ice volume 7400 m3. Its average thickness was 1.95 m, while its maximum thickness was slightly more than 5 m. Here we compare these results with a previous GPR survey acquired in 2000. A critical review of the historical data, undertaken to find the general trend and to forecast a possible evolution, is also presented. Between 2000 and 2013, we observed relevant changes in the internal distribution of the different units (snow, firn, ice), and the ice volume reduced from about 35,000 m3 to about 7400 m3. Such a result can be achieved only by using multiple GPR surveys, which make it possible not only to assess the volume occupied by a glacial body, but also to image its internal structure and the actual ice volume. In fact, applying one of the widely used empirical volume-area relations to infer the geometrical parameters of the glacier would lead to a relevant underestimation of ice loss.
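The quoted 2013 figures are internally consistent, as a quick check shows (average thickness is simply volume divided by area):

```python
# Average ice thickness is the ratio of GPR-derived volume to mapped area.
ice_area_m2 = 3800.0
ice_volume_m3 = 7400.0
avg_thickness_m = ice_volume_m3 / ice_area_m2        # ≈ 1.95 m, as reported

# Fractional ice loss between the 2000 and 2013 surveys.
volume_2000_m3 = 35_000.0
loss_fraction = 1.0 - ice_volume_m3 / volume_2000_m3  # ≈ 0.79, i.e. ~79% lost
```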
Directory of Open Access Journals (Sweden)
Laura Gosoniu
A national HIV/AIDS and malaria parasitological survey was carried out in Tanzania in 2007-2008. In this study the parasitological data were analyzed: (i) to identify climatic/environmental, socio-economic and intervention factors associated with child malaria risk and (ii) to produce a contemporary, high-spatial-resolution parasitaemia risk map of the country. Bayesian geostatistical models were fitted to assess the association between parasitaemia risk and its determinants. Bayesian kriging was employed to predict malaria risk at unsampled locations across Tanzania and to obtain the uncertainty associated with the predictions. Markov chain Monte Carlo (MCMC) simulation methods were employed for model fit and prediction. Parasitaemia risk estimates were linked to population data and the number of infected children at province level was calculated. Model validation indicated a high predictive ability of the geostatistical model, with 60.00% of the test locations within the 95% credible interval. The results indicate that older children are significantly more likely to test positive for malaria compared with younger children, and that living in urban areas and better-off households reduces the risk of infection. However, none of the environmental and climatic proxies or the intervention measures were significantly associated with the risk of parasitaemia. Low levels of malaria prevalence were estimated for Zanzibar island. The population-adjusted prevalence ranges from 0.29% in Kaskazini province (Zanzibar island) to 18.65% in the Mtwara region. The pattern of predicted malaria risk is similar to that of previous maps based on historical data, although the estimates are lower. The predicted maps could be used by decision-makers to allocate resources and target interventions in the regions with the highest burden of malaria, in order to reduce disease transmission in the country.
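Linking the kriged risk surface to population counts, as done at province level, reduces to a population-weighted average; the risks and populations below are hypothetical:

```python
# Sketch of linking location-level risk predictions to population counts to
# obtain a population-adjusted prevalence (all numbers hypothetical).

def population_adjusted_prevalence(risks, populations):
    """Population-weighted mean of predicted parasitaemia risk, plus the
    implied number of infected children."""
    total_pop = sum(populations)
    infected = sum(r * p for r, p in zip(risks, populations))
    return infected / total_pop, infected

# Three hypothetical locations within one province:
prev, n_infected = population_adjusted_prevalence(
    risks=[0.02, 0.10, 0.25], populations=[50_000, 30_000, 20_000]
)
# prev = (1000 + 3000 + 5000) / 100000 = 0.09
```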
Directory of Open Access Journals (Sweden)
M. Salvia
2011-08-01
This paper describes a procedure to estimate both the fraction of flooded area and the mean water level in vegetated river floodplains by using a synergy of active and passive microwave signatures. In particular, C-band Envisat ASAR in Wide Swath mode and AMSR-E at X, Ku and Ka band are used. The method, which is an extension of previously developed algorithms based on passive data, also exploits model simulations of vegetation emissivity. The procedure is applied to a long flood event which occurred in the Paraná River Delta from December 2009 to April 2010. The results obtained are consistent with in situ measurements of river water level.
Directory of Open Access Journals (Sweden)
M. Salvia
2011-03-01
This paper describes a procedure to estimate both the fraction of flooded area and the mean water level in vegetated river floodplains by using a synergy of active and passive microwave signatures. In particular, C-band Envisat ASAR in Wide Swath mode and AMSR-E at X, Ku and Ka band are used. The method, which is an extension of previously developed algorithms based on passive data, also exploits model simulations of vegetation emissivity. The procedure is applied to a long flood event which occurred in the Paraná River Delta from December 2009 to April 2010. The results obtained are consistent with in situ measurements of river water level.
Directory of Open Access Journals (Sweden)
Yun Hwi Park
In evaluating hearing disability in medicolegal work, the apportionment of age- and gender-related sensorineural hearing loss should be considered as a prior factor, especially for the elderly. However, no studies in the English-language literature have reported the age- and gender-related mean hearing threshold for the South Korean population. This study aimed to identify the mean hearing thresholds in the South Korean population to establish reference data and to identify the age- and gender-related characteristics. This study is based on the Korea National Health and Nutrition Examination Survey (KNHANES) 2010-2012, which was conducted by the Korean government and the data of which were disclosed to the public. A total of 15,606 participants (unweighted), representing 33,011,778 Koreans (weighted), with normal tympanic membranes and no history of regular or occupational noise exposure were selected and analyzed in this study. The relationship between hearing threshold level and frequency, age, and gender was investigated and analyzed in a highly-screened population by considering the sample weights of a complex survey design. A gender ratio difference was found between the unweighted and the weighted designs: male:female, 41.0%:59.0% (unweighted, participants) vs. 47.2%:52.8% (weighted, representing population). As age increased, the hearing threshold increased for all frequencies. Hearing thresholds at 3 kHz, 4 kHz, and 6 kHz showed a statistical difference between the genders for people older than 30, with the 4 kHz frequency showing the largest difference. This paper presents details about the mean hearing threshold based on age and gender. The data from KNHANES 2010-2012 showed gender differences at hearing thresholds of 3 kHz, 4 kHz, and 6 kHz in a highly-screened population. The most significant gender difference in relation to hearing threshold was observed at 4 kHz. The hearing thresholds at all of the tested frequencies
Park, Yun Hwi; Shin, Seung-Ho; Byun, Sung Wan; Kim, Ju Yeon
2016-01-01
In evaluating hearing disability in medicolegal work, the apportionment of age- and gender-related sensorineural hearing loss should be considered as a prior factor, especially for the elderly. However, in the literature written in the English language no studies have reported on the age- and gender-related mean hearing threshold for the South Korean population. This study aimed to identify the mean hearing thresholds in the South Korean population to establish reference data and to identify the age- and gender-related characteristics. This study is based on the Korea National Health and Nutrition Examination Survey (KNHANES) 2010-2012, which was conducted by the Korean government, the data of which was disclosed to the public. A total of 15,606 participants (unweighted) representing 33,011,778 Koreans (weighted) with normal tympanic membrane and no history of regular or occupational noise exposure were selected and analyzed in this study. The relationship between the hearing threshold level and frequency, age, and gender was investigated and analyzed in a highly-screened population by considering the sample weights of a complex survey design. A gender ratio difference was found between the unweighted and the weighted designs: male:female, 41.0%: 59.0% (unweighted, participants) vs. 47.2%:52.8% (weighted, representing population). As age increased, the hearing threshold increased for all frequencies. Hearing thresholds of 3 kHz, 4 kHz, and 6 kHz showed a statistical difference between both genders for people older than 30, with the 4 kHz frequency showing the largest difference. This paper presents details about the mean hearing threshold based on age and gender. The data from KNHANES 2010-2012 showed gender differences at hearing thresholds of 3 kHz, 4 kHz, and 6 kHz in a highly-screened population. The most significant gender difference in relation to hearing threshold was observed at 4 kHz. The hearing thresholds at all of the tested frequencies worsened with
Kawakami, H
2003-01-01
On 100 isobars from mass number 72 to 171, the radiation strength, dose equivalent and mean gamma-ray energy from p + 238U fission products at the Tandem accelerator facility were estimated on the basis of proton-induced fission mass-yield data by T. Tsukada. In order to control radiation, the decay curves of the radiation of each mass after irradiation were estimated and illustrated. These calculations showed that: 1) the peaks of the p + 238U fission products are at mass numbers 101 and 133; 2) the gamma-ray strength of the target ion source immediately after irradiation is 3.12x10^11 (radiations/s) after 4 cycles of a UC2 (2.6 g/cm^2) target irradiated by 30 MeV, 3 μA protons for 5 days and then cooled for 2 days; it decreased to 3.85x10^10 and 6.7x10^9 (radiations/s) after one day and two weeks of cooling, respectively; 3) the total dose equivalent is 3.8x10^4 (μSv/h) at 1 m distance without shielding; 4) there are no problems in controlling the following isobars, beca...
Dong, S L; Chu, T C; Lee, J S; Lan, G Y; Wu, T H; Yeh, Y H; Hwang, J J
2002-12-01
Estimation of mean glandular dose (MGD) has been investigated in recent years due to the potential risks of radiation-induced carcinogenesis associated with mammographic examination in diagnostic radiology. In this study, a new technique for immediate readout of the breast entrance skin air kerma (BESAK) using a high-sensitivity MOSFET dosimeter after a mammographic projection was introduced, and a formula for the prediction of tube output from exposure records was developed. A series of appropriate conversion factors was applied to determine the MGD from the BESAK. The results showed that the signal response of the high-sensitivity MOSFET exhibited excellent linearity within mammographic dose ranges, and that the energy dependence was less than 3% for each anode/filter combination at tube potentials of 25-30 kV. Good agreement was observed between the BESAK and the tube exposure output measurement for breasts thicker than 30 mm. In addition, the air kerma estimated from our prediction formula provided sufficient accuracy for thinner breasts. The average MGD from 120 Asian females was 1.5 mGy, comparable to other studies. Our results suggest that the high-sensitivity MOSFET dosimeter system is a good candidate for immediate readout of the BESAK after mammographic procedures.
Directory of Open Access Journals (Sweden)
Anne K. Galgon, Patricia A. Shewokis
2016-03-01
The objectives of this communication are to present the methods used to calculate mean absolute relative phase (MARP), deviation phase (DP) and point estimate relative phase (PRP) and to compare their utility in measuring postural coordination during the performance of a serial reaching task. MARP and DP are derived from continuous relative phase time series representing the relationship between two body segments or joints during movements. MARP is a single measure used to quantify the coordination pattern and DP measures the stability of the coordination pattern. PRP also quantifies coordination patterns by measuring the relationship between the timing of maximal or minimal angular displacements of two segments within cycles of movement. Seven young adults practiced a bilateral serial reaching task 300 times over 3 days. Relative phase measures were used to evaluate inter-joint relationships for shoulder-hip (proximal) and hip-ankle (distal) postural coordination at early and late learning. MARP, PRP and DP distinguished between proximal and distal postural coordination. There was no effect of practice on any of the relative phase measures for the group, but individual differences were seen over practice. Combined, MARP and DP estimated the stability of in-phase and anti-phase postural coordination patterns; however, additional qualitative movement analyses may be needed to interpret findings in a serial task. We discuss the strengths and limitations of using MARP and DP and compare MARP and DP to PRP measures in assessing coordination patterns in the context of various types of skillful tasks.
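Once continuous relative phase (CRP) curves have been computed and time-normalised, MARP and DP reduce to simple summary statistics. A minimal sketch under those assumptions (the array shape, degree units, and use of the population standard deviation are illustrative choices, not taken from the paper):

```python
import numpy as np

def marp(crp_trials):
    """Mean absolute relative phase: average |CRP| over all trials and
    normalised time points (0 deg ~ in-phase, 180 deg ~ anti-phase).
    `crp_trials` is assumed to be an array of shape (n_trials, n_time)
    holding CRP values in degrees."""
    return float(np.mean(np.abs(crp_trials)))

def deviation_phase(crp_trials):
    """Deviation phase: mean across time points of the between-trial
    standard deviation of CRP; smaller values mean a more stable
    coordination pattern."""
    return float(np.mean(np.std(crp_trials, axis=0)))

# Two hypothetical trials, two normalised time points (degrees)
crp = np.array([[10.0, -10.0],
                [20.0, -20.0]])
print(marp(crp), deviation_phase(crp))
```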
Irusta, Unai; Ruiz, Jesús; de Gauna, Sofía Ruiz; Eftestøl, Trygve; Kramer-Johansen, Jo
2009-04-01
Cardiopulmonary resuscitation (CPR) artifacts caused by chest compressions and ventilations interfere with the rhythm diagnosis of automated external defibrillators (AED). CPR must be interrupted for a reliable diagnosis. However, pauses in chest compressions compromise the defibrillation success rate and reduce perfusion of vital organs. The removal of the CPR artifacts would enable compressions to continue during AED rhythm analysis, thereby increasing the likelihood of resuscitation success. We have estimated the CPR artifact using only the frequency of the compressions as additional information to model it. Our model of the artifact is adaptively estimated using a least mean-square (LMS) filter. It was tested on 89 shockable and 292 nonshockable ECG samples from real out-of-hospital sudden cardiac arrest episodes. We evaluated the results using the shock advice algorithm of a commercial AED. The sensitivity and specificity were above 95% and 85%, respectively, for a wide range of working conditions of the LMS filter. Our results show that the CPR artifact can be accurately modeled using only the frequency of the compressions. These can be easily registered after small changes in the hardware of the CPR compression pads.
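The core idea above, modelling the CPR artifact from the compression frequency alone and tracking it with a least mean-square update, can be sketched as follows. This is an illustrative reconstruction, not the validated filter from the study: the harmonic count and step size are hypothetical, and the artifact is modelled as quadrature harmonics of the compression rate.

```python
import numpy as np

def lms_cpr_filter(ecg, comp_freq_hz, fs, n_harmonics=2, mu=0.05):
    """Adaptively estimate a CPR-like artifact as a sum of quadrature
    harmonics of the chest-compression frequency, using an LMS weight
    update, and return the ECG with the artifact estimate removed."""
    n = len(ecg)
    t = np.arange(n) / fs
    refs = []
    for k in range(1, n_harmonics + 1):
        refs.append(np.cos(2 * np.pi * k * comp_freq_hz * t))
        refs.append(np.sin(2 * np.pi * k * comp_freq_hz * t))
    refs = np.array(refs)           # (2 * n_harmonics, n) reference inputs
    w = np.zeros(refs.shape[0])     # adaptive harmonic amplitudes
    out = np.empty(n)
    for i in range(n):
        x = refs[:, i]
        e = ecg[i] - w @ x          # error = artifact-free ECG estimate
        w += 2.0 * mu * e * x       # LMS weight update
        out[i] = e
    return out

# Demo: a pure synthetic artifact at 108 compressions/min (1.8 Hz);
# after convergence the residual should be close to zero.
fs = 250.0
t = np.arange(int(4 * fs)) / fs
artifact = np.cos(2 * np.pi * 1.8 * t)
filtered = lms_cpr_filter(artifact, comp_freq_hz=1.8, fs=fs)
```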
NSGIC State | GIS Inventory — Geodetic Control Points dataset current as of 1995. Benchmarks; Vertical elevation bench marks for monumented geodetic survey control points for which mean sea level...
Directory of Open Access Journals (Sweden)
O. POPOV
2015-10-01
The article addresses one of the most important problems in the rational use of natural resources. Modern mathematical tools were developed for estimating the damage caused by atmospheric pollution to natural objects, together with methods for calculating the cost of their restoration. The problem was solved in three stages. In the first stage, the basic anthropogenic sources of pollution were identified and the conceptual behavior of pollutants emitted into the atmosphere by a stationary technological point source was illustrated; the choice of a mathematical model that determines the distribution of pollutant concentrations in the air within zones polluted by stationary point sources during short-term discharges was justified. In the second stage, mathematical tools were developed to determine the level of damage to objects located in the pollution zone, depending on the intensity and duration of exposure to the technogenic sources. In the third stage, mathematical models were developed to determine the recoverable amount of natural objects depending on their level of damage. A model example of the use of the developed means is described, and the advantages of the developed means over existing analogues are noted.
Age estimation by facial image: a survey
Institute of Scientific and Technical Information of China (English)
王先梅; 梁玲燕; 王志良; 胡四泉
2012-01-01
Age information, as an important personal trait, has great potential in safety surveillance, human-computer interaction, multimedia applications, and face recognition. As an emerging biometric identification technology, face-image based age estimation has gained great attention recently and has become one of the important research topics in machine learning and computer vision. In this paper, we survey the most commonly used existing methods in face-image based age estimation, focusing especially on the extraction of age features and on classification. We also introduce some face aging databases and evaluation protocols that are widely used at present. Based on these databases and evaluation methods, a comparison of the performance of several age estimation systems is presented. Finally, the challenges and promising directions of age estimation techniques are discussed.
López-Sanjuan, C.; Cenarro, A. J.; Hernández-Monteagudo, C.; Varela, J.; Molino, A.; Arnalte-Mur, P.; Ascaso, B.; Castander, F. J.; Fernández-Soto, A.; Huertas-Company, M.; Márquez, I.; Martínez, V. J.; Masegosa, J.; Moles, M.; Pović, M.; Aguerri, J. A. L.; Alfaro, E.; Aparicio-Villegas, T.; Benítez, N.; Broadhurst, T.; Cabrera-Caño, J.; Cepa, J.; Cerviño, M.; Cristóbal-Hornillos, D.; Del Olmo, A.; González Delgado, R. M.; Husillos, C.; Infante, L.; Perea, J.; Prada, F.; Quintana, J. M.
2014-04-01
Aims: Our goal is to estimate empirically, for the first time, the cosmic variance that affects merger fraction studies based on close pairs. Methods: We compute the merger fraction from photometric redshift close pairs with 10 h^-1 kpc ≤ rp ≤ 50 h^-1 kpc and Δv ≤ 500 km s^-1 and measure it in the 48 sub-fields of the ALHAMBRA survey. We study the distribution of the measured merger fractions, which follows a log-normal function, and estimate the cosmic variance σv as the intrinsic dispersion of the observed distribution. We develop a maximum likelihood estimator to measure a reliable σv and avoid the dispersion due to the observational errors (including the Poisson shot noise term). Results: The cosmic variance σv of the merger fraction depends mainly on (i) the number density of the populations under study, for both the principal (n1) and the companion (n2) galaxy in the close pair, and (ii) the probed cosmic volume Vc. We do not find a significant dependence on either the search radius used to define close companions, the redshift, or the physical selection (luminosity or stellar mass) of the samples. Conclusions: We have estimated from observations the cosmic variance that affects the measurement of the merger fraction by close pairs. We provide a parametrisation of the cosmic variance with n1, n2, and Vc: σv ∝ n1^-0.54 Vc^-0.48 (n2/n1)^-0.37. Thanks to this prescription, future merger fraction studies based on close pairs can properly account for the cosmic variance in their results. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie (MPIA) at Heidelberg and the Instituto de Astrofísica de Andalucía (IAA-CSIC). Appendix is available in electronic form at http://www.aanda.org
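The abstract gives the parametrisation only up to a proportionality constant, so only ratios of σv between two survey configurations are determined. A sketch exploiting exactly that (the input values in the demo are made up):

```python
def cosmic_variance_ratio(n1_a, n2_a, vc_a, n1_b, n2_b, vc_b):
    """Ratio sigma_v(A) / sigma_v(B) from the parametrisation
    sigma_v ∝ n1^-0.54 * Vc^-0.48 * (n2/n1)^-0.37; the unknown
    normalisation cancels in the ratio."""
    def unnormalised(n1, n2, vc):
        return n1 ** -0.54 * vc ** -0.48 * (n2 / n1) ** -0.37
    return unnormalised(n1_a, n2_a, vc_a) / unnormalised(n1_b, n2_b, vc_b)

# Doubling the probed cosmic volume at fixed number densities
# lowers sigma_v by a factor 2^-0.48 (~0.72).
shrink = cosmic_variance_ratio(1.0, 1.0, 2.0, 1.0, 1.0, 1.0)
```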
Tolonen, Hanna; Koponen, Päivikki; Mindell, Jennifer S; Männistö, Satu; Giampaoli, Simona; Dias, Carlos Matias; Tuovinen, Tarja; Göβwald, Antje; Kuulasmaa, Kari
2014-12-01
Non-communicable diseases (NCDs) cause 63% of deaths worldwide. The leading NCD risk factor is raised blood pressure, contributing to 13% of deaths. A large proportion of NCDs are preventable by modifying risk factor levels. Effective prevention programmes and health policy decisions need to be evidence based. Currently, self-reported information in general populations or data from patients receiving healthcare provides the best available information on the prevalence of obesity, hypertension, diabetes, etc. in most countries. In the European Health Examination Survey Pilot Project, 12 countries conducted a pilot survey among the working-age population. Information was collected using standardized questionnaires, physical measurement and blood sampling protocols. This allowed comparison of self-reported and measured data on the prevalence of overweight, obesity, hypertension, high blood cholesterol and diabetes. Self-reported data underestimated population means and prevalence for the health indicators assessed. The self-reported data gave a prevalence of obesity four percentage points lower for both men and women. For hypertension, the self-reported prevalence was 10 percentage points lower, only in men. For elevated total cholesterol, the difference was 50 percentage points among men and 44 percentage points among women. For diabetes, again only in men, the self-reported prevalence was 1 percentage point lower than measured. With self-reported data only, almost 70% of the population at risk of elevated total cholesterol is missed compared with data from objective measurements. Health indicators based on measurements in the general population include undiagnosed cases, therefore providing more accurate surveillance data than reliance on self-reported or healthcare-based information only. © The Author 2014. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
LENUS (Irish Health Repository)
Jonker, W R
2014-06-29
As part of the 5th National Audit Project of the Royal College of Anaesthetists and the Association of Anaesthetists of Great Britain and Ireland concerning accidental awareness during general anaesthesia, we issued a questionnaire to every consultant anaesthetist in each of 46 public hospitals in Ireland, represented by 41 local co-ordinators. The survey ascertained the number of new cases of accidental awareness becoming known to them for patients under their care or supervision for a calendar year, as well as their career experience. Consultants from all hospitals responded, with an individual response rate of 87% (299 anaesthetists). There were eight new cases of accidental awareness that became known to consultants in 2011; an estimated incidence of 1:23 366. Two out of the eight cases (25%) occurred at or after induction of anaesthesia, but before surgery; four cases (50%) occurred during surgery; and two cases (25%) occurred after surgery was complete, but before full emergence. Four cases were associated with pain or distress (50%), one after an experience at induction and three after experiences during surgery. There were no formal complaints or legal actions that arose in 2011 related to awareness. Depth of anaesthesia monitoring was reported to be available in 33 (80%) departments, and was used by 184 consultants (62%), 18 (6%) routinely. None of the 46 hospitals had a policy to prevent or manage awareness. Similar to the results of a larger survey in the UK, the disparity between the incidence of awareness as known to anaesthetists and that reported in trials warrants explanation. Compared with UK practice, there appears to be greater use of depth of anaesthesia monitoring in Ireland, although this is still infrequent.
Mosher, William; Bloom, Tina; Hughes, Rosemary; Horton, Leah; Mojtabai, Ramin; Alhusen, Jeanne L
2017-07-01
A substantial and increasing population of US women of childbearing age live with disability. Disability-based disparities in access to family planning services have been previously documented, but few studies have used population-based data sources or evidence-based measures of disability. To determine population-based estimates of use of family planning services among women 15-44 years of age in the United States, and to examine differences by disability status. This is a secondary analysis of a cross-sectional survey, the 2011-2015 National Survey of Family Growth. These analyses include 11,300 female respondents between the ages of 15 and 44 who completed in-person interviews in respondents' homes. Approximately 17.8% of respondents reported at least one disability in at least one domain. Women with disabilities were less likely than those without disabilities to receive services; the largest differences by disability status were seen among women with low education, low income, and those who were not working. Logistic regression analysis suggests that women with physical disabilities and those with poorer general health are less likely to receive services. Women living with disabilities reported lower receipt of family planning services compared to women without disabilities, but the differences were small in some subgroups and larger among disadvantaged women. Physical disabilities and poor health may be among the factors underlying these patterns. Further research is needed on other factors that affect the ability of women with disabilities to obtain the services they need to prevent unintended pregnancy. Copyright © 2017 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Raggi Patrizio
2006-03-01
Background: The cytological screening programme of Viterbo has completed the second round of invitations to the entire target population (ages 25-64). From a public health perspective, it is important to know the Pap-test coverage rate and the use of opportunistic screening. The most commonly used study design is the survey, but the validity of self-reports and the assumptions made about non-respondents are often questioned. Methods: From the target population, 940 women were sampled and responded to a telephone interview about Pap-test utilisation. The answers were compared with the screening programme registry, comparing the dates of Pap-tests reported by both sources. Sensitivity analyses were performed for coverage over a 36-month period, according to various assumptions regarding non-respondents. Results: The response rate was 68%. The coverage over 36 months was 86.4% if we assume that non-respondents had the same coverage as respondents, 66% if we assume they were not covered at all, and 74.6% if we adjust for screening compliance in the non-respondents. The sensitivity and specificity of the question "have you ever had a Pap test with the screening programme" were 84.5% and 82.2%, respectively. The test dates reported in the interview tended to be more recent than those reported in the registry, but 68% were within 12 months of each other. Conclusion: Surveys are useful tools for understanding the effectiveness of a screening programme, and women's self-report was sufficiently reliable in our setting, but the coverage estimates were strongly influenced by the assumptions we made regarding non-respondents.
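The sensitivity analysis described above amounts to treating overall coverage as a mixture of respondents and non-respondents under different assumptions. A minimal sketch of that mixture logic; note that the paper's published 66% and 74.6% figures also draw on registry information about non-respondents, which this simple two-group mixture does not use.

```python
def overall_coverage(response_rate, cov_respondents, cov_nonrespondents):
    """Overall coverage as a weighted mixture of the coverage among
    survey respondents and an assumed coverage among non-respondents."""
    return (response_rate * cov_respondents
            + (1.0 - response_rate) * cov_nonrespondents)

# Assumption 1: non-respondents covered like respondents -> 86.4% carries over
same = overall_coverage(0.68, 0.864, 0.864)
# Assumption 2 (worst case): non-respondents not covered at all
worst = overall_coverage(0.68, 0.864, 0.0)
```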
Directory of Open Access Journals (Sweden)
Maryam Bakhtyar
2016-06-01
The word love means romance, passion, fondness, and strong desire in willingness and friendship. The truth of love is indescribable, and anyone who tries to suggest a definition for it and set boundaries for it is certainly naïve. In fact, love is the bounty of God, a donation to anyone God wishes. The origin of creation is centered on love, and all the universe's emanations originate from love. Given that mysticism comes to an end through pure love and that the Arif is in the last home, love is the most important issue in journeying through the homes of mysticism towards perfection. The current study tries to survey the meaning of love through the lives of two Islamic and Christian Arifs: Rabi'a al-Adawiyya and Catherine of Genoa. Rabi'a considers a true believer (Salik) to be earnest; no less than a true lover, the believer does not expect anything for himself/herself and does not even expect an answer in order to reach the beloved. Catherine also believes a lover cannot share his/her love, and that pure love is nothing but love towards God.
Battaile, Brian; Jay, Chadwick V.; Udevitz, Mark S.; Fischbach, Anthony S.
2017-01-01
Increased periods of sparse sea ice over the continental shelf of the Chukchi Sea in late summer have reduced offshore haulout habitat for Pacific walruses (Odobenus rosmarus divergens) and increased opportunities for human activities in the region. Knowing how many walruses could be affected by human activities would be useful to conservation decisions. Currently, there are no adequate estimates of walrus abundance in the northeastern Chukchi Sea during summer–early autumn. Estimating abundance in autumn might be possible from coastal surveys of hauled out walruses during periods when offshore sea ice is unavailable to walruses. We evaluated methods to estimate the size of the walrus population that was using a haulout on the coast of northwestern Alaska in autumn by using aerial photography to count the number of hauled out walruses (herd size) and data from 37 tagged walruses to estimate availability (proportion of population hauled out). We used two methods to estimate availability, direct proportions of hauled out tagged walruses and smoothed proportions using local polynomial regression. Point estimates of herd size (4200–38,000 walruses) and total population size (76,000–287,000 walruses) ranged widely among days and between the two methods of estimating availability. Estimates of population size were influenced most by variation in estimates of availability. Coastal surveys might be improved most by counting walruses when the greatest numbers are hauled out, thereby reducing the influence of availability on population size estimates. The chance of collecting data during peak haulout periods would be improved by conducting multiple surveys.
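The core estimator described above is a simple ratio: total population equals the haulout count divided by the estimated availability (proportion of the population hauled out). A minimal sketch; the demo numbers are purely illustrative, not values from the study.

```python
def population_estimate(herd_count, availability):
    """Population size implied by an aerial count of hauled-out animals
    and the estimated proportion of the population hauled out
    (availability). Small availability inflates both the estimate and
    its sensitivity to availability error, which is why the authors
    recommend surveying at peak haulout."""
    if not 0.0 < availability <= 1.0:
        raise ValueError("availability must be in (0, 1]")
    return herd_count / availability

# Illustrative: 19,000 counted while ~10% of the population is hauled out
estimate = population_estimate(19000, 0.10)
```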
Hong, Jae Won; Noh, Jung Hyun; Kim, Dong-Jun
2016-03-01
Although reducing dietary salt consumption is the most cost-effective strategy for preventing the progression of cardiovascular and renal disease, policy-based approaches to monitor sodium intake accurately, and an understanding of the factors associated with excessive sodium intake, are lacking. We investigated factors associated with high sodium intake based on the estimated 24-hour urinary sodium excretion, using data from the 2009 to 2011 Korea National Health and Nutrition Examination Survey (KNHANES). Among 21,199 adults (≥19 years of age) who participated in the 2009 to 2011 KNHANES, 18,000 participants (weighted n = 33,969,783) who completed urinary sodium and creatinine evaluations were analyzed in this study. The 24-hour urinary sodium excretion was estimated using the Tanaka equation. The mean estimated 24-hour urinary sodium excretion level was 4349 (4286-4413) mg per day. Only 18.5% (weighted n = 6,298,481/33,969,783, unweighted n = 2898/18,000) of the study participants consumed less than 2000 mg sodium per day. Female gender and total energy intake ≥50th percentile were significantly associated with sodium intake, even after adjusting for potential confounders. Senior high school/college graduation in education and managers/professionals in occupation were associated with lower sodium intake. Those with untreated hypertension consumed more sodium than those who were normotensive; however, those receiving treatment for hypertension consumed less sodium than those who were normotensive. The logistic regression analysis for the highest estimated 24-hour urinary sodium excretion quartile (>6033 mg/day) using the abovementioned variables as covariates yielded identical results. Our data suggest that age, sex, education level, occupation, total energy intake, obesity, and hypertension management status are associated with excessive sodium intake in Korean adults, using nationally representative data. Factors associated with high sodium intake should be considered in policy
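The Tanaka equation estimates 24-hour urinary sodium excretion from a spot urine sample via a predicted 24-hour creatinine excretion. The sketch below uses the coefficients as commonly quoted in the literature; they should be verified against the original Tanaka et al. publication before any real use, and the unit conventions (spot sodium in mEq/L, spot creatinine in mg/dL) are part of that assumption.

```python
def tanaka_24h_sodium_mg(spot_na_meq_l, spot_cr_mg_dl,
                         age_y, weight_kg, height_cm):
    """Estimated 24-h urinary sodium excretion (mg/day) from a spot
    urine sample, commonly cited form of the Tanaka equation
    (coefficients as usually quoted; verify against the source paper)."""
    # Predicted 24-h creatinine excretion (mg/day)
    pr_cr = -2.04 * age_y + 14.89 * weight_kg + 16.14 * height_cm - 2244.45
    # Spot Na/Cr ratio scaled to the predicted 24-h creatinine
    x_na = spot_na_meq_l / (spot_cr_mg_dl * 10.0) * pr_cr
    est_na_meq = 21.98 * x_na ** 0.392   # estimated 24-h sodium (mEq/day)
    return est_na_meq * 23.0             # 1 mEq sodium ~ 23 mg

# Hypothetical adult: spot Na 100 mEq/L, spot Cr 100 mg/dL,
# age 45 y, weight 65 kg, height 165 cm
est_mg = tanaka_24h_sodium_mg(100.0, 100.0, 45.0, 65.0, 165.0)
```

Typical inputs yield a few thousand mg/day, the same order as the ~4349 mg/day mean reported in the abstract.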
Carroll, Rebecca I; Forbes, Andrew; Graham, David A; Messam, Locksley L McV
2017-09-01
Abattoir surveys and findings from post-mortem meat inspection are commonly used to estimate infection or disease prevalence in farm animal populations. However, the function of an abattoir is to slaughter animals for human consumption, and the collection of information on animal health for research purposes is a secondary objective. This can result in methodological shortcomings leading to biased prevalence estimates. Selection bias can occur when the study population as obtained from the abattoir is not an accurate representation of the target population. Virtually all of the tests used in abattoir surveys to detect infections or diseases that impact animal health are imperfect, leading to errors in identifying the outcome of interest and consequently, information bias. Examination of abattoir surveys estimating prevalence in the literature reveals shortcomings in the methods used in these studies. While the STROBE-Vet statement provides clear guidance on the reporting of observational research, we have not found any guidelines in the literature advising researchers on how to conduct abattoir surveys. This paper presents a protocol in two flowcharts to help researchers (regardless of their background in epidemiology) to first identify, and, where possible, minimise biases in abattoir surveys estimating prevalence. Flowchart 1 examines the identification of the target population and the appropriate study population while Flowchart 2 guides the researcher in identifying, and, where possible, correcting potential sources of outcome misclassification. Examples of simple sensitivity analyses are also presented which approximate the likely uncertainty in prevalence estimates due to systematic errors. Finally, the researcher is directed to outline any limitations of the study in the discussion section of the paper. This protocol makes it easier to conduct an abattoir survey using sound methods, identifying and, where possible, minimizing biases. Copyright © 2017
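One standard way to approximate the effect of outcome misclassification on a prevalence estimate, in the spirit of the simple sensitivity analyses mentioned above, is the Rogan-Gladen correction. This is a generic sketch of that estimator, not necessarily the exact method in the protocol; the demo values are made up.

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """True-prevalence estimate corrected for an imperfect test
    (Rogan-Gladen estimator), clamped to [0, 1]."""
    if sensitivity + specificity <= 1.0:
        raise ValueError("test must be informative (Se + Sp > 1)")
    est = (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(est, 0.0), 1.0)

# Illustrative: 30% apparent prevalence with Se = 0.80, Sp = 0.95
corrected = rogan_gladen(0.30, 0.80, 0.95)
```

Running the correction over a plausible range of sensitivity and specificity values gives a quick picture of the uncertainty that systematic error alone contributes to the prevalence estimate.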
Directory of Open Access Journals (Sweden)
Béranger Lueza
2016-03-01
Background: The difference in restricted mean survival time, rmstD(t*), the area between two survival curves up to a time horizon t*, is often used in cost-effectiveness analyses to estimate the treatment effect in randomized controlled trials. A challenge in individual patient data (IPD) meta-analyses is to account for the trial effect. We aimed to compare different methods of estimating rmstD(t*) from an IPD meta-analysis. Methods: We compared four methods: the area between Kaplan-Meier curves (experimental vs. control arm), ignoring the trial effect (Naïve Kaplan-Meier); the area between Peto curves computed at quintiles of event times (Peto-quintile); and the weighted average of the areas between either trial-specific Kaplan-Meier curves (Pooled Kaplan-Meier) or trial-specific exponential curves (Pooled Exponential). In a simulation study, we varied the between-trial heterogeneity for the baseline hazard and for the treatment effect (possibly correlated), the overall treatment effect, the time horizon t*, the number of trials and of patients, the use of fixed or DerSimonian-Laird random effects models, and the proportionality of hazards. We compared the methods in terms of bias, empirical and average standard errors. For illustration, we used IPD from the Meta-Analysis of Chemotherapy in Nasopharynx Carcinoma (MAC-NPC) and its updated version MAC-NPC2, which included 1,975 and 5,028 patients in 11 and 23 comparisons, respectively. Results: The Naïve Kaplan-Meier method was unbiased, whereas the Pooled Exponential and, to a much lesser extent, the Pooled Kaplan-Meier methods showed a bias with non-proportional hazards. The Peto-quintile method underestimated rmstD(t*), except with non-proportional hazards at t* = 5 years. In the presence of treatment effect
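The quantity rmstD(t*) is just the difference between two areas under survival step curves truncated at t*. A minimal sketch of that calculation (the naïve area computation, not the Peto-quintile or pooled estimators from the paper), assuming the survival curves are supplied as sorted event times with the corresponding post-event survival values:

```python
import numpy as np

def rmst(times, surv, t_star):
    """Restricted mean survival time: area under a right-continuous
    step survival curve S(t) up to t_star. `times` are the sorted
    event times at which S drops to the matching `surv` values;
    S(0) = 1 by construction."""
    t = np.concatenate(([0.0], np.asarray(times, dtype=float)))
    s = np.concatenate(([1.0], np.asarray(surv, dtype=float)))
    keep = t < t_star
    t, s = t[keep], s[keep]
    widths = np.diff(np.concatenate((t, [t_star])))
    return float(np.sum(s * widths))

def rmst_difference(times_exp, surv_exp, times_ctl, surv_ctl, t_star):
    """rmstD(t*): area between the experimental and control curves."""
    return rmst(times_exp, surv_exp, t_star) - rmst(times_ctl, surv_ctl, t_star)
```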
Olive, Marie-Marie; Grosbois, Vladimir; Tran, Annelise; Nomenjanahary, Lalaina Arivony; Rakotoarinoro, Mihaja; Andriamandimby, Soa-Fy; Rogier, Christophe; Heraud, Jean-Michel; Chevalier, Veronique
2017-01-01
The force of infection (FOI) is one of the key parameters describing the dynamics of transmission of vector-borne diseases. Following the occurrence of two major outbreaks of Rift Valley fever (RVF) in Madagascar in 1990–91 and 2008–09, recent studies suggest that the pattern of RVF virus (RVFV) transmission differed among the four main eco-regions (East, Highlands, North-West and South-West). Using Bayesian hierarchical models fitted to serological data from cattle of known age collected during two surveys (2008 and 2014), we estimated the RVF FOI and described its variations over time and space in Madagascar. We show that the patterns of RVFV transmission strongly differed among the eco-regions. In the North-West and Highlands regions, these patterns were synchronous, with a high intensity in mid-2007/mid-2008. In the East and South-West, the peaks of transmission were later, between mid-2008 and mid-2010. In the warm and humid northwestern eco-region, favorable to mosquito populations, RVFV is probably transmitted year-round at a low level during inter-epizootic periods, allowing its maintenance, and is regularly introduced into the Highlands through ruminant trade. RVF surveillance of animals in the northwestern region could thus serve as an early warning indicator of an increased risk of an RVF outbreak in Madagascar. PMID:28051125
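The link between age-stratified serological data and the FOI can be illustrated with the simplest (time-constant) catalytic model, in which the probability that an animal of age $a$ is seropositive is $p(a) = 1 - e^{-\lambda a}$. This is only a toy sketch with made-up counts; the study itself fits Bayesian hierarchical models with a time-varying FOI.

```python
import numpy as np

def seroprevalence(age, foi):
    """Constant-FOI catalytic model: P(seropositive at age a) = 1 - exp(-foi * a)."""
    return 1.0 - np.exp(-foi * age)

def fit_foi(ages, n_tested, n_pos):
    """Grid-search maximum likelihood for a constant FOI from binomial
    serology counts (n_pos positives out of n_tested per age class)."""
    ages = np.asarray(ages, float)
    n_tested, n_pos = np.asarray(n_tested), np.asarray(n_pos)
    grid = np.linspace(1e-4, 1.0, 10000)
    # Binomial log-likelihood (dropping the constant binomial coefficient);
    # log(1 - p(a)) simplifies to -foi * a.
    ll = np.array([
        np.sum(n_pos * np.log(seroprevalence(ages, f))
               + (n_tested - n_pos) * (-f * ages))
        for f in grid
    ])
    return float(grid[np.argmax(ll)])
```

Simulating counts from a known FOI and refitting recovers the input value, which is the basic consistency check behind such serocatalytic estimates.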
Metallicity estimates for A-, F-, and G-type stars from the Edinburgh-Cape blue object survey
Beers, T C; O'Donoghue, D; Kilkenny, D; Stobie, R S; Koen, C; Wilhelm, R
2000-01-01
The Edinburgh-Cape Blue Object Survey is an ongoing project to identify and analyse a large sample of hot stars selected initially on the basis of photographic colours (down to a magnitude limit B~18.0) over the entire high-Galactic-latitude southern sky, then studied with broadband UBV photometry and medium-resolution spectroscopy. Due to unavoidable errors in the initial candidate selection, stars that are likely metal-deficient dwarfs and giants of the halo and thick-disk populations are inadvertently included, yet are of interest in their own right. In this paper we discuss a total of 206 candidate metal-deficient dwarfs, subgiants, giants, and horizontal-branch stars with photoelectric colours redder than (B-V)o = 0.3, and with available spectroscopy. Radial velocities, accurate to ~10-15 km/s, are presented for all of these stars. Spectroscopic metallicity estimates for these stars are obtained using a recently re-calibrated relation between Ca II K-line strength and (B-V)o colour. The identification of...
Galgon, Anne K; Shewokis, Patricia A
2016-03-01
The objectives of this communication are to present the methods used to calculate mean absolute relative phase (MARP), deviation phase (DP) and point estimate relative phase (PRP) and compare their utility in measuring postural coordination during the performance of a serial reaching task. MARP and DP are derived from continuous relative phase time series representing the relationship between two body segments or joints during movements. MARP is a single measure used to quantify the coordination pattern, and DP measures the stability of the coordination pattern. PRP also quantifies coordination patterns, by measuring the relationship between the timing of maximal or minimal angular displacements of two segments within cycles of movement. Seven young adults practiced a bilateral serial reaching task 300 times over 3 days. Relative phase measures were used to evaluate inter-joint relationships for shoulder-hip (proximal) and hip-ankle (distal) postural coordination at early and late learning. MARP, PRP and DP distinguished between proximal and distal postural coordination. There was no effect of practice on any of the relative phase measures for the group, but individual differences were seen over practice. Combined, MARP and DP estimated the stability of in-phase and anti-phase postural coordination patterns; however, additional qualitative movement analyses may be needed to interpret findings in a serial task. We discuss the strengths and limitations of using MARP and DP and compare MARP and DP to PRP measures in assessing coordination patterns in the context of various types of skillful tasks. Key points: MARP, DP and PRP measure coordination between segments or joint angles. Advantages and disadvantages of each measure should be considered in relationship to the performance task. MARP and DP may capture coordination patterns and the stability of the patterns during discrete tasks or phases of movements within a task. PRP and the SD of PRP may capture coordination patterns and
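The two measures derived from continuous relative phase (CRP) can be sketched directly from their definitions: MARP averages the absolute CRP over time (and trials), and DP averages the between-trial standard deviation of CRP at each time point. The sketch below is illustrative only; it obtains phase angles from a Hilbert transform, one common choice among several in the coordination literature, and the signals are hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

def continuous_relative_phase(x, y):
    """CRP time series (degrees) between two joint-angle signals,
    using Hilbert-transform phase angles of the centered signals."""
    phi_x = np.unwrap(np.angle(hilbert(x - np.mean(x))))
    phi_y = np.unwrap(np.angle(hilbert(y - np.mean(y))))
    return np.degrees(phi_x - phi_y)

def marp(crp_trials):
    """Mean absolute relative phase: mean of |CRP| across time and trials.
    ~0 deg indicates in-phase, ~180 deg anti-phase coordination."""
    return float(np.mean([np.mean(np.abs(c)) for c in crp_trials]))

def deviation_phase(crp_trials):
    """DP: between-trial SD of CRP averaged over time (pattern stability;
    lower values mean a more stable coordination pattern)."""
    stacked = np.vstack(crp_trials)  # shape: trials x time
    return float(np.mean(np.std(stacked, axis=0)))
```

For two identical sinusoidal "joint angles" the CRP is zero everywhere (in-phase), while flipping the sign of one signal yields a constant CRP of about 180 degrees (anti-phase).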